2307.06746
Chiral magnetovortical instability
We demonstrate that in a chiral plasma subject to an external magnetic field, the chiral vortical effect can induce a new type of magnetohydrodynamic instability which we refer to as the {\it chiral magnetovortical instability}. This instability arises from the mutual evolution of the magnetic and vortical fields. It can cause a rapid amplification of the magnetic fields by transferring the chirality of the constituent particles to the cross helicity of the plasma.
Shuai Wang, Xu-Guang Huang
2023-07-13T13:37:10Z
http://arxiv.org/abs/2307.06746v1
# Chiral magnetovortical instability ###### Abstract We demonstrate that in a chiral plasma subject to an external magnetic field, the chiral vortical effect can induce a new type of magnetohydrodynamic instability which we refer to as the _chiral magnetovortical instability_. This instability arises from the mutual evolution of the magnetic and vortical fields. It can cause a rapid amplification of the magnetic fields by transferring the chirality of the constituent particles to the cross helicity of the plasma. **Introduction --** Magnetic fields and fluid vortices coexist in many physical systems, encompassing the electromagnetic (EM) plasmas in stellar and planetary objects, the quark-gluon plasma (QGP) formed in heavy-ion collisions, and the electroweak plasmas in supernovas and the early Universe. Notably, the magnetic field and fluid vorticity can induce anomalous, parity-breaking, transport phenomena when the plasma is chiral, that is, when the constituent particles exhibit asymmetries between their left-handed and right-handed species. Two prominent examples are the chiral magnetic effect (CME) [1; 2; 3] and the chiral vortical effect (CVE) [4; 5; 6; 7], which lead to electric currents along the magnetic and vortical fields, respectively. They both emerge from the underlying chiral anomaly, which connects fermion chirality with the topology of EM and gravitational fields. In recent years, the CME and CVE have attracted considerable attention in theoretical and experimental research across many subfields of physics, including nuclear physics, astrophysics, cosmology, and condensed matter physics; see Refs. [8; 9; 10; 11] for reviews. In the hydrodynamic regime (i.e., the low energy and long wavelength regime), the chiral plasma can be described by the so-called anomalous hydrodynamics or chiral magnetohydrodynamics (MHD), which extends the standard MHD by incorporating the electric currents from the CME and CVE. New wave modes can appear in chiral MHD, such as the chiral magnetic wave [12], chiral vortical wave [13], chiral electric wave [14], chiral Alfven wave [15], and chiral heat wave [16]. Furthermore, the CME can induce a novel magnetic-field instability (and its variants) known as the chiral plasma instability or chiral dynamo instability [17; 18], which activates a dynamo mechanism (i.e., the amplification of a weak seed magnetic field) similar to the \(\alpha\)-dynamo but without requiring the presence of finite mean kinetic helicity. This has profound implications for our understanding of magnetic field formation and evolution in various contexts, such as the early Universe [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34], supernovas and neutron stars [26; 27; 28; 29; 30; 31; 32; 33; 34], the QGP in heavy-ion collisions [35; 36; 37; 38; 39], and even Weyl/Dirac semimetals [40; 41; 42]. Various other CME-driven instabilities have also been discussed in the literature [43; 28; 44]. Unlike the CME, the influence of the CVE on the evolution of a chiral plasma remains relatively unexplored. This might be because the fluid vorticity is usually considered weak compared to the magnetic field (unless the system is in strong kinetic-helicity-dominated turbulence). However, this may not be the case when the CVE can cause or catalyze plasma instabilities. Furthermore, there are instances, such as the QGP in heavy-ion collisions, where extremely strong vorticity can appear even in laminar flow [45; 46; 47].
In this paper, we demonstrate that the CVE can indeed induce a new MHD instability, which we refer to as the _chiral magnetovortical instability_ (CMVI), in a magnetized chiral plasma. Before going into the detailed calculation, let us provide an intuitive understanding of the CMVI. Suppose a chiral plasma is situated in a background magnetic field \(\mathbf{B}_{0}\) along the \(z\) direction. Let us consider a sine-shaped perturbation of the fluid velocity \(\mathbf{v}\) perpendicular to \(\mathbf{B}_{0}\), as depicted in Fig. 1. In the absence of electric resistivity \(\eta\), such a perturbation would cause a bending of the magnetic field line (according to Alfven's frozen-in theorem), resulting in the generation of a perturbed magnetic field \(\mathbf{b}\) perpendicular to \(\mathbf{B}_{0}\). However, the presence of a finite \(\eta\) would eventually dampen \(\mathbf{b}\) due to magnetic diffusion. This scenario changes when the CVE is taken into account. The perturbation in the fluid velocity induces alternating CVE currents along the \(y\) direction, positioned along the \(z\) axis. These electric currents generate additional magnetic fields that add to the perturbed magnetic field \(\mathbf{b}\). (Figure 1: Illustration of the arising of the CMVI.) When the CVE is sufficiently strong, the perturbed field \(\mathbf{b}\) in regions like the one marked in the figure is amplified instead of being damped, which in turn amplifies the velocity \(\mathbf{v}\), leading to the emergence of an instability. We now give an analysis using chiral MHD. We set \(c=\hbar=k_{B}=1\), \(\varepsilon_{0}=\mu_{0}=1\), and the charge \(q=1\). **Chiral magnetovortical instability in chiral MHD --** We consider a chiral plasma in which the electric current \(\mathbf{j}\) is given by the following constitutive relation, \[\mathbf{j}=n\mathbf{v}+\sigma(\mathbf{E}+\mathbf{v}\times\mathbf{B})+\mathbf{j}_{B}+\mathbf{j}_{\omega}, \tag{1}\] where \(\mathbf{j}_{B}=\xi_{B}\mathbf{B}\) and \(\mathbf{j}_{\omega}=\xi_{\omega}\mathbf{\omega}\) (\(\mathbf{\omega}=\mathbf{\nabla}\times\mathbf{v}\) is the vorticity) represent the CME and the CVE, respectively, with \(\xi_{B}\propto\mu_{5}\) and \(\xi_{\omega}\propto\mu_{5}\mu\) (\(\mu\) and \(\mu_{5}\) are the electric and chiral chemical potentials) the corresponding conductivities, \(\sigma>0\) is the usual electric conductivity, and \(n\) is the charge density. For the purpose of analysis, we focus on the non-relativistic limit as it offers a more transparent understanding of the CMVI, although a similar analysis can be adapted to the relativistic case as well. The governing equations in the non-relativistic limit are given by the following set of chiral MHD equations (see the Appendix for a derivation): \[\rho\left(\partial_{t}+\mathbf{v}\cdot\mathbf{\nabla}\right)\mathbf{v}=-\mathbf{\nabla}P+\left(\mathbf{\nabla}\times\mathbf{B}\right)\times\mathbf{B}, \tag{2}\] \[\partial_{t}\mathbf{B}=\mathbf{\nabla}\times\left(\mathbf{v}\times\mathbf{B}\right)+\eta\nabla^{2}\mathbf{B}+\eta\xi_{B}\mathbf{\nabla}\times\mathbf{B}+\eta\xi_{\omega}\mathbf{\nabla}\times\mathbf{\omega}, \tag{3}\] accompanied by the solenoidal conditions for the velocity \(\mathbf{v}\) (incompressibility) and the magnetic field \(\mathbf{B}\) (Gauss law): \[\mathbf{\nabla}\cdot\mathbf{v} =0, \tag{4}\] \[\mathbf{\nabla}\cdot\mathbf{B} =0. \tag{5}\] In the above equations, \(\rho\) is the mass density, \(P\) is the pressure, and \(\eta=1/\sigma\) is the electric resistivity. 
In Eqs. (2)-(3), we have neglected the viscous terms for the sake of simplicity. However, the effects of viscosity can be readily taken into account. Note that the electric field is not dynamical in MHD due to the screening effects (i.e., the timescale of MHD processes is much longer than the screening time of the electric field), but is determined by the constitutive relation (1). The wave modes and possible instabilities arising from the CME have been discussed extensively. Therefore, our focus here is on the CVE. For clarity, we deactivate the CME and assume constant pressure and mass density (and thus constant \(\eta,\xi_{\omega}\)) for the moment. We examine the behavior of small fluctuations around a static equilibrium state in the presence of a background magnetic field \(\mathbf{B}_{0}\), i.e., \(\mathbf{v}=0+\mathbf{v}\) and \(\mathbf{B}=\mathbf{B}_{0}+\mathbf{b}\) with \(\mathbf{v}\) and \(\mathbf{b}\) counted as of order \(\delta\ll 1\). We keep terms linear in \(\delta\) in Eqs. (2)-(3) and obtain \[\rho\partial_{t}\mathbf{v}=\mathbf{B}_{0}\cdot\mathbf{\nabla}\mathbf{b}-\mathbf{\nabla}(\mathbf{B}_{0}\cdot\mathbf{b})\, \tag{6}\] \[\partial_{t}\mathbf{b}=\mathbf{B}_{0}\cdot\mathbf{\nabla}\mathbf{v}+\eta\nabla^{2}\mathbf{b}-\eta\xi_{\omega}\nabla^{2}\mathbf{v}. \tag{7}\] Contracting \(\mathbf{B}_{0}\) with Eq. (6) implies \(\partial_{t}(\mathbf{B}_{0}\cdot\mathbf{v})=0\), indicating that the longitudinal velocity fluctuation is not dynamical. Therefore, we focus our attention on the transverse velocity fluctuation by assuming \(\mathbf{v}\cdot\mathbf{B}_{0}=0\). With this and the solenoidal conditions (4) and (5), Eqs. (6)-(7) further imply \(\partial_{t}(\mathbf{B}_{0}\cdot\mathbf{b})=0\), meaning that the longitudinal magnetic field fluctuation is not dynamical. Hence, we assume \(\mathbf{b}\cdot\mathbf{B}_{0}=0\) in our analysis. To find the eigenmodes of Eqs. (6)-(7), we substitute the plane-wave form of the fluctuations, \[\mathbf{v}=\mathbf{f}_{v}e^{i\left(\mathbf{k}\cdot\mathbf{x}-\omega t\right)},\quad\mathbf{b}=\mathbf{f}_{b}e^{i\left(\mathbf{k}\cdot\mathbf{x}-\omega t\right)}, \tag{8}\] where \(\mathbf{f}_{v,b}\) are amplitude vectors, and obtain \[\rho\omega\mathbf{f}_{v} =-(\mathbf{B}_{0}\cdot\mathbf{k})\mathbf{f}_{b}\, \tag{9}\] \[(\eta\mathbf{k}^{2}-i\omega)\mathbf{f}_{b} =(\eta\xi_{\omega}\mathbf{k}^{2}+i\mathbf{B}_{0}\cdot\mathbf{k})\mathbf{f}_{v}. \tag{10}\] We obtain immediately the following equation for the dispersion relations: \[\omega^{2}+i\eta\mathbf{k}^{2}\omega+i\eta\xi_{\omega}\mathbf{k}^{2}\frac{\mathbf{B}_{0}\cdot\mathbf{k}}{\rho}-\frac{(\mathbf{B}_{0}\cdot\mathbf{k})^{2}}{\rho}=0, \tag{11}\] whose solutions are given by \[\omega =\omega_{\pm} \tag{12}\] \[\equiv-i\frac{\eta}{2}\mathbf{k}^{2}\pm\sqrt{\frac{(\mathbf{B}_{0}\cdot\mathbf{k})^{2}}{\rho}-\frac{\eta^{2}\mathbf{k}^{4}}{4}-i\eta\xi_{\omega}\mathbf{k}^{2}\frac{\mathbf{B}_{0}\cdot\mathbf{k}}{\rho}}\] (13) \[\approx\pm\frac{\mathbf{B}_{0}\cdot\mathbf{k}}{\sqrt{\rho}}-i\frac{\eta}{2}\left(1\pm\frac{\xi_{\omega}}{\sqrt{\rho}}\right)\mathbf{k}^{2}+O(\mathbf{k}^{4}), \tag{14}\] where the last line is valid when \(|\mathbf{k}|\ll|\mathbf{B}_{0}|/(\eta|\xi_{\omega}|)\) and \(|\mathbf{k}|\ll|\mathbf{B}_{0}|/(\eta\sqrt{\rho})\). The first term, \(\omega=\pm\mathbf{B}_{0}\cdot\mathbf{k}/\sqrt{\rho}\), represents the usual Alfven wave modes propagating along and opposite to \(\mathbf{B}_{0}\). 
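As a quick sanity check on Eq. (11), one can solve the quadratic numerically and inspect the sign of \(\mathrm{Im}\,\omega\). The minimal sketch below uses illustrative parameter values of our own choosing (not taken from the paper); it reproduces the small-wave-number expansion (14) and anticipates the instability discussed next, with one branch acquiring a positive imaginary part once \(|\xi_{\omega}|\) exceeds \(\sqrt{\rho}\).

```python
import numpy as np

# Illustrative (not paper) parameters; eta sets the overall scale.
eta = 1.0        # electric resistivity
rho = 1.0        # mass density
B0 = 5.0         # background field along z
kz = 0.05        # wave number along B0 (long-wavelength regime)

def omega_branches(xi_omega):
    """Roots of omega^2 + i*eta*k^2*omega + i*eta*xi_omega*k^2*(B0.k)/rho - (B0.k)^2/rho = 0."""
    b0k = B0 * kz
    k2 = kz ** 2
    coeffs = [1.0, 1j * eta * k2, 1j * eta * xi_omega * k2 * b0k / rho - b0k ** 2 / rho]
    return np.roots(coeffs)

for xi in (0.5 * np.sqrt(rho), 2.0 * np.sqrt(rho)):
    growth = max(w.imag for w in omega_branches(xi))
    print(f"xi_omega/sqrt(rho) = {xi / np.sqrt(rho):.1f} -> max Im(omega) = {growth:.2e}",
          "(unstable)" if growth > 0 else "(damped)")
```

With these numbers the damped case gives \(\mathrm{Im}\,\omega\approx-6\times10^{-4}\) and the supercritical case gives \(\mathrm{Im}\,\omega\approx+1.3\times10^{-3}\), consistent with \(-\tfrac{\eta}{2}(1\pm\xi_{\omega}/\sqrt{\rho})\mathbf{k}^{2}\) from Eq. (14).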
Without the CVE, the presence of the electric resistivity always induces dissipative diffusion of the magnetic field. However, when the CVE is turned on in the parameter region \(|\xi_{\omega}|\gtrsim\sqrt{\rho}\), one of the Alfven wave modes becomes unstable. This is the CMVI, which amplifies the magnitudes of the magnetic-field and velocity fluctuations. We also note that in another parameter region, namely, when \(|\mathbf{k}|\gg|\mathbf{B}_{0}|/(\eta|\xi_{\omega}|)\) and \(|\mathbf{B}_{0}||\xi_{\omega}|/(\eta\rho)\) (but \(|\mathbf{k}|\ll 1/\eta\) should always be satisfied for the hydrodynamic analysis to be applicable), we have \[\omega\approx\pm\frac{\xi_{\omega}}{\rho}\mathbf{B}_{0}\cdot\mathbf{k}+O(\mathbf{k}^{-2}). \tag{15}\] The same dispersion relation was derived in Ref. [15] and was referred to as the chiral Alfven wave. However, in Ref. [15], the magnetic field was considered non-dynamical, and as a consequence, it was found that the dispersion relation (15) holds for \(|\mathbf{k}|\to 0\). It is interesting to note that in the presence of a dynamical EM field, the situation is changed, and the \(|\mathbf{k}|\to 0\) dispersion relation is actually given by the last line of Eq. (12). Some comments are in order: (i) The CMVI appears when \(|\xi_{\omega}|>\sqrt{\rho}\), regardless of whether we make the small wave number expansion, as we did in the last line in Eq. (12). (ii) For \(\mathbf{k}\perp\mathbf{B}_{0}\), we only have one dissipative mode \(\omega_{-}=-i\eta\mathbf{k}^{2}\) for magnetic field diffusion (see Eq. (7)), while the velocity fluctuation does not propagate. (iii) The appearance of the CMVI indicates that during the evolution of the system, the chirality of the constituent particles should decrease in order for \(\xi_{\omega}\) to decrease and eventually cease the instability. The continuity equation for helicity thus implies that the magnetic and/or flow helicities would increase, thereby triggering a dynamo action. We will analyze this possibility in the following. **Fate of CMVI --** The CMVI, once it takes place (i.e., when \(\xi_{\omega}>\sqrt{\rho}\), assuming \(\xi_{\omega}>0\)), cannot last forever. The system would evolve towards a state where \(\xi_{\omega}<\sqrt{\rho}\), thus terminating the CMVI. Due to the conservation of electric charge, we expect that the decrease of \(\xi_{\omega}\) would be mainly due to the decrease of \(\mu_{5}\). To analyze how this occurs, we examine the chiral anomaly equation, \[\partial_{t}j_{5}^{0}+\mathbf{\nabla}\cdot\mathbf{j}_{5}=C\mathbf{E}\cdot\mathbf{B}, \tag{16}\] where \(C\) is a constant representing the strength of the chiral anomaly (for the usual EM plasma, \(C=1/2\pi^{2}\)), \(\mathbf{j}_{5}\) is the chiral current, and \(j_{5}^{0}=n_{5}+\kappa_{B}\mathbf{v}\cdot\mathbf{B}+\kappa_{\omega}\mathbf{v}\cdot\mathbf{\omega}\) with \(n_{5}\) the chiral density of constituent particles, \(\kappa_{B}\propto\mu\) and \(\kappa_{\omega}\propto T^{2}\) the conductivities of the chiral separation effect (CSE) [48; 49] and axial CVE, respectively. 
Assuming homogeneity of the system and writing \(n_{5}=\chi_{5}\mu_{5}\), with \(\chi_{5}\propto T^{2}\) denoting the chiral susceptibility, we can derive the following evolution equation for \(\mu_{5}\): \[\chi_{5}\partial_{t}\mu_{5}=-\frac{C}{2}\partial_{t}\mathcal{H}_{b}-\kappa_{B}\partial_{t}\mathcal{H}_{c}-\kappa_{\omega}\partial_{t}\mathcal{H}_{v}-\Gamma\chi_{5}\mu_{5}, \tag{17}\] where \(\mathcal{H}_{b}=\langle\mathbf{A}\cdot\mathbf{B}\rangle\), \(\mathcal{H}_{c}=\langle\mathbf{v}\cdot\mathbf{B}\rangle\), and \(\mathcal{H}_{v}=\langle\mathbf{v}\cdot\mathbf{\omega}\rangle\) are the average magnetic, cross, and kinetic helicities, respectively, with \(\langle\cdots\rangle\equiv V^{-1}\int d^{3}\mathbf{x}(\cdots)\). Using \(\mathbf{A}=\mathbf{B}_{0}\times\mathbf{x}/2+\mathbf{a}\) (with \(\mathbf{a}\) the fluctuating vector potential) and the conditions \(\langle\mathbf{a}\rangle=\mathbf{0}=\langle\mathbf{v}\rangle\), one finds that \(\mathcal{H}_{b}=\langle\mathbf{a}\cdot\mathbf{b}\rangle\) and \(\mathcal{H}_{c}=\langle\mathbf{v}\cdot\mathbf{b}\rangle\). We have also introduced the chirality relaxation rate \(\Gamma\) in order to account for the chirality-flipping process due to, e.g., the finite mass of the particles [29]. To proceed, we expand the fields in their Fourier modes, \[\mathbf{v}(t,\mathbf{x}) =\int_{\mathbf{k}}\mathbf{v}(t,\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{x}}, \tag{18}\] \[\mathbf{b}(t,\mathbf{x}) =\int_{\mathbf{k}}\mathbf{b}(t,\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{x}}, \tag{19}\] where \(\int_{\mathbf{k}}\equiv\int d^{3}\mathbf{k}/(2\pi)^{3}\). For each Fourier mode, we further expand it in the helicity basis, with \(\mathbf{e}_{3}(\mathbf{k})=\hat{\mathbf{k}}\equiv\mathbf{k}/|\mathbf{k}|\) and \(\mathbf{e}_{\pm}(\mathbf{k})\) the right-handed and left-handed helicity basis vectors. They satisfy the following properties: \(\hat{\mathbf{k}}\times\mathbf{e}_{\pm}(\mathbf{k})=\mp i\mathbf{e}_{\pm}(\mathbf{k})\), \(\hat{\mathbf{k}}\cdot\mathbf{e}_{\pm}(\mathbf{k})=0\), \(\mathbf{e}_{\pm}(\mathbf{k})\cdot\mathbf{e}_{\pm}^{*}(\mathbf{k})=1\), and \(\mathbf{e}_{\pm}(\mathbf{k})\cdot\mathbf{e}_{\mp}^{*}(\mathbf{k})=0\). The solenoidal conditions for \(\mathbf{v}\) and \(\mathbf{B}\) imply that \(\mathbf{v}(t,\mathbf{k})=\sum_{s=\pm}v_{s}(t,\mathbf{k})\mathbf{e}_{s}(\mathbf{k})\) and \(\mathbf{b}(t,\mathbf{k})=\sum_{s=\pm}b_{s}(t,\mathbf{k})\mathbf{e}_{s}(\mathbf{k})\). Using this helicity expansion and focusing on the long-wavelength modes, we can rewrite Eqs. (6)-(7) as \[\partial_{t}z_{1\pm}(t,\mathbf{k}) =-i\omega_{+}z_{1\pm}(t,\mathbf{k}), \tag{20}\] \[\partial_{t}z_{2\pm}(t,\mathbf{k}) =-i\omega_{-}z_{2\pm}(t,\mathbf{k}). \tag{21}\] Here, \(\mathbf{z}_{1,2}=\sum_{s=\pm}z_{1,2s}\mathbf{e}_{s}(\mathbf{k})\) are the CVE-modified Elsasser fields [50], given by \[z_{1\pm} \approx\Big{(}1-\frac{i\eta\xi_{\omega}^{\prime}\mathbf{k}^{2}}{2\mathbf{B}_{0}^{\prime}\cdot\mathbf{k}}\Big{)}v_{\pm}-\Big{(}1-\frac{i\eta\mathbf{k}^{2}}{2\mathbf{B}_{0}^{\prime}\cdot\mathbf{k}}\Big{)}b_{\pm}^{\prime}, \tag{22}\] \[z_{2\pm} \approx\Big{(}1-\frac{i\eta\xi_{\omega}^{\prime}\mathbf{k}^{2}}{2\mathbf{B}_{0}^{\prime}\cdot\mathbf{k}}\Big{)}v_{\pm}+\Big{(}1+\frac{i\eta\mathbf{k}^{2}}{2\mathbf{B}_{0}^{\prime}\cdot\mathbf{k}}\Big{)}b_{\pm}^{\prime}, \tag{23}\] with the primed quantities scaled as \(\mathbf{B}_{0}^{\prime}=\mathbf{B}_{0}/\sqrt{\rho},\mathbf{b}^{\prime}=\mathbf{b}/\sqrt{\rho}\), and \(\xi_{\omega}^{\prime}=\xi_{\omega}/\sqrt{\rho}\). 
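For concreteness, the helicity-basis conventions above can be verified with the standard explicit choice \(\hat{\mathbf{k}}=\hat{\mathbf{z}}\) (a small numerical sketch of ours; nothing here is specific to the paper):

```python
import numpy as np

# Take k along z; the standard circular-polarization vectors then read
khat = np.array([0.0, 0.0, 1.0])
e_plus = np.array([1.0, 1j, 0.0]) / np.sqrt(2)    # right-handed helicity vector
e_minus = np.array([1.0, -1j, 0.0]) / np.sqrt(2)  # left-handed helicity vector

# k_hat x e_(+/-) = -/+ i e_(+/-): helicity eigenvectors of the curl at fixed k
assert np.allclose(np.cross(khat, e_plus), -1j * e_plus)
assert np.allclose(np.cross(khat, e_minus), 1j * e_minus)

# Orthonormality and transversality
assert np.isclose(np.vdot(e_plus, e_plus), 1.0)   # vdot conjugates its first argument
assert np.isclose(np.vdot(e_minus, e_plus), 0.0)
assert np.isclose(khat @ e_plus, 0.0)
```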
Written in the helicity basis, the average kinetic energy per unit mass, magnetic energy per unit mass, and various helicities are expressed as \(\mathcal{E}_{v}=\langle\mathbf{v}^{2}\rangle/2=(1/2V)\int_{\mathbf{k}}(|v_{+}|^{2}+|v_{-}|^{2})\), \(\mathcal{E}_{b}=\langle\mathbf{b}^{\prime 2}\rangle/2=(1/2V)\int_{\mathbf{k}}(|b_{+}^{\prime}|^{2}+|b_{-}^{\prime}|^{2})\), \(\mathcal{H}_{v}=(1/V)\int_{\mathbf{k}}|\mathbf{k}|(|v_{+}|^{2}-|v_{-}|^{2})\), \(\mathcal{H}_{b}=(1/V)\int_{\mathbf{k}}(|b_{+}|^{2}-|b_{-}|^{2})/|\mathbf{k}|\), and \(\mathcal{H}_{c}=(1/V)\int_{\mathbf{k}}(v_{+}b_{+}^{*}+v_{-}b_{-}^{*})\), respectively. When the chirality relaxation is negligible, \(\Gamma=0\), the coupled equations (17), (20), and (21) permit a state in which \(\mu_{5}\), the various helicities, and the kinetic and magnetic energies are stationary. Such a state satisfies the condition \(\xi_{\omega}^{\prime}=1\). To see this, we first observe from Eq. (20) that the magnetic diffusion would eventually diminish \(z_{1\pm}\) and enforce \(v_{\pm}=b_{\pm}^{\prime}\) when \(\xi_{\omega}^{\prime}=1\). Then, Eqs. (23) and (21) give \(v_{\pm}(t,\mathbf{k})=b_{\pm}^{\prime}(t,\mathbf{k})\propto e^{i\mathbf{B}_{0}^{\prime}\cdot\mathbf{k}t}\), representing a pure Alfven wave (an Alfvenic state). In this case, \(\mathcal{E}_{v,b}\) and \(\mathcal{H}_{v,b,c}\) become time independent, which also implies the time independence of \(\mu_{5}\) according to Eq. (17). This suggests that when \(\Gamma=0\), the system will eventually evolve into a state such that \(\xi_{\omega}^{\prime}=1\), regardless of whether \(\xi_{\omega}^{\prime}\) is initially smaller or larger than \(1\), as confirmed by the numerical calculation given in Fig. 2. When \(\xi_{\omega}^{\prime}\) is initially larger than \(1\), this provides a dynamo mechanism. It has long been believed that the Alfvenic state is favored in relaxation processes in MHD plasmas [51]. Therefore, for a chiral plasma, the CMVI provides a mechanism for quickly reaching such an Alfvenic state, in addition to other known effects [52]. Note that such a state maximizes the cross helicity \(\mathcal{H}_{c}\) for a fixed total energy per unit mass \(\mathcal{E}_{v}+\mathcal{E}_{b}\). When a finite \(\Gamma\) is present, \(\mu_{5}\) is constantly driven to zero. But this process can be very slow as usually \(\Gamma\) is small. As an illustration, in Fig. 2, we show the time evolution of \(\xi^{\prime}_{\omega}\), the cross helicity \(\mathcal{H}_{c}\), and the magnetic energy \(\mathcal{E}_{b}\), with an initial \(\xi^{\prime}_{\omega}=5\). To highlight the effect of the CMVI, we have chosen an initial condition such that \(\mathcal{H}_{b}=\mathcal{H}_{v}=0\), which implies that they remain zero throughout the time evolution. Other parameters are chosen as follows: all the dimensionful quantities are in units of \(1/\eta\), the background magnetic field is \(|\mathbf{B}^{\prime}_{0}|=5\), and the initial \(v_{\perp}(0)=b^{\prime}_{\perp}(0)\) are given a Fermi-Dirac shape \(v_{0}/[\exp(10\eta|k_{z}|-100)+1]\) with \(v_{0}=0.1\). It is evident from Fig. 2 that at early times, the CMVI drives both the cross helicity and the magnetic energy to grow exponentially. After \(\xi^{\prime}_{\omega}\) becomes smaller than \(1\), the system evolves slowly (quasi-stationarily) towards vanishing velocity and magnetic field due to the finite \(\Gamma\). In this way, the CMVI provides a fast dynamo mechanism by transferring the chirality of the constituent particles to the cross helicity of the system. 
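The relaxation towards \(\xi_{\omega}^{\prime}=1\) can also be illustrated with a deliberately crude toy model (our own sketch, not the authors' numerical setup): evolve a single transverse Fourier mode with the linearized equations (6)-(7) and let \(\xi_{\omega}^{\prime}\) decrease in proportion to the growth of the cross helicity, mimicking Eq. (17) with \(\mathcal{H}_{b}=\mathcal{H}_{v}=0\) and \(\Gamma=0\). The lumped feedback coefficient `alpha` is an arbitrary assumption.

```python
import numpy as np

# Toy single-mode model (illustrative parameters, not the paper's setup).
eta, B0p, kz = 1.0, 5.0, 0.05      # resistivity, B0' = B0/sqrt(rho), wave number along B0
alpha = 100.0                      # lumped chirality-feedback coefficient (assumption)
xi = 5.0                           # initial xi_omega' > 1  -> CMVI active

def rhs(y, xi):
    v, bp = y
    dv = 1j * B0p * kz * bp                                            # linearized Eq. (6)
    dbp = 1j * B0p * kz * v - eta * kz**2 * bp + eta * xi * kz**2 * v  # linearized Eq. (7)
    return np.array([dv, dbp])

y = np.array([0.1 + 0j, 0.1 + 0j])   # v(0) = b'(0)
dt, steps = 0.05, 40000
Hc = (y[0] * np.conj(y[1])).real      # single-mode proxy for the cross helicity
for _ in range(steps):
    # classical RK4 step for the mode equations at the current xi
    k1 = rhs(y, xi); k2 = rhs(y + 0.5 * dt * k1, xi)
    k3 = rhs(y + 0.5 * dt * k2, xi); k4 = rhs(y + dt * k3, xi)
    y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    # chirality depletion: d(xi)/dt proportional to -d(H_c)/dt (toy version of Eq. (17))
    Hc_new = (y[0] * np.conj(y[1])).real
    xi -= alpha * (Hc_new - Hc)
    Hc = Hc_new

print(f"final xi_omega' = {xi:.3f} (relaxes towards 1), final cross helicity = {Hc:.3f}")
```

In this toy the unstable branch grows, the cross helicity rises, and \(\xi_{\omega}^{\prime}\) settles at unity while the mode becomes a pure Alfven oscillation, qualitatively mirroring the behaviour described around Fig. 2.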
It is worth noting that such a CMVI-induced dynamo mechanism bears some analogy with the turbulent cross-helicity dynamo [53, 54], in which the turbulent electromotive force \(\langle\mathbf{v}\times\mathbf{b}\rangle\) gains a term \(\propto\omega\) due to the mean cross helicity in the plasma. This cross-helicity dynamo has been shown to play significant roles in geophysical and astrophysical plasmas [53, 54]. However, it is important to note that our CMVI-induced dynamo has a completely different origin from the cross-helicity dynamo, although they could act together in a turbulent chiral plasma. **Inclusion of chiral magnetic effect --** In the above discussion, for the purpose of transparency, we have intentionally excluded the CME from the electric current. Upon restoring the CME, the linearized chiral MHD equations, expressed in terms of the reduced variables, become \[\partial_{t}\mathbf{v}=\mathbf{B}^{\prime}_{0}\cdot\mathbf{\nabla}\mathbf{b}^{\prime}\, \tag{24}\] \[\partial_{t}\mathbf{b}^{\prime}=\mathbf{B}^{\prime}_{0}\cdot\mathbf{\nabla}\mathbf{v}+\eta\nabla^{2}\mathbf{b}^{\prime}+\eta\xi_{B}\mathbf{\nabla}\times\mathbf{b}^{\prime}-\eta\xi^{\prime}_{\omega}\nabla^{2}\mathbf{v}. \tag{25}\] The dispersion relations of the eigenmodes of these equations are given by \[\omega =\omega_{\pm}^{\chi}\equiv-\frac{i\eta}{2}(\mathbf{k}^{2}-\chi\xi_{B}|\mathbf{k}|) \tag{26}\] \[\pm\frac{1}{2}\sqrt{4(\mathbf{B}^{\prime}_{0}\cdot\mathbf{k})^{2}-\eta^{2}(\mathbf{k}^{2}-\chi\xi_{B}|\mathbf{k}|)^{2}-4i\eta\xi^{\prime}_{\omega}\mathbf{k}^{2}\mathbf{B}^{\prime}_{0}\cdot\mathbf{k}}\] (27) \[\approx\pm\mathbf{B}^{\prime}_{0}\cdot\mathbf{k}-i\frac{\eta}{2}\left(1\pm\xi^{\prime}_{\omega}\right)\mathbf{k}^{2}+i\chi\frac{1}{2}\eta\xi_{B}|\mathbf{k}|+O(|\mathbf{k}|^{3}), \tag{28}\] where \(\chi=\pm\) corresponds to the two different helicities of \(\mathbf{b}\). Therefore, we observe that, assuming \(\xi^{\prime}_{\omega},\xi_{B}>0\), when \(\xi^{\prime}_{\omega}>1\), there is always an unstable mode corresponding to \(\omega_{-}^{\chi=+}\). Even when \(\xi^{\prime}_{\omega}<1\), the usual chiral plasma instability is catalyzed in such a way that the modes with \(|\mathbf{k}|<\xi_{B}/(1-\xi^{\prime}_{\omega})\) are unstable, meaning that the unstable region in the wave number is enlarged from \(|\mathbf{k}|<\xi_{B}\) [17, 18] to \(|\mathbf{k}|<\xi_{B}/(1-\xi^{\prime}_{\omega})\). **Discussion --** To summarize, we have demonstrated that the presence of the CVE can induce a new type of plasma instability, the CMVI, in the presence of a background magnetic field. While the condition for the CMVI to occur, \(\xi^{\prime}_{\omega}>1\), is stringent, we have discussed that other mechanisms, such as turbulence-induced cross helicity [53, 54], may facilitate the onset of the CMVI. Additionally, the combined effects of the CVE and CME can broaden the kinematic region for the occurrence of the chiral plasma instability, thus leaving a trace of the CMVI. The CMVI can have interesting implications, e.g., it may lead to a new dynamo action and affect the evolution of the magnetic, cross, and kinetic helicities in a chiral plasma. Possible applications include the electromagnetic plasma in astrophysical objects, the primordial electroweak plasma in the early Universe, the quark-gluon plasma in heavy-ion collisions, and the electron plasma in Dirac and Weyl semimetals. **Acknowledgments --** This work is supported by the Natural Science Foundation of China (Grant No. 12147101, No. 12225502 and No. 
12075061), the National Key Research and Development Program of China (Grant No. 2022YFA1604900), and the Natural Science Foundation of Shanghai (Grant No. 20ZR1404100). ## Appendix A Derivation of the MHD equations For completeness, we provide a derivation of the chiral MHD equations (2) and (3) in this Appendix. A similar derivation for the conventional MHD equations can be found in textbooks such as [55]. In the one-fluid description of the chiral plasma, the equation of motion for flow velocity \(\mathbf{v}\) is given by (where \(P,\rho,n\), and \(\mathbf{j}\) are pressure, mass density, charge density, and electric current, respectively) \[\rho\left(\partial_{t}+\mathbf{v}\cdot\mathbf{\nabla}\right)\mathbf{v}=-\mathbf{ \nabla}P+n\mathbf{E}+\mathbf{j}\times\mathbf{B}, \tag{29}\] coupled with the Maxwell equations, \[\mathbf{\nabla}\cdot\mathbf{E}=n, \tag{30}\] \[\mathbf{\nabla}\cdot\mathbf{B}=0,\] (31) \[\mathbf{\nabla}\times\mathbf{B}=\partial_{t}\mathbf{E}+\mathbf{j},\] (32) \[\mathbf{\nabla}\times\mathbf{E}=-\partial_{t}\mathbf{B}, \tag{33}\] and the continuity equations for \(\rho\) and \(n\), \[\partial_{t}\rho+\mathbf{\nabla}\cdot(\rho\mathbf{v})=0, \tag{34}\] \[\partial_{t}n+\mathbf{\nabla}\cdot\mathbf{j}=0. \tag{35}\] The constitutive relation for \(\mathbf{j}\) is \[\mathbf{j}=\mathbf{j}_{f}+\mathbf{j}_{\text{ohm}}+\mathbf{j}_{B}+\mathbf{j}_{\omega}, \tag{36}\] with \(\mathbf{j}_{f}=n\mathbf{v}\) the free current, \(\mathbf{j}_{\text{ohm}}=\sigma(\mathbf{E}+\mathbf{v}\times\mathbf{B})\) the Ohmic current, \(\mathbf{j}_{B}=\xi_{\mathbf{B}}\mathbf{B}\) the CME current, and \(\mathbf{j}_{\omega}=\xi_{\omega}\omega\) the CVE current. We have omitted the viscous terms in Eq. (29) for the sake of simplicity. However, the effects of viscosity can be readily taken into account. From Eq. (33), we have \(|\mathbf{E}|/|\mathbf{B}|\sim L/\tau\equiv u_{0}\) with \(L,\tau\) and \(u_{0}\) the characteristic length, time, and velocity scales of the plasma. In the non-relativistic limit, \(u_{0}\ll 1\), and one finds \[\frac{|\partial_{t}\mathbf{E}|}{|\mathbf{\nabla}\times\mathbf{B}|}\sim u_{0 }^{2}\ll 1, \tag{37}\] \[\frac{|n\mathbf{E}|}{|\mathbf{j}\times\mathbf{B}|}\sim\frac{|\mathbf{\nabla}\cdot \mathbf{E}||\mathbf{E}|}{|(\mathbf{\nabla}\times\mathbf{B})\times\mathbf{B}|}\sim u_{0}^{2}\ll 1,\] (38) \[\frac{|\mathbf{j}_{f}|}{|\mathbf{\nabla}\times\mathbf{B}|}\sim\frac{|\mathbf{ \nabla}\cdot\mathbf{E}||\mathbf{v}|}{|\mathbf{\nabla}\times\mathbf{B}|}\sim u_{0}^{2}\ll 1. \tag{39}\] Therefore, we can eliminate \(n\mathbf{E}\) from Eq. (29), \(\partial_{t}\mathbf{E}\) from Eq. (32), and \(\mathbf{j}_{f}\) from Eq. (36). Consequently, the electric field \(\mathbf{E}\) is no longer a dynamical quantity and is determined by Eq. (36), \[\mathbf{E}=-\mathbf{v}\times\mathbf{B}+\eta(\mathbf{j}-\mathbf{j}_{B}-\mathbf{j}_{\omega}). \tag{40}\] Substituting Eq. (32) into Eq. (29) and \(\mathbf{E}\) into Eq. (33), we obtain Eqs. (2)-(3) in the main text.
2305.01888
Fairness in AI Systems: Mitigating gender bias from language-vision models
Our society is plagued by several biases, including racial biases, caste biases, and gender bias. As a matter of fact, several years ago, most of these notions were unheard of. These biases, passed down through generations and amplified along the way, have led to scenarios where they have taken the role of expected norms for certain groups in society. One notable example is gender bias. Whether we talk about the political world, lifestyle or corporate world, some generic differences are observed regarding the involvement of both groups. This differential distribution, being a part of the society at large, exhibits its presence in the recorded data as well. Machine learning is almost entirely dependent on the availability of data; and the idea of learning from data and making predictions assumes that data defines the expected behavior at large. Hence, with biased data the resulting models are corrupted with those inherent biases too; and with the current popularity of ML in products, this can result in a huge obstacle in the path of equality and justice. This work studies and attempts to alleviate gender bias issues in language-vision models, particularly for the task of image captioning. We study the extent of the impact of gender bias in existing datasets and propose a methodology to mitigate its impact in caption-based language-vision models.
Lavisha Aggarwal, Shruti Bhargava
2023-05-03T04:33:44Z
http://arxiv.org/abs/2305.01888v1
# Fairness in AI Systems: Mitigating gender bias from language-vision models ###### Abstract Our society is plagued by several biases, including racial biases, caste biases, and gender bias. As a matter of fact, several years ago, most of these notions were unheard of. These are human creations that passed through generations, with much amplification leading to the scenario wherein several biases have taken the role of expected norms or required laws by certain groups in the society. One notable example is gender bias. The notions about the role of a man and that of a woman have taken different forms across the globe. Whether we talk about the corporate world, political scenarios or daily lifestyle, some generic differences are observed regarding the involvement of both groups. This differential distribution, being a part of the society at large, exhibits its presence even in the recorded data. Now, the field of machine learning is completely dependent on the available data. The idea of learning from data and making predictions based on that assumes that the data defines the expected behaviour at large. This is a huge obstacle in the path for creating a world with equality and justice. This work studies and attempts to alleviate the gender bias issues from the task of image captioning. ## 2 Motivation and Related Work The motivation for this project stems from the recent paper T. et al. (2016) highlighting the rampant gender bias severely infecting most of the present-day machine learning and natural-language-based models. They experimented with word analogies and found that due to the inherent bias present in the training data, relations were developed between certain kinds of activities or objects and gender words (_'man'-'doctor'_, _'woman'-'homemaker'_, etc.), leading to incorrect inferences. It is observed that this gender bias in datasets is a demonstration of societal behaviours, owing to the skewed data distributions in existing datasets and also to the prejudices of the human annotators responsible for labelling the datasets [Selvaraju et al. (2016)][Veit et al. (2017)][van Miltenburg (2016)][Antol et al. (2015)]. These biased data result in biased models (for example the Word2Vec word embeddings) which are further used for more advanced models and tasks Kafle and Kanan (2017) Zhang et al. (2015). ### Gender Bias via human annotation We manually analysed the MS-COCO dataset and found some disturbing ground-truth annotations for the captions. For several images wherein there is a partially visible human who cannot be distinguished as a man or a woman, most human annotators apply their bias about the expected (or more common) gender for the activity in the image. For instance, a human on a bike is labelled as 'man', since driving a bike is considered uncommon for women. Instead of accepting the human as a gender-neutral person, annotators tend to use their own bias. Following are selected examples where all human annotators labelled the human as male even though the gender is not clear from the image. There are many such instances, though we do not report them here due to space constraints. The dataset ground-truth captions for these images are mentioned below each of them. The surprising point to note is that, for each of the images, all 4 human annotators labelled the person as 'man' when it can be clearly seen that the gender is not distinguishable from the image. 
### Gender Bias in the existing datasets In the figure below, we observe the gender bias present in some of the popular datasets, namely MS-COCO and imSitu Verb. It reflects that words like 'braiding', 'shopping', 'washing' occur much more frequently with females than males in the data, whereas words like 'repairing' and 'coaching' have been seen much more with males. Zhao et al. (2017) Burns et al. (2018) ### Gender Bias in some of the state-of-the-art models Next, we analyse how this gender bias leads to faulty performance of systems relying on ML models. The figure below demonstrates Google Translate translating a text from the gender-neutral language Turkish to English, a gender-aware language. It shows that the model assigns a male gender to 'doctor' and a female gender to 'nurse' when translating an initially gender-neutral sentence from Turkish. In another trained model for image captioning, we observe below that the model labels the subject as 'man' while giving the maximum attention to the computer screen. This again signifies the relation between a computer and a man learnt inherently by the model from the training data. The image clearly contains a woman, but the model instead uses the prior associated with computers to make the gender decision [Burns et al. (2018)]. ## 3 Our Approach To understand the gender bias present in image captioning tasks, we decided to analyse the model of the paper Show, Attend and Tell [Xu et al. (2015)], a popular recent work on image captioning. We first conducted experiments to determine how much and in what form gender bias is present in the dataset and the model. This led us to realize that often the incorrect captions are a result of the relations learnt between certain activity/object words and gender words. Since the dataset has a biased distribution of gender-activity pairs, the model learns this bias. As an approach to mitigate the effect of biased data distribution, we thought about techniques to make the data unbiased in order to prevent the model from learning any unwarranted relations. We thought that this could be done by evening out the samples for males and females for each activity. However, owing to the innumerable activities, the sparse occurrence of most scenes and activities, and certain rare activities having instances with only one gender, we realized that this would not be feasible. Moreover, even if possible, this strategy of balancing the data would be very restrictive and require manual analysis and correction for every new dataset. So we decided to look for another direction. Our goal was essentially to prevent the model from making strong priors about the activity based on the gender observed or vice versa. This was equivalent to asking the model to be gender neutral and only observe the image while predicting the activity in the image, and to only look at the person to predict the subject's gender, rather than simply relating the two based on priors. Admittedly, relational priors should help strengthen a guess based on the image and, given that the test distribution is the same as the training distribution, should give better results. However, preventing unwarranted conclusions from these priors (conclusions that neglect the image) would be very challenging in this way. Also, we believe that for a given test image in real-time, the model should not correlate such ideas, since women and men should be considered equally capable of doing most activities, and thus be identified irrespective of the activity. 
Hence, we decided to decouple these two decisions. So we decided to split our goal into two tasks, drawing from Misra et al. (2016). One is a gender-neutral image captioning task, and the other is a gender identification task on the image. For the first goal, we make the Show, Attend and Tell model gender neutral by removing all instances of gender from the training data. For this, we replace gender words like man, woman, men, lady, guy, etc. with person or people such that no gender-specific relations are learned. Thereafter, we train the Gender Agnostic Show, Attend and Tell network. For the second goal, we used an existing popular model, trained to identify the gender of the humans present in the image. The results from both these tasks are then combined to obtain the final captions. Our proposed overall model is named Show, Attend and Identify (SAI), since it identifies the gender separately after obtaining a caption for the image. The above figure shows the top branch as the gender-agnostic Show, Attend and Tell model and the bottom branch specialised in gender identification. We replaced the gender-neutral words (person, people) with gender words (man, woman, men, women) based on the output of the Gender Identity net. ## 4 Datasets Our work is primarily on the MSCOCO dataset. To correctly evaluate the performance of our model, we split the data into subsets. The following 3 subsets are used for experiments and evaluation of our model and comparison with the existing Show, Attend and Tell model Xu et al. (2015). * **MSCOCO Gender confident dataset**: Because of the rampant human annotator bias infecting the data, there was a severe need for a dataset which is accurate and for which the labels are consistent. Since the test data is composed of a total of 40K instances and we wanted to test on reasonably sized data, manual selection was not possible. Hence, drawing from Burns et al. (2018), we relied on the fact that for a confident presence of a male or female in the image, all captions for that image should have a consistent mention of the gender. Hence, we picked up those image-caption instances for which all the captions had the same gender word (which can be any of man, woman, men, women, lady, ladies, guy, guys, etc.). This gave us a subset of 2036 images. * **MSCOCO Human dataset**: We wanted to analyse the performance of our model to test the quality of captions generated for human-activity and human-object pairs and how that gets affected. Hence, we filtered those images for which any of the captions contain a human identifier word (which can be any of person, people, player, skier, snowboarder, man, woman, men, women, lady, ladies, guy, guys, etc.). This is done by assuming that if a human being is present, at least one of the captions would mention it. Also, if any of the captions mentions a human identity word, a human being must be present in that image. This gave us a dataset of around 19,051 images. * **MSCOCO Nature dataset**: We wanted to analyse the performance of our model on images without the presence of any human beings to determine the performance and the effects, if any. Hence, we filtered out images where no human being is present, and we relied on the fact that if none of the captions for an image contain a human label (which can be any of person, people, player, skier, snowboarder, man, woman, men, women, lady, ladies, guy, guys, etc.) then the image does not contain any human being. 
This gave us a subset of around 21,453 images. ## 5 Experiments and Results For the gender identification task, we use Levi and Hassner (2015) in our experiments. We evaluated our model on the three datasets created and described above separately in order to observe the performance on each of them. Since our gender-agnostic network provides predictions with person as the output, we could compare the overall quality of the captions predicted by the model by comparing the gender-neutral predictions. Hence, we also report results for this. For Show, Attend and Tell, the predictions were made gender neutral (by replacing the words man, woman, girl, boy, men, etc. with person or people). Ground-truth captions were also manually made gender neutral by replacing gender-aware words with gender-neutral ones. SAT is the Show, Attend and Tell model. SAT-N denotes the comparison when the predicted captions of SAT are manually made gender neutral and compared with gender-neutral ground-truth captions. SAI (Show, Attend and Identify) is our complete model. SAI-N is the gender-neutral component of our model. The following tables report the BLEU, METEOR, ROUGE\({}_{L}\) and CIDEr scores obtained on the different datasets.

Table 1: Performance on the confident dataset, both in gender-aware and gender-neutral settings (MSCOCO Gender Confident)

| | Bleu1 | Bleu2 | Bleu3 | Bleu4 | METEOR | ROUGE_L | CIDEr |
|---|---|---|---|---|---|---|---|
| SAT | 0.59 | 0.38 | 0.25 | 0.17 | 0.20 | 0.49 | **0.44** |
| SAI | 0.59 | 0.38 | 0.24 | 0.16 | 0.19 | 0.49 | 0.43 |
| SAT-N | 0.62 | 0.42 | 0.28 | **0.20** | **0.22** | 0.52 | **0.44** |
| SAI-N | **0.64** | **0.43** | **0.29** | 0.19 | **0.22** | **0.53** | **0.44** |

Table 2: Performance on images with no humans (MSCOCO Nature)

| | Bleu1 | Bleu2 | Bleu3 | Bleu4 | METEOR | ROUGE_L | CIDEr |
|---|---|---|---|---|---|---|---|
| SAT | 0.62 | **0.41** | **0.27** | **0.18** | 0.20 | 0.49 | **0.55** |
| SAI | 0.62 | 0.40 | 0.26 | 0.17 | 0.20 | 0.49 | 0.54 |

Table 3: Performance on images with humans (MSCOCO Human)

| | Bleu1 | Bleu2 | Bleu3 | Bleu4 | METEOR | ROUGE_L | CIDEr |
|---|---|---|---|---|---|---|---|
| SAT-N | 0.60 | 0.38 | **0.25** | **0.17** | 0.20 | **0.48** | 0.47 |
| SAI-N | **0.61** | 0.38 | 0.24 | 0.16 | 0.20 | 0.47 | 0.47 |

A major limiting factor for the performance of our model is the accuracy of the gender identity CNN. It is a popular model with one of the best results reported. However, surprisingly, the model had a significantly low accuracy of around 50% on the MSCOCO Confident dataset. This poses a major roadblock. The above example shows qualitative results of our model and the Show, Attend and Tell model. ## 6 Conclusion and Discussion This project has demonstrated that gender bias is clearly prevalent in existing datasets and models. We propose a technique to do away with the bias in image captioning. Though our work is limited to images where all humans need to belong to the same gender, it demonstrates that developing a gender-neutral model allows us to get rid of the gender-biased relations. The results show that this leads to an improvement in the overall caption quality, while the performance on non-human images is not affected. 
Also, we noticed that on removing man/woman from the training data, our gender-neutral model predictions saw a sudden emergence of the words 'male' and 'female'. This depicts the close relation between the word embeddings for these words. Overall, we believe that the approach to obtaining unbiased models involves the decoupling procedure, which keeps gender specifications, and thus unwarranted relations, from entering the model. ## 7 Problems and Future Work We started this project with the task of removing gender bias from VQA models; however, VQA at present remains one of the most challenging problems, having been called the 'visual Turing test' by some. Most VQA models at present are too naive, making the task all the more challenging. Hence, we worked on the removal of gender bias from image captioning tasks, in which decent results have been obtained by some of the state-of-the-art methods including Show, Attend and Tell, which we use as our basenet. One could experiment with the use of debiased word embeddings to further remove gender bias from the model. One could learn a gender identification network which not only identifies male/female but also has a third category of person, which should be chosen in cases when the discrimination is not possible. More work on the gender network could lead to improvements for our overall model.
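To make the neutralize-then-reidentify pipeline of Section 3 and the caption-based filtering of Section 4 concrete, here is a minimal sketch (the word lists, function names, and data layout are illustrative assumptions on our part, not the authors' released code):

```python
import re

# Assumed word lists; the paper's full lists also include lady, ladies, guy, guys, player, etc.
MALE = {"man": "person", "men": "people", "boy": "person", "guy": "person"}
FEMALE = {"woman": "person", "women": "people", "girl": "person", "lady": "person"}
GENDER_WORDS = set(MALE) | set(FEMALE)

def neutralize(caption):
    """Replace gendered words with person/people so the captioner stays gender-agnostic."""
    def repl(match):
        return {**MALE, **FEMALE}[match.group(0).lower()]
    pattern = r"\b(" + "|".join(GENDER_WORDS) + r")\b"
    return re.sub(pattern, repl, caption, flags=re.IGNORECASE)

def is_gender_confident(captions):
    """Every caption mentions exactly one gender, and it is the same across all captions."""
    genders = []
    for cap in captions:
        words = set(re.findall(r"[a-z]+", cap.lower()))
        male, female = bool(words & set(MALE)), bool(words & set(FEMALE))
        if male == female:          # none or both -> not a confident image
            return False
        genders.append("male" if male else "female")
    return len(set(genders)) == 1

def reidentify(neutral_caption, predicted_gender):
    """Swap person/people back, using the separate gender-identification net's output.
    Mirrors the paper's stated limitation: all humans in the image share one gender."""
    single = "man" if predicted_gender == "male" else "woman"
    plural = "men" if predicted_gender == "male" else "women"
    return neutral_caption.replace("people", plural).replace("person", single)

caps = ["A man riding a bike", "A guy on a motorcycle", "A man rides down the street"]
print(is_gender_confident(caps))                  # True: every caption is consistently male
print(neutralize(caps[0]))                        # 'A person riding a bike'
print(reidentify(neutralize(caps[0]), "female"))  # 'A woman riding a bike'
```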
2307.01029
Advancing O-RAN to Facilitate Intelligence in V2X
Vehicular communications integrated with the Radio Access Network (RAN) are envisioned as a breakthrough application for the 6th generation (6G) cellular systems. However, traditional RANs lack the flexibility to enable sophisticated control mechanisms that are demanded by the strict performance requirements of the vehicle-to-everything (V2X) environment. In contrast, the features of Open RAN (O-RAN) can be exploited to support advanced use cases, as its core paradigms represent an ideal framework for orchestrating vehicular communication. Although the high potential stemming from their integration can be easily seen and recognized, the effective combination of the two ecosystems is an open issue. Conceptual and architectural advances are required for O-RAN to be capable of facilitating network intelligence in V2X. This article pioneers this integration, proposing strategies for seamlessly incorporating V2X control within the O-RAN ecosystem. First, an enabling architecture that tightly integrates V2X and O-RAN is proposed and discussed. Then, a set of key V2X challenges is identified, and O-RAN-based solutions are proposed, paired with extensive numerical analysis to support their effectiveness. Results showcase the superior performance of such an approach in terms of raw throughput, network resilience, and control overhead. Finally, these results validate the proposed enabling architecture and confirm the potential of O-RAN in support of V2X communications.
Eugenio Moro, Francesco Linsalata, Maurizio Magarini, Umberto Spagnolini, Antonio Capone
2023-07-03T14:01:24Z
http://arxiv.org/abs/2307.01029v2
# Advancing O-RAN to Facilitate Intelligence in V2X ###### Abstract Vehicular communications at high frequencies are envisioned to be a breakthrough application for the 6th generation (6G) cellular systems. Traditional Radio Access Networks (RANs) lack the flexibility to enable sophisticated control mechanisms that are demanded by the strict performance requirements of the dynamic vehicular environment. In contrast, the features of Open RAN (O-RAN) can be exploited to support advanced use cases. Indeed, the emerging paradigm of O-RAN represents an ideal framework for the orchestration of vehicular communication. Although the high potential stemming from their integration can be easily seen and recognized, the effective combination of the two ecosystems is an open issue. This article pioneers this integration, proposing strategies for seamlessly incorporating vehicle-to-everything (V2X) control within the O-RAN ecosystem. We propose and discuss an enabling architecture that tightly integrates V2X and O-RAN. In the proposed solution, an O-RAN-based control plane operates at low frequencies to achieve reliable and efficient connectivity among autonomous vehicles at higher frequencies. The technological feasibility of this integrated architecture is investigated. A detailed case study is presented and analyzed to demonstrate the design of an xApp, showcasing a practical example of an O-RAN solution for a specific V2X scenario. _Index terms_ -- O-RAN, V2X, 6G, dynamic control. ## I Introduction The current RAN paradigm does not easily support _network intelligence_ due to network components - mainly the Base Station (BS) - being operated as monolithic and inflexible black-boxes [1]. To address this limitation and bridge the gap between real-world RAN deployments and cutting-edge network intelligence, a consortium of vendors, operators, and research institutions have proposed O-RAN as an architectural overhaul of RANs. O-RAN is a disaggregated and open architecture that separates the hardware and software components of the RAN, enabling interoperability, modularity, and flexibility [1]. RAN Intelligent Controllers (RICs) are one of the major innovations of O-RAN: softwarized control loops that enable data collection and dynamic control implemented as micro-services over large-scale and heterogeneous RAN deployments. Specialized O-RAN-based control solutions have been successfully applied to optimize different aspects of 5th generation (5G) cellular systems, confirming the disruptive effects of this architecture. However, O-RAN still supports only the most traditional of 5G deployments, where the network components are only BSs and User Equipments (UEs). There is a case to be made for advancing O-RAN to support emerging 6G use cases, where network intelligence plays an even stronger role [2]. In this work, we argue that Connected and Autonomous Vehicles (CAVs) exploiting high data-rate links represent one of these fundamental use cases, and we propose our vision on the matter. Vehicular communication is likely to be a key driving force in the future 6G wireless networks, as it will enable advanced vehicular mobility paradigms such as autonomous driving, coordinated sensing, and enhanced navigation. At the core of vehicular communications lies V2X, a communication technology that facilitates the interconnection among vehicles and infrastructure. V2X realizes direct and multi-hop links among CAVs, namely vehicle-to-vehicle (V2V) or Sidelink communications. 
Direct vehicular links reduce the network infrastructure involvement, facilitating communication even in out-of-coverage areas and considerably decreasing latency [3]. Moreover, since most of the CAV use cases require high data rates, V2V will make use of higher carrier frequencies, such as millimeter wave (mmWave) - currently standardized in 5G as Frequency Range 2 (FR2) - or sub-THz, with the introduction of beam-based highly directive communication to counteract the considerable pathloss that characterizes propagation in these bands [3, 4]. The unique challenges posed by the CAV scenario, the dynamic nature of the vehicular environment, the harsh propagation conditions at high frequencies, as well as the hybrid nature of V2X necessitate the development of sophisticated control mechanisms to ensure the success of this disruptive technology. Albeit currently limited to supporting traditional RAN deployments, O-RAN represents the ideal candidate to enable management and orchestration in the challenging scenario mentioned above. This article focuses on integrating a next-generation O-RAN with V2X communications. It addresses the challenges and opportunities associated with this integration towards the ultimate goal of enhancing the performance, reliability, and adaptability of RAN-based vehicular communications. To this end, we identify fundamental key challenges of V2X, elaborating on how these can be successfully addressed through O-RAN-based solutions to present some potential research directions yet to be explored in the literature. In addition, this article proposes a next-generation O-RAN architecture that, for the first time, embodies a tight integration of V2X within the O-RAN concepts. Through a proper extension of the O-RAN interfaces, we show how it is possible to support the additional network components of V2X and let them be managed by the O-RAN control loops. Most notably, we allow the communication stack of connected vehicles to be part of the entire O-RAN architecture. The result is a comprehensive vehicular communication solution where O-RAN RICs act as the orchestrator of a hybrid network where vehicles are connected both to the RAN and among themselves. Reliable and pervasive low-frequency RANs (i.e., incumbent 5G Frequency Range 1 (FR1) deployments) are used to support a control plane where O-RAN messages can be exchanged between vehicles and the RIC. This, in turn, will manage high-frequency V2V links to effectively create a high-performance data plane, namely a Vehicular Ad-hoc Network (VANET), to cater to the communication needs of autonomous driving, infotainment, and other vehicular communication applications. Finally, a case study is presented, where we use the proposed architecture to demonstrate the impact of a next-generation O-RAN in addressing a specific V2X challenge. In particular, we design an O-RAN micro-service (i.e., an xApp) and test it in a simulated environment to showcase the capabilities and benefits of leveraging O-RAN in solving real-world V2X problems. Numerical results in terms of network connectivity and control overhead are provided to demonstrate the superior performance of the xApp-controlled V2X network compared to the unmanaged solution. At the same time, we positively confirm the feasibility of the proposed next-generation architecture. ## II An O-RAN approach to the V2X challenges The O-RAN architecture features the possibility of applying centralized control to the RAN through the so-called RICs, as exemplified in Fig. 1. 
These functional components can implement arbitrary data collection and control logic by communicating with the network infrastructure (i.e., BSs) thanks to open and standardized interfaces. In particular, O-RAN introduced a Near Real-time RAN Intelligent Controller (near-RT RIC), which operates on a \(1\,\mathrm{ms}\) to \(1\,\mathrm{s}\) time scale and is thus capable of operating under stringent V2X latency requirements. Arbitrary data collection and control mechanisms are then implemented through the so-called xApps, which are network microservices that run on the primitives exposed by the near-RT RIC. Additionally, O-RAN has also standardized the Non-Real-Time RIC (non-RT RIC), which is a centralized control loop operating on a slower time scale, but with broader network visibility. As such, it enables large-scale orchestration and policing mechanisms implemented as network applications called rApps. When applied to V2X, these two control loops can potentially unlock significant optimization and orchestration gains with respect to the current architecture. We now discuss a key set of fundamental V2X challenges, where O-RAN-based solutions are expected to have a disruptive impact. ### _Beam selection and management at mmWave_ In the context of V2X, mmWave provides the communication capabilities required to support most of the core concepts of Intelligent Transportation Systems (ITS) [5]. To establish effective directional communication at mmWave, beams have to be aligned both at the transmitter and the receiver, as shown in Fig. 1(a). These critical operations are costly in terms of beam training overhead and become even more challenging due to the relative mobility of the vehicles [6], which requires tight beam tracking. As such, traditional beam management mechanisms are considered inadequate and V2X-tailored solutions are required instead. Data-driven approaches have proven to be effective in providing fast beam alignment and tracking for vehicular communication. Locally sourced data coming from on-board sensors can assist the vehicle in autonomously identifying ideal beam direction candidates. However, centralized solutions based on a fusion of reported vehicle positions and planned path, blockage prediction, urban layout information, and past successful beam alignment show the potential of orchestrating large-scale beam alignment to improve performance and reduce interference [7]. While promising, such sophisticated solutions require access to a large amount of fresh network data coming from heterogeneous sources, which is hardly practical for the traditional RAN architecture. With the capability of abstracting the physical equipment idiosyncrasies and enabling large-scale data collection, O-RAN represents an ideal enabler for these solutions. By tapping into the wealth of up-to-date data that a near-RT RIC can expose, an xApp could effectively host a well-informed beamforming management function based on arbitrary algorithms, i.e., Machine Learning (ML)-based, that operate down to the ms timescale. (Fig. 1: V2X challenges that can benefit from the intelligent and programmable control of the O-RAN architecture: a) optimal beam selection and b) augmented network connectivity.) Concurrently, an rApp running in the non-RT RIC can update beam management policies to fine-tune the overall objective of the beam management solution according to specific policies. 
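As a toy illustration of the kind of geometric beam pre-selection such an xApp-hosted function might perform (purely a sketch: the codebook, reported positions, and selection rule are our assumptions, and no actual O-RAN or RIC API is invoked), one can rank the codebook beams by their alignment with the reported transmitter-receiver direction; the rApp-level policy knobs discussed next would then amount to swapping in a different codebook.

```python
import numpy as np

def best_beam(tx_pos, rx_pos, codebook_deg):
    """Rank codebook beams (azimuth pointing angles, degrees) by alignment with the TX->RX direction."""
    d = np.asarray(rx_pos, float) - np.asarray(tx_pos, float)
    target = np.degrees(np.arctan2(d[1], d[0]))                      # geometric pointing angle
    err = np.abs((np.asarray(codebook_deg) - target + 180.0) % 360.0 - 180.0)  # circular distance
    order = np.argsort(err)
    return order[0], err[order[0]]

# Assumed example: an 8-beam azimuth codebook and two reported vehicle positions.
codebook = np.arange(0.0, 360.0, 45.0)
beam, misalignment = best_beam(tx_pos=(0, 0), rx_pos=(30, 35), codebook_deg=codebook)
print(f"beam index {beam} ({codebook[beam]:.0f} deg), misalignment {misalignment:.1f} deg")
```

An rApp policy change (e.g., a denser codebook of narrower beams) would simply replace the `codebook` array used by this pre-selection step.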
In particular, the rApp can select a particular beam-width and codebook size to further reduce alignment overhead (fewer larger beams) or reduce interference and increase single-link performance (more but narrower candidate beams). Overall, the potential of O-RAN-based beam management has been proven in the most general settings [8]. Nonetheless, there is still a lack of V2X-dedicated studies on this matter where the capabilities of O-RAN are applied to beam management and massive Multiple Input, Multiple Output (MIMO) for vehicular communications. ### _Radio resource management._ ITS are characterized by a large set of diverse services that present extremely challenging communication requirements. Efficiently managing the scarce radio resources in the RAN represents a critical challenge. 5G foresees the use of network slices, which can be briefly described as bundles of virtualized resources dedicated to providing specific connectivity services to a subset of network users. Owing to the possibility of activating flexible and service-tailored slices, network slicing is considered a natural enabler of the diverse V2X use cases [9]. However, physical resources still need to be efficiently allocated so that slices can support the communication requirements, and slice isolation needs to be guaranteed. Static resource partitioning is hardly viable due to the fast-changing state of the wireless network. Dynamic slice resource allocation based on continuous monitoring of the network parameters is to be preferred. This is all the more true for V2X due to the aforementioned challenging environment [3, 10]. Thanks to its extensive data collection and control capabilities, O-RAN is considered the fundamental enabler of slicing resource management [11]. Nonetheless, the problem of practically enabling slicing for V2X in the real world has yet to be satisfactorily addressed. In this case, an O-RAN-based approach can fuse network measurements and external information (i.e., vehicle localization and planned path) to detect any potential criticality, reallocate slice resources accordingly, and allow for more efficient spectrum use. For instance, an xApp could monitor vehicular traffic and proactively allocate slice resources in those cells that will soon be subject to increased vehicular activity. If such resources are unavailable, xApps can unload the receiving cells by triggering handovers, reducing foreign slices' allotments, or disconnecting some users in the extreme. Additionally, by dynamically controlling Semi-persistent Scheduling (SPS) and Configured Grant (CG), the xApp can guarantee resource-efficient, low-latency and reliable communications at both the slice and single-vehicle levels, ultimately providing isolation for safety-critical services. Given the complexity of the problem, ML-based approaches are likely to be required. In this case, an rApp could monitor and fine-tune the mechanism put in place by the xApps and build or retrain appropriate ML models, further adapting the V2X slicing management to long-term environmental variations. The problem of resource allocation is also relevant for direct V2V connections. According to the 3rd Generation Partnership Project (3GPP) standard, a central entity (i.e., a BS or a Road Side Unit (RSU)) is expected to allocate the radio resources [10]. This mechanism is hindered by the limited perception of the central entity with respect to each V2V link condition and traffic requirement [12]. 
In this context, an xApp could gather data about the vehicle's position and mobility, as well as channel status and interference profile. This information can be processed to adapt the allocation strategies to the fast-varying vehicular environment. ### _Enhanced Vehicle-to-vehicle connectivity._ Fundamental ITS services such as cooperative awareness, augmented reality, and coordinated autonomous driving require extensive data exchange among CAVs that are in close proximity to each other [13]. However, relying on base stations to forward the entirety of this traffic is impractical due to inefficiency and increased burden on the traditional RAN infrastructure. High-frequency Sidelink communications enabled by 5G New Radio (NR) are thus inevitable for reliable low latency and high throughput V2V links. The challenges presented in the previous paragraphs interest V2V communications as well, making a case for addressing them through an O-RAN approach. Additionally, the VANET created through the activation of direct links needs to be properly managed in order to support the ITS services [13]. The high mobility of the vehicles and the harsh mmWave propagation creates a challenging twist on the traditional problems of Ad-hoc networks, which include link selection, network graph and routing optimization, and congestion control, as shown in Fig. 1b. By tapping into the control and monitoring capabilities of the O-RAN architecture, an xApp could select which V2V links to activate according to the expected channel quality and probability of blockage. The overall link selection strategy could optimize the underlying VANET graph to meet different objectives. For instance, different superimposed graphs - low latency graphs prioritizing short paths or high throughput graphs prioritizing high-quality links - can be precomputed and dynamically activated according to the instantaneous communication needs or the policies dictated by an rApp. ### _Programmable and up-to-date V2X digital twin._ The 6G V2X communications will exploit the cooperation among CAVs to augment environment perception and to enable the creation of a Digital Twin (DT) of the surrounding environments [14]. To obtain an accurate real-time digital reproduction of the physical environment, the envisioned digital twin-enabled V2X system has to use high-definition 3D maps and combine multi-modal sensory data from several vehicles' onboard sensor data, as well as a detailed description of the communication network state. The O-RAN architecture is well-positioned to source the network information required to build high-fidelity V2X DTs, reducing the amount of data that the network nodes should manage and exchange. At the same time, O-RAN applications can exploit the DT itself to run inference on the overall V2X scenario without causing communication overhead with the infrastructure. In particular, xApps can exploit the DT to obtain reasonably precise information on current and future vehicle positions without directly interrogating them. Proactive and optimized traffic forecasting capabilities from the V2X DT can also be exploited. rApps can retrain ML models on the virtual V2X environment recreated by the DT to ensure that the xApp data-driven approaches always employ up-to-date models. ## III A next-generation O-RAN architecture for V2X In the previous section, we have detailed how some relevant V2X challenges can be successfully addressed by exploiting the network programmability enabled by O-RAN. 
However, the O-RAN specifications are currently designed around traditional access networks. Due to the peculiarities of the V2X environment, some extensions to the O-RAN architecture are required such that all of these solutions can be practically realized. We now focus on this matter, proposing our vision of an enabling architecture where O-RAN and V2X are integrated to unlock the aforementioned opportunities. As shown in Fig. 2 for the case of using 5G as the Radio Access Technology (RAT), a typical O-RAN deployment includes a non-RT RIC and a near-RT RIC embodied as software components in the Edge Computing Cluster (ECC) [1]. A DT could also be deployed inside the same ECC, allowing for undisturbed data exchanges with the RICs. O-RAN apps running on top of both RICs communicate with the network infrastructure through open interfaces: the E2 interface for the near-RT RIC and the O1 and O2 interfaces for the non-RT RIC. In light of this, creating an O-RAN-empowered V2X requires all the V2X devices to be reachable by the O-RAN RICs through these interfaces. BSs are normally equipped with interface terminations to enable data collection and control, as shown in Fig. 2; thus, no modifications are required from the communication standpoint. However, the interface definitions will likely require extensions to support the specifics of data collection and control required by vehicular use cases. RSUs are currently not supported by O-RAN specifications. Nonetheless, integrating the O1, O2, and E2 terminations in RSUs would be a relatively straightforward operation, as the communication between the devices and the RICs in the ECC could take place by using the already existing RSU control plane. As previously mentioned, proper extensions to the O-RAN interface definitions will enable RSUs to be subjected to data collection and control. On the other hand, direct communication between the RICs and the CAVs proves more challenging. However, allowing the NR Sidelink stack of CAVs to be centrally controlled opens the opportunity of addressing a key issue in V2X networks: orchestrating the V2V communication. ### _O-RAN-based control plane for V2V_ In our integrated architecture proposition, we envision a VANET which still retains its decentralized nature, but is supported by a next-generation O-RAN to enhance its performance to the point of supporting the stringent requirements of the safety-critical ITS services. As Fig. 2 shows in the exemplary scenario on the left, the data plane of the V2V ad-hoc network is represented by direct NR sidelink mmWave connections established among CAVs, offering high throughput without occupying BS resources. For the control plane, below-6 GHz (sub-6GHz) links are employed, allowing for reliable and efficient communication between vehicles and the centralized controller through the existing RAN infrastructure. The use of sub-6GHz frequencies in the control plane offers wider coverage and better penetration capabilities, leveraging the ubiquitous coverage of modern cellular network deployments.

Fig. 2: Next-generation O-RAN architecture details.

The role of this out-of-band control plane is to relay O-RAN messages between the RICs and the interface terminations of the CAVs1. According to this architecture, a CAV would require a 5G FR1 UE to connect to the 5G BS and an FR2 NR Sidelink UE to connect with other vehicles.
After an FR1 connection is established with the BS, dedicated radio bearers for the O-RAN-based management plane are established. In particular, at least one is required to create a GPRS Tunneling Protocol (GTP) tunnel with a User Plane Function (UPF) co-located with the RIC in the ECC. As Fig. 2 shows in detail, the GTP tunnel can be then used to transparently connect the near-RT RIC and the E2 termination in the CAV. Thanks to the capability of GTP tunnels of maintaining IP endpoint connectivity through handovers, the high mobility of CAV is not expected to disrupt the proposed control plane. In other terms, the burden of managing the mobility of E2 terminations is left to the 5G connection, while O-RAN microservices obtain a reliable connection with the CAVs as they navigate through the coverage area. Footnote 1: In Fig. 2, only the E2 termination is included in CAVs for the sake of simplicity. ### _Technological feasibility_ The proposed architecture is feasible from a technological realization standpoint, with minimal modifications required. Integrating a 5G UE and a Sidelink UE into each vehicle allows for the establishment of both the FR1 connection with the 5G BS and the FR2 NR Sidelink connections with other vehicles. This integration can be achieved without significant challenges, as vehicles generally have flexible energy consumption constraints, making it feasible to accommodate the necessary communication modules. Moreover, the proposed architecture does not require substantial modifications to the existing 5G stack, as it relies on standard 5G communication modes. However, although realizable, the architecture's effectiveness and performance must be thoroughly analyzed. Factors such as latency and control plane overhead (i.e., BS resource utilization) must be carefully considered to ensure the architecture's viability. Such analysis should be conducted on a case-by-case basis, representing another open question in the context of O-RAN for V2X. In the following section, we conduct a preliminary analysis of the effectiveness and viability of addressing a challenging V2V goal through the proposed O-RAN-enabled architecture. ## IV O-RAN for V2V: a case study We conduct a case study based on a typical vehicular communication scenario where multiple CAVs traverse a busy and blind urban intersection. The scenario is compatible with the architecture proposed in the previous section. ### _Problem definition_ CAVs traversing the urban intersection in the scenario require uninterrupted V2V connectivity to exchange safety-critical information throughout the entire navigation area. However, due to the nature of mmWave propagation, V2V links are often interrupted by blockages caused by buildings, urban obstacles, and vehicles themselves. In case of Line of Sight (LoS) obstruction, RSUs and other CAVs can act as relays to maintain connectivity between two vehicles. However, a central coordinator is required to optimally chose the path and set up the new connection. ### _O-RAN solution_ To address the problem defined above, we propose an xApp that optimally selects relays (namely RSUs or other CAVs) among those available to establish a multi-hop path between two vehicles whose direct link is in a Non-Line of Sight (NLoS) condition. Fig. 3 shows how such an xApp can be implemented by detailing the exchange of messages between the involved components. The exchange starts with a V2V link failure report originating from the CAVs. 
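Purely to fix ideas (the field names below are our assumptions, not an O-RAN E2 message definition), the failure report emitted by the CAV-side termination could carry little more than the identities of the two endpoints and the reporter's latest kinematic state:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class V2VLinkFailureReport:
    """Hypothetical payload a CAV sends over the sub-6GHz control plane
    when an NR Sidelink mmWave link is lost (e.g., due to LoS blockage)."""
    reporter_id: str        # CAV detecting the failure
    peer_id: str            # CAV at the other end of the broken link
    position_m: tuple       # (x, y) of the reporter, local frame
    speed_mps: float
    heading_deg: float
    last_snr_db: float      # SNR measured just before the loss
    timestamp_s: float

    def to_bytes(self) -> bytes:
        """Serialize for transport over the GTP tunnel towards the near-RT RIC."""
        return json.dumps(asdict(self)).encode()

report = V2VLinkFailureReport(
    reporter_id="CAV-17", peer_id="CAV-04",
    position_m=(120.5, 43.2), speed_mps=8.3, heading_deg=92.0,
    last_snr_db=4.1, timestamp_s=time.time())
print(len(report.to_bytes()), "bytes")   # a few hundred bytes at most
```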
This report travels through the low-frequency control plane and reaches the xApp, which is now tasked with finding an alternative relayed path. To do so, the xApp first needs to reconstruct the vehicular network graph based on the quality of all the V2V links that can be established at that moment. Such information can be obtained, for instance, by knowing the relative position of the vehicles and their communication characteristics to extract the channel quality according to an appropriate propagation model [13]. Alternatively, this information can be acquired through measurements carried out by the CAVs and periodically reported through the control plane. We now assume the presence of a DT that the xApp can interrogate to obtain an up-to-date and reliable representation of the vehicular scenario; this also allows us to demonstrate the role of this technology. Once the data-plane graph is reconstructed, the xApp finds an alternative route between the two vehicles according to an arbitrary logic, represented by the _New path computation_ step in Fig. 3.

Fig. 3: xApp message diagram. The implementation of _New path computation_ is arbitrary.

Each vehicle involved in the path is then informed to establish a new V2V link through control messages issued by the xApp and delivered through the low-frequency control plane. Finally, this approach can be generalized to allow any two vehicles to communicate by requesting a route from the xApp when a direct link is unavailable. ### _Effectiveness_ With the goal of investigating the attainable performance of the proposed solution, we replicated the actions of the xApp in a simulated environment. We recreated the study scenario through a ray tracer simulator as in [13]. We simulated the action of a sample xApp by computing the alternative route between any pair of vehicles using a shortest path algorithm. We repeated the experiments by deactivating links that could not guarantee an increasingly stringent minimum Signal-to-Noise Ratio (SNR). This was done to explore the capability of an xApp to select a path based on specific communication performance requirements. Results are shown in Fig. 4. The baseline approach, considering only the direct links, shows that no more than 25% of the vehicles can establish a connection throughout the entire time window. On the other hand, the proposed xApp has the potential to guarantee full vehicular connectivity even for high levels of minimum guaranteed SNR. Please note that our intention here was not to propose a state-of-the-art solution but to evaluate the impact of a simplified, centralized approach with respect to an unmanaged V2V scenario. Fig. 4 also plots the average number of hops required to ensure vehicular connectivity. This measure shows a trade-off between minimum SNR (affecting the throughput) and path length (affecting the latency), suggesting that a more refined xApp should be capable of exploiting it. Overall, these results confirm how even such a simple xApp can be highly beneficial in addressing one of the fundamental challenges of V2X. ### _Control plane overhead_ In the previous analysis, we replicated the action of a sample xApp to confirm its effectiveness. We now focus on the feasibility of the proposed approach. According to the enabling architecture proposed in Sec. III, the RICs and the CAVs communicate through a low-frequency control plane that utilizes the resources of the existing 5G infrastructure. Consequently, a feasible xApp must not generate excessive traffic while operating.
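To illustrate the kind of computation the sample xApp performs, and what ultimately drives its control-plane footprint, the following self-contained sketch filters out V2V links below a minimum SNR and finds the fewest-hop relayed route; the route length then determines how many downlink reconfiguration messages are needed. The link qualities are made-up numbers; in the article this graph would instead come from the DT or from CAV reports.

```python
from collections import deque
from typing import Dict, List, Optional, Tuple

# Hypothetical instantaneous V2V link qualities (SNR in dB) between nodes
# (CAVs or RSUs).
LINKS: Dict[Tuple[str, str], float] = {
    ("CAV-17", "CAV-04"): 2.0,    # direct link, currently blocked (low SNR)
    ("CAV-17", "RSU-1"): 14.0,
    ("RSU-1", "CAV-04"): 12.0,
    ("CAV-17", "CAV-09"): 9.0,
    ("CAV-09", "CAV-04"): 8.0,
}

def shortest_relayed_path(src: str, dst: str, min_snr_db: float) -> Optional[List[str]]:
    """Fewest-hop path using only links whose SNR meets the requirement (BFS)."""
    adj: Dict[str, List[str]] = {}
    for (a, b), snr in LINKS.items():
        if snr >= min_snr_db:
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    queue, parent = deque([src]), {src: None}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [node]
            while parent[path[-1]] is not None:   # walk back to the source
                path.append(parent[path[-1]])
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None   # the two vehicles cannot be connected at this SNR

for threshold in (5.0, 10.0):
    path = shortest_relayed_path("CAV-17", "CAV-04", threshold)
    hops = len(path) - 1 if path else None
    print(f"min SNR {threshold} dB -> path {path}, downlink messages ~ {hops}")
```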
As shown in Fig. 3, the proposed V2V xApp requires two control messages in uplink and as many downlink control messages as the number of hops. Based on this, it is possible to measure the control traffic that the xApp would generate in the simulated scenario by counting the path computation events. Fig. 5 reports the results averaged over the 5-minute time window and with a pessimistic O-RAN packet size of \(1\,\mathrm{Kb}\), showing that the downlink traffic constitutes the largest part of the control plane overhead, as expected. Nonetheless, the worst-case traffic of \(160\,\mathrm{Kb/s}\) is negligible for the 5G RAT. This confirms the feasibility of the proposed xApp in terms of communication overhead. ### _Control latency_ A reactive xApp, such as the one proposed, must be able to apply the control solution fast enough to be effective. The timescale is highly dependent on the application.

Fig. 4: Network connectivity and average number of hops versus required SNR.

Fig. 5: Control plane traffic required by the V2X xApp.

Fig. 6: Vehicular link outage duration statistic. \(n\) represents the outage event count.

For our case study, the xApp must be able to compute and establish an alternative route well before the LoS obstruction is cleared. It is possible to measure the control delay of the proposed xApp in the simulated scenario. We consider the communication delay between the BS and the xApp negligible, as it takes place within the edge cluster. Additionally, due to the extreme computational efficiency of the shortest path algorithm, we again consider the alternative route computation negligible. Consequently, the control latency boils down to the latency of the 5G system supporting the control plane, which we fix to a conservative \(30\,\mathrm{ms}\). We then compare this value with the duration of the direct link outage events, whose statistics are reported in Fig. 6. Here we have filtered out outage events lasting less than \(1\,\mathrm{s}\) (expected to be recovered by the beam tracking mechanism) and more than \(30\,\mathrm{s}\) (as they mostly represent outliers). Overall, the statistics show that the outage duration is centered around values ranging from \(3\) to \(10\,\mathrm{s}\). The comparison with the previously computed control delay shows the xApp's high feasibility and quick response to disruptions in the communication system. In other words, the xApp promptly adapts to changing conditions and maintains communication between vehicles when direct links are blocked. This highlights the xApp's effectiveness in mitigating link failures and minimizing communication outages. ## V Concluding Remarks As the world moves towards a more connected and automated future, the need for reliable and efficient communication between vehicles and network infrastructure has become increasingly important. In this article, we focused on the use of the O-RAN architecture for 5G-and-beyond V2X communication. We highlighted how O-RAN has the potential to provide a more flexible, scalable, and cost-effective solution compared to current solutions for V2X systems. Also, we discussed integration points, proposing our envisioned architectural solution. A first set of simulations demonstrated numerically the benefits of a managed, controlled, and programmable V2X system compared with an unmanaged one.
## Acknowledgment This article was supported by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, partnership on "Telecommunications of the Future" (PE00000001 - program "RESTART", Structural Project 6GWINET).
2309.00843
Remote ID for separation provision and multi-agent navigation
In this paper, we investigate the integration of drone identification data (Remote ID) with collision avoidance mechanisms to improve the safety and efficiency of multi-drone operations. We introduce an improved Near Mid-Air Collision (NMAC) definition, termed as UAV NMAC (uNMAC), which accounts for uncertainties in the drone's location due to self-localization errors and possible displacements between two location reports. Our proposed uNMAC-based Reciprocal Velocity Obstacle (RVO) model integrates Remote ID messages with RVO to enable enhanced collision-free navigation. We propose modifications to the Remote ID format to include data on localization accuracy and drone airframe size, facilitating more efficient collision avoidance decisions. Through extensive simulations, we demonstrate that our approach halves mission execution times compared to a conservative standard Remote ID-based RVO. Importantly, it ensures collision-free operations even under localization uncertainties. By integrating the improved Remote ID messages and uNMAC-based RVO, we offer a solution to significantly increase airspace capacity while adhering to strict safety standards. Our study emphasizes the potential to augment the safety and efficiency of future drone operations, thereby benefiting industries reliant on drone technologies.
Evgenii Vinogradov, A. V. S. Sai Bhargav Kumar, Franco Minucci, Sofie Pollin, Enrico Natalizio
2023-09-02T06:42:46Z
http://arxiv.org/abs/2309.00843v1
# Remote ID for separation provision and multi-agent navigation ###### Abstract In this paper, we investigate the integration of drone identification data (Remote ID) with collision avoidance mechanisms to improve the safety and efficiency of multi-drone operations. We introduce an improved Near Mid-Air Collision (NMAC) definition, termed as UAV NMAC (uNMAC), which accounts for uncertainties in the drone's location due to self-localization errors and possible displacements between two location reports. Our proposed uNMAC-based Reciprocal Velocity Obstacle (RVO) model integrates Remote ID messages with RVO to enable enhanced collision-free navigation. We propose modifications to the Remote ID format to include data on localization accuracy and drone airframe size, facilitating more efficient collision avoidance decisions. Through extensive simulations, we demonstrate that our approach halves mission execution times compared to a conservative standard Remote ID-based RVO. Importantly, it ensures collision-free operations even under localization uncertainties. By integrating the improved Remote ID messages and uNMAC-based RVO, we offer a solution to significantly increase airspace capacity while adhering to strict safety standards. Our study emphasizes the potential to augment the safety and efficiency of future drone operations, thereby benefiting industries reliant on drone technologies. ## I Introduction As Advanced Air Mobility (AAM) evolves, Unmanned Aerial Vehicles (UAVs) and Electric Vertical Take-Off and Landing (eVTOL) aircraft are poised to significantly impact transportation and logistics [1]. According to Morgan Stanley [1], by 2050, AAM will reach up to $19 tn (10-11% of projected global Gross Domestic Product(GDP). However, the increased prevalence of UAVs introduces significant challenges, such as managing aerial congestion1 and ensuring safety, calling for urgent reconsideration of aerial conflict management procedures and safety norms [3, 4]. Footnote 1: The authors of [2] estimated that shifting 70% of all deliveries to the aerial means will have required 180,000 drone flights per hour in the metropolitan area of Paris by 2035. Indeed, as we witness emerging liability debates and regulatory frameworks for UAV and eVTOL integration into air traffic, the definition of safe separation distances becomes a critical aspect of aerial Conflict Management (CM). Traditionally, separation distances have been determined by methodologies tailored for manned aviation [5, 6, 7, 8, 9], an approach that leverages a century's worth of valuable experience. However, the emergence of civil UAVs - potentially autonomous or highly automated agents - offers a unique opportunity to re-evaluate and adapt these conventional assumptions to accommodate new players in our skies. In light of this, we explore the potential of Remote identification (Remote ID), a solution ensuring transparent UAV registration, flight permission issuing, and safe separation provision [10, 11]. Notably, many countries mandate UAVs to be equipped with Remote ID capabilities to access the airspace2. In this work, we propose and investigate the hypothesis that optimized separation distances can be achieved by augmenting Remote ID messages to include information on the aircraft's size, mobility, and onboard navigation equipment performance. Footnote 2: Japan has been in compliance with rules regarding Remote ID for drones since June 2022. 
Drone operators in the USA and the EU member states are required to use Remote ID starting from the 16th of September 2023 and the 1st of January 2024, respectively. This study contributes to the existing knowledge by: * Reviewing current methodologies for determining UAV separation distances. * Proposing a UAV Near Mid-Air Collision (uNMAC) volume that takes into account factors such as aircraft size, localization precision, UAV speed/velocity, and the capabilities of wireless technologies. * Analyzing the contribution of each component on the final uNMAC volume. * Comparing 5G NR sidelink, Wi-Fi, and Bluetooth wireless technologies for UAV-to-UAV and Remote ID exchange. * Adopting information contained in Remote ID messages for multi-agent collision-free navigation based on Reciprocal Velocity Obstacles (RVO). By developing a sophisticated yet computationally efficient method for calculating separation distances, this work aims at enhancing the operational efficiency and safety of UAV operations. The findings could significantly impact aerial conflict management norms in areas of high-density UAV traffic, thereby facilitating safer and more efficient integration of UAVs into our daily lives and paving the way for even more futuristic use cases of Advanced Air Mobility. The rest of the paper is organized as follows: In Section II, we review the relevant works related to Remote ID, aerial conflict management, and RVO-based multi-agent navigation, providing context for our study. In Section III, we elaborate on our system model that takes into account factors such as airframe size, localization error, UAV velocity, and update rates used in Section IV to introduce our proposed uNMAC definition. In Section V, we outline our approach to Remote ID-enabled RVO for multi-agent navigation. Section VI showcases the results from our simulations. Finally, in Section VII, we summarize the key findings and discuss the potential impacts and implications of our study. We conclude with some suggestions for future directions in UAV navigation research and improvements to the Remote ID system. ## II Related Works Given the multidisciplinary nature of this research, blending elements of telecommunications, aviation, and robotics, this section provides an overview of the three main components outlined in the title. Specifically, we will i) give a brief introduction to Remote ID, ii) explore various methodologies used to establish separation distances for UAVs, and iii) explain how the Reciprocal Velocity Obstacle approach can be applied for collision avoidance and navigation in multi-UAV environments. ### _State-of-the-Art Overview: Remote ID_ As of the time of writing, Remote ID is not mandatory, although the FAA and the European Union's Aviation Safety Agency (EASA) have made their ruling on Remote ID. Most drones operating within the US and EU airspace will be required to have Remote ID installed by September 2023 and January 2024, respectively, to have access to the national airspace of the US and all EU member states.3. Footnote 3: Several exceptions will be in place: in the United States, Remote ID broadcast will not be required for Visual Line of Sight (VLOS) operations conducted by educational institutions within specific areas. Similarly, in the European Union, Remote ID equipment will not be compulsory for drones that weigh less than 250 grams (including payload) and have no cameras or any other sensor capable of gathering personal data. 
Requirements for Remote ID are outlined in [11] as: * Remote ID messages must be directly broadcasted via radio frequency from the UAV. * Typical user devices such as mobile phones should be able to receive Remote ID messages. This imposes that LTE, 5G NR, Wi-Fi, or Bluetooth must be used4. Footnote 4: The FAA initially evaluated the use of ground infrastructure and ADS-B but dismissed these due to various issues as detailed in [12]. * The message should encompass i) UAV ID (either a serial number or the session ID), ii) UAV's geographic and velocity data, iii) Control Station's geographic data, iv) emergency status, and v) time stamp. * The UAV design should aim to maximize the broadcast range, albeit the actual range may differ. * The Remote ID broadcast cannot be disabled by the operator and must be self-tested to ensure functionality before take-off. **Beyond State-Of-The-Art:** We propose the incorporation of additional relevant data fields to the standard Remote ID message. In this research, we assess two candidates: * **Candidate 1:** Maximum airframe size, measured instantaneous localization error, and velocity. * **Candidate 2:** Actual airframe size, measured instantaneous localization error, and velocity. These candidates are evaluated against standard Remote ID messages (e.g., not sharing information about the drone size and GNSS data accuracy). ### _Aerial Conflict Management Terminology_ A Mid-Air Collision (MAC) is an event where two aircraft physically collide in flight. Following the definition given by EUROCONTROL [13], a Near Mid-Air Collision (NMAC) is said to occur when the horizontal separation \(d_{H}\) between two aircraft is less than 150 m (500 ft), and the vertical separation \(d_{V}\) is less than 30 m (100 ft). These thresholds have been foundational in determining significant (and larger) volumes and distances in aviation, such as Remaining Well Clear - RWC [12] or Detect-And-Avoid - DAA ranges. These NMAC parameters, i.e., 150 m and 30 m, have roots in the work [14] conducted in 1969 by the NMAC Study Group established by the Federal Aviation Administration (FAA). Though these dimensions have served manned aviation well for over half a century, the original study's methodology and data quality would be critiqued by modern standards. In particular, the original NMAC dimensions were computed using a statistical approach that relied on pilots' self-reported distances for approximately 4500 "near misses" that occurred in 1968. While this methodology was appropriate given the technological constraints at that time, the evolution of technology such as GPS and big data analytics have significantly improved data collection and accuracy standards. Importantly, while these NMAC parameters have been empirically validated for traditional aviation, their applicability to small UAVs, which can have a wingspan of 1 m or less, is questionable. Using a 150\(\times\)30 m volume to represent a hazardous situation involving two such UAVs can lead to an overestimation of the risk, thereby yielding overly conservative estimates of airspace capacity. This potentially has a negative effect on the economic viability of UAV use cases, particularly as we find the NMAC model being applied to small UAVs [5, 6, 7, 8]. ### _State-of-the-Art Overview: UAV Separation Distances_ Determining various separation distances, such as those based on NMAC, MAC, and RWC volumes, is critical for balancing UAV demand and capacity, and for the design of supporting wireless technologies. 
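For reference, the NMAC criterion recalled above reduces to a simple cylinder test; the short sketch below (with the 150 m / 30 m thresholds from the EUROCONTROL definition quoted earlier) makes it easy to see why applying it to metre-scale UAVs is overly conservative.

```python
import math

def is_nmac(dx_m: float, dy_m: float, dz_m: float,
            horizontal_threshold_m: float = 150.0,
            vertical_threshold_m: float = 30.0) -> bool:
    """Classical NMAC test: horizontal separation < 150 m AND vertical < 30 m."""
    horizontal = math.hypot(dx_m, dy_m)
    return horizontal < horizontal_threshold_m and abs(dz_m) < vertical_threshold_m

# Two small UAVs passing 40 m apart horizontally at the same altitude are
# flagged as an NMAC by the manned-aviation criterion, even though their
# combined wingspan may be only a couple of metres.
print(is_nmac(dx_m=40.0, dy_m=0.0, dz_m=0.0))   # True
```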
Consequently, this area has garnered significant attention from various actors, including FAA [5, 7], NASA [7], national security agency laboratories [6, 8, 15], and SESAR [9]. The corresponding contributions are summarized in Table I. The prevailing research [5, 6, 7, 8] aims at deriving RWC volumes based on NMAC, relevant mainly for UAV-to-manned (U2M) aircraft conflicts--a major concern during the early stages of UAV integration into the National Airspace (NAS). The examination of UAV-to-UAV (U2U) conflicts received attention somewhat later, as evidenced by works such as [9, 15], published in 2021 and 2022, respectively. The research [15] conducted by MIT Lincoln Laboratory provides a foundation for further separation distance calculations, introducing the concept of small NMAC (sNMAC) volume. In accordance with the interpretation of [14] used in [13]--where NMAC dimensions were defined around double the size of a typical manned aircraft--the authors of [15] recommend defining sNMAC based on the largest UAV wingspan (7.5 m) found in a specific database of UAV characteristics5. Footnote 5: [http://roboticsdatabase.auvsi.org/home](http://roboticsdatabase.auvsi.org/home) The BUBBLES project puts forward a method that assumes the Specific Operations Risk Assessment (SORA) risk model [16] but extends it to UAS operations. Unlike previous works [5, 6, 7, 8, 15], this model focuses on ensuring a minimal rate of fatal injuries to third parties on the ground per flight hour, rather than just reducing MAC probability. Separation estimates in BUBBLES account for both strategic and tactical conflict management [12, 17], facilitated by Air and Unmanned Aerial System Traffic Management (ATM and UTM) systems. This requires UAV operators to maintain communication with ground infrastructure and modify their behavior as suggested. This aspect introduces a human element, which could lead to potential errors and slows down system response times, thus impacting separation distances and airspace capacity. An essential feature of [9] is its accounting for various real-world operation errors, with GNSS-induced coordinate uncertainty being the most significant, contributing to a 40 m error out of a total 41 m. **State-Of-The-Art Limitations:** While U2M separation modelling has been thoroughly explored, U2U separation definitions are still under development. Current solutions are either tailored for non-cooperative UAVs [15] or require communication with ground infrastructure [9]. Furthermore, they employ several conservative assumptions. The BUBBLES project presents an intriguing, yet centralized approach requiring ground infrastructure (while UAVs are required to broadcast their Remote ID), which may lead to scalability issues and susceptibility to ground equipment malfunctions. As for distance, the sNMAC volume [15] is determined solely based on the sum of the maximum wingspans (approximately 15 m). Yet, it is known that location uncertainty plays a crucial role in defining separation [18], influenced by several factors like GNSS errors, UAV movement, and delays in location reporting. When considering all these variables, separation distances largely depend on onboard sensor and communication module performance. **Beyond State-Of-The-Art:** Based on our initial work [10], we propose a definition of uNMAC dimensions, assuming the exchange of relevant information. 
The minimum possible pairwise uNMAC is thus defined as the sum of the individual wingspans of the UAVs, where violation of this volume results in a MAC. The final uNMAC consists of i) airframe sizes, ii) reported localization errors, and iii) distance travelled by drones between two coordinate updates (i.e., Remote ID messages). Compared to the initial work, we deepen the uNMAC components analyses. Additionally, we use Remote ID and uNMAC as tools for ensuring collision-free multi-agent navigation. Our research puts forth a framework applicable to systems where safe UAV operations are guaranteed through separation distances computed autonomously onboard aligned with the vision of [19]. Such a solution will be required, for instance, by U3 phase of U-Space where UAVs are expected to benefit from assistance for conflict detection and automated detect and avoid functionalities. This cooperative U2U solution can serve as an emergency backup when UTM services become unavailable. ### _Reciprocal Velocity Obstacle for Multi-Agent Navigation_ The implementation of the Reciprocal Velocity Obstacle (RVO) model plays a pivotal role in preventing collisions and directing the navigation of multiple UAVs. To comprehend its functionality, we begin by introducing the concept of Velocity Obstacles (VO) and then expand on it to illustrate how the RVO model is implemented. #### Iii-D1 Velocity Obstacle, A Step Towards RVO The VO of a moving obstacle \(j\) with respect to an agent \(i\) comprises all velocities that could lead to a collision between the two entities at some point in time, given their current positions and Fig. 1: Demonstration of the Velocity Obstacle (VO) concept. From top to bottom: inclusion of more realistic UAV sizes results in a wider range of velocities leading to a collision. \begin{table} \begin{tabular}{c||c|c|c|c} \hline \hline Source & Reference & Applica- & Communi- & GNSS \\ & volume & bility & cation & support \\ \hline \multirow{2}{*}{ASSURE [5]} & NMAC & \multirow{2}{*}{U2M} & \multirow{2}{*}{NA} & \multirow{2}{*}{NA} \\ & (150x30 m) & & & \\ \hline SARP [6, 7] & NMAC & U2M & NA & NA \\ \hline MIT LL [8] & NMAC & U2M & NA & NA \\ \hline \multirow{2}{*}{BUBBLES [9]} & \multirow{2}{*}{MAC} & U2M & \multirow{2}{*}{via} & Upper \\ & & U2U & ground & bound \\ \hline MIT LL [15] & sNMAC & U2U & NA & NA \\ \hline \multirow{2}{*}{This work} & defined & \multirow{2}{*}{U2U} & \multirow{2}{*}{U2U} & \multirow{2}{*}{Actual} \\ & pairwise & & & \\ \hline \hline \end{tabular} \end{table} TABLE I: NMAC STATE OF THE ART OVERVIEW velocities. The concept is illustrated in Fig. 1 with increasing complexity. The top part shows two point agents on a collision course due to their current velocity \(v^{ij}\). It is evident that a collision will occur if the velocity maintains the present direction. However, for velocities falling outside of this VO, the agents would not collide. The middle section of the figure presents a more realistic scenario where \(j\) is depicted as a disc and \(i\) is a point. In this case, the VO comprises a range of velocities that could lead to tangential trajectories, resulting in a conical representation of velocities. The bottom part represents both agents as discs, and thus the VO cone is established by the Minkowski Sum of the two discs symbolizing the UAVs and, potentially, instrumental errors that can affect the probability of collision (e.g., non-perfect accuracy of the localization module). 
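Before the formal definition given in (1)-(2) below, a small numeric illustration of this disc-based test may help: a relative velocity belongs to the VO exactly when the ray from the relative position along that velocity intersects the Minkowski-sum disc. The sketch and its numbers are purely illustrative.

```python
import math
from typing import Tuple

Vec = Tuple[float, float]

def in_velocity_obstacle(p_i: Vec, p_j: Vec, v_i: Vec, v_j: Vec,
                         combined_radius: float) -> bool:
    """True if the current relative velocity of i w.r.t. j leads to a collision
    with the Minkowski-sum disc of the two agents (the VO membership test)."""
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]      # vector towards the obstacle
    vx, vy = v_i[0] - v_j[0], v_i[1] - v_j[1]      # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                                  # no relative motion
        return math.hypot(dx, dy) <= combined_radius
    t_star = max(0.0, (dx * vx + dy * vy) / v2)    # closest-approach time (t >= 0)
    cx, cy = dx - t_star * vx, dy - t_star * vy    # miss vector at closest approach
    return math.hypot(cx, cy) <= combined_radius

# Two UAVs 100 m apart flying head-on: the relative velocity is inside the VO,
# so at least one of them must change velocity.
print(in_velocity_obstacle(p_i=(0.0, 0.0), p_j=(100.0, 0.0),
                           v_i=(10.0, 0.0), v_j=(-10.0, 0.0),
                           combined_radius=4.0))   # True
```

Enlarging the combined radius to absorb localization and mobility uncertainty is precisely how the uNMAC-based discs proposed later in this paper are plugged into this test.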
The trajectory of an agent can be traced using the following formula, \[\lambda(p,v)=p+tv,\ t>0. \tag{1}\] In this equation, \(t\) denotes time, an agent's position is denoted by \(p\) and its velocity by \(v\). For a collision to occur, the intersection of the agent's trajectory and the Minkowski sum (of two disks \(D^{i}\) and \(D^{j}\) must not be an empty set. Therefore, \[VO^{i}_{t}(v^{j})=v^{i}|\lambda(p^{i},v^{ij})\cap D_{j}\oplus(-D_{i})\neq \emptyset. \tag{2}\] If an agent detects that its present velocity falls within the VO, it will select a velocity outside the VO to avert the collision. This implies that every time when the algorithm is run, the VO is calculated for each agent in relation to every other agent's position and velocity data, enabling the navigation of the environment without collisions. #### Ii-B2 General algebraic collision avoidance constraints Each UAV is modeled as a disc-shaped robot moving in a single integrator system, given by \(\dot{x}^{i}=v^{i}_{x}\), \(\dot{u}^{i}=v^{j}_{u}\), where the dots represent the derivatives with respect to time. The position and velocity of robot \(i\) are represented as \(\mathbf{p}^{i}=(p_{x};p_{y})\) and \(\mathbf{v}^{i}=(v_{x};v_{y})\), respectively. When two UAVs, with respective radii \(R^{i}\) and \(R^{j}\) and velocities \(\mathbf{v}^{i}\) and \(\mathbf{v}^{j}\), find themselves on a collision course, the RVO algorithm helps to independently infer and calculate collision-avoiding velocities. The RVO approach has two main characteristics. First, it ensures that all UAVs use the same rotation direction (either clockwise or anticlockwise) to avoid collisions. The exact degree of rotation, or the collision-avoiding velocity, is calculated by a certain inequality that involves the UAV's current and new velocities, and the counterpart's velocity and relative position. The equation for the collision-avoiding velocity, represented as \(\mathbf{v}^{i}_{rvo}\), is defined by the following constraints [20, 21]: \[f^{RVO^{j}_{t}}(\mathbf{p}^{i},\mathbf{p}^{j},\mathbf{v}^{i},\mathbf{v}^{j}, \mathbf{v}^{ro},)\geq 0 \tag{3a}\] \[f^{RVO^{j}_{t}}(\cdot)=||\mathbf{r}^{ij}||^{2}-\frac{((\mathbf{r}^{ij})^{T}(2 \mathbf{v}^{i}_{rvo}-\mathbf{v}^{i}-\mathbf{v}^{j}))^{2}}{||2\mathbf{v}^{i}_{rvo }-\mathbf{v}^{i}-\mathbf{v}^{j}||^{2}}-(R^{ij})^{2}\] (3b) \[\mathbf{r}^{ij}=(\dot{x}^{i}-x^{j},\dot{y}^{i}-y^{j})^{T},R^{i,j}=R^{i}+R^{j}. \tag{3c}\] Here, \(\mathbf{r}^{ij}\) represents the relative position vector between two UAVs, and \(R^{ij}\) is the sum of their radii. This framework thus provides a way for each UAV to safely avoid collisions while maintaining their intended paths, contributing to efficient multi-agent navigation. **Beyond State-Of-The-Art:**[20] considers perfect knowledge of agents' locations. The probabilistic version of RVO in [21] considers that each agent estimates the locations of all other agents with an accuracy that can be described by certain probability distributions. Next, the errors are compensated by enlarging the disk sizes to ensure collision-free navigation. In our work, we leverage the U2U communication link to receive locations and other relevant information contained in Remote ID messages sent by drones involved in potential conflict. Note that the disk sizes vary during the mission as was suggested in [19] for Airborne Collision Avoidance Systems for small UAVs (ACAS sXU). For the sake of clarity, let us map the terminology used for RVO and UAV separation distances. 
When defining disk sizes, the aforementioned Minkowski sum can correspond to different volumes: 1. The disks represent the UAV airframes, causing the Minkowski sum to coincide with a MAC. 2. The disks represent the maximum UAV airframes, leading the Minkowski sum to coincide with sNMAC. 3. The disks represent entire areas where drones can potentially be. This area, derived from the combination of airframe size, localization uncertainty, and distance travelled by a drone between two location updates, leads the Minkowski sum to coincide with the defined pairwise uNMAC in our work. ## III System Model Consider a UAV of airframe size \(d_{AF}\) moving with speeds \(V\) in a certain direction. The aircraft is equipped with i) Self-localization (e.g., GPS) and ii) wireless communication modules. The GPS can identify the drone's coordinates with an error margin of \(\pm\epsilon\). We assume that the errors associated with the airframe and location are symmetrically distributed around the drone's center. It is assumed that the rate at which the location updates and communication broadcasts occur, denoted as \(\Delta t_{LOC}=\Delta t_{COM}\), is consistent and symbolized as \(\Delta t\). Essentially, an updated location is broadcast immediately. The UAV moves a distance of \(V\Delta t\) between two location updates. In the absence of information about the movement direction, the UAV could be anywhere in the area depicted in Fig.2, top. For multiple drones operating within the same airspace, safety is guaranteed only if the separation distance \(r_{sep}\) is such that the drones' uncertainty areas do not overlap (Fig. 2, bottom). In this study, we simplify the airspace by considering a single altitude slice and focusing on horizontal separation. Future work will extend this study to consider 3D scenarios. This study does not implement a collision avoidance technique, focusing instead on the impact of Remote ID. #### Iii-1 Airframe size: In accordance with the study conducted by [15], which compiled a database of UAV characteristics, we model the airframe size as a uniformly distributed random variable with a maximum limit of \(d_{AF}^{max}=7.5\) meters. #### Iii-C2 Localization error: Though diverse solutions for self-localization of UAVs exist (for instance, visual Simultaneous Localization and Mapping - SLAM [22]), this work considers GPS, being the most common solution at present. These conclusions can be extended to SLAM or other GNSS such as Galileo, GLONASS, and BeiDou by considering the range estimation errors reported by these systems. Table II is inspired by see [23] (Table 3.4-1). We provide directly 3\(\sigma\) to cover 99.7% of possible errors (corresponding \(\sigma\) are [1.9; 3.5, 4.85, 10] meters). In some cases (e.g., in [9]), the upper bound of the GPS positioning error is set to 40 meters. #### Iii-C3 UAV velocity: An aircraft's airspeeds, known as V-speeds, differ based on several factors. Cruise \(V_{C}\), the speed where the aircraft achieves optimal performance, and the maximum operating speed \(V_{max}\), were collected in [8] and categorized based on their maximum gross takeoff weights (MGTOW) in Table III. Assuming that most UAV operators use vendor-provided performance guidelines, the authors of [8] proposed modeling UAV airspeeds with a Gaussian distribution \(\mathcal{N}\left(\mu_{v},\sigma_{v}^{2}\right)\), where \(\mu_{v}=V_{C}\) and the standard deviation is defined as: \[\sigma_{v}=\frac{V_{max}-V_{c}}{3}. 
\tag{4}\] #### Iii-C4 Location update and wireless broadcasting rates: GPS module manufacturers offer various options with differing position update rates \(\Delta t\). Some high-end options can offer updates as frequently as 100 Hz (for instance, the TR-3N by Javad6), while consumer-grade modules typically offer rates between 0.2-8 Hz. Additionally, a comparison of various wireless communication technologies [24, 25, 26, 12, 27] is presented in Fig. 3. We observe that wireless technologies are able to offer broadcast rates corresponding to the coordinate update rates. Consequently, we assume that the location updates are broadcast immediately after their update. Footnote 6: [https://www.javad.com/product/tr-3n/](https://www.javad.com/product/tr-3n/) ## IV Proposed uNMAC definition We propose a definition for uNMAC based on localization and mobility-induced uncertainties. The uncertainty area around each UAV (denoted as UAV\({}_{i}\) and UAV\({}_{j}\)), considering the localization error and maximum relative speed, is modeled using the following equations. When the movement direction is unknown \[d^{i}=d^{i}_{AF}+2(\epsilon^{i}+v^{i}\cdot\Delta t), \tag{5}\] and when the movement direction is known \[d^{i}=d^{i}_{AF}+2\epsilon^{i}+\vec{v}^{i}\cdot\Delta t. \tag{6}\] For the total uNMAC, which includes the areas around both UAV\({}_{i}\) and UAV\({}_{j}\), we use these equations: \[d^{ij}_{uNMAC}=d^{i}_{AF}+d^{j}_{AF}+2(\epsilon^{i}+\epsilon^{j}+\Delta t \cdot(v^{i}+v^{j})). \tag{7}\] The separation distance to avoid a midair collision is computed as: \[r_{uNMAC}=\frac{d_{uNMAC}}{2}. \tag{8}\] Fig. 3: Comparison of the wireless technologies considered for Remote ID messaging. Fig. 2: Top: UAV location uncertainty area consists of i) Airframe, ii) Localization error, iii) Displacement between two location reports. Bottom: UAV Near Mid-Air Collision (uNMAC) and the correspondent separation distance A midair collision occurs if the distance between the centers of the two UAVs is smaller than the UAV airframe sizes: \[r_{MAC}=\frac{d_{AF_{1}}^{i}+d_{AF}^{j}}{2}. \tag{9}\] Safe operation of the UAVs requires the inter-UAV separation to satisfy the condition \(r_{sep}\geq r_{MAC}\). However, when we consider the location uncertainties related to GNSS performance and UAV displacements between the updates, we guarantee that the UAVs do not collide only if \(r_{sep}\geq r_{uNMAC}\). Three main factors contribute to these equations: airframe size, localization error, and UAV velocity. **MAC (Airframe sizes)** distances are modeled as a triangle distribution with the density function: \[f_{AF}(x)=\begin{cases}\frac{x}{AF_{max}^{2}/4},&\text{if }0<x<\frac{AF_{max}}{2} \\ \frac{AF_{max}^{2}-x}{AF_{max}^{2}/4},&\text{if }\frac{AF_{max}}{2}\leq x<AF_{max} \\ 0,&\text{otherwise}.\end{cases} \tag{10}\] Note that MAC is a component of uNMAC (the inner disk in Fig. 2. Let us describe the other components. GNSS **Localization Error**\(X\) is conventionally assumed to follow a Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\). However, when we construct a safety volume around the drone ensuring no collision, we have to consider a circular area where the UAV can be. This area is described by a radius \(\epsilon=|X|\) following a Half-normal distribution, where: \[f(x,\sigma)=\frac{\sqrt{2}}{\sigma\sqrt{\pi}}\exp-\frac{x^{2}}{2\sigma^{2}}, \quad x\geq 0. 
\tag{11}\] Based on the Half-Normal distribution, we may derive the probability density function of \(\epsilon^{i}+\epsilon^{j}\) contributing to (8) as \[f(x)=\frac{1}{\sqrt{\sigma_{i}^{2}+\sigma_{j}^{2}}}\sqrt{\frac{2}{\pi}}\cdot \exp\Big{(}-\frac{x^{2}}{2(\sigma_{i}^{2}+\sigma_{j}^{2})}\Big{)}\times\] \[\Bigg{[}\text{erf}\Bigg{(}\frac{\sigma_{i}x}{\sqrt{2}\sigma_{j}\sqrt{\sigma_{i }^{2}+\sigma_{j}^{2}}}\Bigg{)}+\text{erf}\Bigg{(}\frac{\sigma_{j}x}{\sqrt{2} \sigma_{i}\sqrt{\sigma_{i}^{2}+\sigma_{j}^{2}}}\Bigg{)}\Bigg{]}, \tag{12}\] where \(\sigma_{i,j}\) are the standard deviations of the errors estimated by UAVs \(i\) and \(j\) respectively, and \(\text{erf}(\cdot)\) is the error function. **UAV Velocity**, or the maximum relative speed \(\nu_{rel}^{max}\), follows the Gaussian distribution \(\mathcal{N}(\mu_{v1}+\mu_{v2},\sigma_{v1}^{2}+\sigma_{v2}^{2})\), where \(\mu_{v1,v2}\) and \(\sigma_{v1,v2}^{2}\) are the speed distribution parameters for the UAVs involved in the potential conflict. **Location update and wireless broadcasting rates \(\Delta t\)** in (5) - (8) is linked to speed, the broadcasting/localization update rates influences the distribution of the mobility-induced uncertainty. As speeds are modeled by a normal random variable, the distribution of \(V\Delta t\) is also described by a Gaussian distribution with mean \(\mu=\Delta t(\mu_{v1}+\mu_{v2})\) and variance \(\Delta t^{2}(\sigma_{v1}^{2}+\sigma_{v2}^{2})\). By utilizing these concepts and equations, we can compute the uNMAC to avoid midair collisions and facilitate the safe operation of UAVs. ## V Remote ID Enabled RVO Incorporating various errors RVO algorithm enhances its realism and robustness. Localization and mobility-induced errors are modeled as a Gaussian distribution, defined as follows: \[\mathbf{p}^{i}\sim N(\mu_{p}^{i},\sigma_{p}^{i}) \tag{13}\] \[\mathbf{p}^{j}\sim N(\mu_{p}^{j},\sigma_{p}^{j}) \tag{14}\] where, \(\mu_{p}^{i}\), \(\sigma_{p}^{i}\), \(\mu_{p}^{j}\), and \(\sigma_{p}^{j}\) represent the mean and standard deviations of the UAVs' positions. The presence of these errors makes the RVO function, \(f^{RVO_{j}^{j}}\), a random variable. We can alternatively express the RVO equation [21, 28] as a probabilistic constraint, ensuring a minimum probability \(\eta\) of collision avoidance: \[P(f^{RVO_{j}^{j}}(\mathbf{p}^{i},\mathbf{p}^{j},\mathbf{v}^{i},\mathbf{v}^{j}, \mathbf{v}^{rvo},)\geq 0)\geq\eta, \tag{15}\] To find the solution space of the equation (15), we employ the Bayesian decomposition method as in [21]. This results in the following equation: \[\begin{split}& P(f^{RVO_{j}^{j}}(\mathbf{p}^{i},\mathbf{p}^{j}, \mathbf{v}^{i},\mathbf{v}^{j},\mathbf{v}^{rvo},)\geq 0)=\\ & P(f^{RVO_{j}^{j}}(.)\geq 0|p^{i}\in\mathbb{C}^{i},p^{j}\in \mathbb{C}^{j})\mathbb{C}_{j}^{i},\end{split} \tag{16}\] where \(\mathbb{C}^{i}\), \(\mathbb{C}^{j}\) are the uncertainty contours around each UAV caused by GPS and localization errors and \[\mathbb{C}_{j}^{i}=\int_{p_{j}\in\mathbb{C}_{j}}\int_{p_{j}\in\mathbb{C}_{j}}P( p^{j}|p^{i})P(p^{j})dp^{i}dp^{j}. \tag{17}\] Given the knowledge of the errors (via Remote ID), we can evaluate the right-hand side of equation (16) into a positive constraint. 
Integrating (15) into (16), we get: \[\begin{split}& P(f^{RVO_{j}^{j}}(\mathbf{p}^{i},\mathbf{p}^{j}, \mathbf{v}^{j},\mathbf{v}^{rvo},)\geq 0)\geq\eta\\ & P(f^{RVO_{j}^{j}}(.)\geq 0|p^{j}\in\mathbb{C}^{i},p^{j}\in \mathbb{C}^{j})\geq\frac{\eta}{\mathbb{C}_{j}^{i}}\end{split} \tag{18}\] The constraint in (18) now becomes deterministic and guarantees satisfaction with a probability of at least \(\frac{\eta}{\mathbb{C}_{j}^{i}}\). Each UAV then solves this constraint reactively for collision avoidance in multi-agent scenarios. ## VI Numerical Results ### _uNMAC and Separation Distances_ Firstly, we analyze the contribution of each uNMAC component (i.e., airframes, localization error, and mobility-induced error) to the final uNMAC size. The contribution of airframe sizes is straightforwardly described by the Triangular distribution in (10) with lower limit 0.1 m, upper limit 7.5 m and mode 3.7 m. Fig. 4 plots equation (12) for equal \(\sigma_{i}=\sigma_{j}\). While the upper bound error is 80 m [9], we can achieve the mean errors of 3 m, 5.6 m, 7.4 m, and 16 m for the AODs listed in Table II (the corresponded values of \(\sigma_{i}\) and \(\sigma_{j}\) are listed in the figure). For the aforementioned AODs, the localization error does not exceed [9.34, 17.3, 23.94, 49.5] meters with 99.9% probability. As it is pointed out in [10], the mobility-induced error is tightly linked to the broadcast rate \(\Delta t\). Fig 5 demonstrates this effect. In the figure, the safety disk expansion due to \(\Delta tV\) is analyzed against the broadcast rate \(\Delta t\). Note that the lines indicate the values which are not exceeded with 99.7% probability ensuring an appropriate level of safety. We compare UAVs of the categories listed in Table III. It is obvious that LoRa and FLARM may be used only when UAVs are separated by distances larger than several hundred meters. Bluetooth and WiFi SSID can accommodate much more dense aerial traffic by lowering the mobility-induced expansion to several meters which is comparable to the contribution of the airframe size and localization error. In general, we found that \(\Delta t=0.1\) s is an appropriate choice since it does not result in a large uNMAC while lowering the probability of Remote ID message interference [25]. **Takeaway 1:** we recommend broadcast Remote ID messages with the rate of \(\Delta t\leq 0.1\) s. This allows for lowering the separation distances without changing the current message format. However, including information on the localization error and airframe size can further lower the distances (as it is evident from Fig. 6) without compromising the safety levels. Based on this conclusion, we formulate two candidates for the enhanced Remote ID messages. Both candidate message formats contain the same information as in the standard Remote ID, however, we suggest additionally including * Candidate 1: Localization error measured by the onboard localization module. * Candidate 2: i) Localization error measured by the onboard localization module and 2) airframe size. In the following, we assess the candidates' performance by basing the RVO-based multi-agent navigation on the information contained in these messages. ### _Multi-Agent Navigation with RVO_ We investigate how different separation definitions affect the time required by drones to perform their missions while not colliding with each other. Note that the latter does not take into account the aforementioned errors. Consequently, directly using sNMAC for RVO can result in MACs. Fig. 
4: Localization error for different Age of Data given in Table II. The actual error can be significantly lower than the conservative maximum error. Fig. 5: Mobility-induced contribution. Increasing the broadcasting rate can significantly reduce the error. Fig. 6: uNMAC sizes for different broadcasting rates and localization errors. When the location estimates are accurate and communicated frequently, the separation between UAVs can be reduced. #### V-A1 Simulation Environment The simulation environment in this study, implemented in MATLAB, is devised to closely emulate a real-world scenario involving multiple robots in a shared navigation space. Note that we consider that all UAVs fly at the same altitude. Scenario DescriptionTwo distinct initial UAV configurations are used in the simulation (see Fig 7). The first scenario (top figure) organizes 8 UAVs in a circular pattern. The second scenario (Fig 7, bottom) places 24 UAVs in a square formation. A static square obstacle is also introduced, which the UAVs are required to circumvent while avoiding collisions with each other. Every UAV is modeled as an instance of the RobotClass with attributes such as airframe size, maximum and cruise speeds, and payload performance (i.e., GNSS localization accuracy and wireless module broadcasting rate) modeled as described in Section III. In this work, we present results for the worst-case localization error (\(\sigma=10\) m) and the most inclusive Category 3 of UAVs (see Table III). The target destination of each UAV is located on the opposite side. An attribute \(k_{size}=\)400 meters is used to define the drones' range of vision, effectively setting the limit of how far each UAV can "see" in the simulation area. At each time step, the size of the safety disc around each drone can change accounting for varying errors and speeds. Data exchange and use in RVOWhile the UAVs are generated according to the approach presented in Section III (the exact parameters are listed in Table IV), four different ways of defining the RVO disk sizes (and their Minkowski sum) are considered: * **sNMAC**[15]: a fixed size representing the Minkowski sum of the two largest UAV airframes (15 m). * **Remote ID:** Sum of UAVs' i) Maximum airframe size, ii) upper bound localization error, and iii) reported velocity multiplied by the broadcast rate. * **Candidate 1:** Sum of UAVs' i) Maximum airframe size, ii) reported localization error, and iii) reported velocity multiplied by the broadcast rate. * **Candidate 2:** Sum of UAVs' i) reported airframe size, ii) reported localization error, and iii) reported velocity multiplied by the broadcast rate. RVO is run based on the data reported by UAVs (i.e., the reported coordinates contain errors) every \(\Delta t=100\) ms. This broadcast rate is selected following the results of the previous subsection. Note that MACs are still possible if the localization errors are not appropriately compensated by increasing safety disks around the drones. Execution LoopThe simulation iterates over discrete time steps, the duration of which is specified by the parameter \(\Delta t\). During each iteration, the RVO checks for any collision between UAVs and formulates an evasive maneuver. The iterative execution continues until all robots have reached their respective targets. The time taken for each robot to reach its target is calculated and logged after the simulation, serving as a performance metric. 
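The four disc-sizing rules listed above differ only in which quantities are taken from the received message and which are replaced by conservative bounds. The sketch below is a compact rendering of that logic (the simulations in the paper are in MATLAB; the 7.5 m maximum wingspan and the 40 m worst-case per-UAV GNSS error follow the figures quoted earlier, while parameter names and the example values are illustrative assumptions):

```python
from dataclasses import dataclass

MAX_WINGSPAN_M = 7.5        # largest airframe in the database used by [15]
MAX_LOC_ERROR_M = 40.0      # conservative per-UAV GNSS error bound (see [9])

@dataclass
class RemoteIDState:
    """Fields a UAV reports (or could report) in a Remote ID message."""
    airframe_m: float       # actual airframe size (Candidate 2 only)
    loc_error_m: float      # measured localization error (Candidates 1 and 2)
    speed_mps: float        # reported speed
    dt_s: float             # interval between two location broadcasts

def disc_diameter(uav: RemoteIDState, mode: str) -> float:
    """Per-UAV safety-disc diameter, eq. (5): airframe + 2*(error + v*dt)."""
    if mode == "sNMAC":           # fixed worst-case airframe, errors ignored
        return MAX_WINGSPAN_M
    if mode == "remote_id":       # standard message: conservative bounds
        return MAX_WINGSPAN_M + 2.0 * (MAX_LOC_ERROR_M + uav.speed_mps * uav.dt_s)
    if mode == "candidate1":      # reported error, worst-case airframe
        return MAX_WINGSPAN_M + 2.0 * (uav.loc_error_m + uav.speed_mps * uav.dt_s)
    if mode == "candidate2":      # reported error and reported airframe
        return uav.airframe_m + 2.0 * (uav.loc_error_m + uav.speed_mps * uav.dt_s)
    raise ValueError(mode)

def pairwise_separation(u1: RemoteIDState, u2: RemoteIDState, mode: str) -> float:
    """Minkowski-sum radius used by RVO, i.e. r = (d_i + d_j) / 2 as in eq. (8)."""
    return 0.5 * (disc_diameter(u1, mode) + disc_diameter(u2, mode))

u1 = RemoteIDState(airframe_m=0.8, loc_error_m=3.0, speed_mps=15.0, dt_s=0.1)
u2 = RemoteIDState(airframe_m=1.2, loc_error_m=5.0, speed_mps=20.0, dt_s=0.1)
for mode in ("sNMAC", "remote_id", "candidate1", "candidate2"):
    print(f"{mode:>10}: {pairwise_separation(u1, u2, mode):6.2f} m")
```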
For each scenario, 500 runs have been performed for each separation definition (i.e., sNMAC, Remote ID, Candidates 1 and 2), resulting in 72000 individual UAV flights. _Mid-air Collision Detection._ RVO is run based on data reported by UAVs. However, the actual drone locations are different due to, for instance, localization errors. We log the actual positions of each UAV and check for MAC at every iteration. In the case of detecting a MAC as in (9), the involved UAVs are removed from further collision avoidance computations and the MAC counter is increased. The flexibility of this simulation environment lends itself to a comprehensive and realistic evaluation of multi-robot system dynamics in various navigation scenarios. This flexibility is further enhanced by the ability to adjust UAVs' characteristics, initial configurations, and error levels.
Fig. 7: Two scenarios used for simulating collision avoidance performance: a circular pattern with eight UAVs and a square formation with 24 UAVs. In both scenarios, UAVs must avoid each other and navigate around a static square obstacle (only in scenario 2).
#### V-A2 Simulation Results The simulation results, as illustrated in Fig. 8, showcase how various definitions of safety disks around each drone influence the mission completion time. While the use of sNMAC results in the quickest mission execution, it comes with a non-zero collision probability of 4.3% -- a result of neglecting instrumental error effects. However, the situation changes significantly when we utilize the information received in Remote ID messages, be it standard or our proposed candidates. Using such data allows for guaranteed collision-free navigation. The mission execution time is reduced by nearly half when UAVs' localization errors are incorporated into the calculations. Furthermore, when both localization accuracy and airframe size are factored into the RVO, we come close to the efficiency provided by sNMAC without any risk of collisions. To elaborate, for Scenario 1, the median mission execution times stand at [6.7; 11.3; 7.7; 6.9] seconds for sNMAC, Standard Remote ID, Candidate 1, and Candidate 2, respectively. For Scenario 2, the completion times average [20.9; 50.8; 27.4; 22.2] seconds for the same set. As we can observe, the trend is the same for different UAV densities and mission complexities. **Takeaway 2:** Our study emphasizes the potential of Remote ID messages and RVO in ensuring collision-free navigation in multi-agent operations. However, the current Remote ID format, due to its lack of relevant information, results in longer mission execution times. This is primarily attributed to the need for conservative assumptions on localization accuracy and drone sizes. By incorporating these additional details into future Remote ID message formats, we can significantly boost airspace capacity while complying with stringent safety standards. The improvement will lead to more efficient and safer drone operations, thus benefiting industries relying on drone technologies. ## VII Conclusions Through our comprehensive research and numerous simulations, this study has addressed the crucial aspects of collision avoidance in UAV operations. We developed a mathematical model to provide an understanding of the key parameters that affect collision probabilities, leading us to propose enhancements to the current Remote ID system. Our results illustrate the significant impact of individual uNMAC components on the final uNMAC size.
These components, including airframe sizes, localization error, and mobility-induced error, contribute differently to collision risk. We demonstrated that broadcasting Remote ID messages at a rate of \(\Delta t\leq 0.1\) s effectively reduces the separation distances without necessitating a change in the current message format. However, a further reduction in separation distances can be achieved by including data on the localization error and airframe size, enhancing safety levels. The proposed enhancements to the standard Remote ID messages, namely Candidate 1 and Candidate 2, contain added details on the localization error measured by the onboard localization module and the airframe size. These modifications allow for better use of the RVO-based multi-agent navigation system by improving the definition of safety disks around each drone. Our simulation results underscore the potential of these enhanced Remote ID message formats in ensuring collision-free navigation. We observed that the mission execution time is significantly reduced when UAVs have information on localization errors and airframe sizes at their disposal. This led us to a significant finding: by considering both localization accuracy and airframe size, we can approach the performance offered by sNMAC while maintaining a zero-collision standard. In light of our findings, we strongly encourage aviation authorities and regulatory bodies to consider incorporating information on UAV localization error and airframe size within Remote ID messages. This could improve airspace safety and efficiency, fostering the growth of UAV applications. In conclusion, our study emphasizes the transformative potential of improving Remote ID messages to facilitate safer and more efficient UAV operations. We hope that these findings contribute towards the evolution of drone technology, paving the way for robust and scalable airspace traffic management systems. Future research can further explore these aspects, building upon the groundwork laid by our findings. For instance, wireless networking simulators (such as in [29]) may be used for more realistic communication modeling. Additionally, security-related issues [30] must be addressed in order to make Remote ID truly attractive to UAV practitioners.
2306.01300
Evolutionary dynamics with temporal higher-order interactions
Humans interact and cooperate in structured societies, which are often represented by complex networks. Previous explorations mainly focus on pairwise and static network structures, where each link connects two nodes permanently and never changes. However, empirical human collective interactions go beyond static time-invariant one-to-one relationships. Recently, researchers have made vital progress in modelling higher-order interactions using hypernetworks, where a link may connect more than two individuals. Here, we study collective cooperation through temporal hypernetworks, capturing the time-varying higher-order interactions in human societies. We find that temporal hypernetworks may promote cooperation compared with their static counterparts, and the traditional static pairwise network may underestimate the positive effect of local interactions on fostering cooperators. Moreover, temporal hypernetworks with sparse components and higher-order interactions can facilitate cooperation. Surprisingly, we report that when the scale of interactions is of the same order as population size, moderately small hyperlink sizes best facilitate cooperation. Our findings underscore the significant impact of real-world temporal higher-order interactions on the evolution of cooperation.
Xiaochen Wang, Lei Zhou, Zhenglong Tian, Aming Li
2023-06-02T06:52:39Z
http://arxiv.org/abs/2306.01300v2
# Temporal higher-order interactions facilitate the evolution of cooperation ###### Abstract Motivated by the vital progress of modeling higher-order interactions by hypernetworks, where a link connects more than two individuals, we study the evolution of cooperation on temporal hypernetworks. We find that temporal hypernetworks may promote cooperation compared with their static counterparts. Our results offer new insights into the impact of network temporality in higher-order interactions on understanding the evolution of cooperation, suggesting traditional networks based on pairwise or static interactions may underestimate the potential of local interactions to foster cooperation. ## I Introduction Higher-order interactions, which occur among multiple individuals, are prevalent in complex systems, such as co-authorship in academia [1], mesoscopic interactions among neurons in the nervous system [2], and competitions among species in ecosystems [3]. When these interactions give rise to social dilemmas, cooperation, which entails a personal cost and benefits others, is supposed to be outperformed by defection which free-rides in terms of both individual rationality and Darwin's notion of'survival of the fittest' [4]. However, cooperation evolves and persists in nature [5; 6; 7; 8; 9; 10]. Network reciprocity is one of the fundamental explanations for the evolution of cooperation [11], where networks impose local interactions and enable cooperators to form clusters to avoid exploitations by defectors [8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. Most previous studies about higher-order interactions on networks focus on traditional pairwise network models, where nodes represent individuals and links interactions [18]. Following this framework, a widely adopted setting for capturing higher-order interactions is that each individual and all its neighbors form a group and interact together [12; 18; 19]. This means individuals interact with not only their neighbors but also their neighbors' neighbors. However, while changing behaviors, individuals can only imitate their neighbors rather than all interaction partners [12; 20], suggesting an incompatible definition of groups and the incapability of traditional networks to accurately describe higher-order interactions [16; 21]. Recent works point out that hypernetworks that use hyperlinks (i.e., sets of nodes, or generalized versions of 'links' that can connect more than two individuals) to describe groups are better and more natural network models to capture higher-order interactions than traditional networks [22; 23]. And it is shown that the presence of higher-order interactions, compared with pairwise interactions on traditional networks, has a fundamental impact on the evolution of cooperation [22]. Although hypernetworks provide a valid tool for studying the effects of higher-order interactions on the evolution of cooperation, current studies assume that these interactions exist permanently and never change. In reality, interactions among individuals build and dissolve over time, indicating that links only exist intermittently. For example, human interactions through digital or face-to-face communications usually last finitely in short durations [24] and ecological networks alter with the seasons [25; 26]. More importantly, it has been shown that such network temporality can significantly alter various dynamical processes on networks, such as the spreading of information [27], the evolution of cooperation [28], and controllability of systems [20]. 
Here, we systematically investigate the evolution of cooperation on temporal hypernetworks, where hypernetwork structures change over time. By combining numerical simulations and theoretical analysis on both synthetic and empirical temporal networks, we find that temporal hypernetworks with higher-order interactions and sparse components can strongly promote cooperation, and even outperform their static counterparts. Our results thus shed new light on the effect of temporal higher-order interactions on the evolution of cooperation. ## II Model Here, we model a temporal hypernetwork as a series of subhypernetworks. On each subhypernetwork, a hyperlink connects multiple individuals. As subhypernetworks change over time, hyperlinks can build or dissolve. We first construct empirical temporal hypernetworks using empirical datasets. The dataset can be modeled as a sequence of \(\mathcal{G}\) snapshots, where each contains all interactions during a non-overlapping time window \(\Delta t\) on a fixed population of \(N\) individuals (Fig. 1a). In each snapshot, every clique is represented by a hyperlink, in which individuals interact with each other (Fig. 1b). The snapshots within each time interval \(\tau\) are then aggregated, generating a sequence of \(\left[\mathcal{G}/(\tau/\Delta t)\right]\) subhypernetworks that form a temporal hypernetwork (Fig. 1c). When \(\tau=\mathcal{G}\Delta t\), we obtain the corresponding static hypernetwork (Fig. 1d). Moreover, we also construct synthetic temporal hypernetworks by using the Holme-Kim model (Holme and Kim, 2013), which generates a static network. To construct each snapshot for the synthetic temporal hypernetwork, the links of the static network are activated with a probability \(p\), and the resulting cliques are represented by hyperlinks. This process is repeated \(\mathcal{G}\) times to construct the \(\mathcal{G}\) snapshots. After that, a series of synthetic subhypernetworks is constructed, forming the synthetic temporal hypernetwork via the procedure introduced above for the empirical temporal hypernetworks. On temporal hypernetworks, individuals within the same hyperlink play the public goods game, where each cooperator (\(C\)) pays a cost \(c\), and each defector (\(D\)) pays nothing. The total costs are then collected and multiplied by a synergy factor \(r\) to generate payoffs that are evenly distributed to all individuals within the hyperlink. The payoff of each defector and cooperator is \[\pi_{D}(n_{C})=rcn_{C},\] and \[\pi_{C}(n_{C})=rcn_{C}-c,\] where \(n_{C}\) is the number of cooperators within the hyperlink. In each round, each individual \(i\) accumulates the payoff \(P_{i}\) from all games she joins, namely, all hyperlinks she belongs to on the subhypernetwork. When it comes to strategy updating, all individuals decide whether to change their strategies or not. Individual \(i\) selects a random group and compares her payoff with a random neighbor \(j\) in the group. Individual \(i\) imitates \(j\)'s strategy with probability given by the Fermi function \(1/\{1+\exp[-s(P_{j}-P_{i})]\}\), where \(s\) is the intensity of selection. Individuals play games with their neighbors for \(g\) rounds on every subhypernetwork before it changes to the next one.
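As an illustration of the dynamics just described, a minimal sketch of one elementary step (payoff accumulation within hyperlinks followed by the Fermi imitation rule) is given below. This is not the authors' simulation code; the toy subhypernetwork, the cost \(c=1\) and the other parameter values are assumptions made only for illustration.

```python
import math
import random

def payoffs(hyperlinks, strategies, r, c=1.0):
    """Accumulate public-goods payoffs; strategies[i] is 1 for a cooperator, 0 for a defector."""
    P = {i: 0.0 for i in strategies}
    for link in hyperlinks:                       # each hyperlink hosts one public goods game
        n_C = sum(strategies[i] for i in link)
        for i in link:                            # payoff rule used in the text
            P[i] += r * c * n_C - (c if strategies[i] else 0.0)
    return P

def fermi_update(i, hyperlinks, strategies, P, s=1.0):
    """i picks a random group it belongs to and a random neighbour j in it,
    then imitates j with probability 1/{1+exp[-s(P_j - P_i)]}."""
    groups = [g for g in hyperlinks if i in g]
    if not groups:
        return
    neighbours = [j for j in random.choice(groups) if j != i]
    if not neighbours:
        return
    j = random.choice(neighbours)
    if random.random() < 1.0 / (1.0 + math.exp(-s * (P[j] - P[i]))):
        strategies[i] = strategies[j]

# toy subhypernetwork with N = 4 individuals and two hyperlinks
links = [(0, 1, 2), (1, 2, 3)]
strat = {0: 1, 1: 0, 2: 1, 3: 0}
P = payoffs(links, strat, r=0.8)
for i in list(strat):
    fermi_update(i, links, strat, P)
print(strat)
```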
To tell whether the evolution of cooperation is promoted, we focus on the critical value \(r_{C}\), at which the frequency of cooperators starts to rise above zero. A smaller \(r_{C}\) means that cooperation emerges more easily, and the corresponding model settings (e.g., parameters, network structures) are said to better promote the evolution of cooperation. ## III Results ### Higher-order interactions and sparse components promote cooperation Figure 2a shows that when the time interval \(\tau\) of the subhypernetworks is constant, an increase in the time window \(\Delta t\) significantly promotes the frequency of the cooperators \(f_{C}\) and decreases the critical value \(r_{C}\). For a fixed \(\Delta t\), as shown in Fig. 2b, cooperation is facilitated when a subhypernetwork contains interactions within a moderate interval (i.e., moderate \(\tau\)). Intuitively, the window \(\Delta t\) and interval \(\tau\) shape the temporal hypernetwork series. Enlarging the window will allow more interactions to be encapsulated in the snapshots, making the sizes of hyperlinks larger and the degree (i.e., the number of hyperlinks each individual belongs to) higher. A large \(\tau\) can also increase the degree and the size of components on subhypernetworks, which affects the aggregation of cooperators. For an intuitive explanation, we consider a degree-homogeneous uniform temporal hypernetwork, in which each individual has the same degree and all hyperlinks have the same size \(n\) in every subhypernetwork. We theoretically approximate the critical value \(r_{C}\) for cooperators to emerge using the replicator equation, \[\dot{x}_{D}=(1-\delta)\sum_{i=0}^{n-1}\binom{n}{i}\left(1-x_{D}\right)^{i}x_{D}^{n-i}\frac{i(n-i)}{n(n-1)}f(\pi_{D}-\pi_{C})+\delta\sum_{i=0}^{n-1}\binom{n}{i}p^{i}(1-p)^{n-i}\frac{i(n-i)}{n(n-1)}f(\pi_{D}-\pi_{C}), \tag{1}\] where \(x_{D}\) is the frequency of defectors, and \(f(\pi_{D}-\pi_{C})=1/\{1+\exp[-s(\pi_{D}-\pi_{C})]\}-1/\{1+\exp[-s(\pi_{C}-\pi_{D})]\}\), capturing the net growth rate of cooperators. Parameter \(\delta\) is the similarity between consecutive subhypernetworks. As subhypernetworks switch every step, \(1-\delta\) of the hyperlinks are regenerated. \(p\) is the probability that an individual is a defector on the unchanged hyperlinks (see Supplemental Material for details). Solving \(\dot{x}_{D}=0\), we have \[r_{C}=\frac{1}{1+(n-1)(1-2q)\delta}, \tag{2}\] where \(q<1/2\) is a parameter related to the probability that neighbors are using different strategies during the evolution on the unchanged hyperlinks. As shown in Eq. (2), a larger group (hyperlink) size \(n\) reduces the critical value \(r_{C}\), and thus promotes cooperation.
Figure 1: The construction of temporal hypernetworks. **a**. Social interactions between 4 individuals are indicated by solid circles with different colors. Each individual is depicted by a line, over which the corresponding circles are connected at time \(t\), provided two players interact with each other during the time interval \((t-1,t]\). **b**. The snapshots are generated from all interactions during the window \(\Delta t\). Several individuals are grouped into a hyperlink if they interact with each other during the window \((t-\Delta t,t]\). **c**. The snapshots within each time interval \(\tau\) are then aggregated into subhypernetworks. For example, the green and yellow nodes interact on the first and second snapshots, thus there is a hyperlink between them with weight 2 on the first subhypernetwork. **d**. All snapshots are aggregated into the static hypernetwork.
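A quick numerical reading of Eq. (2) makes the dependence on the hyperlink size explicit; the values of \(q\) and \(\delta\) below are illustrative assumptions, not fitted values from the data.

```python
def r_c(n, q, delta):
    # critical synergy factor of Eq. (2)
    return 1.0 / (1.0 + (n - 1) * (1.0 - 2.0 * q) * delta)

for n in (2, 3, 5, 10):
    print(n, round(r_c(n, q=0.3, delta=0.8), 3))  # r_C decreases as n grows
```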
Figure 2: Higher-order interactions and sparse components promote cooperation. We plot the frequency of cooperators \(f_{C}\) as a function of the synergy factor \(r\) on empirical (**a, b**) and synthetic (**f, i**) temporal hypernetworks. For synthetic hypernetworks, we set the probability \(p\) for links to be activated to \(0.05\) (**f**) and \(0.6\) (**i**). Cooperation emerges at a smaller \(r_{C}\) for a larger \(\Delta t\) and intermediate \(\tau\). **c**. We give an example of the definition of connected density. For the two networks with 4 nodes and 2 links, the traditional density is \(\rho^{\prime}=1/3\). However, connected density focuses on the components. In the top network, there are two components with densities equal to 1. Then the connected density of the whole network is 1. The other network (bottom), with one component and one isolated node, has a connected density of \(1/2\). We present the average connected density \(\langle\rho\rangle\) as a function of the time interval \(\tau\) of empirical (**d**) and synthetic (**g, j**) hypernetworks. The unit of \(\tau\) is 1 min for panel d. For empirical and synthetic hypernetworks with \(p=0.05\), the connected density drops sharply and then increases slowly when \(\tau\) increases. However, it increases monotonically on synthetic hypernetworks with \(p=0.6\). We also plot the average hyperlink size \(\langle n\rangle\) as a function of the window \(\Delta t\) of the empirical hypernetwork (**e**). The size increases as \(\Delta t\) goes up. We also display distributions of the hyperlink size for synthetic hypernetworks (**f, k**). The synthetic hypernetworks with \(p=0.6\) have larger hyperlink sizes compared with the hypernetworks with \(p=0.05\). For the statistics, the last subhypernetwork is ignored if \(\tau\nmid G\Delta t\). The simulations are performed \(10^{3}\) times, and within each run the evolution is repeated \(10^{6}\) rounds. Every independent simulation starts with an equal number of cooperators and defectors. For all simulations, the round of interaction is \(g=100\). The population size is \(N=327\) for empirical hypernetworks and \(N=100\) for synthetic hypernetworks. We set \(\tau=5\)h for panel **a**, and \(\Delta t=0.1\)h for panel **b**.
Previous studies indicate that a sparser network leads to a more pronounced aggregation phenomenon, representing a smaller \(q\) [30; 31]. To better represent the connectivity between nodes, we use the concept of network density. First, we map the hypernetwork to an unweighted network. The traditional definition of network density is the ratio between the number of existing links and the number of possible links (i.e., \(N(N-1)/2\)). However, in a temporal hypernetwork, the subhypernetworks may not be connected, so that the dynamics occur solely within each component. Here, we introduce the concept of connected density of networks to better capture the internal structure. For the \(i\)th component with \(N_{i}\) nodes and \(L_{i}\) links, the density of the component is \[\rho_{i}=\frac{2L_{i}}{N_{i}(N_{i}-1)}, \tag{3}\] which spans from 0 to 1. \(\rho_{i}\) of an isolated node \(i\) is defined as 0. Then, the connected density of this subnetwork is defined as the corresponding average over each component, namely, \[\rho=\sum_{i}\frac{N_{i}}{N}\rho_{i}. \tag{4}\] For the temporal hypernetwork or traditional temporal network, the average density \(\langle\rho\rangle\) is defined over each subhypernetwork.
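A minimal sketch of how the connected density of Eqs. (3) and (4) could be computed for one projected, unweighted subhypernetwork is given below (Python with networkx, added here for illustration); it reproduces the two four-node examples discussed for Fig. 2c.

```python
import networkx as nx

def connected_density(G):
    """Connected density of Eqs. (3)-(4): component densities weighted by N_i / N."""
    N = G.number_of_nodes()
    rho = 0.0
    for comp in nx.connected_components(G):
        n_i = len(comp)
        if n_i < 2:
            continue                              # an isolated node has rho_i = 0 by definition
        l_i = G.subgraph(comp).number_of_edges()
        rho += (n_i / N) * (2 * l_i / (n_i * (n_i - 1)))
    return rho

# the two 4-node, 2-link examples of Fig. 2c: two disjoint links vs. a 2-link path plus an isolated node
G_top = nx.Graph([(0, 1), (2, 3)])
G_bottom = nx.Graph([(0, 1), (1, 2)]); G_bottom.add_node(3)
print(connected_density(G_top), connected_density(G_bottom))  # 1.0 and 0.5
```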
As Fig. 2c shows, the purpose of this definition is to focus only on the connectivity within components, which sets the extent to which the dynamics can ripple. Networks that are extremely sparse under the traditional definition can therefore still be dense from the perspective of their connected density. We now explain our results by the connected density and hyperlink size induced by \(\tau\) and \(\Delta t\). As the time interval \(\tau\) of the subhypernetwork gets longer, the average density of the temporal hypernetwork decreases rapidly and then gradually increases. The connected density reaches its minimum at a moderate \(\tau\), and the results are displayed in Fig. 2d. That is, for a moderate \(\tau\), the hypernetwork is sparser, which can help cooperators to aggregate. Such a phenomenon is effectively a phase-transition-like process. If \(\tau\) is extremely small, an increase in \(\tau\) significantly increases the size of the components compared to the degree of connectivity between nodes within the components. However, once the entire network is close to fully connected, continuing to increase \(\tau\) will only enlarge the number of links, thus making the connected density larger. Meanwhile, we find that increasing \(\Delta t\) leads to an increase in hyperlink size (Fig. 2e). Eq. (2) shows that increasing the size of hyperlinks will reduce the critical value \(r_{C}\), which is consistent with a previous study [22]. To verify the generality of our conclusions, we perform additional simulations on synthetic temporal hypernetworks with \(p=0.05\) (Disconnected) and 0.6 (Connected). The temporal hypernetwork with \(p=0.05\) is disconnected for almost all individuals and has similar properties and performance to the empirical hypernetwork. Cooperation is promoted at a moderate \(\tau\) (i.e., sparser hypernetworks), as the connected density decreases rapidly and then turns upward (Figs. 2f, g). The temporal hypernetwork with \(p=0.6\), in which all snapshots are fully connected, verifies our conclusions from another perspective. As \(\tau\) goes up, the subhypernetworks become denser (Fig. 2j), which inhibits cooperation (Fig. 2i). However, the size of hyperlinks is larger for \(p=0.6\) (Figs. 2f, j), making it easier for cooperation to emerge on temporal hypernetworks (Figs. 2i, k). ### Temporal hypernetworks can promote cooperation Compared to static hypernetworks, temporal hypernetworks better model empirical interactions by incorporating the timing of interactions. It is natural to ask what impact temporality will have on the evolution of cooperation. According to the above results, for a proper time interval \(\tau\) the connected density is lower than that of the static hypernetwork, which provides an opportunity for the temporal hypernetwork to promote cooperation. The number of interaction rounds \(g\) over each subhypernetwork determines the similarity \(\delta\) of the temporal hypernetworks. If \(g\) is larger, the similarity is higher, since subhypernetworks remain unchanged for more time steps. Thus, from Eq. (2), we know that increasing \(g\) will enhance cooperation. Numerical results confirm this analysis (Fig. 3a), and we find that, for a series of temporal hypernetworks generated by appropriate \(\Delta t\) and \(\tau\) (sparser subhypernetworks), if \(g\) is large, temporal hypernetworks facilitate cooperation more than their static counterparts. As Fig. 3a shows, for \(\Delta t=0.1\)h and \(\tau=1\)h, such a transition occurs between \(g=100\) and \(g=5000\).
We further show that the magnitude of the promotion decreases with the increase of \(g\) (Fig. 3b). As \(g\) increases, cooperation is greatly facilitated, and the critical value \(r_{C}\) drops sharply. But when \(g\) continues to increase, the decrease in the critical value slows down. This can be explained by the fact that, as \(g\) goes up, the rise of the similarity \(\delta\) of the temporal hypernetworks gradually slows down (proportional to \(1/g\)).
Figure 3: Temporal hypernetworks can promote cooperation compared to their static counterparts. We plot the frequency of cooperation \(f_{C}\) as a function of the synergy factor \(r\) on empirical (**a**) and synthetic (**c, d**) hypernetworks. For synthetic hypernetworks, we set the probability \(p\) for links to be activated to 0.6 (Connected, **c**) and 0.05 (Disconnected, **d**). The frequency of cooperators is promoted as the round of interaction \(g\) increases, and finally exceeds that of the static counterparts. **b.** We present the critical value of the synergy factor as a function of \(g\). The magnitude of the promotion of cooperation (i.e., the reduction of \(r_{C}\)) decreases. For panel **a**, \(\Delta t=0.1\)h and \(\tau=1\)h; \(\tau=1\) for panel **c** and \(\tau=10\) for panel **d**. Other parameters are consistent with those used in Fig. 2.
The same situation is observed on the synthetic temporal hypernetworks (Fig. 3c, d). When \(g=100\), all temporal hypernetworks outperform their static counterparts. But by further increasing \(g\) from \(100\) to \(1000\), the additional promotion is negligible. Intuitively, once the structure is defined, an evolutionary process that is fast relative to the structural changes, in other words a long and fixed period of interaction (i.e., a larger \(g\)), helps the spread of cooperation. Such a phenomenon is directly linked to the promotion of aggregations. When the interaction time \(g\) is longer, it allows the aggregation of cooperators to be fully established. On the other hand, rapid changes of subhypernetworks (i.e., smaller \(g\)) can disrupt newly formed aggregations, thus discouraging cooperation. ### Advantage of higher-order network representation After exploring the evolution of cooperation on temporal hypernetworks, we further consider the advantage of the higher-order network representation in promoting cooperation. Specifically, we compare higher-order interactions on hypernetworks with classical pairwise interactions on traditional networks. For pairwise interactions, we construct the traditional temporal networks under the same window \(\Delta t=0.1\)h and time interval \(\tau=1\)h through empirical data sets, where the cliques are no longer considered as hyperlinks [28]. The static network is undirected and unweighted, in which individuals are linked once they have interacted with each other. The game is the same as that on hypernetworks, but the participants in each game are the two individuals on each link. Our findings indicate that representing group interactions as a set of pairwise interactions may lead to an underestimate of the potential for local interactions to promote cooperation. We find that hypernetworks can significantly facilitate cooperation compared to traditional networks based on pairwise interactions for different settings of \(g\) (Fig. 4). Cooperation emerges with a smaller synergy factor \(r_{C}\) in both temporal and static hypernetworks.
The reason may be that pairwise links on traditional networks are smaller in size while, at the same time, each individual has a larger number of neighbors (namely, a larger degree) compared with hypernetworks. Both factors may inhibit the evolution of cooperation, as we have analyzed. ## IV Discussion By studying the evolution of cooperation on temporal hypernetworks, we highlight the importance of temporal higher-order interactions for the fate of cooperators. Since higher-order interactions can connect multiple individuals at the same time, they provide a reasonable way to represent group interactions. We demonstrate that group interactions play a crucial role in facilitating cooperation compared to traditional pairwise interactions. Also, higher-order interactions embedded in hypernetworks are no longer restricted to the previous assumptions on traditional networks, where the group could only be defined over the focal individual and its first-order neighbors. Indeed, group interactions are more complicated than pairwise ones [32]. In microbial populations, the outcomes of three-species interactions cannot be fully predicted by pairwise outcomes [33]. Such complexity, including higher-order and time-varying interactions, comes along with group interactions. Note that the proposed framework of evolutionary dynamics on temporal hypernetworks applies not only to public goods games but also to multi-strategy games [34; 35; 36] and multi-player games with nonlinear and general payoff functions [37; 38; 39; 40; 41; 19], which are important research topics for the future.
2305.18301
Cetaev condition for nonlinear nonholonomic systems and homogeneous constraints
We first present a way to formulate the equations of motion for a nonholonomic system with nonlinear constraints with respect to the velocities. The formulation is based on the Cetaev condition which aims to extend the practical method of virtual displacements from the holonomic case to the nonlinear nonholonomic one. The condition may appear in a certain sense artificial and motivated only to coherently generalize that concerning the holonomic case. In the second part we show that for a specific category of nonholonomic constraints (homogeneous functions with respect to the generalized velocities) the Cetaev condition reveals the same physical meaning that emerges in systems with holonomic constraints. In particular the aspect of the mechanical energy associable to the system is analysed.
Federico Talamucci
2023-05-10T08:20:34Z
http://arxiv.org/abs/2305.18301v1
# Cetaev condition for nonlinear nonholonomic systems and homogeneous constraints ###### Abstract We first present a way to formulate the equations of motion for a nonholonomic system with nonlinear constraints with respect to the velocities. The formulation is based on the Cetaev condition which aims to extend the practical method of virtual displacements from the holonomic case to the nonlinear nonholonomic one. The condition may appear in a certain sense artificial and motivated only to coherently generalize that concerning the holonomic case. In the second part we show that for a specific category of nonholonomic constraints (homogeneous functions with respect to the generalized velocities) the Cetaev condition reveals the same physical meaning that emerges in systems with holonomic constraints. In particular the aspect of the mechanical energy associable to the system is analysed. ## 1 Introduction The vast and growing interest in systems with kinematic constraints is motivated by a series of significant applications, among which we mention the motion of vehicles. From a modelling point of view it is a question of extending the consolidated procedure inherent to holonomic systems (i.e., those subject to geometric constraints which concern the spatial coordinates only) by admitting the presence of restrictions on the velocities (kinematic constraints) and in some cases even more complex limitations. The transition from the linear case (in some way similar to the holonomic case), namely the linear dependence of the constraint equations on the velocities, to that of nonlinear kinematic constraints considerably complicates the treatment from a formal point of view. If on the one hand the treatment of linear nonholonomic systems dates back to over a century ago (for an exhaustive historical review, see the fundamental text [12] and the more recent review [7]), on the other the formal study of the nonlinear ones is recent and not definitive: even the question of the actual implementation of such constraints, starting from the Appell-Hamel example [1], is debated ([12], [19]). In a very generic and not drastic way we can divide the mathematical approaches to nonholonomic systems into three categories: a first method that is certainly expanding and relies on high-profile publications ([2], [8], [11], [10], [14], [17]) is based on differential geometry and extends the basic theory of holonomic systems to more complex concepts (jet manifolds, jet bundles). A second method takes advantage of the possibility of deriving the equations of motion from a variational principle ([15], [13]), in analogy with the Hamilton principle in the holonomic case; as far as we know, the question is still open, and more than one publication warns about some critical aspects of this procedure ([3], [6]). The third approach we mention, which is the one we will develop in the paper, takes into account the virtual displacements associable to the constrained system ([9], and again [13] for a comprehensive analysis of the various types of displacements), which, just as the directions tangential to the configuration space do for holonomic systems, establish a way to write a set of independent equations. A nonholonomic system with linear kinematic constraints can be treated essentially as a system with geometric constraints, by considering a linear subspace of the admitted directions.
However, for nonlinear kinematic constraints a plain geometric description of the virtual displacements does not exist: a frequently accepted hypothesis for constituting the set of virtual displacements is the so-called Cetaev condition ([4], [16]), which is an accepted axiom leading to the expected equations of motion but, as far as we can understand, does not have a real justification from the laws of mechanics ([9]). The first part of the work (Section 1) is dedicated precisely to the modelling formulation of the Cetaev condition, from which the equations of motion for nonlinear nonholonomic constrained systems are derived. A proper combination of the equations leads to information about the mechanical energy of the system (just as occurs in the holonomic case), and the energy equation is presented in the same Section. In the second part (Section 2) the intention is to highlight how the validity of a property of the constraint equations (namely (24)) entails the removal of discrepancies that emerge between the mathematical formulation of the Cetaev condition and the physical reading associated with it: actually, the virtual displacements prescribed by the condition play in this case the same role as the velocities consistent with the instantaneous configurations of the system, just as occurs for holonomic systems. From the mathematical point of view, it is shown that the above "conciliant" condition is satisfied whenever the constraints are homogeneous equations with respect to the generalized velocities; this is the central point of the work. Even if the few examples described here do not cover the multiplicity of nonholonomic models, it is true that a large part of them are implemented by conditions showing homogeneity in the kinetic variables: this motivates the final consideration of Section 3, where the question is reversed, that is whether the natural condition (24) is an exclusive prerogative of those systems where the nonholonomic constraints can be formulated via homogeneous equations. ### The equations of motion Let us consider a mechanical system whose configurations are determined by the \(n\) Lagrangian coordinates \(q_{1}\),..., \(q_{n}\); we recall the Lagrangian equations of motion \[\frac{d}{dt}\frac{\partial T}{\partial\dot{q}_{j}}-\frac{\partial T}{\partial q_{j}}=\mathcal{F}^{(j)}+\mathcal{R}^{(j)}\qquad j=1,\ldots,n \tag{1}\] where \(T(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{n},t)\) is the kinetic energy, \(\mathcal{F}^{(j)}\), \(j=1,\ldots,n\), are the generalized forces acting on the system and \(\mathcal{R}^{(j)}\) are the constraint forces due to the action of the \(k<n\) kinematic constraints \[\phi_{\nu}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{n},t)=0,\qquad\nu=1,\ldots,k \tag{2}\] The constraints are independent with respect to the kinematic variables, in the sense that the rank of the following \(k\times n\) matrix is full: \[\text{rank}\left(\frac{\partial\phi_{\nu}}{\partial\dot{q}_{j}}\right)_{\substack{\nu=1,\ldots,k\\ j=1,\ldots,n}}=k \tag{3}\] We can assume, since it is marginal in this discussion, that the active forces admit a potential \(U\): \[\mathcal{F}^{(j)}=\frac{\partial}{\partial q_{j}}U(q_{1},\ldots,q_{n},t),\quad j=1,\ldots,n \tag{4}\] Hence, introducing the Lagrangian \(\mathcal{L}=T+U\), the equations (1) take the form \[\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{q}_{j}}-\frac{\partial\mathcal{L}}{\partial q_{j}}=\mathcal{R}^{(j)}\qquad j=1,\ldots,n. \tag{5}\]
The system of \(n+k\) equations (5), (2) contains the \(2n\) unknown functions \(q_{j}\) and \(\mathcal{R}^{(j)}\), \(j=1,\ldots,n\). At this point, we outline two different procedures to deal with the system: * (i) to formulate a hypothesis for the constraint forces based on the method of Lagrangian multipliers, * (ii) to identify the set of virtual displacements similarly to the case of holonomic constraints, that is to provide the admitted variations \(\delta q_{1}\),..., \(\delta q_{n}\) of the coordinates consistent with the restrictions (2) (virtual variations). In the first case one sets \[{\cal R}^{(j)}=\sum_{\nu=1}^{k}\lambda_{\nu}\frac{\partial\phi_{\nu}}{\partial\dot{q}_{j}},\qquad j=1,\ldots,n \tag{6}\] where \(\lambda_{\nu}\), \(\nu=1,\ldots,k\), are unknown coefficients; in this way the number of unknown quantities equals the number of equations. Concerning \((ii)\), a generally accepted proposal is to define the virtual displacements as those which verify \[\sum_{i=1}^{n}\frac{\partial\phi_{\nu}}{\partial\dot{q}_{i}}\delta q_{i}=0,\qquad\nu=1,\ldots,k \tag{7}\] known as the Cetaev condition; the virtual work of the constraint forces along the displacements verifying (7) is assumed to be null: \[\sum_{i=1}^{n}{\cal R}^{(i)}\delta q_{i}=0.\] The two approaches have in common the fact that the virtual work of the forces (6) is zero whenever (7) holds: \[\sum_{i=1}^{n}{\cal R}^{(i)}\delta q_{i}=\sum_{\nu=1}^{k}\sum_{i=1}^{n}\lambda_{\nu}\frac{\partial\phi_{\nu}}{\partial\dot{q}_{i}}\delta q_{i}=0 \tag{8}\] Nevertheless, the starting point (7) yields a useful method which has the advantage of not making the multipliers \(\lambda_{\nu}\) appear and of selecting a set of independent kinetic variables: actually, we see that condition (3) makes it possible to write conditions (2) explicitly in the following way (without loss of generality, up to a renumbering of the variables): \[\dot{q}_{m+\nu}=\alpha_{\nu}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},t)\qquad\nu=1,\ldots,k \tag{9}\] with \(m=n-k\).
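As a simple illustration (an example added here for concreteness, not one of the models discussed later in the paper), consider a single nonlinear constraint with \(n=3\), \(k=1\), hence \(m=2\): \[\phi(\dot{q}_{1},\dot{q}_{2},\dot{q}_{3})=\dot{q}_{3}-\dot{q}_{1}\dot{q}_{2}=0\qquad\Longrightarrow\qquad\dot{q}_{3}=\alpha_{1}(\dot{q}_{1},\dot{q}_{2})=\dot{q}_{1}\dot{q}_{2}.\] The Cetaev condition (7) prescribes \[-\dot{q}_{2}\,\delta q_{1}-\dot{q}_{1}\,\delta q_{2}+\delta q_{3}=0,\qquad\mbox{i.e.}\qquad\delta q_{3}=\frac{\partial\alpha_{1}}{\partial\dot{q}_{1}}\,\delta q_{1}+\frac{\partial\alpha_{1}}{\partial\dot{q}_{2}}\,\delta q_{2},\] so that the dependent variation is expressed through the independent ones, as formalized in (10) just below.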
The parameters \(\dot{q}_{r}\), \(r=1,\ldots,m\), play the role of independent kinetic variables, and correspondingly the virtual displacements \(\delta q_{1}\), \(\ldots\), \(\delta q_{m}\) can be considered as independent; in turn, (9) allows one to express the dependent displacements as \[\delta q_{m+\nu}=\sum_{r=1}^{m}\frac{\partial\alpha_{\nu}}{\partial\dot{q}_{r}}\delta q_{r},\qquad\nu=1,\ldots,k \tag{10}\] To verify it, we use the role of \(\dot{q}_{r}\), \(r=1,\ldots,m\), as independent variables, differentiate (2) and write \[\frac{\partial\phi_{\nu}}{\partial\dot{q}_{r}}+\sum_{\mu=1}^{k}\frac{\partial\phi_{\nu}}{\partial\dot{q}_{m+\mu}}\frac{\partial\alpha_{\mu}}{\partial\dot{q}_{r}}=0\quad\Longrightarrow\quad\sum_{r=1}^{m}\frac{\partial\phi_{\nu}}{\partial\dot{q}_{r}}\delta q_{r}+\sum_{\mu=1}^{k}\frac{\partial\phi_{\nu}}{\partial\dot{q}_{m+\mu}}\underbrace{\sum_{r=1}^{m}\frac{\partial\alpha_{\mu}}{\partial\dot{q}_{r}}\delta q_{r}}_{\delta q_{m+\mu}}=0,\quad\nu=1,\ldots,k.\] **Remark 1**: _Having in mind a comparison with the holonomic case and referring to the case of \(N\) vectors \({\bf r}_{i}(q_{1},\ldots,q_{n},t)\), \(i=1,\ldots,N\), locating the points of the system, assumption (7) means that the customary condition_ \[\delta{\bf r}_{i}=\sum_{j=1}^{n}\frac{\partial{\bf r}_{i}}{\partial q_{j}}\delta q_{j},\quad i=1,\ldots,N\] _holding for a holonomic system is replaced in the case of kinematic constraints (2) by_ \[\delta{\bf r}_{i}=\sum_{j=1}^{n}\frac{\partial\dot{{\bf r}}_{i}}{\partial\dot{q}_{j}}\delta q_{j},\quad i=1,\ldots,N. \tag{11}\] **Remark 2**: _For linear kinematic constraints_ \[\sum_{i=1}^{n}\sigma_{\nu,i}(q_{1},\ldots,q_{n},t)\dot{q}_{i}+\zeta_{\nu}(q_{1},\ldots,q_{n},t)=0,\quad\nu=1,\ldots,k\] _admitting the explicit expressions_ \[\dot{q}_{m+\nu}=\sum_{j=1}^{m}\alpha_{\nu,j}(q_{1},\ldots,q_{n},t)\dot{q}_{j}+\beta_{\nu}(q_{1},\ldots,q_{n},t),\quad\nu=1,\ldots,k \tag{12}\] _the condition (10) reduces to_ \[\delta q_{m+\nu}=\sum_{r=1}^{m}\alpha_{\nu,r}(q_{1},\ldots,q_{n},t)\delta q_{r},\qquad\nu=1,\ldots,k\] _and corresponds to the usual assumption on virtual variations adopted in texts dealing with nonholonomic theory for linear kinematic constraints ([12])._ The approach (10) requires the Lagrangian function \(\mathcal{L}\) to be expressed coherently by means of only the independent kinetic variables: \[\mathcal{L}^{*}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},t)=\mathcal{L}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},\alpha_{1}(\cdot),\ldots,\alpha_{k}(\cdot),t) \tag{13}\] where \(\alpha_{\nu}(\cdot)\) stands for \(\alpha_{\nu}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},t)\), \(\nu=1,\ldots,k\).
In terms of \(\mathcal{L}^{*}\) the equations of motion calculated starting from (1) and considered along the virtual displacements (7), (10) take the form (see [18] for details) \[\frac{d}{dt}\frac{\partial\mathcal{L}^{*}}{\partial\dot{q}_{r}}-\frac{\partial\mathcal{L}^{*}}{\partial q_{r}}-\sum_{\nu=1}^{k}\frac{\partial\mathcal{L}^{*}}{\partial q_{m+\nu}}\frac{\partial\alpha_{\nu}}{\partial\dot{q}_{r}}-\sum_{\nu=1}^{k}B_{r}^{\nu}\frac{\partial T}{\partial\dot{q}_{m+\nu}}=0 \tag{14}\] for \(r=1,\ldots,m\), where \[B_{r}^{\nu}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},t)=\frac{d}{dt}\left(\frac{\partial\alpha_{\nu}}{\partial\dot{q}_{r}}\right)-\frac{\partial\alpha_{\nu}}{\partial q_{r}}-\sum_{\mu=1}^{k}\frac{\partial\alpha_{\mu}}{\partial\dot{q}_{r}}\frac{\partial\alpha_{\nu}}{\partial q_{m+\mu}} \tag{15}\] for \(r=1,\ldots,m\) and \(\nu=1,\ldots,k\). They represent an alternative formal way (without multipliers) with respect to the set (5), (6), (2). The linear case (12) refers to the Voronec equations for nonholonomic systems with linear kinematic constraints ([12]). The unknown functions in (14) are \(q_{1}\),..., \(q_{n}\), but only the derivatives \(\dot{q}_{r}\), \(\ddot{q}_{r}\), \(r=1,\ldots,m\), are present, owing to (9), once the variables \((\dot{q}_{m+1},\ldots,\dot{q}_{n})\) that appear in \(\frac{\partial T}{\partial\dot{q}_{m+\nu}}\) of (14) have been expressed in terms of \((q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},t)\). Evidently, the case of merely holonomic constraints, which corresponds to suppressing all the terms containing the functions \(\alpha_{\nu}\), leads back to the ordinary Euler-Lagrange equations of motion for \(\mathcal{L}^{*}=\mathcal{L}\). **Remark 3**: _The coefficients in (14) can be expressed in terms of the original functions \(\phi_{\nu}\) of (2) by making use of the relations_ \[\frac{\partial\phi_{\nu}}{\partial q_{i}}+\sum_{\mu=1}^{k}\frac{\partial\phi_{\nu}}{\partial\dot{q}_{m+\mu}}\frac{\partial\alpha_{\mu}}{\partial q_{i}}=0,\quad\frac{\partial\phi_{\nu}}{\partial\dot{q}_{r}}+\sum_{\mu=1}^{k}\frac{\partial\phi_{\nu}}{\partial\dot{q}_{m+\mu}}\frac{\partial\alpha_{\mu}}{\partial\dot{q}_{r}}=0\] _holding for any \(\nu=1,\ldots,k\), \(i=1,\ldots,n\), \(r=1,\ldots,m\). This means that the structure of the equations does not depend on the choice (9) made to render (2) explicit._ ### An equation for the energy of the system We recall the Hamiltonian function \[\mathcal{E}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{n},t)=\sum_{i=1}^{n}\dot{q}_{i}\frac{\partial\mathcal{L}}{\partial\dot{q}_{i}}-\mathcal{L} \tag{16}\] and we ascribe to it the role of the energy of the system. At the same time, we are interested in the properties of the energy function formulated by means of only the independent kinetic variables, that is \[\mathcal{E}^{*}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},t)=\sum_{r=1}^{m}\dot{q}_{r}\frac{\partial\mathcal{L}^{*}}{\partial\dot{q}_{r}}-\mathcal{L}^{*}. \tag{17}\] We can say that (16) is the energy that is naturally associated to (5) with the Lagrangian \(\mathcal{L}\), while (17) is spontaneously connected to (14), where only the independent velocities are present. Although in a much more complex formal context of geometric type, the two notions of energy can be traced back to those present in [5].
Even if we express (16) and (17) by means of the same set of variables, replacing in the first one the dependent velocities \(\dot{q}_{m+\nu}\), \(\nu=1,\ldots,k\), by virtue of (9), we do not find the same function: actually, it can be seen without difficulty that the following relation holds: \[{\cal E}^{*}={\cal E}+\sum_{\nu=1}^{k}(\overline{\alpha}_{\nu}-\alpha_{\nu})\frac{\partial{\cal L}}{\partial\dot{q}_{m+\nu}} \tag{18}\] where we set \[\overline{\alpha}_{\nu}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},t)=\sum_{r=1}^{m}\frac{\partial\alpha_{\nu}}{\partial\dot{q}_{r}}\dot{q}_{r},\quad\nu=1,\ldots,k \tag{19}\] At the same time, it is possible to obtain the two equations that \({\cal E}\) and \({\cal E}^{*}\) verify, coming from (5) and (14), by means of the standard procedure of multiplying by the generalized velocities, summing up and rearranging the expressions: \[\frac{d{\cal E}}{dt}=-\frac{\partial{\cal L}}{\partial t}+\sum_{\nu=1}^{k}\sum_{i=1}^{n}\lambda_{\nu}\frac{\partial\phi_{\nu}}{\partial\dot{q}_{i}}\dot{q}_{i}, \tag{20}\] \[\frac{d{\cal E}^{*}}{dt}=-\frac{\partial{\cal L}^{*}}{\partial t}+\sum_{\nu=1}^{k}(\overline{\alpha}_{\nu}-\alpha_{\nu})\frac{\partial{\cal L}^{*}}{\partial q_{m+\nu}}+\sum_{\nu=1}^{k}\overline{B}_{\nu}\frac{\partial T}{\partial\dot{q}_{m+\nu}} \tag{21}\] where \(\overline{\alpha}_{\nu}\) is defined in (19) and (see also (15)) \[\overline{B}_{\nu}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},t)=\sum_{r=1}^{m}B_{r}^{\nu}\dot{q}_{r}=\frac{d}{dt}(\overline{\alpha}_{\nu}-\alpha_{\nu})-\sum_{\mu=1}^{k}\frac{\partial\alpha_{\nu}}{\partial q_{m+\mu}}(\overline{\alpha}_{\mu}-\alpha_{\mu})+\frac{\partial\alpha_{\nu}}{\partial t},\qquad\nu=1,\ldots,k. \tag{22}\] Evidently, if the constraints (2) are absent (holonomic case), both (20) and (21) reduce to the standard energy balance for \({\cal L}^{*}={\cal L}\): \[\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{q}_{i}}-\frac{\partial{\cal L}}{\partial q_{i}}=0\ \ \Rightarrow\ \ \frac{d}{dt}\left(\sum_{i=1}^{n}\dot{q}_{i}\frac{\partial{\cal L}}{\partial\dot{q}_{i}}-{\cal L}\right)=-\frac{\partial{\cal L}}{\partial t} \tag{23}\] ## 2 The case \(\overline{\alpha}_{\nu}=\alpha_{\nu}\) As is expected from the governing equations introduced above, the special situation when the functions in (9) verify the condition \[\overline{\alpha}_{\nu}=\alpha_{\nu}\quad\mbox{for any}\quad\nu=1,\ldots,k \tag{24}\] where \(\overline{\alpha}_{\nu}\) is defined in (19), deserves to be analyzed. Many of the nonholonomic models studied in the literature, starting with the first examples, fall into the category (24): for instance, rigid bodies rolling without sliding on a plane or on a surface, knife edges or sleighs sliding on a horizontal plane, and some joints simulating the basic functioning of a vehicle are formulated by linear homogeneous constraints of the type \[\sum_{i=1}^{n}\sigma_{\nu,i}(q_{1},\ldots,q_{n},t)\dot{q}_{i}=0\qquad\nu=1,\ldots,k \tag{25}\] which verify (24). Again, the kinematic constraints with homogeneous quadratic functions \[\sum_{i,j=1}^{n}a_{i,j}^{(\nu)}(q_{1},\ldots,q_{n},t)\dot{q}_{i}\dot{q}_{j}=0,\qquad\nu=1,\ldots,k \tag{26}\] which include the conditions of parallel velocities (\(\dot{P}_{1}\wedge\dot{P}_{2}={\bf 0}\)), orthogonal velocities (\(\dot{P}_{1}\cdot\dot{P}_{2}=0\)) or velocities of the same length (\(|\dot{P}_{1}|=|\dot{P}_{2}|\)), fulfill condition (24).
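As a quick check (the computation is not spelled out in this form in the text), the linear homogeneous case (25) can be verified directly: writing the constraints in the explicit form (12) with \(\beta_{\nu}=0\), \[\alpha_{\nu}=\sum_{r=1}^{m}\alpha_{\nu,r}(q_{1},\ldots,q_{n},t)\,\dot{q}_{r}\qquad\Longrightarrow\qquad\overline{\alpha}_{\nu}=\sum_{r=1}^{m}\frac{\partial\alpha_{\nu}}{\partial\dot{q}_{r}}\,\dot{q}_{r}=\sum_{r=1}^{m}\alpha_{\nu,r}\,\dot{q}_{r}=\alpha_{\nu},\] so that (24) holds; an analogous computation, based on Euler's theorem for homogeneous functions, covers the quadratic case (26).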
**Remark 4**: _Simple examples of constraints that do not verify (24) are affine nonholonomic constraints of degree \(p\), with \(p\) a positive integer:_ \[\sum_{j=1}^{n}\sigma_{\nu,j}(q_{1},\ldots,q_{n},t)\dot{q}_{j}^{p}+\zeta_{\nu}(q_{1},\ldots,q_{n},t)=0,\quad\nu=1,\ldots,k \tag{27}\] _The constraint on the magnitude of the velocity \(|\dot{P}|=C(t)\), with \(C(t)\) a given nonnegative function, is part of (27) with \(p=2\); for \(p=1\) we have a linear affine constraint. The explicit form (9) is of the type_ \[\alpha_{\nu}=\dot{q}_{m+\nu}=(\pm 1)^{p+1}\left(\sum_{j=1}^{m}\alpha_{\nu,j}(q_{1},\ldots,q_{n},t)\dot{q}_{j}^{p}+\beta_{\nu}(q_{1},\ldots,q_{n},t)\right)^{1/p},\qquad\nu=1,\ldots,k \tag{28}\] _for suitable coefficients \(\alpha_{\nu,j}\) and \(\beta_{\nu}\). The function (19) can be put in the form_ \[\overline{\alpha}_{\nu}=\alpha_{\nu}\frac{\sum\limits_{i=1}^{m}\alpha_{\nu,i}\dot{q}_{i}^{p}}{\sum\limits_{j=1}^{m}\alpha_{\nu,j}\dot{q}_{j}^{p}+\beta_{\nu}}\qquad\nu=1,\ldots,k\] _so that (24) is valid if and only if \(\beta_{\nu}=0\), which means that the constraints are homogeneous._ Focusing now on the class of constraints (24), at least three essential properties have to be pointed out, whenever (24) holds: 1. the virtual displacements (11), which may appear somehow artificial in their definition, reveal the physical meaning of being aligned with the virtual velocities, in the following sense: formally referring to the \(N\)-point system \({\bf r}_{i}(q_{1},\ldots,q_{n},t)\) as in Remark 1, the velocity \(\dot{\bf r}_{i}=\sum\limits_{j=1}^{n}\frac{\partial{\bf r}_{i}}{\partial q_{j}}\dot{q}_{j}+\frac{\partial{\bf r}_{i}}{\partial t}\) makes us consider the term \(\widehat{\bf r}_{i}=\sum\limits_{j=1}^{n}\frac{\partial{\bf r}_{i}}{\partial q_{j}}\dot{q}_{j}\) as the component consistent with the blocked configuration (virtual velocity). In the presence of (9) one has \[\widehat{\bf r}_{i}=\sum\limits_{r=1}^{m}\frac{\partial{\bf r}_{i}}{\partial q_{r}}\dot{q}_{r}+\sum\limits_{\nu=1}^{k}\frac{\partial{\bf r}_{i}}{\partial q_{m+\nu}}\alpha_{\nu}=\sum\limits_{r=1}^{m}\left(\frac{\partial{\bf r}_{i}}{\partial q_{r}}+\sum\limits_{\nu=1}^{k}\frac{\partial{\bf r}_{i}}{\partial q_{m+\nu}}\frac{\partial\alpha_{\nu}}{\partial\dot{q}_{r}}\right)\dot{q}_{r},\qquad i=1,\ldots,N\] (29) where the last equality is due to (24). On the other hand, the calculation of (11) leads to \[\delta{\bf r}_{i}=\sum\limits_{j=1}^{n}\frac{\partial\dot{\bf r}_{i}}{\partial\dot{q}_{j}}\delta q_{j}=\sum\limits_{r=1}^{m}\left(\frac{\partial{\bf r}_{i}}{\partial q_{r}}+\sum\limits_{\nu=1}^{k}\frac{\partial{\bf r}_{i}}{\partial q_{m+\nu}}\frac{\partial\alpha_{\nu}}{\partial\dot{q}_{r}}\right)\delta q_{r},\quad i=1,\ldots,N\] where (10) has been taken into account. Hence, we see that, apart from the formal appearance through \(\delta q_{r}\) or \(\dot{q}_{r}\), the two vectors display the same direction. 2. At the same time, the vanishing \(\sum\limits_{i=1}^{n}{\cal R}^{(i)}\delta q_{i}=0\) (see (8)) does actually represent the absence of work along the generalized velocities, \(\sum\limits_{i=1}^{n}{\cal R}^{(i)}\dot{q}_{i}=0\): a way to check this goes through the relation \[\sum\limits_{i=1}^{n}{\cal R}^{(i)}\dot{q}_{i}=\sum\limits_{\nu=1}^{k}(\alpha_{\nu}-\overline{\alpha}_{\nu})\left(\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{q}_{m+\nu}}-\frac{\partial{\cal L}}{\partial q_{m+\nu}}\right)\] which can be deduced from (5). 3.
The relation (18) shows that (16) and (17) overlap, that is, calculating the energy by considering the Lagrangian function with all the velocities \(\dot{q}_{i}\), \(i=1,\ldots,n\), or the restricted function (13) of only the independent velocities \(\dot{q}_{r}\), \(r=1,\ldots,m\), leads to the same result. 4. The balance equation (21) reduces to (see also (22)) \[\frac{d\mathcal{E}^{*}}{dt}=-\frac{\partial\mathcal{L}^{*}}{\partial t}+\sum_{\nu=1}^{k}\frac{\partial\alpha_{\nu}}{\partial t}\frac{\partial T}{\partial\dot{q}_{m+\nu}}=-\frac{\partial\mathcal{L}}{\partial t}\] (30) and it reveals the first integral of motion \(\mathcal{E}=\mathcal{E}^{*}\), whenever the Lagrangian function \(\mathcal{L}\) does not depend explicitly on time \(t\). **Remark 5**: _The second equality in (30) shows that the energy may be conserved even if the constraints (2) depend explicitly on time \(t\) (rheonomic constraints): a simple example is the motion of a point \(P\) of mass \(M\) whose velocity is required to have at any time the direction of \(\overrightarrow{PQ}\), where the motion of \(Q\) is assigned by the functions \(x_{Q}(t),y_{Q}(t),z_{Q}(t)\) (pursuing motion). The constraints (9) are_ \[\left\{\begin{array}{l}\dot{q}_{2}=\frac{y_{Q}(t)-q_{2}}{x_{Q}(t)-q_{1}}\dot{q}_{1}=\alpha_{1,1}(q_{1},q_{2},t)\dot{q}_{1},\\ \dot{q}_{3}=\frac{z_{Q}(t)-q_{3}}{x_{Q}(t)-q_{1}}\dot{q}_{1}=\alpha_{2,1}(q_{1},q_{3},t)\dot{q}_{1}\end{array}\right.\] _and they verify (24), as can be easily checked. If there are no active forces, one has \(\mathcal{L}=T=\frac{1}{2}M(\dot{q}_{1}^{2}+\dot{q}_{2}^{2}+\dot{q}_{3}^{2})\) and_ \[\mathcal{E}=\mathcal{E}^{*}=\frac{1}{2}M\dot{q}_{1}^{2}\left(1+\frac{(q_{2}-y_{Q}(t))^{2}+(q_{3}-z_{Q}(t))^{2}}{(q_{1}-x_{Q}(t))^{2}}\right).\] _is conserved by virtue of (30). This means that \(|\dot{P}|\) is constant during the motion._ The unifying and physically expressive role of (24) motivates the following mathematical investigation. As occurs in the holonomic case, it is reasonable to expect that the same set of nonholonomic restrictions may be formulated by more than one set of constraint equations: a simple instance is the following **Example 1**: _Consider a system of two points \(P_{1}\) and \(P_{2}\) constrained in a way that their velocities are perpendicular to the straight line joining them: \(\dot{P}_{1}\cdot\overrightarrow{P_{1}P_{2}}=0\), \(\dot{P}_{2}\cdot\overrightarrow{P_{1}P_{2}}=0\); referring to (2), the condition is formulated as_ \[\left\{\begin{array}{l}(x_{1}-x_{2})\dot{x}_{1}+(y_{1}-y_{2})\dot{y}_{1}=0\\ \\ (x_{1}-x_{2})\dot{x}_{2}+(y_{1}-y_{2})\dot{y}_{2}=0\end{array}\right.\] _(we keep the Cartesian coordinates for clarity). Calling \(B\) the midpoint of the segment \(P_{1}P_{2}\), the same effect can be achieved by imposing that the distance between the points is constant and the velocity of the midpoint is orthogonal to the joining line, namely the conditions \(|\overrightarrow{P_{1}P_{2}}|=\ell>0\), \(\dot{B}\cdot\overrightarrow{P_{1}P_{2}}=0\).
Actually, the corresponding system in terms of Cartesian coordinates_ \[\left\{\begin{array}{l}(x_{1}-x_{2})(\dot{x}_{1}-\dot{x}_{2})+(y_{1}-y_{2})(\dot{y}_{1}-\dot{y}_{2})=0\\ \\ (x_{1}-x_{2})(\dot{x}_{1}+\dot{x}_{2})+(y_{1}-y_{2})(\dot{y}_{1}+\dot{y}_{2})=0\end{array}\right.\] _is evidently equivalent to the one written just above (the first constraint expresses the invariable distance in the differential form)._ _Further equivalent conditions can be provided, but attention must be paid to the fact that the equivalence could be lost in correspondence with some particular motions: concerning this example, the pair of conditions \(\dot{B}\cdot\overrightarrow{P_{1}P_{2}}=0\), \(\dot{P}_{1}\wedge\dot{P}_{2}=\mathbf{0}\) (parallel velocities) is equivalent to the previous ones whenever the velocity \(\dot{B}\) is not null (the critical motions are the rotations around \(B\)). Again, the combination of \(|\overrightarrow{P_{1}P_{2}}|=\ell>0\), \(\dot{P}_{1}\wedge\dot{P}_{2}=\mathbf{0}\) produces the same effect if \(\dot{P}_{1}\neq\dot{P}_{2}\) (the critical motions are translations). The analysis of the Jacobian matrices of the various Cartesian systems confirms the above statements without difficulty._ **Remark 6**: _In order to find equivalent sets of kinematic conditions, the constraint equations are not necessarily linear with respect to the velocities as in the previous case: an example can be formulated by combining two of the three conditions \(|\overrightarrow{P_{1}P_{2}}|=\ell>0\), \(\dot{B}\wedge\overrightarrow{P_{1}P_{2}}=\mathbf{0}\), \(|\dot{P}_{1}|=|\dot{P}_{2}|\) (same magnitude of the velocities), corresponding respectively to the linear and nonlinear equations_ \[\begin{array}{l}(x_{1}-x_{2})(\dot{x}_{1}-\dot{x}_{2})+(y_{1}-y_{2})(\dot{y}_{1}-\dot{y}_{2})=0,\quad(\dot{x}_{1}+\dot{x}_{2})(\dot{x}_{1}-\dot{x}_{2})+(\dot{y}_{1}+\dot{y}_{2})(\dot{y}_{1}-\dot{y}_{2})=0,\\ (x_{1}-x_{2})(\dot{y}_{1}+\dot{y}_{2})-(y_{1}-y_{2})(\dot{x}_{1}+\dot{x}_{2})=0.\end{array} \tag{31}\] ### The mathematical aspect From the mathematical point of view it is quite simple to understand which functions verify the condition (24). The context that emerges is that of homogeneous functions: we start by recalling that a function \(f(\xi_{1},\ldots,\xi_{\ell})\) defined on a domain \({\cal D}\subseteq\mathbb{R}^{\ell}\) is a positive homogeneous function of degree \(\sigma\in\mathbb{R}\) if \[f(\lambda\xi_{1},\ldots,\lambda\xi_{\ell})=\lambda^{\sigma}f(\xi_{1},\ldots,\xi_{\ell})\quad\forall\ (\xi_{1},\ldots,\xi_{\ell})\in{\cal D}\ \ \mbox{and}\ \ \forall\ \lambda>0. \tag{32}\] The functions (32), when differentiable, are distinguished by the following **Property 1**: _(Euler's homogeneous function theorem) A function \(f\in{\cal C}^{1}({\cal D})\) is a positive homogeneous function of degree \(\sigma\) if and only if_ \[\sum_{i=1}^{\ell}\xi_{i}\frac{\partial f}{\partial\xi_{i}}(\xi_{1},\ldots,\xi_{\ell})=\sigma f(\xi_{1},\ldots,\xi_{\ell})\quad\forall\ (\xi_{1},\ldots,\xi_{\ell})\in{\cal D}. \tag{33}\]
\tag{33}\] If we differentiate (32) with respect to \(\xi_{i}\), we run into the following consequence:

**Property 2**: _If a function \(f(\xi_{1},\ldots,\xi_{\ell})\in{\cal C}^{1}({\cal D})\), \({\cal D}\subseteq\mathbb{R}^{\ell}\), is a positive homogeneous function of degree \(\sigma>0\), then each derivative \(\frac{\partial f}{\partial\xi_{i}}\), \(i=1,\ldots,\ell\), is a positive homogeneous function of degree \(\sigma-1\)._

A further property, which we will use shortly and which is easy to verify, is the

**Property 3**: _If \(F_{1}\), \(F_{2}\) are two homogeneous functions of degree \(\sigma_{1}\) and \(\sigma_{2}\) respectively, then the product \(F_{1}F_{2}\) is a homogeneous function of degree \(\sigma_{1}+\sigma_{2}\), and the ratio \(F_{1}/F_{2}\) (where defined) is a homogeneous function of degree \(\sigma_{1}-\sigma_{2}\)._

The main result of our mathematical digression is the following

**Proposition 1**: _Consider a system of \(k\) equations_ \[\left\{\begin{array}{cccc}F_{1}(\xi_{1},\ldots,\xi_{\ell},\eta_{1},\ldots, \eta_{k})=0\\ \ldots&\ldots&\ldots\\ \ldots&\ldots&\ldots\\ F_{k}(\xi_{1},\ldots,\xi_{\ell},\eta_{1},\ldots,\eta_{k})=0\end{array}\right.\] _where each \(F_{\nu}\in{\cal C}^{1}({\cal D})\), \({\cal D}\subseteq\mathbb{R}^{\ell+k}\), is a homogeneous function of degree \(\sigma_{\nu}\), \(\nu=1,\ldots,k\), that is_ \[F_{\nu}(\lambda\xi_{1},\ldots,\lambda\xi_{\ell},\lambda\eta_{1},\ldots,\lambda \eta_{k})=\lambda^{\sigma_{\nu}}F_{\nu}(\xi_{1},\ldots,\xi_{\ell},\eta_{1}, \ldots,\eta_{k})\quad\lambda>0 \tag{34}\] _at any point of \({\cal D}\). Then, assuming that the set of zeros can be written by means of the \(k\) implicitly defined functions_ \[\left\{\begin{array}{cccc}\eta_{1}=\psi_{1}(\xi_{1},\ldots,\xi_{\ell})\\ \ldots&\ldots&\ldots\\ \ldots&\ldots&\ldots\\ \eta_{k}=\psi_{k}(\xi_{1},\ldots,\xi_{\ell})\end{array}\right.\] _the functions \(\psi_{1}\),..., \(\psi_{k}\) turn out to be positive homogeneous functions of degree \(1\)._

**Proof**. 
Since \(F_{\nu}\) is a homogeneous function of degree \(\sigma_{\nu}\), (33) implies \[\frac{\partial F_{\nu}}{\partial\xi_{1}}\xi_{1}+\,\ldots\,+\frac{\partial F_{ \nu}}{\partial\xi_{\ell}}\xi_{\ell}+\frac{\partial F_{\nu}}{\partial\eta_{1}} \eta_{1}+\,\ldots\,+\frac{\partial F_{\nu}}{\partial\eta_{k}}\eta_{k}=\sigma_{ \nu}F_{\nu}(\xi_{1},\ldots,\xi_{\ell},\eta_{1},\ldots,\eta_{k}),\quad\nu=1,\ldots,k.\] The calculation of \(\eta_{1}\),..., \(\eta_{k}\) from the previous identities consists in solving a \(k\times k\) linear system whose solutions are \[\eta_{1}=\frac{1}{D}\left|\begin{array}{cccc}\sigma_{1}F_{1}-\sum\limits_{r= 1}^{\ell}\frac{\partial F_{1}}{\partial\xi_{r}}\xi_{r}&\frac{\partial F_{1}}{ \partial\eta_{2}}&\ldots&\frac{\partial F_{1}}{\partial\eta_{k}}\\ \ldots\ldots\ldots&\ldots&\ldots\ldots\\ \sigma_{k}F_{k}-\sum\limits_{r=1}^{\ell}\frac{\partial F_{k}}{\partial\xi_{r}} \xi_{r}&\frac{\partial F_{k}}{\partial\eta_{2}}&\ldots&\frac{\partial F_{k}}{ \partial\eta_{k}}\end{array}\right|\quad\ldots\quad\eta_{k}=\frac{1}{D} \left|\begin{array}{cccc}\frac{\partial F_{1}}{\partial\eta_{1}}&\ldots& \frac{\partial F_{1}}{\partial\eta_{k-1}}&\sigma_{1}F_{1}-\sum\limits_{r=1}^{\ell} \frac{\partial F_{1}}{\partial\xi_{r}}\xi_{r}\\ \ldots\ldots&\ldots\ldots&\ldots\ldots\ldots\\ \frac{\partial F_{k}}{\partial\eta_{1}}&\ldots&\frac{\partial F_{k}}{\partial \eta_{k-1}}&\sigma_{k}F_{k}-\sum\limits_{r=1}^{\ell}\frac{\partial F_{k}}{ \partial\xi_{r}}\xi_{r}\end{array}\right|\] where the vertical bars stand for the determinant of the contained matrix and \(D=\left|\begin{array}{ccc}\dfrac{\partial F_{1}}{\partial\eta_{1}}&\cdots& \dfrac{\partial F_{1}}{\partial\eta_{k}}\\ \cdots&\cdots&\cdots\\ \dfrac{\partial F_{k}}{\partial\eta_{1}}&\cdots&\dfrac{\partial F_{k}}{\partial \eta_{k}}\end{array}\right|\). Owing to Property 2, each derivative \(\dfrac{\partial F_{\nu}}{\partial\eta_{\mu}}\), \(\nu,\mu=1,\ldots,k\), is a homogeneous function of degree \(\sigma_{\nu}-1\); the Leibniz formula for the determinant prescribes the algebraic sum of terms of the type \(\prod\limits_{j=1}^{k}\dfrac{\partial F_{\nu_{j}}}{\partial\eta_{j}}\), where the indices \(\nu_{j}\) are all different. Hence, by virtue of Property 3 and considering that the sum of homogeneous functions with common degree is evidently a homogeneous function of the same degree, we have that the determinant \(D\) is a homogeneous function of degree \((\sigma_{1}-1)+(\sigma_{2}-1)+\cdots+(\sigma_{k}-1)=\sum\limits_{\nu=1}^{k} \sigma_{\nu}-k\). The same argument applies to the matrices at the numerator of \(\eta_{\nu}\), whose determinants turn out to be homogeneous functions of degree \(\sum\limits_{\nu=1}^{k}\sigma_{\nu}-(k-1)\) (indeed the entries of the \(\nu\)-th column of the matrix pertinent to \(\eta_{\nu}\) are homogeneous functions of degree \(\sigma_{\nu}\)). Invoking again Property 3, this time for the ratio, we overall obtain that each \(\eta_{\nu}\) is a homogeneous function of degree \(\sum\limits_{\nu=1}^{k}\sigma_{\nu}-(k-1)-\left(\sum\limits_{\nu=1}^{k} \sigma_{\nu}-k\right)=1\). 
\(\Box\)

### The physical application

If we compare the property (33) with the request (24), we recognise in the latter the characteristic condition for the function \(\alpha_{\nu}\) to be a homogeneous function of degree \(1\) with respect to the variables \(\dot{q}_{r}\): thus, setting \(\ell=m\) and \(\dot{q}_{r}=\xi_{r}\), \(r=1,\ldots,m\), (19), (32) and (33) entail the following

**Proposition 2**: _The function \(\alpha_{\nu}\), \(\nu=1,\ldots,k\), verifies \(\overline{\alpha}_{\nu}=\alpha_{\nu}\) if and only if \(\alpha_{\nu}\) is a positive homogeneous function of degree \(1\) with respect to the kinetic variables \(\dot{q}_{1}\),..., \(\dot{q}_{m}\), namely if and only if_ \[\alpha_{\nu}(q_{1},\ldots,q_{n},\lambda\dot{q}_{1},\ldots,\lambda\dot{q}_{m}, t)=\lambda\alpha_{\nu}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{m},t) \quad\mbox{for any}\;\;\lambda>0. \tag{35}\]

A more general and noteworthy result is provided by Proposition 1, which refers directly to the structure of the assigned constraint functions and not to the implicitly defined ones: actually, with respect to (34) we consider the kinetic variables \(\dot{q}_{i}\) of (2) as \(\dot{q}_{r}=\xi_{r}\), \(r=1,\ldots,m=\ell\) and \(\dot{q}_{m+\nu}=\eta_{\nu}\), \(\nu=1,\ldots,k\) and we make use of Proposition 1 for the following application:

**Proposition 3**: _If the functions \(\phi_{\nu}(q_{1},\ldots,q_{n},\dot{q}_{1},\ldots,\dot{q}_{n},t)\) of (2) are homogeneous functions (even with different degrees) of the generalized velocities \(\dot{q}_{1}\),..., \(\dot{q}_{n}\), then, however the explicit forms (9) are chosen, they satisfy condition (24)._

As we have already highlighted, the same system can be treated either with linear kinematic constraints or with nonlinear kinematic constraints (or a mixture of them): this fact does not affect the definition and conservation of the energy of the system, if the latter falls into the category admitted by the previous Proposition. For instance, the system we considered in (31) is described by homogeneous functions, whichever pair of constraints is chosen, except for any points where equivalence is lost, as we remarked in Example 1. In any case, the explicit functions (9) will verify condition (24). Among many other examples we can present, we add the following nonholonomic system, studied in [20]. 
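Before turning to that example, a minimal symbolic sketch may help fix ideas; it is an illustration added here (not part of the original treatment), it assumes the SymPy library, and it checks Proposition 2 on the explicit constraint \(\dot{q}_{2}=\alpha_{1,1}\dot{q}_{1}\) of the pursuing motion in Remark 5.

```python
import sympy as sp

# Symbolic check (added illustration): the explicit constraint of Remark 5,
# dot(q2) = alpha = (y_Q(t) - q2)/(x_Q(t) - q1) * dot(q1), is linear in dot(q1),
# hence positively homogeneous of degree 1, and Euler's identity (33) with a
# single independent velocity reproduces condition (24).
t, q1, q2, q1dot = sp.symbols('t q1 q2 q1dot')
xQ, yQ = sp.Function('xQ')(t), sp.Function('yQ')(t)

alpha = (yQ - q2) / (xQ - q1) * q1dot

euler_sum = q1dot * sp.diff(alpha, q1dot)   # sum over the independent velocities
assert sp.simplify(euler_sum - alpha) == 0  # alpha is homogeneous of degree 1
```

The same check, applied to the explicit constraints of the example below (which are linear in \(\dot{q}_{1}\) and \(\dot{q}_{2}\)), leads to the same conclusion.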
**Example 2**: _Consider on a vertical plane two material points \(P_{1}\), \(P_{2}\) of mass \(M_{1}\), \(M_{2}\) respectively: let us call attention to the three kinematic restrictions_

* _the velocities are perpendicular:_ \(\dot{P}_{1}\cdot\dot{P}_{2}=0\)_,_
* _the velocity of one of them is perpendicular to the straight line joining the points:_ \(\dot{P}_{1}\cdot\overrightarrow{P_{1}P_{2}}=0\)_,_
* _the velocity of the other point is parallel to the joining line:_ \(\dot{P}_{2}\wedge\overrightarrow{P_{1}P_{2}}=\mathbf{0}\)_._

_By formulating the conditions as_ \[\dot{x}_{1}\dot{x}_{2}+\dot{y}_{1}\dot{y}_{2}=0,\;\;\;(x_{1}-x_{2})\dot{x}_{1} +(y_{1}-y_{2})\dot{y}_{1}=0,\;\;\;(x_{1}-x_{2})\dot{y}_{2}-(y_{1}-y_{2})\dot{x} _{2}=0\] _respectively, it can be easily seen that they are not independent and two of them imply the third, with the exceptions noted alongside (the explanation for the excluded configurations is clear):_ \[(i),\,(ii)\;\Rightarrow\;(iii)\;\;\mbox{if}\;\;\dot{P}_{1}\neq\mathbf{0},\qquad(i),\,(iii)\;\Rightarrow\;(ii)\;\;\mbox{if}\;\;\dot{P}_{2}\neq\mathbf{0},\qquad(ii),\,(iii)\;\Rightarrow\;(i)\;\;\mbox{if}\;\;P_{1}\not\equiv P_{2}.\]

Even admitting that the velocities can vanish but excluding the overlapping of the points, we opt for the latter possibility and write (9) as \[\left\{\begin{array}{l}\dot{q}_{3}=\alpha_{1}(q_{1},q_{2},q_{3},q_{4},\dot{q }_{1})=-\alpha\dot{q}_{1}\\ \\ \dot{q}_{4}=\alpha_{2}(q_{1},q_{2},q_{3},q_{4},\dot{q}_{2})=\frac{1}{\alpha} \dot{q}_{2}\end{array}\right.\] where we have set \((q_{1},q_{2},q_{3},q_{4})=(x_{1},x_{2},y_{1},y_{2})\) and defined \(\alpha(q_{1},q_{2},q_{3},q_{4})=\frac{q_{1}-q_{2}}{q_{3}-q_{4}}\). The assumption (24) holds because the constraints are part of (25). Assuming that an internal elastic force of constant \(\kappa\) and the weight are acting, the function (13) is \[{\cal L}^{*}(q_{1},q_{2},q_{3},q_{4},\dot{q}_{1},\dot{q}_{2})=\frac{1}{2}M_{1 }\left(1+\alpha^{2}\right)\dot{q}_{1}^{2}+\frac{1}{2}M_{2}\left(1+\frac{1}{ \alpha^{2}}\right)\dot{q}_{2}^{2}-\frac{1}{2}\kappa(q_{3}-q_{4})^{2}(1+ \alpha^{2})-M_{1}q_{3}-M_{2}q_{4}\] where the terms concerning \(U\) are clear. Equation (30) entails the conservation of the quantity (17), coinciding in this case with the energy of the system \({\cal E}=T-U\).

## 3 Conclusion and next investigation

For nonholonomic systems verifying (24), the formulation of the motion through the virtual displacements provided by the Chetaev condition (7) appears natural and not devoid of physical meaning. This represents a convincing extension of the holonomic case, equipped with a well-established and long-standing theory. The absence of a justification of condition (7) starting from the laws of mechanics, sometimes claimed in the literature, is overcome when the kinematic constraints are of the type (24). As we have already highlighted, for these systems the notion of virtual displacement assumes a physical sense and also the energy of the system has a correct meaning, as for holonomic systems. As with the latter, when the Lagrangian function does not depend on time the energy is conserved, as (30) states. Understanding which functions verify (24) is simple, since it is a plain implementation of Euler's theorem on homogeneous functions. 
The further step we performed was to indicate a large class of constraints for which the explicit form (9) yields functions \(\alpha_{\nu}\) of the category (24): this occurs for constraints formulated by homogeneous functions (not necessarily of the same degree) of the generalized velocities \(\dot{q}_{i}\), \(i=1,\ldots,n\). The forthcoming investigation will concern a sort of inverse question: if one deals with explicit functions \(\alpha_{\nu}\) verifying (24), are the originating functions \(\phi_{\nu}\) which define them necessarily homogeneous functions in the kinematic variables? This is not an irrelevant point, since the category of nonholonomic constraints satisfying the physical properties (1)-(4) listed just after Remark 4 would become entirely and clearly defined as the systems subject to homogeneous constraints, i.e., the Chetaev condition would be completely legitimated for this category of systems. In other words, the constrained systems for which the condition acquires a physical meaning and extends in a natural way the virtual displacements of the holonomic systems are presumed to be all and only those which admit at least one set of conditions (2) formulated with equations homogeneous in the velocities. However, the mathematical aspect is anything but straightforward: as it incidentally emerged in some passages of the text, the geometric and kinematic restrictions can be implemented in more than one way, so that different lists of functions (2) can formulate the same system. Then, from the mathematical point of view, the issue consists in investigating the zero set defined by (2) and wondering whether for each explicit set \(\alpha_{\nu}\), \(\nu=1,\ldots,k\), a list of generating functions \(\phi_{\nu}\) which are homogeneous in the velocities can always be found.
2308.09596
Disparity, Inequality, and Accuracy Tradeoffs in Graph Neural Networks for Node Classification
Graph neural networks (GNNs) are increasingly used in critical human applications for predicting node labels in attributed graphs. Their ability to aggregate features from nodes' neighbors for accurate classification also has the capacity to exacerbate existing biases in data or to introduce new ones towards members from protected demographic groups. Thus, it is imperative to quantify how GNNs may be biased and to what extent their harmful effects may be mitigated. To this end, we propose two new GNN-agnostic interventions namely, (i) PFR-AX which decreases the separability between nodes in protected and non-protected groups, and (ii) PostProcess which updates model predictions based on a blackbox policy to minimize differences between error rates across demographic groups. Through a large set of experiments on four datasets, we frame the efficacies of our approaches (and three variants) in terms of their algorithmic fairness-accuracy tradeoff and benchmark our results against three strong baseline interventions on three state-of-the-art GNN models. Our results show that no single intervention offers a universally optimal tradeoff, but PFR-AX and PostProcess provide granular control and improve model confidence when correctly predicting positive outcomes for nodes in protected groups.
Arpit Merchant, Carlos Castillo
2023-08-18T14:45:28Z
http://arxiv.org/abs/2308.09596v1
# Disparity, Inequality, and Accuracy Tradeoffs in Graph Neural Networks for Node Classification

###### Abstract.

Graph neural networks (GNNs) are increasingly used in critical human applications for predicting node labels in attributed graphs. Their ability to aggregate features from nodes' neighbors for accurate classification also has the capacity to exacerbate existing biases in data or to introduce new ones towards members from protected demographic groups. Thus, it is imperative to quantify how GNNs may be biased and to what extent their harmful effects may be mitigated. To this end, we propose two new GNN-agnostic interventions namely, (i) PFR-AX which decreases the separability between nodes in protected and non-protected groups, and (ii) PostProcess which updates model predictions based on a blackbox policy to minimize differences between error rates across demographic groups. Through a large set of experiments on four datasets, we frame the efficacies of our approaches (and three variants) in terms of their algorithmic fairness-accuracy tradeoff and benchmark our results against three strong baseline interventions on three state-of-the-art GNN models. Our results show that no single intervention offers a universally optimal tradeoff, but PFR-AX and PostProcess provide granular control and improve model confidence when correctly predicting positive outcomes for nodes in protected groups.

Graph Neural Networks; Node Classification; Algorithmic Fairness

+ Footnote †: This work was partially completed while Arpit Merchant was visiting Universitat Pompeu Fabra, Barcelona, Spain.

## 1. Introduction

Classification on attributed graphs involves inferring labels for nodes in the test set given a training set of labels along with attributes and adjacency information for all the nodes. To address this task, Graph Neural Networks (or GNNs, for short) have exploded in popularity since they effectively combine attributes and adjacency to build a unified node representation which can be used downstream as a feature vector (Han et al., 2017; Karim et al., 2017). GNNs have found applications in a variety of high-risk application domains (as defined, e.g., in the proposed AI Act for Europe of April 2022), including credit risk applications (Kal unfairness (Bogor et al., 2016), GUIDE uses a group-equality informed individual fairness criteria (Srivastava et al., 2017)). 
Second, dataset properties, training criteria, hyperparameter tuning procedures, and sometimes even low-level elements of an implementation such as linked libraries are known to significantly influence the efficiency and effectiveness of GNNs on node classification (Krizhevsky et al., 2014; Krizhevsky et al., 2014). Third, while algorithmic discrimination may be reduced at the expense of accuracy (Krizhevsky et al., 2014), specific improvements and trade-offs depend on application contexts (Krizhevsky et al., 2014), and need to be evaluated to understand what kinds of alternatives may offer improvements over current approaches. Our goal is to address these limitations by focusing on the following questions:

1. How do we meaningfully benchmark and analyze the trade-off between algorithmic fairness and accuracy of interventions on GNNs across different graphs?
2. Is there room for improving the fairness/accuracy tradeoff, and if so, how?

_Our Contributions._ We categorize interventions designed to reduce algorithmic discrimination in terms of their loci in the machine learning pipeline: (a) pre-processing, before training, (b) in-processing, during learning, and (c) post-processing, during inference. Using a standardized methodological setup, we seek to maximally preserve accuracy while improving algorithmic fairness. To this end, we introduce two new, unsupervised (independent of ground-truth labels), model-agnostic (independent of the underlying GNN architecture) interventions: PFR-AX, which debiases data prior to training, and PostProcess, which debiases model outputs after training (before issuing final predictions). In PFR-AX, we first use the PFR method (Krizhevsky et al., 2014) to transform node attributes to better capture data-driven similarity for operationalizing individual fairness. Then, we construct a DeepWalk embedding (Krizhevsky et al., 2014) of the graph, compute its PFR transformation, and reconstruct a graph from the debiased embedding using a method we call EmbeddingReverser. To our knowledge, this is a novel application of a previously known method with suitable augmentations. In PostProcess, we randomly select a small fraction, referred to as \(\gamma\), of nodes from the minority demographic for whom the model has predicted a negative outcome and update the prediction to a positive outcome. This black-box policy aims to ensure that error rates of a model are similar across demographic groups. This is a simple and natural post-processing strategy which, to the best of our knowledge, has not been studied in the literature on GNNs. We conduct extensive experiments to evaluate the efficacies of interventions grouped by their aforementioned loci. To measure accuracy, we use _AUC-ROC_; to measure algorithmic fairness, we use _disparity_ and _inequality_ (cf. Section 3). We compare the accuracy-fairness tradeoff for PFR-AX and PostProcess (plus three additional variants) against three powerful baseline interventions (two for pre-training, one for in-training) on three widely used GNN models, namely GCN, GraphSAGE, and GIN (Gin et al., 2017). Our experiments are performed on two semi-synthetic and two real-world datasets with varying levels of edge homophily with respect to labels and sensitive attributes, which is a key driver of accuracy and algorithmic fairness in the studied scenarios. We design ablation studies to measure the effect of the components of PFR-AX and the sensitivity of PostProcess to the \(\gamma\) parameter. 
Finally, we analyze the impact of interventions on model confidence. Our main findings are summarized below:

* No single intervention offers a universally optimal tradeoff across models and datasets.
* PFR-AX and PostProcess provide granular control over the accuracy-fairness tradeoff compared to baselines. Further, they serve to improve model confidence in correctly predicting positive outcomes for nodes in protected groups.
* PFR-A and PFR-X, which debias only adjacency and only attributes respectively, offer steeper tradeoffs than PFR-AX, which debiases both.
* When imbalance between protected and non-protected groups and model bias are both large, small values of \(\gamma\) offer large benefits to PostProcess.

## 2. Related Work

Legal doctrines such as GDPR (in Europe), the Civil Rights Act (in the US), or IPC Section 153A (in India) restrict decision-making on the basis of protected characteristics such as nationality, gender, or caste (Krizhevsky et al., 2014). While _direct discrimination_, i.e., when an outcome directly depends on a protected characteristic, may be qualitatively reversed, addressing _indirect discrimination_, i.e., discrimination brought by apparently neutral provisions, requires that we define concrete, quantifiable metrics in the case of machine learning (ML) that can then be optimized for (Krizhevsky et al., 2014). Numerous notions of algorithmic fairness have been proposed and studied (Krizhevsky et al., 2014). Two widely used definitions include the _separation criterion_, which requires that some of the ratios of correct/incorrect positive/negative outcomes across groups are equal, and the _independence criterion_, which states that outcomes should be completely independent from the protected characteristic (Bogor et al., 2016).

_Algorithmic Fairness-Accuracy Tradeoffs._ However, including fairness constraints often results in classifiers having lower accuracy than those aimed solely at maximizing accuracy. Traditional ML literature (Krizhevsky et al., 2014; Krizhevsky et al., 2014) has extensively studied the inherent tension that exists between technical definitions of fairness and accuracy: Corbett-Davies et al. (Corbett-Davies et al., 2016) theoretically analyze the cost of enforcing disparate impact on the efficacy of decision rules; Lipton et al. (Lipton et al., 2017) explore how correlations between sensitive and nonsensitive features induce within-class discrimination; Fish et al. (Fish et al., 2018) study the resilience of model performance to random bias in data. In turn, characterizing these tradeoffs has influenced the design of mitigation strategies and benchmarking of their utility. Algorithmic interventions such as reweighting training samples (Fish et al., 2018), regularizing training objectives to dissociate outcomes from protected attributes (Krizhevsky et al., 2014), and adversarially perturbing learned representations to remove sensitive information (Krizhevsky et al., 2014) are framed by their ability to reduce bias without significantly compromising accuracy.

_Algorithmic Fairness in GNNs._ The aforementioned approaches are not directly applicable to graph data due to the availability of adjacency information and the structural and linking bias it may contain. GNNs, given their message-passing architectures, are particularly susceptible to exacerbating this bias. This has prompted attention towards mitigation strategies for GNNs. 
For instance, at the pre-training phase, REDRESS (Beng et al., 2017) seeks to promote individual fairness for the ranking task; at the in-training phase, FairGNN (Beng et al., 2017) estimates missing protected attribute values for nodes using a GCN-estimator for adversarial debiasing, and GUIDE (Srivastava et al., 2017) proposes a novel GNN model for a new group-equality preserving individual fairness metric. We do not compare against these since they are designed for a different task than ours, operate in different settings altogether, and since FairGNN (in particular) exhibits a limited circular dependency on using a vanilla GNN for a sensitive task to overcome limitations of a different GNN for classification. We refer the reader to Dai et al. (Dai et al., 2017) for a recent survey. More relevant to our task, EDITS (Dai et al., 2017) reduces attribute and structural bias using a Wasserstein metric, and so we use it as a baseline for comparison. At the in-training phase, NIFTY (Beng et al., 2017) promotes a model-agnostic fair training framework for any GNN using Lipschitz-enhanced message-passing. However, an explicit fairness-accuracy tradeoff analysis is lacking from the literature which, along with methodological differences, makes it difficult to benchmark the comparative utilities of these approaches. Therefore, we include these as baselines. We frame our study in the context of such an analysis and design one pre-training and one post-training intervention that offer different, but useful tradeoffs.

## 3. Problem Setup

_Graphs._ Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be an unweighted, undirected graph where \(\mathcal{V}\) is a set of \(n\) nodes and \(\mathcal{E}\) is a set of \(m\) edges. Denote \(\mathbf{A}=[a_{uv}]\in\{0,1\}^{n\times n}\) as its binary adjacency matrix where each element \(a_{uv}\) indicates the presence or absence of an edge between nodes \(u\) and \(v\). Define \(\mathbf{D}=\operatorname{diag}\left(\delta_{1},\delta_{2},\ldots,\delta_{n}\right)\) to be a diagonal degree matrix where \(\delta_{u}=\sum_{v}a_{uv}\). Let each node \(u\) in \(\mathcal{G}\) be associated with one binary sensitive attribute variable \(s_{u}\) indicating membership in a protected demographic group along with \(d-1\) additional real or integer-valued attributes. Together, in matrix form, we denote node attributes as \(\mathbf{X}\in\mathbb{R}^{n\times d}\). Lastly, \(\forall u\in\mathcal{V}\), its binary, ground-truth, categorical label is denoted as \(\mathbf{y}_{u}\).

_Graph Neural Networks._ Typically, GNNs comprise multiple, stacked graph filtering and non-linear activation layers that leverage \(\mathbf{X}\) and \(\mathbf{A}\) to learn joint node representations (see, e.g., Kipf and Welling (Kipf and Welling, 2015)). Such a GNN with \(L\) layers captures the \(L\)-hop neighborhood information around nodes. For each \(v\in\mathcal{V}\) and \(l\in[L]\), let \(\mathbf{h}_{v}^{(l)}\) denote the representation of node \(v\) at the \(l\)-th GNN layer. 
In general, \(\mathbf{h}_{v}^{(l)}\) is formulated via message-passing as follows: \[\mathbf{h}_{v}^{(l)}=\textsc{CB}^{(l)}\left(\mathbf{h}_{v}^{(l-1)},\textsc{ AGG}^{(l-1)}\left(\left\{\mathbf{h}_{u}^{(l-1)}:u\in\mathcal{N}(v)\right\} \right)\right) \tag{1}\] where \(\mathcal{N}(v)\) is the neighborhood of \(v\), \(\mathbf{h}_{v}^{(l-1)}\) is the representation of \(v\) at the \((l-1)\)-th layer, AGG is an aggregation operator that accepts an arbitrary number of inputs, i.e., messages from neighbors, and CB is a function governing how nodes update their representations at the \(l\)-th layer. At the input layer, \(\mathbf{h}_{v}^{(0)}\) is simply the node attribute \(\mathbf{x}_{v}\in\mathbf{X}\) and \(\mathbf{h}_{v}^{(L)}\) is the final representation. Finally, applying the softmax activation function on \(\mathbf{h}_{v}^{(L)}\) and evaluating cross-entropy error over labeled examples, we can obtain predictions for unknown labels \(\hat{\mathbf{y}}_{v}\). In this paper, we use AUC-ROC and F1-scores (thresholded at 0) to measure GNN accuracy.

_Algorithmic Fairness._ We measure the algorithmic fairness of a GNN model using two metrics. First, _Statistical Disparity_ (\(\Delta_{\text{SP}}\)), based on the _independence criterion_, captures the difference in positive prediction rates between members of the protected and non-protected groups (Dai et al., 2017). Formally, for a set of predicted labels \(\hat{\mathbf{Y}}\): \[\Delta_{\text{SP}}=\left|\text{Pr}\left[\hat{\mathbf{Y}}=1\,|\,\mathbf{s}=1\right]-\text{Pr}\left[\hat{\mathbf{Y}}=1\,|\,\mathbf{s}=0\right]\right| \tag{2}\] Second, _Unequal Opportunity_ (\(\Delta_{\text{EO}}\)), which is one _separation criterion_, measures the similarity of the true positive rate of a model across groups (Dai et al., 2017). Formally: \[\Delta_{\text{EO}}=\left|\text{Pr}\left[\hat{\mathbf{Y}}=1\,|\,\mathbf{s}=1,\mathbf{Y}=1\right]-\text{Pr}\left[\hat{\mathbf{Y}}=1\,|\,\mathbf{s}=0,\mathbf{Y}=1\right]\right| \tag{3}\] Equation (3) compares the probability of a sample with a positive ground-truth label being assigned a positive prediction across sensitive and non-sensitive groups. In the following sections, we refer to \(\Delta_{\text{SP}}\) as _disparity_ and \(\Delta_{\text{EO}}\) as _inequality_ to emphasize that lower values are better since they indicate similar rates. Having defined the various elements in our setting, we formally state our task below:

Problem 1 (Algorithmically Fair Node Classification).: _Given a graph \(\mathcal{G}\) as an adjacency matrix \(\mathbf{A}\), node features \(\mathbf{X}\) including sensitive attributes \(\mathbf{s}\), and labels \(\mathbf{Y}_{V}\) for a subset of nodes \(V\subset\mathcal{V}\), debias GNNs such that their predicted labels \(\hat{\mathbf{Y}}_{\mathcal{V}\setminus V}\) are maximally accurate while having low \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\)._

## 4. Algorithms

In this section, we propose two algorithms for Problem 1: PFR-AX (pre-training) and PostProcess (post-training).

### PFR-AX

Our motivation for a data debiasing intervention arises from recent results showing that GNNs have a tendency to exacerbate homophily (Srivastava et al., 2017). Final node representations obtained from GNNs homogenize attributes via Laplacian smoothing based on adjacency. This has contributed to their success in terms of classification accuracy. 
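As a side illustration of the two metrics just defined (a small sketch added for concreteness, not taken from the paper; the array names are hypothetical), disparity and inequality can be computed directly from binary predictions, ground-truth labels, and the sensitive attribute:

```python
import numpy as np

def disparity_and_inequality(y_pred, y_true, s):
    """Delta_SP (Eq. 2) and Delta_EO (Eq. 3) for binary labels and a binary sensitive attribute."""
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))
    # Statistical disparity: difference in positive prediction rates across groups.
    d_sp = abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())
    # Unequal opportunity: difference in true positive rates across groups.
    d_eo = abs(y_pred[(s == 1) & (y_true == 1)].mean()
               - y_pred[(s == 0) & (y_true == 1)].mean())
    return d_sp, d_eo

# Toy example with hypothetical values:
d_sp, d_eo = disparity_and_inequality(
    y_pred=[1, 0, 1, 1, 0, 1], y_true=[1, 1, 1, 0, 1, 1], s=[1, 1, 0, 0, 0, 1])
```

Both quantities lie in \([0,1]\), and lower values indicate more similar rates across the two groups.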
However, this smoothing has also led to inconsistent results for nodes in the protected class when their membership status is enhanced in their representations due to message-passing (Dai et al., 2017; Dai et al., 2017), particularly in cases of high homophily. Lahoti et al. (Lahoti et al., 2018) design PFR to transform attributes to learn new representations that retain as much of the original data as possible while mapping equally deserving individuals as closely as possible. The key benefit offered by PFR is that it obfuscates protected group membership by reducing their separability from points in the non-protected group. Therefore, we directly adapt PFR for graph data to debias attributes and adjacency. Algorithm 1 presents the pseudocode for PFR-AX.

_Debiasing Attributes._ In order to transform attributes \(\mathbf{X}\) using PFR, we build two matrices. The first, denoted by \(W^{X}\), is an adjacency matrix corresponding to a \(k\)-nearest neighbor graph over \(\mathbf{X}\) (not including \(\mathbf{s}\)) and is given as: \[W^{X}_{uv}=\begin{cases}\exp\left(\frac{-\|\mathbf{x}_{u}-\mathbf{x}_{v}\|^{2} }{t}\right),\text{ if }u\in N_{k}\left(v\right)\text{ or }v\in N_{k}\left(u\right)\\ 0,\text{ otherwise}\end{cases} \tag{4}\] where \(t\) is a scaling hyperparameter and \(N_{k}\left(v\right)\) is the set of \(k\) nearest neighbors of \(v\) in Euclidean space. We first normalize \(\mathbf{X}\) using Min-Max scaling to ensure that all attributes contribute equally and then compute \(W^{X}\) as per Equation 4. The second matrix, denoted by \(W^{F}\), is the adjacency matrix of a between-group quantile graph that ranks nodes within protected and non-protected groups separately based on certain pre-selected variables and connects similarly ranked nodes. In the original paper, Lahoti et al. (2019) use proprietary decile scores obtained from Northpointe for creating rankings. However, in the absence of such scores for our data, we use one directly relevant attribute for the task at hand. For instance, in the case of a credit risk application, we define rankings based on the loan amount requested. Formally, this matrix is given as: \[W^{F}_{uv}=\begin{cases}1,\text{ if }u\in X^{p}_{s_{u}}\text{ and }v\in X^{p}_{s_{v}},\ s_{u}\neq s_{v}\\ 0,\text{ otherwise}\end{cases} \tag{5}\] where \(X^{p}_{s}\) denotes the subset of nodes with sensitive attribute value \(s\) whose scores lie in the \(p\)-th quantile. A higher number of quantiles leads to a sparser \(W^{F}\). Thus, \(W^{F}\) is a multipartite fairness graph that seeks to build connections between nodes with different sensitive attributes based on similarity of their characteristics even if they are not adjacent in the original graph. Finally, a new representation of \(\mathbf{X}\), denoted as \(\tilde{\mathbf{X}}\), is computed by solving the following problem (Krishnan and Krishnan, 2019): \[\begin{split}\text{minimize}_{\tilde{X}}&(1-\alpha) \sum_{u,v}^{n}\|\tilde{x}_{u}-\tilde{x}_{v}\|^{2}W^{X}_{uv}\\ &+\alpha\sum_{u,v}^{n}\|\tilde{x}_{u}-\tilde{x}_{v}\|^{2}W^{F}_{ uv}\\ \text{s.t.}&\tilde{X}^{\top}\tilde{X}=\mathbb{I}\end{split} \tag{6}\] where \(\alpha\) controls the influence of \(W^{X}\) and \(W^{F}\) on \(\tilde{\mathbf{X}}\).

_Debiasing Adjacency._ To reduce linking bias from \(\mathbf{A}\), we apply a three-step process. First, we compute an unsupervised node embedding of the graph using a popular matrix factorization approach named DeepWalk (Krishnan and Krishnan, 2019). 
Formally, this is computed as follows: \[\mathbf{U}=\log\left(\text{vol}\left(\mathcal{G}\right)\left(\frac{1}{C}\sum _{c=1}^{C}\left(\mathbf{D}^{-1}\mathbf{A}\right)^{c}\right)\mathbf{D}^{-1} \right)-\log b \tag{7}\] where \(\text{vol}\left(\mathcal{G}\right)=2m/n\) is the volume of the graph, \(C\) represents the length of the random walk, and \(b\) is a hyperparameter controlling the number of negative samples. Second, using the same aforementioned procedure for debiasing \(\mathbf{X}\), we apply PFR on \(\mathbf{U}\). Third, we design a new algorithm to invert this debiased embedding to reconstruct a graph with increased connectivity between nodes in protected and non-protected groups. This algorithm, which we refer to as EmbeddingReverser, proceeds as follows. We initialize an empty graph of \(n\) nodes and locate for each node \(u\) its \(\delta_{u}\) closest neighbors in the embedding space, denoted as \(N_{\delta_{u}}\left(u\right)\), where \(\delta_{u}\) is the degree of \(u\) in the original graph. Starting from the first node (say) \(v\), for every \(w\in N_{\delta_{v}}\left(v\right)\), we check if \(v\) is present in \(w\)'s \(\delta_{w}\) closest neighbors. If so, we add an edge between \(v\) and \(w\) and increment counters corresponding to the current degrees for \(v\) and \(w\). We also increment a global counter maintaining the number of edges added so far. If the current degree for any node (say) \(u\) reaches \(\delta_{u}\), we mark that node as completed and remove it from future consideration. This continues either for \(T_{\text{SC}}\) rounds, where each round iterates over all nodes, or until \(m\) edges have been added. Thus, we seek to maximally preserve the original degree distribution.

### PostProcess

_Model Predictions._ Let \(\mathcal{M}\) be a GNN model trained on a set of nodes \(V\subset\mathcal{V}\). Let \(V^{\prime}=\mathcal{V}\setminus V\) represent nodes in the test set and let \(\mathbf{s}_{V^{\prime}}\) be their sensitive attribute values. For any \(u\in V^{\prime}\), denote \(r\left(u\right)\in\mathbb{R}\) as the original output (logit) score capturing the uncalibrated confidence of \(\mathcal{M}\). In our binary classification setting, we threshold \(r\left(\cdot\right)\) at \(0\) and predict a positive outcome for \(u\), i.e., \(\hat{\mathbf{y}}_{u}=1\), if \(r\left(u\right)\geq 0\). Otherwise, we predict a negative outcome. Denote \(\hat{\mathbf{Y}}_{V^{\prime}}\) as the set of labels predicted by \(\mathcal{M}\).

_Do-No-Harm Policy._ Next, we present our model-agnostic post-training intervention called PostProcess which operates in an unsupervised fashion independent of ground-truth labels. Different from prior interventions, especially Wei et al. (2019), PostProcess seeks to relabel model predictions following a _do-no-harm policy_, in which protected nodes with a positive outcome are never relabeled to a negative outcome. We audit \(\hat{\mathbf{Y}}_{V^{\prime}}\) and \(\mathbf{s}_{V^{\prime}}\) to identify all the nodes in the test set belonging to the protected class (\(s=1\)) that have been assigned a negative outcome (\(\hat{y}=0\)). Denote this set as S1-Y0 (and so on for S1-Y1, etc.). For a fixed parameter \(\gamma\in[0,1]\), we randomly select a \(\gamma\) fraction of nodes from S1-Y0 and change their predicted label to a positive outcome, i.e., \(\hat{y}=1\). Then, we update \(\mathcal{M}\)'s scores for these nodes to a sufficiently large (uncalibrated) positive value. That is, we post-process \(\mathcal{M}\) to be confident about its new predicted labels. 
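A rough NumPy sketch of this relabeling step is given below; it is an added illustration rather than the authors' implementation, and the variable names, the placeholder confidence value, and the seed handling are assumptions.

```python
import numpy as np

def post_process(y_pred, logits, s, gamma, max_score=10.0, seed=0):
    """Do-no-harm relabeling sketch (added illustration); expects NumPy arrays."""
    rng = np.random.default_rng(seed)
    y_pred, logits = y_pred.copy(), logits.copy()
    # S1-Y0: protected-group nodes currently assigned the negative outcome.
    s1_y0 = np.flatnonzero((s == 1) & (y_pred == 0))
    # Flip a random gamma-fraction of them to the positive outcome...
    k = int(gamma * len(s1_y0))
    if k:
        flip = rng.choice(s1_y0, size=k, replace=False)
        y_pred[flip] = 1
        # ...and make the post-processed model (uncalibratedly) confident about them.
        logits[flip] = max_score
    return y_pred, logits
```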
Predictions for all other nodes in the test set remain unchanged. Algorithm 2 describes the pseudocode.

```
Input: Test set \(V^{\prime}\); sensitive attribute values \(\mathbf{s}_{V^{\prime}}\); model predictions \(\hat{\mathbf{Y}}_{V^{\prime}}\); model output scores \(r\left(\cdot\right)\) for \(V^{\prime}\); flip parameter \(\gamma\); confidence (uncalibrated) MAX-SCORE
Output: Updated model predictions \(\hat{\mathbf{Y}}_{V^{\prime}}\); updated model output scores \(r\left(\cdot\right)\)

\(\textsc{S1-Y0}\leftarrow\emptyset\)
for \(u\) in \(V^{\prime}\) do
    if \(s_{u}=1\) and \(\hat{y}_{u}=0\) then
        \(\textsc{S1-Y0}\leftarrow\textsc{S1-Y0}\cup\{u\}\)
    end if
end for
\(P\leftarrow\) randomly select a \(\gamma\) fraction of nodes from S1-Y0
for \(v\) in \(P\) do
    \(\hat{y}_{v}\leftarrow 1\)
    \(r\left(v\right)\leftarrow\) MAX-SCORE
end for
```
**Algorithm 2**: PostProcess

_Choice of \(\gamma\)._ Determining a useful value for \(\gamma\) depends on two factors: (i) imbalance in the test set with respect to the number of nodes in the protected class, and (ii) bias in \(\mathcal{M}\)'s predictions towards predicting negative outcomes. If imbalance and bias are large, small \(\gamma\) values may be sufficient to reduce disparity. If imbalance is low and bias is large, then large \(\gamma\) values may be required. Let \(\hat{n}_{\textsc{S1-Y1}}\) denote the number of nodes in S1-Y1, and similarly for S1-Y0, etc. Then, disparity (Equation 2) is rewritten as: \[\Delta_{\textsc{SP}}=\left|\frac{\hat{n}_{\textsc{S1-Y1}}}{\hat{n}_{\textsc{S1-Y1}}+\hat{n}_{\textsc{S1-Y0}}}-\frac{\hat{n}_{\textsc{S0-Y1}}}{\hat{n}_{\textsc{S0-Y1}}+\hat{n}_{\textsc{S0-Y0}}}\right|\] Our do-no-harm policy reduces \(\hat{n}_{\textsc{S1-Y0}}\) and increases \(\hat{n}_{\textsc{S1-Y1}}\); \(\hat{n}_{\textsc{S1-Y1}}+\hat{n}_{\textsc{S1-Y0}}\) remains constant. Thus, the first term in the equation above increases while the second remains the same. If the difference between the first and second terms is small, then PostProcess will increase disparity. Conversely, if the difference is large, then PostProcess will reduce disparity. If \(\hat{n}_{\textsc{S1-Y1}}\gg\hat{n}_{\textsc{S1-Y0}}\), then PostProcess will have marginal impact on disparity. The effect on \(\Delta_{\textsc{EO}}\) follows equivalently, but may not be correlated with \(\Delta_{\textsc{SP}}\). Note, the impact of \(\gamma\) on accuracy cannot be determined due to the unavailability of ground-truth label information during this phase. So, in Section 5.3, we empirically analyze the impact of \(\gamma\) on accuracy, averaged over \(T\) trials for smoothing.

## 5. Experiments

In this section, we describe the datasets and the methodology used in our experimental study and report our findings.

### Datasets

We evaluate our interventions on four publicly-available datasets ranging in size from 1K to 67K nodes. For consistency, we binarize sensitive attributes (\(s\)) and labels in each dataset. \(s=1\) indicates membership in the protected class and 0 indicates membership in the non-protected class. Similarly, label values set to 1 indicate a positive outcome and 0 indicate a negative outcome. Table 1 presents a summary of dataset statistics.

_Semi-Synthetic Data._ German (Geman, 2017) consists of clients of a German bank where the task is to predict whether a client has good or bad risk independent of their _gender_. Credit (McCarthy et al., 2017) comprises credit card users and the task is to predict whether a user will default on their payments. Here, _age_ is the sensitive attribute. 
Edges are constructed based on similarity between credit accounts (for German) and purchasing patterns (for Credit), following Agarwal et al. (Agarwal et al., 2017). We add an edge between two nodes if the similarity coefficient between their attribute vectors is larger than a pre-specified threshold. This threshold is set to 0.8 for German and 0.7 for Credit.

_Real-world Data._ In Penn94 (Penn, 2007), nodes are Facebook users, edges indicate friendship, and the task is to predict the graduation year (Krishnan et al., 2017) independent of _gender_ (sensitive attribute). Pokec-z (Agarwal et al., 2017) is a social network of users from Slovakia where edges denote friendship, _region_ is the sensitive attribute, and labels indicate professions.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multicolumn{3}{c}{**Size**} & \multicolumn{3}{c}{**Properties**} \\ \cline{2-7} & \(|\mathcal{V}|\) & \(|\mathcal{E}|\) & \(s\) & \(l\) & \(h_{s}\) & \(h_{l}\) \\ \hline German & 1K & 21K & Gender & Good Risk & 0.81 & 0.60 \\ Credit & 30K & 1.42M & Age & No Default & 0.96 & 0.74 \\ Penn94 & 41K & 1.36M & Gender & Year & 0.52 & 0.78 \\ Pokec-z & 67K & 617K & Region & Profession & 0.95 & 0.74 \\ \hline \hline \end{tabular} \end{table} Table 1. Dataset Statistics: number of nodes (\(|\mathcal{V}|\)), number of edges (\(|\mathcal{E}|\)), sensitive attribute \(s\), label \(l\), sensitive attribute homophily (\(h_{s}\)), label homophily (\(h_{l}\)).

### Methodology

_Processing Datasets._ Agarwal et al. (Agarwal et al., 2017) and Dong et al. (Dong et al., 2017) utilize a non-standardized method for creating dataset splits that does not include all nodes. Following convention, we create new stratified random splits such that the label imbalance in the original data is reflected in each of the training, validation, and test sets. For German, Credit, and Pokec-z, we use 60% of the dataset for training, 20% for validation, and the remaining 20% for testing. For Penn94, we use only 20% for training and validation (each) because we find that is sufficient for GNNs, with the remaining 60% used for testing. Additionally, we adapt the datasets for use by PFR as described previously (cf. Section 4.1). For computing between-group quantile graphs, we choose Loan Amount, Maximum Bill Amount Over Last 6 Months, Spoken Language, and F6 as ranking variables for German, Credit, Pokec-z, and Penn94 respectively.

_Interventions._ Each intervention in our study is benchmarked against the performance of three vanilla GNNs namely, GCN, GraphSAGE, and GIN, referred to as Original. We construct PFR-AX to debias \(\mathbf{X}\) and \(\mathbf{A}\) as per Section 4.1. For ablation, we consider two variants: (i) PFR-X, which applies PFR only on \(\mathbf{X}\), and (ii) PFR-A, which applies PFR only on a DeepWalk embedding and reconstructs a graph using EmbeddingReverser. We vary \(\gamma\) from 0.1 (1%) to 0.4 (40%) in increments of 0.1. For each \(\gamma\), we use the same hyperparameters that returned the maximum accuracy for vanilla GNNs and post-process their predictions as per Algorithm 2. For each seed and \(\gamma\), we randomly select a \(\gamma\) fraction of nodes from the protected class with a predicted negative outcome and smoothen over 20 trials. We define heavy and light versions of PostProcess namely, (i) PostProcess+ and (ii) PostProcess-, in terms of \(\gamma\). 
PostProcess+ is defined at that value of \(\gamma\) where disparity is lowest compared to Original, and PostProcess- is set halfway between the disparity of Original and PostProcess+. We compare these with three baselines: (i) Unaware (which naively deletes the sensitive attribute column from \(\mathbf{X}\)), (ii) EDITS (Chen et al., 2018), and (iii) NIFTY (Chen et al., 2018). Previous studies do not consider Unaware, which is a competitive baseline according to our results (see below).

_Training._ We set \(k=128\) dimensions for DeepWalk. Depending on the dataset and interventions, we allow models to train for 500, 1000, 1500, or 2000 epochs. As per convention, we report results for each model/intervention obtained after \(T\) epochs and averaged over 5 runs. This is different from previous studies such as NIFTY that train for (say) \(T\) epochs and report results for that model instance that has the best validation score from up to \(T\) epochs. This, combined with our stratified splits, is a key factor for observing materially different scores from those reported by the original authors. To ensure fair comparison, we tune hyperparameters for each intervention and model via a combination of manual grid search and Bayesian optimization using WandB (Chen et al., 2018). The goal of this hyperparameter optimization is to find that setting of hyperparameters that results in a model with a maximal AUC-ROC score while aiming to have lower disparity and equality scores than Original.

_Implementation._ We implement our models and interventions in Python 3.7. We use SNAP's C++ implementation for DeepWalk. EDITS and NIFTY are adapted from their original implementations. Our experiments were conducted on a Linux machine with 32 cores, 100 GB RAM, and a V100 GPU. Our code is available at [https://github.com/arpidm/gnn_accuracy_fairness_tradeoff](https://github.com/arpidm/gnn_accuracy_fairness_tradeoff).

Footnote 4: [https://github.com/yushundong/EDITS](https://github.com/yushundong/EDITS) (retrieved April 2022)

Footnote 5: [https://github.com/chirng126/nify](https://github.com/chirng126/nify) (retrieved April 2022)

Since higher values of AUC-ROC and lower values of \(\Delta_{\text{SP}}\) and \(\Delta_{\text{EO}}\) are better, the optimal position is towards the bottom right in each plot (cf. RQ2). For ease of presentation, we defer full results for GIN and all interventions to Table 2 in Appendix A. Across datasets, GraphSAGE and GIN are more accurate than GCN, but GraphSAGE displays higher disparity and inequality while GIN displays lower. PFR-AX and PostProcess- offer better tradeoffs than other baselines for German and Credit across models. This translates to up to 70% and 80% lower disparity than Original at less than 5% and 1% decrease in accuracy on German, respectively. In comparison, NIFTY offers 60% lower disparity (2.18% vs. 5.16% on German) at a 4.22% reduction in AUC-ROC. The lack of correlation between decreases in disparity and inequality may be explained in part by the impossibility theorem showing that these two criteria cannot be optimized simultaneously (Chouldechova, 2018). In Penn94 and Pokec-z, PFR-A and PFR-X are more effective than PFR-AX (cf. Table 2). We caveat the use of PostProcess in these datasets because choosing nodes randomly displays unintended consequences in maintaining accuracy without promoting fairness. Unaware proves effective across models and is especially optimal for Pokec-z. 
EDITS proves to be a heavy intervention, causing large reductions in accuracy for relatively small gains in disparity.

_Sensitivity to \(\gamma\)._ Figure 2 trades off AUC (X-axis), disparity (left Y-axis, red points), and inequality (right Y-axis, purple points) for GCN, GraphSAGE, and GIN on Credit as a function of \(\gamma\). Due to the large label imbalance in Credit and the small number of nodes with negative predicted outcomes from the protected class, varying \(\gamma\) by 1% translates to changing predictions for 7 nodes. PostProcess thus offers granular control. As \(\gamma\) increases, AUC-ROC decreases while \(\Delta_{\text{SP}}\) first reduces and then increases again. This inflection point indicates that the post-processing policy is overcorrecting in favour of the protected class, resulting in disparity towards the non-protected class. Conversely, such improvements are absent in Pokec-z since vanilla GNNs themselves are inherently less biased.

_Runtime._ Figure 3 depicts the total computation time in seconds (on log-scale) for each intervention on the four datasets for GCN, GraphSAGE, and GIN. We observe similar trends for all three GNN models. As expected, the larger the dataset, the higher the runtime. Updating a model's predictions at inference time is inexpensive and the resulting overhead for PostProcess is thus negligible. The running time for PFR-AX increases significantly with increasing dataset size. The key bottlenecks are the eigenvalue decompositions for sparse, symmetric matrices in PFR requiring \(\mathcal{O}\left(n^{3}\right)\) time and constructing DeepWalk embeddings. For instance, in the case of Pokec-z, PFR required (on average) 47 minutes in our tests while EmbeddingReverser and GNN training required less than 5 minutes each. For comparison, NIFTY required approximately 22 minutes while EDITS did not complete due to memory overflow.

Figure 3. Runtime in seconds (log-scale) of various interventions on GCN, GraphSAGE, and GIN for German, Credit, Penn94, and Pokec-z increasing with dataset size. PostProcess is fastest because updating model inference is inexpensive.

Figure 2. AUC-ROC as a function of Disparity (red) and Inequality (purple) for varying levels of the \(\gamma\) parameter of PostProcess on the Credit dataset. Higher values of \(\gamma\) are depicted by larger marker shapes and darker colors and indicate heavier interventions. As \(\gamma\) increases, AUC-ROC always decreases and Equality increases. Disparity first decreases up to an inflection point and then increases, indicating an over-correction towards the protected class.

_Model Confidence._ In Figure 4, we display the impact of fairness interventions on a model's confidence about its predictions, i.e., un-calibrated density (Y-axis), compared to its logit scores (X-axis) on the Credit dataset. The plots in the top, middle, and bottom rows correspond to GCN, GraphSAGE, and GIN, respectively. Larger positive values imply higher confidence about predicting a positive outcome and larger negative values imply higher confidence for a negative outcome prediction. While there isn't a universal desired outcome, an intermediate goal for an intervention may be to ensure that a model is equally confident about _correctly_ predicting both positive and negative labels. Blue regions show normalized density of logit values for nodes in the protected class with a positive ground-truth label (S1-Y1) and green regions show the same for nodes in the protected class with a negative outcome as ground-truth. 
The dashed colored lines indicate density values for these groups of nodes for the Original model. PostProcess and Unaware induce small changes to the GNNs' outputs while EDITS is significantly disruptive. PFR-AX nudges the original model's output for nodes in S1-Y1 away from 0, making it more confident about its positive (correct) predictions, while NIFTY achieves the reverse.

## 6. Conclusion

We presented two interventions that intrinsically differ from existing methods: PFR-AX debiases data prior to training to connect similar nodes across protected and non-protected groups while seeking to preserve existing degree distributions; PostProcess updates model predictions to reduce error rates across protected user groups. We frame our study in the context of the tension between disparity, inequality, and accuracy, quantify the scope for improvements, and show that our approaches offer intuitive control over this tradeoff. Given their model-agnostic nature, we motivate future analysis by combining multiple interventions at different loci in the learning pipeline.

###### Acknowledgements.

This work has been partially supported by: Department of Research and Universities of the Government of Catalonia (SGR 00930), EU-funded projects "SoBigData++" (grant agreement 871042), "FINDHR" (grant agreement 101070212) and MCIN/AEI/10.13039/501100011033 under the Maria de Maeztu Units of Excellence Programme (CEX2021-001195-M). We also thank the reviewers for their useful comments.

Figure 4. Density of logit scores of GCN (first row), GraphSAGE (second row), and GIN (third row) after applying different algorithmic fairness interventions for users in the protected class in the Credit dataset. The vertical dashed (black) line depicts the threshold used for label prediction (positive scores indicate positive outcomes). The colored dashed curves indicate the density of output scores of Original. PFR-AX and PostProcess- improve model confidence (density) for correctly predicting a positive label for users in the protected class.

## Appendix A Additional Experimental Results
2310.02851
Dark Side of HAPS Systems: Jamming Threats towards Satellites
Securing satellite communication networks is imperative in the rapidly evolving landscape of advanced telecommunications, particularly in the context of 6G advancements. This paper establishes a secure low earth orbit (LEO) satellite network paradigm to address the challenges of the evolving 6G era, with a focus on enhancing communication integrity between satellites and ground stations. Countering the threat of jamming, which can disrupt vital communication channels, is a key goal of this work. In particular, this paper investigates the performance of two LEO satellite communication scenarios in the presence of a jamming attacker. In the first scenario, we consider a system that comprises one transmitting satellite, a receiving ground station, and a high altitude platform station (HAPS) acting as a jammer. The HAPS disrupts communication between the satellite and the ground station, impeding signal transmission. The second scenario involves two satellites, one being the transmitter while the other works as a relay, accompanied by a ground station and a jamming HAPS. In this scenario, the transmitting satellite sends data to the ground station using two different paths, i.e., direct and indirect transmission paths, with a relay satellite acting as an intermediary in the case of indirect transmission. For both scenarios, we study the system security by developing mathematical frameworks to investigate the outage effect resulting from the jamming signals orchestrated by the HAPS. Our results show that the satellite cooperation in the second scenario improves the system's security since the extreme jamming effect occurs only when both links are simultaneously disturbed.
Hadil Otay, Khaled Humadi, Gunes Karabulut Kurt
2023-10-04T14:35:35Z
http://arxiv.org/abs/2310.02851v1
# Dark Side of HAPS Systems: Jamming Threats towards Satellites

###### Abstract

Securing satellite communication networks is imperative in the rapidly evolving landscape of advanced telecommunications, particularly in the context of 6G advancements. This paper establishes a secure low earth orbit (LEO) satellite network paradigm to address the challenges of the evolving 6G era, with a focus on enhancing communication integrity between satellites and ground stations. Countering the threat of jamming, which can disrupt vital communication channels, is a key goal of this work. In particular, this paper investigates the performance of two LEO satellite communication scenarios in the presence of a jamming attacker. In the first scenario, we consider a system that comprises one transmitting satellite, a receiving ground station, and a high altitude platform station (HAPS) acting as a jammer. The HAPS disrupts communication between the satellite and the ground station, impeding signal transmission. The second scenario involves two satellites, one being the transmitter while the other works as a relay, accompanied by a ground station and a jamming HAPS. In this scenario, the transmitting satellite sends data to the ground station using two different paths, i.e., direct and indirect transmission paths, with a relay satellite acting as an intermediary in the case of indirect transmission. For both scenarios, we study the system security by developing mathematical frameworks to investigate the outage effect resulting from the jamming signals orchestrated by the HAPS. Our results show that the satellite cooperation in the second scenario improves the system's security since the extreme jamming effect occurs only when both links are simultaneously disturbed.

Secure satellite networks, jamming, anti-jamming techniques, HAPS.

## I Introduction

In recent times, the interest in satellite-based communication systems has surged, driven by industry initiatives from tech giants like SpaceX, Facebook, and Amazon [1, 2]. This renewed focus extends to satellite-air-ground-integrated networks (SAGIN), which has prompted research into network design, resource allocation, and performance analysis, highlighting challenges and future directions [3]. Additionally, researchers and technology providers are shifting their gaze toward the realm of beyond 5G (B5G) and 6G technologies. The potential for 6G technologies, including SAGIN, terahertz communication, orbital angular momentum (OAM), and quantum communication (QC), has drawn attention, leading to proposals for intelligent architectures and enhanced air-to-ground networks [4]. While satellite communications (SATCOMs) are pivotal, concerns about their security have grown, particularly regarding the vulnerability of satellite military traffic [4]. While some anti-spoofing efforts exist, addressing jamming attacks in Low Earth Orbit (LEO) communication for 6G remains a challenge. Traditional anti-jamming techniques, such as modulation and smart antennas, are facing evolving jamming tactics, necessitating innovative solutions [5, 6]. In the realm of secure satellite networks, several articles and surveys have addressed the critical issue of cybersecurity in the emerging space industry. For instance, the authors in [8] highlight the lack of integrated security measures or outdated security techniques in satellites. Similarly, [9] delves into the security issues and vulnerabilities existing in the context of 5G networks. 
Although the security issues in 5G overlap with those of 6G, the integration of satellites into 6G networks and the evolving capabilities of hackers pose additional threats to the confidentiality and integrity of satellite communication links. As such, several research works, including [10, 11, 12], shed light on one aspect of security concerns, and provide valuable insights into the impact of jamming attacks on wireless communication systems. Within the space-air-ground domain, the authors of [13] define jamming as the transmission of noise at sufficient power within the same frequency band as the transmitter and receiver. This perspective is further supported by the research conducted in [9] and [14]. In the realm of satellite networks, the article [15] identifies two types of jamming attacks: uplink and downlink. Downlink jamming affects SATCOM broadcasts and navigation satellites, while uplink jamming targets payload and command signals. Command signals play a crucial role in satellite missions, as highlighted in [16]. This article explores the future trends and technical approaches, in particular for satellite tracking, telemetry, and command (TT&C) systems, which are responsible for transmitting telemetry and telecommand data, as well as determining satellite orbits. The reliability of satellites heavily depends on the performance of the TT&C system, making it a critical component in the satellite's lifecycle. In the literature, various anti-jamming techniques have been proposed to mitigate the effects of jamming attacks. One notable approach is the use of intelligent reflecting surfaces [7]. Additionally, game theory-based approaches incorporating deep learning, reinforcement learning, and Stackelberg games have shown promise in combating jamming attacks, as seen in [17]. Moreover, comprehensive anti-jamming techniques have been explored throughout the entire satellite launch process, as demonstrated in [6]. The focal point of this study lies in the exploration of two satellite communication scenarios. The first scenario encompasses a transmitting satellite, a ground station, and a high altitude platform station (HAPS) taking on the role of a jammer. The HAPS, in this context, disrupts the communication that transpires between the satellite and the ground station, thereby causing obstruction in the transmission of signals. In the second scenario, the setup involves a transmitting satellite, a relay satellite, a ground station, and a HAPS, again acting as a source of disruption. Here, the transmitting satellite orchestrates the transmission of signals toward both the relay satellite and the ground station, with the relay satellite acting as an intermediary. Across both scenarios, the deliberate act of jamming executed by the HAPS generates interference that significantly impacts the communication links. For both scenarios, we study the system security by developing mathematical frameworks to investigate the outage effect resulting from the jamming signals orchestrated by the HAPS. The rest of this paper is organized as follows. In Section II, the system model is introduced. Section III is dedicated to the system performance analysis. In Section IV, numerical results along with necessary discussions are provided. Finally, we conclude this work in Section V. ## II System model ### _System architecture_ We consider a communication system consisting of two distinct scenarios.
In the first scenario, the system comprises a transmitter satellite referred to as \(T\), a ground station \(G\), and a HAPS acting as a jammer. In this configuration, the HAPS interferes with the downlink communication link between the satellite and the ground station, impeding the successful transmission of signals, as illustrated in Fig. 1. In the second scenario, we explore a specific setup involving a transmitter satellite \(T\), a relay satellite denoted as \(R\), a ground station \(G\), and a HAPS that functions as a jammer as well. In this configuration, satellite \(T\) transmits signals to satellite \(R\) and ground station \(G\) simultaneously. The relay satellite \(R\) performs its role by forwarding the signals from satellite \(T\) to the ground station \(G\), thus acting as an intermediary. This enables the ground station to receive its data directly from the satellite \(T\) or through the relay satellite \(R\), as depicted in Fig. 2. Again, the communication links from both satellites \(T\) and \(R\) to the ground station \(G\) are intentionally subjected to disruption through jamming, which is orchestrated by the HAPS. This satellite cooperation is expected to improve the system's security in addition to the communication performance since the extreme jamming effect occurs only when both links are simultaneously disturbed. ### _Channel model_ To analyze the channel characteristics in the different scenarios, we consider both the large-scale fading and the small-scale fading effects. The large-scale fading, which affects the overall link budget, takes into account factors such as path loss, shadowing, and atmospheric attenuation. This enables us to assess the power budget and determine the feasibility of establishing reliable communication links between the satellites and the ground station. On the other hand, the small-scale fading, specifically Rician fading, is of interest to study the rapid fluctuations in signal amplitude, phase, and power due to multipath propagation, reflections, and interference. By analyzing the Rician fading model, we can understand the statistical behavior of the communication channel, including the presence of a dominant line-of-sight (LOS) component and the influence of scattered signals. This information aids in designing efficient modulation, coding, and diversity schemes to mitigate the adverse effects of fading and enhance the reliability of the communication links. #### Ii-B1 Large-scale fading We consider the path loss of the satellite-to-ground-station links as well as the inter-satellite link. The path loss equations can be determined using suitable path loss models that incorporate factors such as distances, frequency, and environmental effects. The inter-satellite path loss between \(T\) and \(R\) can be expressed as [19], \[PL_{TR}^{ISL}=32.45+20\text{log}_{10}f+20\text{log}_{10}d_{TR}. \tag{1}\] Fig. 1: System model: Scenario 1. In this scenario, the system comprises a transmitter satellite (T), a ground station (G), and a HAPS acting as a jammer. Fig. 2: System model: Scenario 2. In this scenario, to mitigate the effect of the jamming signal originating from the HAPS, the system comprises cooperation between two satellites (the transmitter (T) and the relay (R)) to send the data to the ground station (G).
On the other hand, the path loss between the transmitter/relay satellite and \(G\) can be expressed in linear scale as [20]: \[PL_{u}=10^{-\left(\frac{PL_{u}^{prop}+PL_{u}^{shad}+PL_{u}^{ant}+PL_{u}^{other}}{10} \right)}, \tag{2}\] where \(u\in\{TG,~{}RG\}\) refers to the link from the transmitter \(T\) or the relay \(R\) to the ground station \(G\). \(PL_{u}^{prop}\) is the path loss in dB, accounting for the free-space propagation between the transmitter/relay satellite and the ground station, calculated as follows \[PL_{u}^{prop}=32.45+20\log_{10}f+20\log_{10}d_{u}. \tag{3}\] \(PL_{u}^{shad}\) is the shadowing loss in dB, representing the additional signal attenuation due to obstacles and environmental effects between the satellite and the ground station. \(PL_{u}^{shad}\sim\mathcal{N}(0,~{}\sigma^{2})\), where \(\sigma^{2}\) is the shadowing variance. \(PL_{u}^{ant}\) is the antenna gain loss in dB, considering the directional properties of the antennas at the satellite and the ground station, and is calculated as follows \[PL_{u}^{ant}=10\log_{10}\Big{(}4\Big{|}\frac{J_{1}(2\pi\eta\sin\omega_{u})}{2 \pi\eta\sin\omega_{u}}\Big{|}^{2}\Big{)} \tag{4}\] where \(J_{1}(.)\) represents the Bessel function of the first kind of order 1, \(\eta\) is the aperture radius of the antenna in wavelengths, and \(\omega_{u}\) is the boresight angle. \(PL_{u}^{other}\) is any other relevant loss, such as polarization loss or losses due to atmospheric effects, between the transmitter or relay satellite and the ground station. #### Ii-B2 Small-scale fading In our analysis, we make the assumption that the channel conditions remain static over a coherence time interval. This assumption allows us to adopt a flat fading model for the small-scale fading, meaning that the channel response does not vary significantly over the bandwidth of the transmitted signal. The communication channels between the satellites and the ground station comprise both LOS and non-LOS (NLOS) components. The likelihood of having a LOS connection increases with the elevation angle, peaking at a 90-degree elevation angle. Taking these factors into account, we model the channel between the satellite and \(G\) as a Rician fading channel [18]. This channel fading model is expressed as \[h_{u}=\sqrt{\lambda}e^{j\phi_{u}}+\sqrt{\lambda^{\prime}}h_{u}^{\prime}, \tag{5}\] where \(u\in\{TG,~{}RG\}\), \(h_{u}^{\prime}\) represents the complex channel gain of the scattered component between the transmitter/relay satellite and \(G\), and \(\lambda\) is given as \[\lambda=\frac{K}{K+1}, \tag{6}\] where \(K\) is the Rician factor. In (5), \(e^{j\phi_{u}}\) represents the phase of the LOS component, which is a uniform random variable in the range \([-\pi,\pi]\), and \[\lambda^{\prime}=1/(K+1). \tag{7}\] Here, \(\lambda^{\prime}\) is the power level of the NLOS link. This Rician fading model accounts for both the deterministic LOS component and the random NLOS component, capturing the effects of multipath propagation and potential scattering between satellites and the ground station.
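To make the channel model above concrete, the following minimal sketch (not from the paper) evaluates the free-space propagation loss of Eq. (3), the antenna-pattern term of Eq. (4), and draws Rician fading samples according to Eqs. (5)-(7). The carrier frequency, slant range, aperture size, boresight angle, and Rician factor are illustrative assumptions, and the function names are hypothetical.

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def free_space_loss_db(freq_mhz, dist_km):
    """Free-space propagation loss, Eq. (3), with f in MHz and d in km."""
    return 32.45 + 20 * np.log10(freq_mhz) + 20 * np.log10(dist_km)

def antenna_pattern_db(eta, boresight_rad):
    """Antenna-pattern term of Eq. (4); 0 dB at boresight, negative off-axis."""
    x = 2 * np.pi * eta * np.sin(boresight_rad)
    return 10 * np.log10(4 * np.abs(j1(x) / x) ** 2)

def rician_samples(K, n, rng):
    """Small-scale fading samples following Eq. (5): LOS plus scattered term."""
    lam, lam_p = K / (K + 1), 1 / (K + 1)                      # Eqs. (6)-(7)
    phase = rng.uniform(-np.pi, np.pi, n)                      # uniform LOS phase
    scatter = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    return np.sqrt(lam) * np.exp(1j * phase) + np.sqrt(lam_p) * scatter

rng = np.random.default_rng(1)
# Assumed values: 2 GHz carrier, 550 km slant range, 10-wavelength aperture, 5 deg off boresight.
print(f"propagation loss ~ {free_space_loss_db(2000.0, 550.0):.1f} dB")
print(f"antenna-pattern term ~ {antenna_pattern_db(10.0, np.deg2rad(5.0)):.1f} dB")
h = rician_samples(K=10.0, n=100_000, rng=rng)
print(f"mean |h|^2 ~ {np.mean(np.abs(h) ** 2):.3f} (unit mean power expected)")
```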
## III Performance analysis To analyze the system's performance, we adopt the Bernoulli theory, which enables us to examine binary events, namely a non-jammed link and a jammed link. The Bernoulli random variable \(X\) is defined based on the comparison of the signal-to-jamming ratio (\(SJR\)) to a given threshold \(\gamma_{th}\). For the jammed link event, \(X\) is set to 1 when \(SJR<\gamma_{th}\), representing the occurrence of jamming, whereas for the non-jammed link event, \(X\) is assigned 0 when \(SJR\geq\gamma_{th}\), indicating an unhindered communication link. The \(SJR\) captures the power ratio between the received signal and the interfering jamming signal. To compute the \(SJR\), we define the received legitimate signal power at the ground station from a given satellite as \(Pr_{u}=Pt_{u}Gt_{u}Gr_{u}PL_{u}h_{u}\), \(u\in\{TG,~{}RG\}\), where \(Pt_{u}\), \(Gt_{u}\), and \(Gr_{u}\) denote the satellite transmit power, satellite transmit antenna gain, and ground station receive antenna gain for path \(u\), respectively. The large-scale fading \(PL_{u}\) and small-scale fading \(h_{u}\) are given in (2) and (5), respectively. Similarly, the jamming signal power received from the HAPS is given as \(Pr_{j}=Pt_{HG}Gt_{HG}Gr_{HG}PL_{HG}h_{HG}\), where \(HG\) denotes the link between the HAPS and the ground station G, which interferes with both \(TG\) and \(RG\) links. For a given link \(u\in\{TG,~{}RG\}\), the \(SJR\) is expressed as \[SJR_{u}=\frac{Pr_{u}}{Pr_{j}}=\frac{Pt_{u}Gt_{u}Gr_{u}PL_{u}h_{u}}{Pr_{j}}. \tag{8}\] To proceed, we need to compute the jamming probability for a given link \(u\). This probability is expressed as follows: \[\mathbf{Pr}[X=1] = \mathbf{Pr}[SJR_{u}<\gamma_{th}] \tag{9}\] \[=\mathbf{Pr}\bigg{[}h_{u}<\frac{\gamma_{th}Pr_{j}}{Pt_{u}Gt_{u} Gr_{u}PL_{u}}\bigg{]},\] while \(\mathbf{Pr}[X=0]=1-\mathbf{Pr}[X=1]\). For the system model in Scenario 1, \(SJR=SJR_{TG}\). In the following, we refer to the jamming probability, \(P_{jam}(\gamma_{th})=\mathbf{Pr}[SJR<\gamma_{th}]\), where \(\gamma_{th}\) is the system reliability threshold. For analytical tractability, we make some assumptions. First, the Nakagami model is used to approximate the distribution of the Rician random variable. The PDF of the Nakagami random variable is given as [23] \[f(x)=\frac{\Omega^{m}}{\Gamma(m)}x^{m-1}e^{-\frac{1}{\Omega}x},\qquad x>0 \tag{10}\] where \(m\) and \(\Omega\) are the shape and scale parameters, respectively. To match the first and second moments of the Rician and Nakagami distributions, the shape parameter is calculated as \(m=(K+1)^{2}/(2K+1)\), which tends to \(m=K/2\) for large \(K\). Since we consider both LOS and NLOS connections, the channel model in (10) is rewritten as \(f_{v}(x)=\frac{\Omega_{v}^{m_{v}}}{\Gamma(m_{v})}x^{m_{v}-1}e^{-\frac{1}{\Omega_{v}}x}\), where \(v\in\{L,\ N\}\) refers to LOS and NLOS paths. Furthermore, to account for the large-scale fading in the analysis, we consider the propagation path loss since it is the dominant part due to the large distance between satellites and ground stations. As such, we rewrite the path loss as \(PL_{u}^{v}=32.45+20\log_{10}(f_{u})+10\alpha^{v}\log_{10}(d_{u})\), where \(\alpha^{v}\) is the path loss exponent and \(v\in\{L,\ N\}\) refers to LOS and NLOS paths.
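As a quick illustration of the Bernoulli formulation and the Nakagami approximation above, the sketch below estimates the jamming probability \(\mathbf{Pr}[SJR<\gamma_{th}]\) of Eq. (9) by Monte Carlo. The squared channel gains are treated as Gamma variates with shape \(m\) and scale \(\Omega\) (an assumption about the normalisation), and the linear link budgets are assumed, illustrative numbers rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed linear link budgets D = Pt*Gt*Gr*PL for the desired T-G link and the
# HAPS jamming link (illustrative only, not the paper's values).
D_TG, D_HG = 2.0e-12, 1.0e-13
gamma_th = 10 ** (10 / 10)          # 10 dB reliability threshold
m, omega = 3, 1 / 3                 # LOS Nakagami/Gamma power-gain parameters

n = 1_000_000
h_TG = rng.gamma(shape=m, scale=omega, size=n)   # desired-link power gain
h_HG = rng.gamma(shape=m, scale=omega, size=n)   # jamming-link power gain
sjr = (D_TG * h_TG) / (D_HG * h_HG)              # Eq. (8)
p_jam = np.mean(sjr < gamma_th)                  # empirical Pr[X = 1], Eq. (9)
print(f"estimated P[SJR < gamma_th] = {p_jam:.3f}")
```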
The following lemma computes the jamming probability for Scenario 1. **Lemma 1:** For a given reliability threshold \(\gamma_{th}\), the jamming probability for Scenario 1, denoted as \(P_{jam}^{Sc1}(\gamma_{th})\), which is the probability that the link _T-G_ is jammed by the HAPS, is expressed as \[P_{jam}^{Sc1}(\gamma_{th}) = P_{jam}^{TG}(\gamma_{th}) \tag{11}\] \[= p_{L}(\theta_{TG})P_{jam}^{TG,\ L}\left(\gamma_{th}\right)+p_{N} (\theta_{TG})P_{jam}^{TG,\ N}(\gamma_{th})\] where \(p_{L}(\theta_{TG})\) and \(p_{N}(\theta_{TG})\) define the probability that the link between the transmitting satellite and the ground station is LOS or NLOS, respectively, for a given elevation angle \(\theta_{TG}\), and \(P_{jam}^{TG,\ L}(\gamma_{th})\) and \(P_{jam}^{TG,\ N}(\gamma_{th})\) are the conditional jamming probabilities given that the link is in LOS and NLOS, respectively, and are expressed as in (12) and (13) at the top of the next page, where \(D_{v}^{TG}=Pt_{TG}Gt_{TG}Gr_{TG}PL_{TG}^{v}\) and \(D_{v}^{HG}=Pt_{HG}Gt_{HG}Gr_{HG}PL_{HG}^{v}\), \(v\in\{L,\ N\}\), refer to the LOS and NLOS link budgets of the useful communication link _T-G_ and the jamming link _H-G_, respectively. _Proof:_ see the Appendix. For the system model in Scenario 2, the ground station receives the same data through two different links, i.e., _T-G_ and _R-G_. As such, the \(SJR\) at the ground station is given as \[SJR=\max_{u\in\{TG,\ RG\}}\{SJR_{u}\} \tag{14}\] Accordingly, the jamming probability for Scenario 2 can be obtained using the following lemma. **Lemma 2:** For a given reliability threshold \(\gamma_{th}\), the jamming probability for Scenario 2, denoted as \(P_{jam}^{Sc2}(\gamma_{th})\), which is the probability that both links _T-G_ and _R-G_ are jammed by the HAPS, is expressed as \[P_{jam}^{Sc2}(\gamma_{th}) = P_{jam}^{TG}(\gamma_{th})P_{jam}^{RG}(\gamma_{th}), \tag{15}\] where \(P_{jam}^{TG}(\gamma_{th})\) is given in (11) and \(P_{jam}^{RG}(\gamma_{th})\) is given as \[P_{jam}^{RG}(\gamma_{th}) = \tag{16}\] \[p_{L}(\theta_{RG})P_{jam}^{RG,\ L}(\gamma_{th})+p_{N}(\theta_{RG} )P_{jam}^{RG,\ N}(\gamma_{th})\] Similarly, \(p_{L}(\theta_{RG})\) and \(p_{N}(\theta_{RG})\) are the probabilities that the _R-G_ link is LOS or NLOS, respectively, for a given elevation angle \(\theta_{RG}\), and \(P_{jam}^{RG,\ L}\left(\gamma_{th}\right)\) and \(P_{jam}^{RG,\ N}\left(\gamma_{th}\right)\) are the conditional jamming probabilities given that the _R-G_ link is in LOS and NLOS, respectively, and are expressed as in (12) and (13) by replacing \(D_{v}^{TG}\) with \(D_{v}^{RG}\), \(v\in\{L,\ N\}\). _Proof:_ Follows the same steps as in the proof of Lemma 1. To compute \(p_{L}(\theta_{u})\), \(u\in\{TG,RG\}\), we apply the model introduced in [24], which calculates the link LOS probability between a satellite and a ground station for an arbitrary elevation angle \(\theta\) as depicted in Fig. 3. This model is expressed as follows \[p_{L}(\theta)=\exp\left(-\beta\cot(\theta)\right), \tag{17}\] where \(\beta\) is a constant related to the environment geometry. The elevation angle \(\theta\) can be calculated as \(\cot(\theta)=\frac{r}{l}\), where \(r\) signifies the horizontal distance between the satellite and the ground station, and \(l\) represents the height of the obstructing structure. The probability that the link is NLOS is then \(p_{N}(\theta)=1-p_{L}(\theta)\), and it is evaluated below for the different environmental scenarios considered.
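Because the closed-form conditional probabilities (12)-(13) are not reproduced here, the following sketch assembles the jamming probabilities of Lemmas 1 and 2 from the LOS model of Eq. (17) and Monte Carlo stand-ins for the conditional LOS/NLOS terms. All link budgets, elevation angles, and the Gamma channel-gain assumption are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def p_los(theta_rad, beta):
    """LOS probability of a satellite-to-ground link, Eq. (17): exp(-beta*cot(theta))."""
    return np.exp(-beta / np.tan(theta_rad))

def cond_jam_prob(D_u, D_j, m, omega, gamma_th, n=200_000):
    """Monte Carlo stand-in for the conditional probabilities in Eqs. (12)-(13)."""
    h_u = rng.gamma(m, omega, n)     # desired-link power gain (LOS or NLOS parameters)
    h_j = rng.gamma(3, 1 / 3, n)     # jamming-link power gain, LOS worst case (assumed)
    return np.mean(D_u * h_u < gamma_th * D_j * h_j)

def link_jam_prob(theta_rad, beta, D_L, D_N, D_j, gamma_th):
    """LOS/NLOS-weighted jamming probability of one link, Eqs. (11) and (16)."""
    pL = p_los(theta_rad, beta)
    return (pL * cond_jam_prob(D_L, D_j, 3, 1 / 3, gamma_th)
            + (1 - pL) * cond_jam_prob(D_N, D_j, 2, 1 / 2, gamma_th))

# Assumed link budgets, elevation angles, and urban geometry (beta = 0.35).
gamma_th, beta = 10.0, 0.35
D_L, D_N, D_j = 2.0e-12, 4.0e-13, 1.0e-13
p_tg = link_jam_prob(np.deg2rad(40), beta, D_L, D_N, D_j, gamma_th)
p_rg = link_jam_prob(np.deg2rad(60), beta, D_L, D_N, D_j, gamma_th)
print(f"Scenario 1: P_jam ~ {p_tg:.3f};  Scenario 2 (Lemma 2 product): {p_tg * p_rg:.3f}")
```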
## IV Numerical results Unless otherwise specified, the details of the system parameters are as follows. The operating frequency is 2 GHz, and the transmit powers are \(Pt_{TG}=Pt_{RG}=10\) dB and \(Pt_{HG}=-10\) dB. The distances from the ground station to the transmitter and the relay satellites are \(d_{TG}=550\times 10^{3}\) m and \(d_{RG}=600\times 10^{3}\) m, respectively, while the distance to the jamming HAPS is \(d_{HG}=20\times 10^{3}\) m. The path loss exponents are \(\alpha^{L}=2\) and \(\alpha^{N}=2.2\). The channel fading parameters are set as \(m_{L}=3\), \(m_{N}=2\), \(\Omega_{L}=1/3\), and \(\Omega_{N}=1/2\). To model the LOS probability, three values of \(\beta\) are used, i.e., \(\beta=\{0.57,\,0.35,\,0.048\}\), for suburban, urban, and dense-urban scenarios, respectively [24]. For the jamming signal, we consider the worst case, _i.e._, when the HAPS is in LOS with the ground station. Fig. 3: The extension of the terrestrial LOS probability model to satellite scenarios [24]. In this section, we run Monte Carlo simulations in order to validate the analytical expressions provided in Lemmas 1 and 2. In Fig. 4, we plot the CDF of the \(SJR\) as a function of the system reliability threshold for the two scenarios with different values of \(\beta\) to consider three different environments, i.e., suburban, urban, and dense-urban areas. As shown in the figure, the analysis results match well with the simulations, which validates our derivations in Section III. Furthermore, this figure shows that for the same system parameters, dense-urban environments provide better SJR performance compared to urban and suburban environments. This is because dense-urban areas result in higher LOS probability for a given elevation angle according to the findings in [24]. Moreover, Fig. 4 illustrates that the satellite cooperation in Scenario 2 can significantly mitigate the jamming effect compared to the configuration in Scenario 1. In Fig. 5, the CDF of the SJR is plotted as a function of the transmit satellite elevation angle \(\theta_{TG}\) with different values of the relay satellite elevation angle, \(\theta_{RG}\). As shown in this figure, the system performance of Scenario 2 outperforms that of Scenario 1 at small values of \(\theta_{TG}\), while at large values, _i.e._, close to \(\theta_{TG}=90^{\circ}\), both scenarios provide approximately the same performance. This is because at small \(\theta_{TG}\), the LOS probability of the _T-G_ link is small, which degrades the received signal power compared to the jamming signal. As such, using a cooperative system as in Scenario 2 can enhance the system's performance. ## V Conclusion In this paper, we have explored the effectiveness of the indirect link of satellite communication as a robust solution to counter jamming threats. Our findings strongly indicate that the incorporation of a relay satellite represents a viable and effective strategy to combat jamming, ensuring a reliable communication link. In particular, this paper considered a satellite system model for two scenarios: the first scenario uses a single transmit satellite to directly send data to the ground station, while the second scenario incorporates a relay system to improve the communication link reliability. For both scenarios, we studied system security by developing mathematical frameworks to investigate the outage effect resulting from the jamming signals orchestrated by an attacker HAPS. Our results showed that using relay satellite systems can provide a safeguarding system against adversarial interference, offering a promising avenue for ensuring dependable and secure
communication networks in jamming-prone environments. Fig. 4: CDF of the SJR for Scenarios 1 and 2 with \(\beta=\{0.57,\,0.35,\,0.048\}\) for suburban, urban, and dense-urban scenarios, respectively. Fig. 5: CDF of the SJR as a function of the elevation angle of satellite \(T\) for Scenarios 1 and 2 with \(\gamma_{th}=10\) dB, \(\beta=0.35\), and different values of the satellite \(R\) elevation angle \(\theta_{RG}\). ## Acknowledgement This work is supported in part by Mitacs Globalink. ## VI Appendix ### _Proof of Lemma 1_ Again, the SJR of the TG link is expressed as \[SJR_{TG}=\frac{Pr_{TG}}{Pr_{j}}=\frac{Pt_{TG}Gt_{TG}Gr_{TG}PL_{TG}h_{TG}}{Pr_{j}} \tag{18}\] where the jamming received power is \(Pr_{j}=Pt_{HG}Gt_{HG}Gr_{HG}PL_{HG}h_{HG}\). In this case, the jamming probability is obtained as \[\mathbf{Pr}[SJR_{TG}<\gamma_{th}]=\mathbf{Pr}\bigg{[}h_{TG}<\frac{\gamma_{th} Pr_{j}}{Pt_{TG}Gt_{TG}Gr_{TG}PL_{TG}}\bigg{]}. \tag{19}\] Given that TG is a LOS link, the jamming probability is written as \[P_{jam}^{TG,L}(\gamma_{th})=Pr\bigg{[}h_{TG}^{L}<\frac{\gamma_{th}Pr_{j}}{D_{L}^{TG}}\bigg{]}, \tag{20}\] where \(D_{L}^{TG}=Pt_{TG}Gt_{TG}Gr_{TG}PL_{TG}^{L}\). Since \(h_{TG}^{L}\) follows a Nakagami distribution with parameters \(m_{L}\) and \(\Omega_{L}\), the probability in (20) can be expressed as \[P_{jam}^{TG,L}(\gamma_{th}) = \mathbb{E}_{Pr_{j}}\Bigg{[}\frac{\Gamma(m_{L})-\Gamma\Big{(}m_{L},\;\Omega_{L}\frac{\gamma_{th}Pr_{j}}{D_{L}^{TG}}\Big{)}}{\Gamma(m_{L})}\Bigg{]} \tag{21}\] \[=1-p_{L}(\theta_{HG})\mathbb{E}_{h_{HG}^{L}}\Bigg{[}\frac{\Gamma\Big{(}m_{L},\;\Omega_{L}\frac{\gamma_{th}D_{L}^{HG}h_{HG}^{L}}{D_{L}^{TG}}\Big{)}}{\Gamma(m_{L})}\Bigg{]}\] \[-p_{N}(\theta_{HG})\mathbb{E}_{h_{HG}^{N}}\Bigg{[}\frac{\Gamma\Big{(}m_{L},\;\Omega_{L}\frac{\gamma_{th}D_{N}^{HG}h_{HG}^{N}}{D_{L}^{TG}}\Big{)}}{\Gamma(m_{L})}\Bigg{]}.\] Let \(\Lambda_{L}\) and \(\Lambda_{N}\) refer to the second and third terms in (21), respectively. Then, \[\Lambda_{L} = \frac{\Omega_{L}^{m_{L}}}{\Gamma(m_{L})}\int_{0}^{\infty}\frac{\Gamma\Big{(}m_{L},\;\Omega_{L}\frac{\gamma_{th}D_{L}^{HG}h_{HG}^{L}}{D_{L}^{TG}}\Big{)}}{\Gamma(m_{L})}(h_{HG}^{L})^{m_{L}-1} \tag{22}\] \[\times e^{-\frac{1}{\Omega_{L}}h_{HG}^{L}}dh_{HG}^{L}.\] Conducting the above integration and after some manipulations, we get \[\Lambda_{L} = \frac{p_{L}(\theta_{HG})D_{L}^{TG}}{D_{L}^{HG}\gamma_{th}}\Bigg{(}1-\frac{\Omega_{L}^{m_{L}}\big{(}\frac{D_{L}^{HG}\gamma_{th}}{D_{L}^{TG}}\Omega_{L}\big{)}^{-m_{L}}\Gamma(2m_{L})}{\Gamma(m_{L})} \tag{23}\] \[\times\frac{{}_{2}F_{1}\Big{(}m_{L},2m_{L},m_{L}+1,-\frac{D_{L}^{TG}}{D_{L}^{HG}\gamma_{th}}\Big{)}}{\Gamma(m_{L})}\Bigg{)}.\] Similarly, \(\Lambda_{N}\) can be derived to be as written in the second term of (12). Substituting \(\Lambda_{L}\) and \(\Lambda_{N}\) in (21), we obtain the entire expression in (12). By following the same steps, we can derive the jamming probability given that the TG link is NLOS to get the results in (13).
2305.06881
Rotation and interaction of the September 8 and 10, 2014 CMEs tested with EUHFORIA
Solar coronal mass ejections (CMEs) can catch up and interact with preceding CMEs and solar wind structures to undergo rotation and deflection during their propagation. We aim to show how interactions undergone by a CME in the corona and heliosphere can play a significant role in altering its geoeffectiveness predicted at the time of its eruption. We consider a case study of two successive CMEs launched from the active region NOAA 12158 in early September 2014. The second CME was predicted to be extensively geoeffective based on the remote-sensing observations of the source region. However, in situ measurements at 1~au recorded only a short-lasting weak negative Bz component followed by a prolonged positive Bz component. The EUropean Heliosphere FORecasting Information Asset (EUHFORIA) is used to perform a self-consistent 3D MHD simulation of the two CMEs in the heliosphere. The initial conditions of the CMEs are determined by combining observational insights near the Sun, fine-tuned to match the in situ observations near 1~au, and additional numerical experiments of each individual CME. By introducing CME1 before CME2 in the EUHFORIA simulation, we modelled the negative Bz component in the sheath region ahead of CME2 whose formation can be attributed to the interaction between CME1 and CME2. To reproduce the positive Bz component in the magnetic ejecta of CME2, we had to initialise CME2 with an orientation determined at 0.1~au and consistent with the orientation interpreted at 1~au, instead of the orientation observed during its eruption. EUHFORIA simulations suggest the possibility of a significant rotation of CME2 in the low corona in order to explain the in situ observations at 1~au. Coherent magnetic field rotations, potentially geoeffective, can be formed in the sheath region as a result of CME-CME interactions in the heliosphere even if the individual CMEs are not geoeffective.
Anwesha Maharana, Camilla Scolini, Brigitte Schmieder, Stefaan Poedts
2023-05-11T15:22:36Z
http://arxiv.org/abs/2305.06881v1
# Rotation and interaction of the September 8 and 10, 2014 CMEs tested with EUHFORIA ###### Abstract Context:Solar coronal mass ejections (CMEs) can catch up and interact with preceding CMEs and solar wind structures to undergo rotation and deflection during their propagation. Aims:We aim to show how interactions undergone by a CME in the corona and heliosphere can play a significant role in altering its geoeffectiveness predicted at the time of its eruption. To do so, we consider a case study of two successive CMEs launched from the active region NOAA 12158 in early September 2014. The second CME was predicted to be extensively geoeffective based on the remote-sensing observations of the source region. However, in situ measurements at 1 au recorded only a short-lasting weak negative \(B_{z}\) component followed by a prolonged positive \(B_{z}\) component. Methods:The EUropean Heliosphere FORCcasting Information Asset (EUHFORIA) is used to perform a self-consistent 3D MHD data-driven simulation of the two CMEs in the heliosphere. First, the ambient solar wind is modelled, followed by the time-dependent injection of CME1 with the LFF spheromak and CME2 with the "Flux Rope in 3D" (FRi3D) model. The initial conditions of the CMEs are determined by combining observational insights near the Sun, fine-tuned to match the in situ observations near 1 au, and additional numerical experiments of each individual CME. Results:By introducing CME1 before CME2 in the EUHFORIA simulation, we modelled the negative \(B_{z}\) component in the sheath region ahead of CME2 whose formation can be attributed to the interaction between CME1 and CME2. To reproduce the positive \(B_{z}\) component in the magnetic ejecta of CME2, we had to initialise CME2 with an orientation determined at 0.1 au and consistent with the orientation interpreted at 1 au, instead of the orientation observed during its eruption. Conclusions:EUHFORIA simulations suggest the possibility of a significant rotation of CME2 in the low corona in order to explain the in situ observations at 1 au. Coherent magnetic field rotations with enhanced strength (potentially geoeffective) can be formed in the sheath region as a result of interactions between two CMEs in the heliosphere even if the individual CMEs are not geoeffective. ## 1 Introduction Coronal Mass Ejections (CMEs) can drive major geomagnetic storms (Gosling et al., 1991; Huttunen et al., 2005). Hence, it is important to model their initiation and propagation in order to forecast their arrival at Earth or at any other planet or satellite. Uncertainties in space weather prediction are introduced by multiple factors, starting from the monitoring of the eruptions at the Sun, to the modelling of their propagation from the solar corona to the planets or satellites in the inner heliosphere (Riley and Ben-Nun, 2021; Verbeke et al., 2022). Magnetohydrodynamic (MHD) modelling is useful for tracking the propagation of CMEs, accounting for their interactions with solar wind structures and other CMEs, and computing their geoeffectiveness. Data-driven MHD modelling of CME evolution is more physical as it constrains the initial and boundary conditions using the early observations of the eruptions. However, if the orientation of the emerging CME is misinterpreted in the low corona, the initial conditions for the propagation models will yield inappropriate prediction results. In this work, we present and analyse such a case of space weather misprediction. 
As part of the ISEST VarSITI campaign ([http://solar.gmu.edu/heliophysics/index.php/ISEST](http://solar.gmu.edu/heliophysics/index.php/ISEST)), the CME event of September 10, 2014, was used to perform the exercise of real-time forecasting. The prediction was made considering the magnetic field signatures of the eruption on the solar surface and the direction of propagation estimated from the coronagraphic field of view. The CME was predicted to have a strong negative \(B_{z}\) component and be a frontal impact at Earth (Webb and Nitta, 2017). However, by the time the CME reached Earth, the associated magnetic ejecta (ME; Burlaga, 1988; Winslow et al., 2015) was characterised by a long-lasting positive \(B_{z}\) component. A brief period of negative \(B_{z}\) component was present in the sheath ahead of CME2 that drove a moderate storm (minimum Dst\(\sim-88\) nT) instead of the intense storm predicted, and hence, the geoeffectiveness of the different sub-structures associated with the CME was greatly mispredicted. Upon taking a closer look at this period, we noticed the presence of a preceding earthward CME that erupted late on September 8, 2014, and that was not recorded in any of the Interplanetary CME (ICME) catalogues at Earth. This preceding CME could precondition the propagation of the CME that erupted on September 10, 2014, and open the possibility of CME-CME interaction leading to the formation of the geoeffective sheath. Specifically, we are seeking answers to two questions: (1) What is the orientation of the CME that erupted on September 10, 2014, at 0.1 au that must be injected in EUHFORIA in order to obtain the correct signature at 1 au?; and (2) What is the role of the preceding CME in the formation of the negative \(B_{z}\) component (or the magnetic field rotation) in the sheath region? Figure 1: Overview of the eruption, early evolution in corona, and geomagnetic signatures of CME1 and CME2. Panels (a,b) and panels (c,d) span the time ranges of September 8-10, 2014 and September 9-11, 2014, respectively. (a) An M-class flare on late September 8, 2014 (indicated with black arrow), can be associated with the eruption of CME1; (b) Height-time plot of CME1 is shown by the blue height profile starting at \(\sim\)23.30 UT on September 8, 2014; (c) An X-class flare on September 10 (indicated with black arrow), can be associated with the eruption of CME2. (d) Height-time plot of CME2 is shown by the purple height profile starting \(\sim\)17.30 UT on September 9, 2014. The solar source coordinates of the flares are labelled in the GOES X-ray intensity plots in panels (a,c). The colour (line) codes in panels (b,d) define the CME propagation direction (apparent angular width). (e) Disturbance storm index (Dst), a measure of the geomagnetic activity at Earth shows a calm phase followed by a mild disturbance and then a moderate storm in the period 12-14 September 2014 (disturbances indicated with arrows). Source: CDAW catalogue - [https://cdaw.gsfc.nasa.gov/CME_list/daily_plots/sephtx/2014_09/](https://cdaw.gsfc.nasa.gov/CME_list/daily_plots/sephtx/2014_09/) The paper is organised as follows: Section 2 provides an observational overview of the event and builds the motivation of this study. In Sections 3 and 4, we describe the event using various observational proxies, both remote and in situ, at the Sun (1 R\({}_{\odot}\)), close to 0.1 au and at 1 au. We perform MHD simulations of the event as described in Section 5. 
Section 6 presents the modelling results and our interpretation of this puzzling event, and Section 7 provides the summary and conclusions. ## 2 Observations ### Overview of Sun-to-Earth signatures of the CMEs In this section, we identify the observational signatures of the two successive CMEs that occurred between September 8-10, 2014. The first CME (hereafter, CME1) was associated with an M4.6 flare occurring in the Active Region NOAA 12158 (hereafter, AR 12158), positioned at N12E29, on September 8, 2014, starting around 23:12 UT. The flare peaked at 00:28 UT on September 9, 2014. Figure 3: Running difference images showing the development of CME1 (top row) on early September 9, 2014, and CME2 (bottom row) on September 10, 2014, in the LASCO C2 field of view. Source: [https://cdaw.gsfc.nasa.gov/CME_list/](https://cdaw.gsfc.nasa.gov/CME_list/) Figure 2: Position of STEREO-A, STEREO-B and Earth on September 9, 2014, at 00:00. The grid in black corresponds to the Stonyhurst coordinate system. This polar plot is generated using the Solar-MACH tool ([https://serpentine-h2020.eu/tools/](https://serpentine-h2020.eu/tools/); Gieseler et al. 2022). The origin of the second CME (hereafter, CME2) has been extensively studied (Cheng et al. 2015; Dudik et al. 2016; Zhao et al. 2016). CME2 was associated with an X1.6 flare which started on September 10, 2014, at 17:21 UT from AR 12158, positioned at N15E02, and peaked at 17:45 UT. Figure 1(a) and 1(c) indicate the flares and provide the X-ray intensities associated with the eruption of CME1 and CME2, respectively. During the early propagation of CME1 and CME2, they were detected in the field of view (FOV) of the C2 and C3 instruments of the Large Angle and Spectrometric COronagraph (LASCO, Brueckner et al. 1995) on board the Solar and Heliospheric Observatory (SOHO), and the COR-2 instrument on board the Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI) package of the twin-spacecraft Solar Terrestrial Relations Observatory (STEREO, Kaiser et al. 2008). Only STEREO-B recorded the observations, while a data gap was found in STEREO-A during this period. Figure 2 shows the relative positioning of the observing spacecraft in the heliosphere. CME1 was visible in the C2 FOV at 00:06 UT with an apparent speed of 920 km s\({}^{-1}\) and in the COR-2B FOV at 00:24 UT. CME2 was first observed by C2 at 17:48 UT, and it developed as a halo CME at 18:24 UT. It was later visible in the C3 FOV starting from around 18:45 UT with an apparent speed of 1267 km s\({}^{-1}\). CME2 first appeared in the COR-2B FOV at 18:24 UT. The height-time profiles of CME1 and CME2 up to \(\sim 30\) R\({}_{\odot}\), created by automatic tracking of the CME leading edge and approximating a linear fit by the CDAW catalogue, are shown in Fig. 1(b) and 1(d), respectively. The above details are also listed in Table 1. The association of the CMEs with the corresponding flares is also reported by Vemareddy et al. (2016a). Figure 3 shows the evolution of both the CMEs in the C2 FOV. CME2, tagged as a textbook event (Webb & Nitta 2017), reached Earth on September 12, 2014.
The arrival of CME2 at L1 is recorded in the WIND ICME catalogue ([https://vind.nasa.gov/ICME_catalog/ICME_catalog_viewer.php](https://vind.nasa.gov/ICME_catalog/ICME_catalog_viewer.php)) at 15:17 UT and in the Richardson and Cane catalogue ([https://izwl.caltech.edu/ACE/ASC/DATA/levels1/cmetable2.htm](https://izwl.caltech.edu/ACE/ASC/DATA/levels1/cmetable2.htm); Cane & Richardson 2003; Richardson & Cane 2010) at 15:53 UT, respectively. The interplanetary counterpart of CME1 is not listed in any of the ICME catalogues. No other wide and Earthward CMEs were observed between the time period starting from the eruption of CME1 until two days after the eruption of CME2, which could have arrived at L1 interrupting or affecting the two CMEs of our concern. Extrapolating the CME arrival times at Earth using the projected speeds from the CDAW catalogue ([https://cdaw.gsfc.nasa.gov/CME_list/](https://cdaw.gsfc.nasa.gov/CME_list/)), we obtain a time interval between their estimated arrival times of about 29 hours. This is a rough estimate assuming no effects from the drag and the interaction of the CMEs in the heliosphere affect their kinematics during propagation. The time difference between the arrival times observed in situ at L1 is \(\sim 32\) hours, hence corroborating the calculated time difference and the association between the coronal and interplanetary signatures of the eruptions. Signatures of both the CMEs were identified in the Disturbance storm index (Dst) at Earth as shown in Fig. 1(e). A low drop in Dst (\(\sim-40\) nT) followed by a moderately negative storm (Dst\(\sim-88\) nT) was observed. This prompted the preliminary association of the CMEs in the low corona and the CMEs at 1 au. ### In situ signatures at Earth The in situ signatures of the CMEs are plotted in Fig. 4. The shock (S1) driven by CME1 is observed on September 11 at 22:50 UT based on the IPShock catalogue ([http://ipshocks.fi/](http://ipshocks.fi/); Kilpua et al. 2015). It is characterised by the enhancement in speed and number density. CME1 is directed north of the ecliptic plane as seen from the coronagraph images from LASCO instruments1. The Space Weather Database Of Notifications, Knowledge, Information (DONKI, [https://kauai.ccmc.gsfc.nasa.gov/DONKI/search/](https://kauai.ccmc.gsfc.nasa.gov/DONKI/search/)) catalogue has also recorded a north-eastward launch direction of the CME. These observations suggest that the WIND spacecraft would have encountered the southwestern flank of CME1. The long sheath region (characterised by density enhancement and fluctuating magnetic field signatures in the red-shaded region in Fig. 4) after the shock of CME1 can also be inferred as the signature of the CME1 flank. Following these clear sheath signatures, we observe a simultaneous decrease in density and temperature, plasma beta (\(\beta\)) being less than 1, and a slight apparent increase in the magnetic field after the turbulent phase. However, the lack of a clear rotation in the magnetic field vector suggests the passage of the CME1 flank. A period of bi-directional electrons can also be observed between S1 and S2 (corresponding to the yellow-shaded region), suggesting their propagation inside the magnetic ejecta (hereafter, ME1) associated with CME1. 
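The ~29-hour separation estimated above from the projected CDAW speeds can be verified with simple ballistic arithmetic. The sketch below uses the first C2 appearance times and apparent speeds quoted earlier and assumes a constant speed with no drag or CME-CME interaction, so it is only a rough consistency check, not part of the modelling.

```python
from datetime import datetime, timedelta

AU_KM = 1.496e8  # 1 au in km

# First appearances in the LASCO C2 FOV and projected speeds from the CDAW catalogue.
cme1_t0, v1 = datetime(2014, 9, 9, 0, 6), 920.0      # km/s
cme2_t0, v2 = datetime(2014, 9, 10, 17, 48), 1267.0  # km/s

def arrival(t0, v_kms):
    """Ballistic 1-au transit assuming constant speed, no drag, no interaction."""
    return t0 + timedelta(seconds=AU_KM / v_kms)

dt_hours = (arrival(cme2_t0, v2) - arrival(cme1_t0, v1)).total_seconds() / 3600
print(f"estimated arrival-time separation ~ {dt_hours:.0f} hours")  # ~29 hours
```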
Although some other typical characteristics of MEs, such as a significantly enhanced magnetic field, clear rotations in the magnetic field components, and oxygen enhancements, \begin{table} \begin{tabular}{l|l l} \hline \hline & CME1 & CME2 \\ \hline \hline Active region position & N12E29 & N15E02 \\ \hline Flare class & M4.6 & X1.6 \\ \hline Flare time (start - peak - end) & September 8, 23:12 UT - September 9, 00:28 UT - September 9, 01:30 UT & September 10, 17:45 UT - September 10, 17:45 UT \\ \hline Apparent CME Speed [km s\({}^{-1}\)] & 920 & 1267 \\ \hline \hline \end{tabular} \end{table} Table 1: Observational details of the eruption of CME1 and CME2. Flare details are as reported on Solar Monitor ([https://www.solarmonitor.org/](https://www.solarmonitor.org/)). The position of the active region AR 12158, as per the NOAA catalogue, is in heliographic coordinates. CME speeds are taken from the CDAW catalogue as computed by an automatic linear fit. are missing, the low density, temperature, \(\beta\), and magnetic field fluctuations in combination with bi-directional electrons suggest the passage of ME1 associated with CME1 starting on September 12 at 8:45 UT as indicated in the yellow shaded region (Zurbuchen & Richardson 2006). The second shock (S2) is recorded on September 12 at 15:17 UT. S2 is followed Figure 4: In situ measurements by the Wind spacecraft during the period of September 11, 2014, and September 15, 2014. The figure shows (top to bottom) speed (\(v\)), proton temperature (\(T_{p}\)) and number density (\(n_{p}\)) in the second panel, magnetic field components (\(B_{x}\), \(B_{y}\), \(B_{z}\)) in the GSE coordinate system capped with the total magnetic field (\(\pm|B|\)) in the third panel, total magnetic field (\(|B|\)), the plasma beta (\(\beta\)) in the fourth panel, the \(\theta\) and \(\phi\) components of the magnetic field in GSE angular coordinates in the fifth and sixth panel, respectively, the Dst index in the seventh panel, and in the last panel the suprathermal pitch angle distribution (with energies \(>136\) eV). Vertical dashed lines indicate the shock arrival of CME1 (S1, blue) and the start of the magnetic ejecta passage of CME1 (ME1, green). The shock arrival of CME2 (S2, cyan), the start of the magnetic ejecta passage of CME2 (magenta) and the end of the magnetic ejecta passage (red) are as identified in the Wind ICME catalogue. The shaded red and yellow regions represent the sheath ahead of CME1 and the magnetic ejecta, ME1, respectively. The shaded green and blue regions depict the sheath ahead of the CME2 and the magnetic ejecta, ME2. The magnetic ejecta of CME2 has been identified in the Richardson and Cane catalogue as well. by a distinct turbulent sheath (green shaded region) and a clear magnetic ejecta (hereafter, ME2, shaded blue region). The start and end times of ME2, as recorded in the WIND catalogue, are September 12 at 21:22 UT and September 14 at 11:38 UT, respectively. The Richardson and Cane catalogue reports the ME2 boundaries as September 12 at 22:00 UT and September 14 at 2:00 UT. Upon closer visual inspection of the data, we find that the ME2 boundaries from the WIND catalogue include some part of the sheath before ME2, and the boundaries listed in the Richardson and Cane catalogue detect the end of ME2 \(\sim\) 10 hours earlier than the WIND catalogue. 
Although the WIND catalogue visibly provides better boundaries for ME2, we correct the start time to September 12 at 21:36 UT based on the visual inspection to remove parts of the sheath. As seen in Fig. 4(panel 1), a decreasing speed profile within ME2 points to an expanding structure passing through the spacecraft. Inside ME2, the plasma density and temperature are also reduced (Fig. 4, panel 2). Rotations in the magnetic field direction, an enhancement in the magnetic field strength, and a reduced \(\beta\) can also be observed in the shaded blue region in Fig. 4(panels 3, and 4). The \(\theta\)-profile (Fig. 4, panel 5) does not show a smooth rotation in the north-south magnetic field direction (i.e., \(|\Delta\theta|\neq 0\)), but rather a constant and long-duration positive profile. This observation is compatible with the passage of a CME flank. There is a jump in \(\phi\) from \(90^{\circ}\) to \(>180^{\circ}\) (Fig. 4, panel 6) which implies a westward axis of CME2. Marubashti et al. (2017) performed a toroidal CME model fitting to the in situ measurements of ME2 and reported a small rotation angle of the observed magnetic field vector. Their most important takeaway was that, although the magnetic field orientation of the CME2 is southward, the observed northward magnetic field inside ME2 could be due to the impact of the southern edge of the CME as the CME propagation was mainly north of the ecliptic. This highlights how crucial it is to predict what part of the gigantic CME would impact Earth in order to forecast the geomagnetic effects. A period of bi-directional suprathermal electrons corresponding to the ME2 boundaries can be observed (Fig. 4, panel 8). Other magnetic ejecta signatures, such as an enhanced oxygen charge ratio (O\({}^{+7}\)/O\({}^{+6}\)) and average iron charge ratio \(<Q_{F}>\) have also been reported in association with ME2 in Kilpua et al. (2021). The geomagnetic storms at L1 during the period of September 12-14, 2014 are characterised by the negative Dst index (Fig. 4, panel 7). The negative \(B_{z}\) component in the sheaths ahead of CME1 and CME2 can be associated with the weak storm on September 12 and the moderate storm on September 13, respectively. Kilpua et al. (2021) modelled CME2 with a time-dependent magnetofrictional model initialised with a flux rope about the PIL as suggested by Vemareddy et al. (2016a) and with a chirality consistent with the inverse S-shaped EUV sigmoid. The modelling results did not match the in situ magnetic field observations. Even if they inferred that CME2 was a flank hit at Earth, the extrapolated \(B_{z}\) component was still mainly negative contrary to the in situ observations at Earth. Some studies have shown that initializing CME2 in MHD heliospheric propagation models using the PIL orientation as in Vemareddy et al. (2016a), did not match the magnetic field configuration at 1 au (An et al., 2019). The results from the previous studies raise the question of whether the CME rotated further anti-clockwise, either in the low corona or during heliospheric propagation in a way that could have led to the passage of a westward axial magnetic field with a dominant positive \(B_{z}\) component through Earth. ## 3 Reconstruction of the CMEs from remote-sensing observations in the corona In this section, we derive the magnetic, geometric, and kinematic CME parameters from remote-sensing observations which we will later use to initialise the CMEs in the heliospheric simulations. 
In this work, the CMEs are modelled as magnetic flux ropes (defined as bundles of twisted magnetic field lines with electric fields flowing inside; Antiochos et al., 1999; Torok and Kliem, 2005). First, the chirality and the orientation of the erupting flux rope are constrained from the CME source region proxies in Section 3.1. Second, the magnetic flux is derived using statistical relations based on the X-ray flare intensity in Section 3.2. Finally, a 3D geometrical reconstruction is performed using remote sensing coronagraph observations from LASCO and STEREO-B in the corona below 0.1 au in Section 3.3. ### Source region observations In this section, the pre-, during and post-eruptive magnetic field signatures are analysed with remote sensing observations from the Atmospheric Imaging Assembly (AIA) and Helioseismic and Magnetic Imager (HMI) on board the Solar Disk Observatory (SDO; Lemen et al., 2012). AR 12158 appeared rotated onto the disc around September 3, 2014, and erupted twice, once on September 8 and again on September 10. Observations of the active region in different wavelengths during the eruption of CME1 and CME2 are shown in Fig. 5 and 6. We place our focus mainly on CME2 in order to understand its orientation during eruption and to probe the reasons for possible rotations in the low corona which could have led to the mismatch of its orientation at 1 au. To estimate the magnetic field orientation of the front part of the flux rope, we investigate the magnetic chirality and polarity of the active region and the orientation of the polarity inversion line (PIL). In addition, we check the associated dimming regions to identify the footpoints of the erupting flux rope to further support the orientation inferred from the PIL. _Flux rope chirality (the sign of magnetic helicity)_: Using the EUV/soft X-ray sigmoid as a proxy for an emerging flux rope embedded in an arcade (Titov and Demoulin, 1999), a reverse S-shaped sigmoid is identified in AIA 131 A before the eruption of CME1 and CME2 as shown in Fig. 5(a) and 6(a), respectively. This suggests the erupting flux ropes associated with both CME1 and CME2 likely have a left-handed chirality. The leading positive magnetic polarity extends over the southern part of the trailing polarity and gives the active region a negative twist i.e., a proxy for left-handed chirality (Luoni et al., 2011) as shown in the HMI magnetogram images in Fig. 5(b) and 6(b). Additionally, the sigmoids were well identified with two inverse J-shaped flare ribbons in AIA 304 A characterising their left-handedness in Fig. 5(c) and Fig. 6(c) for CME1 and CME2 respectively (Palmerio et al., 2017). _Flux rope orientation_: The orientation (tilt) of the erupting flux rope is inferred from the orientation of the polarity inversion line (PIL), which is usually parallel to the axial magnetic field of the flux rope. The tilt angle, measured from the solar west is assigned a positive (negative) value if the acute angle is calculated counter-clockwise (clockwise) from the ecliptic on the solar west. The directionality of the axis is determined from the chirality of the active region. The flare associated with CME1 was localised to the eastern part of the extended PIL which could be approximated with a straight line as shown in Fig. 5(b). However, determining a univocal orientation for the PIL in the case of CME2 was not straightforward because the eruption extended along the curved geometry of the PIL (green dashed line in Fig. 6(b)). 
In the case of both CME1 and CME2, we consider the main axial field direction as northeastward making an angle \(\sim-45^{\circ}\) with the ecliptic. The magnetic field topology of AR 12158 reconstructed with a nonlinear force-free field (NLFFF) model also corroborates the presence of a highly twisted pre-eruptive flux rope surrounded by inverse J-shaped magnetic field lines (Zhao et al., 2016). Dudik et al. (2016) found evidence of the occurrence of slipping reconnection in the flaring region where flare loops slip towards both ends of the ribbons. When the eruption occurs, the filaments are observed getting disturbed in AIA 171 A in a northwestward direction, which is also identified by Dudik et al. (2016) as indicated by the white arrow in Fig. 6(d). The location of the footpoints of the flux rope is also identified with the coronal dimming signatures in AIA 211 Aas shown in the white arrows in Fig. 6(e). In Fig. 6(f), the base difference image in AIA 131 A overlaid with HMI magnetogram (saturated at \(\pm 1000\) G; blue for positive and red for negative polarity) after the CME2 eruption. The development of the dark dimmings was observed in the southeast and northwest parts of the active region lying in the negative and positive magnetic polarity regions as marked by yellow circles. This suggests the eruption of the flux rope almost parallel to the linear PIL marked by the red dashed line. The orientation of the main PIL of AR 12158 associated with the CME1 and CME2 eruptions is consistent with the descriptions provided by Vemareddy et al. (2016a); Dudik et al. (2016); Zhao et al. (2016), i.e.,the tilt is \(\sim-45^{\circ}\) using straight-line assumption. The conclusions drawn from the observations of the flare-CME eruption phase are as follows: (a) The axial flux rope fields of the CMEs are directed eastward; (b) The CMEs were characterised by a left-handed helicity, which combined with the eastward axial fields implies north to south poloidal field lines at the flux rope apex, hence characterising the magnetic topology of the flux ropes as SEN (Bothmer & Schwenn, 1998). Figure 5: AR 12158 associated with CME1 eruption in different wavelengths - (a) AIA 131 Å image highlights the evolved sigmoid and the hooks corresponding to flux rope footpoints during the early phase of the flare; (b) HMI magnetogram saturated at \(\pm 200\) G overlaid with the approximated PIL orientation (i.e., the part of the extended PIL that most likely erupted as CME1), before the start of the flare; (c) AIA 304 Å image shows the inverse J-shaped flare ribbons after the eruption, which suggest the eruption of a left-handed flux rope, and (d) AIA 211 Å image shows the post flare coronal dimmings (marked with white arrows). The \(X-\) and \(Y-\)axes correspond to the helio-projective longitude and latitude, respectively. ### Deriving the reconnected magnetic flux associated with the CMEs (near \(1\) R\({}_{\odot}\)) The amount of reconnected magnetic flux is derived using flare-CME statistical relations by previous works as adopted in Scolini et al. (2020). The relations between the flare peak intensity in soft X-rays and the reconnected flux derived from the flare ribbons and coronal dimmings (Kazachenko et al., 2017; Dissauer et al., 2018; Tschernitz et al., 2018) are applied. Once the reconnected flux is obtained, the toroidal flux is derived based on the magnetic topology of the CME model. 
The statistical relations between the reconnected flux, \(\phi_{r}\) (in units of Mx) and the flare peak intensity, \(I_{SXR}\) (in units of W m\({}^{-2}\)) used in this study are as follows: 1. Kazachenko et al. (2017): Flare ribbon proxy \[\mathrm{log}_{10}(\phi_{r})=24.42+0.64\mathrm{log}_{10}(I_{SXR})\] (1) 2. Dissauer et al. (2018): Coronal dimming proxy \[\mathrm{log}_{10}(\phi_{r})=23.26+0.42\mathrm{log}_{10}(I_{SXR})\] (2) 3. Tschernitz et al. (2018): Flare ribbon proxy \[\mathrm{log}_{10}(\phi_{r})=24.21+0.58\mathrm{log}_{10}(I_{SXR})\] (3) The values of \(\phi_{r}\) computed from the above relations and their averages are reported for CME1 (\(I_{SXR}\)=4.5\(\times\)10\({}^{-5}\) W m\({}^{-2}\)) and CME2 (\(I_{SXR}\)=1.6\(\times\)10\({}^{-4}\) W m\({}^{-2}\)) in Table 2. The method for the conversion of \(\phi_{r}\) to toroidal flux (\(\phi_{t}\)) for the linear force-free spheromak model (hereafter, referred to as spheromak model; Chandrasekhar and Woltjer, 1958; Shiota and Kataoka, 2016) is followed from Scolini et al. (2019) and yields a value (rounded off to the closest integer) of \(\phi_{t}\)=5\(\times\)10\({}^{13}\) Wb for CME1. For the Flux rope in 3D (FRi3D; Isavnin, 2016) model, we follow the FRED method of Gopalswamy et al. (2018) modified for the FRi3D geometry (refer to Appendix A), which results in a total flux (rounded off to the closest integer) Figure 6: AR 12158 associated with CME2 eruption in different wavelengths - (a) AIA 131 Å image before CME2 eruption; (b) HMI magnetogram saturated at \(\pm\)200 G overlaid with the approximated PIL drawn with a red dashed line; (c) AIA 304 Å image highlights the inverse J-shaped flare ribbons indicating left-handedness of the flux rope; (d) AIA 171 Å image showing the coronal loops and the eruption direction of CME2 in north-westward direction as per Dudík et al. (2016); (e) AIA 211 Å image showing coronal dimmings marked with white arrows; (f) AIA 131 Å base-difference image overlaid with HMI magnetogram contours saturated at \(\pm\)1000 G coloured blue (red) for negative (positive) polarity. The yellow circles demarcate the dimmings located at the footpoints of the FR. The red dashed line is the approximated PIL. The \(X-\) and \(Y-\)axes correspond to the helio-projective longitude and latitude, respectively. of \(\phi_{tot}=\phi_{t}+\phi_{p}=\)1\(\times\)10\({}^{14}\) Wb for CME2. We note that the magnetic flux of the CME2 can also be derived by fitting the FRi3D model to the in situ observations (as in Section 4.1), and the preference to use this estimate will be described in Section 5.3. The choice of employing the spheromak and the FRi3D models for CME1 and CME2, respectively, will be explained in Section 5. ### CME kinematics and geometry in the corona On September 9, 2014, Earth was at a longitudinal separation of 167\({}^{\circ}\) and 161\({}^{\circ}\) from STEREO-A and STEREO-B respectively (see Fig. 2). Due to a data gap in STEREO-A during this period, white light coronagraph images from only two viewpoints, i.e., STEREO-B and LASCO, were used in the reconstruction. STEREO-A did not record data during this time. Although the observations by STEREO-B provided an additional vantage point for the reconstruction, its location on the back side of the Sun made the projected view of the CMEs close to halo CMEs, which made the 3D reconstruction more challenging. 
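Returning to the flare-CME statistical relations, Eqs. (1)-(3) above can be evaluated directly for the two flares. The short sketch below uses the peak soft X-ray fluxes corresponding to the GOES flare classes (M4.6 corresponds to \(4.6\times 10^{-5}\) W m\({}^{-2}\) and X1.6 to \(1.6\times 10^{-4}\) W m\({}^{-2}\)) and reproduces the reconnected fluxes reported in Table 2 to within rounding; it is a worked illustration, not part of the original analysis.

```python
import numpy as np

# Peak soft X-ray fluxes implied by the GOES flare classes [W m^-2].
I_SXR = {"CME1 (M4.6)": 4.6e-5, "CME2 (X1.6)": 1.6e-4}

relations = {  # (intercept, slope) of log10(phi_r [Mx]) versus log10(I_SXR), Eqs. (1)-(3)
    "Kazachenko et al. (2017), ribbons": (24.42, 0.64),
    "Dissauer et al. (2018), dimmings": (23.26, 0.42),
    "Tschernitz et al. (2018), ribbons": (24.21, 0.58),
}

for cme, flux in I_SXR.items():
    phi = [10 ** (a + b * np.log10(flux)) for a, b in relations.values()]
    for name, value in zip(relations, phi):
        print(f"{cme}  {name}: {value / 1e21:.2f} x 10^21 Mx")
    print(f"{cme}  average: {np.mean(phi) / 1e21:.2f} x 10^21 Mx")
```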
The 3D reconstruction of CME1 was performed using the Graduated Cylindrical Shell (GCS; Thernisien 2011) model to constrain the geometrical parameters for the spheromak model which will be used in the EUHFORIA simulations in Section 5. The GCS parameters (in Stonyhurst coordinates) for CME1 are listed in Table 3: CME latitude (\(\theta\)), longitude (\(\phi\)), face-on (\(\alpha\)) and edge-on (\(\delta\)) angular half-widths, aspect ratio (\(\kappa\)=sin\(\delta\)), and tilt (\(\gamma\)). The deprojected (3D) speed of the CME leading edge is \(v_{3D}\) which is the sum of the radial speed (\(v_{rad}\), speed of the CME centre) and the expansion speed (\(v_{exp}\), rate of increase of the CME cross-section). The leading edge of the CME is tracked temporally to derive the \(v_{3D}\). The spheromak model is launched with \(v_{rad}=v_{3D}/(1+\kappa)\) so that the CME cross-section expands self-consistently in the MHD heliospheric domain due to the Lorentz force (Scolini et al. 2019). The CME radius of the spheromak model at 21.5 R\({}_{\odot}\) is given by 21.5 sin(\(\alpha\)+\(\delta\)) R\({}_{\odot}\). The reconstructed images are shown in Fig. B.1 in LASCO C3 (top) and STEREO-B (bottom) FOV. As CME2 will be simulated with the FRi3D model (more information later in Section 5), its 3D reconstruction is performed with the FRi3D forward modelling tool in order to constrain the parameters appropriately for the simulation. The parameters obtained from the fitting are listed in Table 3: CME latitude (\(\theta\)), longitude (\(\phi\)), angular half-width (\(\varphi_{hw}\)) and half-height (\(\varphi_{hh}\)), toroidal height (\(R_{t}\)), tilt (\(\gamma\)), flattening (\(n\)), and pancaking (\(\varphi_{p}\)). Toroidal speed (\(v_{R_{t}}\)) and poloidal speed (\(v_{R_{p}}\)) are computed from the temporal fitting of the CME evolution and are similar to \(v_{rad}\) and \(v_{exp}\) respectively, as mentioned in the context of the spheromak model. The FRi3D model fitted to CME2 in COR-2B and C3 FOV are plotted in Fig. B.2. Using the total speed constrained from the 3D reconstruction and assuming self-similar expansion in the upper corona, the time of injection of the CMEs at 0.1 au EUHFORIA boundary is computed. The position of CME2, obtained from the 3D reconstruction, is consistent with the northwestward deflection of the CME and suggests a close-to-flank encounter at Earth if self-similarly extrapolated up to 1 au. Although two viewpoints are reported to improve the reconstruction (Verbeke et al. 2022), CME1 and CME2 were observed as halo by both LASCO and STEREO-B which could increase the error in the especially critical parameters like speed and half-angle (Kay et al. 2020). Although the tilt (geometrical inclination) of the fitted flux rope is obtained from this methodology of geometrical reconstruction, the axial magnetic field is ambiguous i.e., it can be either east-to-west or west-to-east. As it is not straightforward to estimate the vector magnetic field in the middle-to-high corona, we rely on the in situ observations to determine the magnetic field components and hence the flux rope orientation. In the next subsection, in situ observations are used to constrain the CME2 magnetic field orientation at 1 au. 
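As a quick consistency check of the spheromak initialisation described above (not part of the EUHFORIA setup itself), the relations \(v_{rad}=v_{3D}/(1+\kappa)\), \(\delta=\arcsin\kappa\), and radius \(=21.5\sin(\alpha+\delta)\) R\({}_{\odot}\) can be evaluated with the GCS values of CME1 listed in Table 3; small differences from the tabulated radius can arise from rounding of the fitted angles.

```python
import numpy as np

# GCS fit of CME1 (values as listed in Table 3).
v_3d = 696.0               # deprojected leading-edge speed [km/s]
kappa = 0.4                # aspect ratio
alpha = np.deg2rad(46.0)   # face-on half-width
delta = np.arcsin(kappa)   # edge-on half-width, delta = arcsin(kappa)

v_rad = v_3d / (1 + kappa)            # radial (centre) speed injected into EUHFORIA
r_sph = 21.5 * np.sin(alpha + delta)  # spheromak radius at 21.5 R_sun [R_sun]

print(f"v_rad ~ {v_rad:.0f} km/s")    # ~497 km/s, matching Table 3
print(f"radius ~ {r_sph:.1f} R_sun")  # ~20 R_sun with these rounded inputs
```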
\begin{table} \begin{tabular}{l|l l l} \hline \hline GCS parameters & CME1 & FRi3D parameters & CME2 \\ \hline \hline \(\theta\) & 17\({}^{\circ}\) & \(\theta\) & 24\({}^{\circ}\) \\ \(\phi\) & -29\({}^{\circ}\) & \(\phi\) & 15\({}^{\circ}\) \\ \(\alpha\) & 46\({}^{\circ}\) & \(\varphi_{hw}\) & 50\({}^{\circ}\) \\ \(\kappa\) & 0.4 & \(\varphi_{hh}\) & 30\({}^{\circ}\) \\ \(\gamma\) & -55\({}^{\circ}\) & \(\gamma\) & 45\({}^{\circ}\) \\ \(v_{3D}\) & 696 km s\({}^{-1}\) & \(v_{R_{t}}\) & 580 km s\({}^{-1}\) \\ \(v_{rad}\) & 497 km s\({}^{-1}\) & \(v_{R_{p}}\) & 363 km s\({}^{-1}\) \\ \(r_{spr}\) & 21 R\({}_{\odot}\) & & \\ & & \(n\) & 0.5 \\ & & \(\varphi_{p}\) & 0.5 \\ \hline \hline \end{tabular} \end{table} Table 3: Parameters from the GCS fitting of CME1 and the FRi3D fitting of CME2. \begin{table} \begin{tabular}{l|l l} \hline \hline Reconnected flux from statistical relations (\(\times\)10\({}^{21}\)Mx) & CME1 & CME2 \\ \hline \hline Kazachenko et al. (2017) (ribbons) & 4.41 & 9.79 \\ Dissauer et al. (2018) (coronal dimmings) & 2.74 & 4.63 \\ Tschernitz et al. (2018) (ribbons) & 4.94 & 10.2 \\ \hline Average & 4.03 & 8.21 \\ \hline \hline \end{tabular} \end{table} Table 2: Reconnected flux for CME1 and CME2 computed using the statistical relations in Eqs. (1)-(3). ## 4 Reconstruction of CME2 from in situ observations at 1 au The in situ magnetic field observations of ME2 from the Earth-bound WIND spacecraft are fitted with the FRi3D, Linear Force-Free (LFF) and Circular-Cylindrical (CC) models to derive the chirality and the magnetic axis orientation of the flux rope at 1 au. Multiple models are used for validation purposes. The three selected models have cylindrical or modified-cylindrical configurations and include the effect of the self-similar expansion of the flux rope, hence making the fittings more realistic (as shown by Vemareddy et al. 2016b). In addition to constraining and verifying the magnetic field parameters, the motivation behind the in situ fitting is to investigate any rotation between 0.1 au and 1 au. As CME1 shows no clear rotations in the magnetic field components at 1 au, we perform the fittings only for CME2. The comparison of the CME2 orientation near 1 R\({}_{\odot}\), in the corona and at 1 au is presented at the end of this section. ### FRi3D model The numerical iterative fitting of the FRi3D flux rope to the in situ observations of ME2 is performed using a real-valued version of the genetic algorithm introduced in Isavnin (2016). The magnetic field in the FRi3D model is defined by the Lundquist model (Lundquist 1950). The flux rope expansion is implemented by constructing a linearly growing CME cross-section into the model. In this model, the tilt parameter provides the latitudinal inclination (positive for counterclockwise and negative for clockwise from the ecliptic on the solar west), and the polarity determines the azimuthal direction (westward is \(+1\); eastward is \(-1\)) of the magnetic field axis. The chirality is negative (positive) for right-handed (left-handed), i.e. the opposite of the standard convention. A detailed description of the parameters can be found in Maharana et al. (2022). The FRi3D fitting (in green), as shown in Fig. 7(a), yields a westward left-handed flux rope with a tilt of \(+55^{\circ}\). This fitting also provides an estimate of the total magnetic flux of \(0.5\times 10^{14}\) Wb and a twist of \(\sim 1.5\) associated with CME2 at 1 au. 
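Since both the FRi3D in situ fitting above and the LFF fits in the next subsection rely on the Lundquist (1950) solution, a minimal sketch of that field may help fix conventions. The helper below is ours and purely illustrative: it evaluates the axial and azimuthal components of a linear force-free rope whose axial field vanishes at the boundary. It is not the fitting code itself, and the handedness sign shown follows the standard convention rather than the inverted FRi3D convention mentioned above.

```python
import numpy as np
from scipy.special import j0, j1

X01 = 2.404826  # first zero of J0: makes the axial field vanish at r = rope_radius

def lundquist_field(r, rope_radius, b_axis, handedness=-1):
    """Lundquist (linear force-free) flux-rope field at distance r from the axis.
    handedness = -1 for a left-handed rope (as inferred for CME2), +1 for right-handed."""
    alpha = X01 / rope_radius
    b_axial = b_axis * j0(alpha * r)                   # component along the rope axis
    b_azimuthal = handedness * b_axis * j1(alpha * r)  # azimuthal (poloidal) component
    return b_axial, b_azimuthal

# example: field profile across a rope of radius 0.05 au with a 15 nT axis field (made-up values)
r = np.linspace(0.0, 0.05, 6)
print(lundquist_field(r, rope_radius=0.05, b_axis=15.0))
```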
### Linear Force Free model Two fits were obtained by employing the Linear Force Free (LFF) model which employs the Lundquist magnetic field configuration (Lundquist 1950) with two different chiralities (\(H\)): positive (\(H=+1\)) and negative (\(H=-1\)). The results shown in Fig. 7(a) (in red for \(H=+1\) and in blue for \(H=-1\)) consistently agree on the following: (i) the flux rope axis direction (\(\theta\sim\)25\({}^{\circ}\), \(\phi\sim\)-150\({}^{\circ}\)); (ii) the high impact angle (\(|z_{0}|\)\(\sim\)0.88) implying the passage of Earth far away from the axis of the ideal cylindrical structure assumed by the model, and (iii) the low chi-square of both these fits reflect high uncertainty in the fitting. As flux rope chirality remains unchanged during heliospheric propagation (Palmerio et al. 2018), we consider the LFF fitting results with negative chirality based on the source region signatures. ### Circular-Cylindrical model The analysis of the in situ signatures with the circular-cylindrical analytical flux rope model (CC model; Nieves-Chinchilla et al. 2016) is adapted from the WIND ICME catalogue ([https://wind.nasa.gov/ICME_catalog/ICME_catalog_viewer.php](https://wind.nasa.gov/ICME_catalog/ICME_catalog_viewer.php); Nieves-Chinchilla et al. 2018) (see Fig. 7(d)). The magnetic field configuration of this model is based on the non-force-free approach by relating the magnetic field vector with its current density, as proposed by Hidalgo et al. (2002a). In this model, the cross-section distortion, expansion, curvature and deformation are implemented following Hidalgo et al. (2002b) to reconcile CME with ICME from in situ, remote sensing and MHD simulations perspectives. The orientation of the flux rope quantified by the latitude, \(\theta=9^{\circ}\) (positive) and the longitude, \(\phi=350^{\circ}\) (\(>180^{\circ}\)) corresponds to a low-inclined northwestward flux rope. Negative helicity suggests the left-handedness of the ME. Impact parameter, \(|y_{0}|\sim 0.9R\) implies that the smallest distance of the spacecraft to the flux rope axis (\(y_{0}\)) relative to the flux rope radius (R), is almost at the edge of the flux rope boundary. The conclusions from the different in situ reconstruction techniques are: (a) the axial magnetic field of ME2 has a northwest orientation at 1 au, (b) the flux rope is left-handed, and (c) a high impact parameter implies flank encounter. It must be noted that the uncertainty associated with determining the exact flux rope orientation increases in cases with high-impact parameters as compared to the head-on impacts (Riley et al. 2004; Al-Haddad et al. 2013). The conclusions (a) and (b) suggest the magnetic topology of CME2 to be NWS (Bothmer & Schwenn 1998) at 1 au, contrary to SEN as inferred close to 1 R\({}_{\odot}\) in Section 3.1. ### Discrepancy in CME2 orientation from observations at different locations The analysis of the flux rope during the eruption in Section 3.1 suggests an SEN topology. However, an analysis of the in situ signatures of the magnetic ejecta at 1 au points to an NWS topology. This discrepancy in the flux rope orientation retrieved for CME2 at the Sun and 1 au is the focus of our investigation in the following sections. As per the PIL orientation of the CME2 source (Fig. 6(b)), the flux rope orientation can be approximated to be SEN pointing in the northeast direction. However, the details of the eruption from Dudik et al. (2016) and Vemareddy et al. 
(2016a) suggest a shearing rotation and deflection motion that could have led to the eruption of a southeast-pointing SEN flux rope. The two possible cases are depicted in Fig. 8(a). Cho et al. (2017) also estimate the PIL orientation of CME2 to be southeast SEN during the eruption. Close to 0.1 au, CME2 can have two axial magnetic field directions for the same geometrical tilt as shown in Fig. 8(b) - either SEN or NWS. We propose two possible scenarios henceforth. First, assuming a dominant low coronal rotation and no significant rotation in the inner heliosphere (0.1 au to 1 au), the northwestward directed tilt constrained close to 0.1 au (hereafter, tilt\({}^{NWS}_{0.1au}\)) turns out to be consistent with the tilt constrained at 1 au (hereafter, tilt\({}_{1au}\) as in Fig. 8(c)). The repercussion of this assumption is a physical rotation of CME2 by \(\sim\)180\({}^{\circ}\) - 270\({}^{\circ}\) (assuming uncertainties in defining the PIL) in an anti-clockwise direction owing to the left-handed chirality of the CME (Green et al., 2007; Lynch et al., 2009). Vemareddy et al. (2016a) suggest the eruption of CME2 was triggered by a helical kink instability driven by sunspot rotation. Such kink-unstable magnetic flux ropes are known to produce CME rotation in the low corona by converting their twist into writhe (Kliem et al., 2012), which could have resulted in such a significant rotation of CME2. The second scenario suggests a partial anti-clockwise rotation in the low corona leading to the southeast SEN flux rope at 0.1 au (tilt\({}^{SEN}_{0.1au}\)), followed by an additional rotation in the heliosphere by about \(\sim\)180\(-\)270\({}^{\circ}\) to reach the reconstructed tilt\({}_{1au}\) at 1 au. The second scenario seems less probable as previous studies suggest that most CMEs cease to undergo significant rotation and deflection at larger heliospheric distances and instead propagate self-similarly further away from the Sun (Demoulin and Dasso, 2009; Isavnin et al., 2014; Balmaceda et al., 2020). We investigate the possibility of these two scenarios with numerical simulations in the next section. Figure 7: Fitting of in situ observations of ME2 at 1 au with various models: (a) FRi3D model (green), LFF fit with right-handed chirality (red), and LFF fit with left-handed chirality (blue); (b) CC fit adapted from the WIND ICME catalogue (Nieves-Chinchilla et al., 2018). The vertical lines in black in both plots correspond to the ME2 boundary as per the same catalogue. The fitted parameters of the models are discussed in Sections 4.1, 4.2, and 4.3. Figure 8: Schematic representation of the CME2 orientation inferred from different observational proxies at different locations - (a) Close to 1 \(R_{\odot}\), based on the analysis of the source region in Section 3.1 and the analysis of Cho et al. (2017); (b) Close to 0.1 au, based on the 3D reconstruction of the white-light images (Section 3.3); (c) At 1 au, based on the in situ observations (Section 4). ## 5 MHD modelling with EUHFORIA In this section, we present the simulation setup of the heliospheric propagation of the CMEs using the physics-based MHD model EUropean Heliospheric FORecasting Information Asset (Pomoell and Poedts, 2018, EUHFORIA). The aim is to match the observations at 1 au measured by the WIND spacecraft. The questions we seek to answer are: (a) What is the orientation of CME2 at 0.1 au that must be injected to obtain the correct signature of ME2? 
(b) What is the role of CME1 in forming the magnetic field rotation in the sheath region of CME2? ### EUHFORIA setup EUHFORIA consists of two parts: a coronal domain and a heliospheric domain. The coronal part is a 3D semi-empirical model based on the Wang-Sheeley-Arge (WSA, Arge et al. (2004)) model, which provides the solar wind plasma conditions at the inner boundary of EUHFORIA, i.e., 0.1 au. It is driven by the photospheric magnetic field via synoptic magnetogram maps. More details about the coronal model can be found in Pomoell and Poedts (2018) and Asvestari et al. (2019). The heliospheric part is a 3D time-dependent model of the inner heliosphere that numerically solves the ideal MHD equations, including gravity, using a cell-average finite volume method in the Heliocentric Earth EQuatorial (HEEQ) coordinate system. The constrained transport approach is applied to advance the magnetic field components in a divergence-free way. The boundary conditions at 0.1 au of this part are obtained from the coronal model. The computational domain extends from 0.1 au to 2 au in the radial direction, \(\pm\)80\({}^{\circ}\) in the latitudinal direction, and 0-360\({}^{\circ}\) in the longitudinal direction. EUHFORIA enables the injection of the CMEs at the inner boundary as time-dependent boundary conditions which are then self-consistently evolved by MHD equations. There are three functional CME models: (1) the cone model (Pomoell and Poedts, 2018): a simplified non-magnetised spherical blob of plasma; (2) the LFF sphero-max model (Verbeke et al., 2019): an improvement over the cone model by the inclusion of an internal magnetic field configuration; and (3) the FRi3D model (Maharana et al., 2022): an upgrade over the spherical shape of the spheromak model for improving the modelling of flank encounters and deformations. EUHFORIA version 2.0 has been used for the simulations in this work. The radial resolution of the computational mesh is 0.0074 au (corresponding to 1.596 R\({}_{\odot}\)) for 256 cells in the radial direction, and the angular resolution is 4\({}^{\circ}\) in the latitudinal and 2\({}^{\circ}\) in the longitudinal directions, respectively. ### The background solar wind We perform the first EUHFORIA simulation by evolving the solar wind as a boundary condition without the insertion of CMEs to obtain an optimal ambient medium with reasonable plasma properties in which CMEs can propagate. The background solar wind is modelled using the synoptic magnetogram from Global Oscillation Network Group (GONG) on September 8, 2014, at 23:00 UT (mrbqs14090842314c2154_055.fits.gz). With this magnetogram fed as the boundary condition to the default coronal model of EUHFORIA, we obtained a high-speed stream traversing through Earth with its peak reaching \(\sim 3\) days later as compared to the in situ observations. Hence, we rotated the inner boundary map of extrapolated solar wind plasma and magnetic field properties by 40\({}^{\circ}\) westward, in order to make the high-speed stream arrive earlier at Earth, and reproduce more accurately the actual CME propagation and its position with respect to the high-speed stream. The speed and the proton number density profiles from both simulations, the default wind (in blue) and the rotated wind (in red) are shown in Fig. 9. ### Modelling of CMEs In this work, we employ the spheromak model and the FRi3D model to simulate CME1 and CME2, respectively. 
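As an aside on the numerical setup described above, the quoted grid spacing follows directly from the domain limits and cell counts; the helper below is our own back-of-the-envelope check, not part of EUHFORIA itself.

```python
AU_IN_RSUN = 215.03  # approximate number of solar radii in one astronomical unit

def heliospheric_grid(r_min_au=0.1, r_max_au=2.0, n_r=256,
                      dlat_deg=4.0, dlon_deg=2.0,
                      lat_extent_deg=160.0, lon_extent_deg=360.0):
    """Radial cell size and angular cell counts for the stated heliospheric domain."""
    dr_au = (r_max_au - r_min_au) / n_r
    return {
        "dr_au": dr_au,                           # ~0.0074 au per radial cell
        "dr_rsun": dr_au * AU_IN_RSUN,            # ~1.6 R_sun per radial cell
        "n_lat": int(lat_extent_deg / dlat_deg),  # 40 cells over +/-80 deg latitude
        "n_lon": int(lon_extent_deg / dlon_deg),  # 180 cells over 360 deg longitude
    }

print(heliospheric_grid())
```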
We refrain from modelling both CMEs with FRi3D due to an implementation limitation affecting the injection of consecutive FRi3D CMEs into the heliospheric domain. As the legs of a first CME simulated with FRi3D would remain connected to the inner boundary, the insertion of a second CME would raise numerical complications. As the main focus of this study is CME2, FRi3D is used for CME2 to improve the modelling of its magnetic field components, and the spheromak model is used for CME1. We first experimented by using the spheromak model for simulating both CMEs. When CME2 was modelled with the spheromak model, the CME had to be launched almost along the Sun-Earth line (although the real CME2 event was a flank encounter) in order to model its interaction with CME1. This is because of the inability of the spheromak model to reproduce the flank impact of CME2 due to the lack of CME legs in the model. Recent studies point to additional drawbacks of using the spheromak model in modelling the interplanetary propagation of the CMEs. The magnetic moment of the spheromak model tends to tilt and align itself with the ambient solar wind magnetic field. As the spheromak model is not anchored to the Sun unlike a real CME, it is free to undergo unrealistic rotation in the heliosphere due to the spheromak tilting instability (Asvestari et al., 2022). Therefore, it will be unreliable for understanding the possibility of the actual rotation of the CMEs in interplanetary space. We had to adjust the density of the spheromak model in an ad hoc manner to limit the inherent spheromak tilting. This resulted in an overestimated number density profile of CME2, and yet the magnetic field profiles of CME2 were not appropriately reproduced. Due to the numerical constraint of using the FRi3D model to simulate both the CMEs consecutively, we chose to use the spheromak model for CME1 and shifted it towards the Sun-Earth line in a calculated manner to represent the leg of CME1. We constrain the CME input parameters for the spheromak and the FRi3D model following the methods described by Verbeke et al. (2019), Scolini et al. (2019), and Maharana et al. (2022). The geometrical and kinematic parameters are obtained from the 3D reconstruction of the CMEs in the solar corona as detailed in Section 3.3. The magnetic field parameters for the CME models are constrained using observations of the source region as detailed in Section 3.2. All the parameters are summarised in Table 5. We perform three numerical experiments (Run1, Run2, and Run3) involving one CME (CME1 or CME2) each time to determine the parameters of the individual CMEs at the heliospheric boundary that match the observations at 1 au. The results of these experiments are used to perform the final simulation (Run4) involving both the CMEs. The first two simulations, labelled Run1 and Run2, aim to investigate the orientation of CME2 that must be injected at 0.1 au to reproduce the magnetic field components when propagated to 1 au, before introducing CME1 ahead of it. All the parameters corresponding to the FRi3D model are kept the same in these two runs except for the polarity. Run1 is initialised with the south-eastward tilt\({}_{0.1au}^{SEN}\) and Run2 with the north-westward directed tilt\({}_{0.1au}^{NWS}\). The magnetic flux value used for the FRi3D model in Run1 and Run2 is \(0.5\cdot 10^{14}\) Wb which is half the value constrained near the photospheric surface (1 R\({}_{\odot}\)) in Section 3. 
This value is not ad hoc, but is consistent with the total magnetic flux value constrained from the fitting of the FRi3D model to the in situ observations at 1 au in Section 4.1. Maharana et al. (2022) showed that the FRi3D model expands faster and arrives earlier when initialised with the magnetic flux value constrained using the methodology involving the remote-sensing observations, and hence can be initialised with a lower flux estimated from in situ observations for better prediction accuracy. In Run3, only CME1 is simulated with the spheromak model to assess its independent signature at Earth. The fourth simulation, Run4, is aimed to investigate the effect of CME1 in preconditioning the propagation of CME2, and the consequence of CME-CME interaction on the geoeffectiveness of the impact at Earth. Run4 will use the input parameters of CME1 from Run3, and that of CME2 from the best simulation \begin{table} \begin{tabular}{l|l l} \hline \hline Simulations & CME1 & CME2 \\ \hline \hline Run1 & - & FRi3D (tilt\({}_{0.1au}^{SEN}\)) \\ & & (2014-09-12 \\ & & 18:13 UT) \\ \hline Run2 & - & FRi3D (tilt\({}_{0.1au}^{NWS}\)) \\ & & (2014-09-12 \\ & & 18:23 UT) \\ \hline Run3 & Spheromak & - \\ & (2014-09-11 \\ & 23:23 UT) & \\ \hline Run4 & Spheromak & FRi3D (tilt\({}_{0.1au}^{NWS}\)) \\ & (2014-09-11 \\ & 23:23 UT) & 12:33 UT) \\ \hline \hline Observed & 2014-09-11 & 2014-09-12 \\ ToA & 22:50 UT & 15:17 UT \\ \hline \hline \end{tabular} \end{table} Table 4: List of EUHFORIA simulations, the CME models used and the time of arrival of the shocks (ToA; datetime in yyyyy-mm-dd HH:MM format) of the CMEs at Earth in the EUHFORIA simulations. The observed ToA of the CMEs from the Wind ICME catalogue is provided for comparison. Figure 9: Background solar wind modelled with the default EUHFORIA coronal model setup using the synoptic magnetogram from GONG on September 8, 2014, at 23:00 (in blue), and the rotated solar wind (in red). The shaded regions provide an error estimate in \(\pm\)5-10\({}^{\circ}\) in latitude (\(\sigma_{\theta}\)) and longitude (\(\sigma_{\phi}\)) around Earth. Corresponding WIND observations are plotted in black. between Run1 and Run2 based on the results of Section 6.1. As the geometrical reconstruction points to a glancing blow of CME1 at Earth and the spheromak model does not possess legs, it is possible to miss the effect of its flank encounter on the interaction with CME2. Hence, we shift CME1 \(\sim 15^{\circ}\) westward in longitude and \(\sim 5^{\circ}\) northward in latitude in order to better reproduce the effect of its legs in Run4. In the simulation domain, we place virtual spacecraft around Earth separated by an angular distance of \(5^{\circ}\) and \(10^{\circ}\) in latitude and longitude to capture the variability of the results in the vicinity of Earth. Additional virtual spacecraft are placed along the Sun-Earth line with a radial separation of 0.1 au. The standard mass densities used for the spheromak and the FRi3D model are \(10^{-18}\) kg m\({}^{-3}\) and \(10^{-17}\) kg m\({}^{-3}\), respectively. According to Maharana et al. (2022), the typical volume of the flux rope geometry (as in the case of FRi3D) requires a higher standard density in the order of \(10^{-17}\) kg m\({}^{-3}\) (supported by observations in Temmer et al. (2021)) to enhance the modelling accuracy of mass of a CME modelled with FRi3D at 0.1 au. 
However, the spherical volume of the spheromak model is up to 2-3 orders of magnitude higher than that of the FRi3D model and hence, a comparable mass can be modelled with a density of \(10^{-18}\) kg m\({}^{-3}\). Therefore, CME1 and CME2 have different initial mass densities depending on the CME model used. \begin{table} \begin{tabular}{l||l|l} \hline \hline \multicolumn{3}{c}{Input parameters} \\ \hline \hline & CME1 & CME2 \\ \hline CME model & Spheromak & FRi3D \\ \hline & Geometrical & \\ \hline Insertion time & 2014-09-09 04:24 UT & 2014-09-10 20:14 UT \\ Speed & 450 km s\({}^{-1}\) & 500 km s\({}^{-1}\) \\ Latitude & \(22^{\circ}\) & \(24^{\circ}\) \\ Longitude & \(-14^{\circ}\) & \(15^{\circ}\) \\ Half-width & - & \(50^{\circ}\) \\ Half-height & - & \(30^{\circ}\) \\ Radius & 21 R\({}_{\odot}\) & - \\ Toroidal height & - & \(13.6\) R\({}_{\odot}\) \\ \hline & Magnetic field & \\ \hline Chirality & \(-1\) & \(+1^{*}\) \\ Polarity & - & \(+1(-1)\) \\ Tilt & \(-135^{\circ}\)\({}^{**}\) & \(45^{\circ}\) \\ Toroidal magnetic flux & \(5\cdot 10^{13}\) Wb & - \\ Total magnetic flux & - & \(5\cdot 10^{13}\) Wb \\ Twist & - & \(1.5\) \\ \hline & Deformation & \\ \hline Flattening & - & 0.5 \\ Pancaking & - & 0.5 \\ \hline & Plasma parameters & \\ \hline Mass density & \(10^{-18}\) kg m\({}^{-3}\) & \(10^{-17}\) kg m\({}^{-3}\) \\ Temperature & \(0.8\cdot 10^{6}\) K & \(0.8\cdot 10^{6}\) K \\ \hline \hline \end{tabular} \end{table} Table 5: CME parameters used in the EUHFORIA simulations employing the spheromak model for CME1 in Run3 and Run4, and the FRi3D model for CME2 in Run1, Run2 and Run4. The only change for Run1 is in the polarity parameter of the FRi3D model, which is -1 (eastward) as opposed to +1 (westward) for Run2 and Run4. \({}^{*}\)FRi3D chirality is implemented with an opposite convention, i.e., -1 for right-handedness and +1 for left-handedness. \({}^{**}\)Conventionally, a left-handed tilt of \(0^{\circ}\) in the spheromak model corresponds to a westward flux rope normal to the Sun-Earth line. Hence, for an eastward tilt of \(45^{\circ}\), the spheromak model is rotated _anti-clockwise_ by \(135^{\circ}\), i.e., \(-135^{\circ}\). ## 6 Simulation results and discussion A summary of the EUHFORIA simulations performed in this study, including the CME models used for each CME, and the time of arrival of the corresponding CME shock (ToA, datetime in yyyy-mm-dd HH:MM format) of the CMEs in the simulations, is provided in Table 4. ### Propagation of CME2 only The comparison of Run1 and Run2 is shown in Fig. 10. Speed and density are modelled similarly in both cases. The arrival time of CME2 in both Run1 and Run2 is delayed by \(\sim 3\) hours as compared to the observed ToA of the shock. Using \(\mathrm{tilt}_{0.1au}^{NWS}\) in Run2, the prolonged positive \(B_{z}\) component is well reproduced and the negative \(B_{y}\) component better matches observations compared to the use of \(\mathrm{tilt}_{0.1au}^{SEN}\) in Run1. Through this experiment, we have developed an understanding of the circumstances that led to the formation of the positive \(B_{z}\) component in ME2 instead of the predicted prolonged negative \(B_{z}\) component. Run1 (\(\mathrm{tilt}_{0.1au}^{SEN}\)) does not seem to rotate significantly in the heliospheric domain of our simulation to match the observations at 1 au (tilt\({}_{0.1au}^{NWS}\)). This suggests that the flux rope must have undergone rotation in the corona (i.e. 
within 0.1 au) up to reaching tilt\({}_{0.1au}^{NWS}\), and it then would have propagated in the heliosphere without significant rotation resulting in the observed magnetic field profile at 1 au. ### Propagation of CME1 only This section discusses the results of Run3. We first initialised the CME with \(v_{rad}\)=497 km s\({}^{-1}\), and yet, the CME arrived at Earth \(\sim\) 4 hours earlier than the observed time of arrival. As the main purpose of this study is to understand the CME magnetic field signatures rather than predicting the CME arrival times, we optimised the \(v_{rad}\) to 450 km s\({}^{-1}\) in order to model the latter interaction with CME2 in Run4 more accurately. The modelled time series of the physical parameters at Earth are presented in Fig. 11. After reducing the CME speed as described above, the shock of CME1 is modelled to arrive on 2014-09-11 23:23 UT (43 minutes later than the actual CME1 arrival), as visible in the sharp increases in the speed and proton density profiles. The drop in \(\beta\) at \(\sim\)2014-09-12 12:00 UT is due to the passage of ME1 at its flank i.e., the southwest portion of the magnetic ejecta associated with CME1 (see figures in Section 6.3). There is no clear rotation in the magnetic field components. A leading positive \(B_{z}\) signature is obtained (slightly overestimated) followed by a weak trailing negative \(B_{z}\) component. ### Propagation of CME1 followed by CME2 Based on the previous runs, we find that the most accurate modelling of the CME2 signatures at 1 au (in Run4) arises from the simulation of CME1 as in Run3 and of CME2 as in Run2. The results of Run4 are discussed and compared to Run2 in this section. In Fig. 12, plasma and magnetic field properties of Run4 are over-plotted on the results of Run2 to distinguish the features due to the possible CME-CME interaction in the presence of CME1 in Run4. The solid vertical lines in blue, green, cyan and magenta correspond to the S1, the start of ME1, S2 and the start of ME2 in Run4. The shaded regions around the solid line of the simulation time series represent the same physical properties in \(\pm\)5-10\({}^{\circ}\) vicinity in the latitude (\(\sigma_{\theta}\)) and longitude (\(\sigma_{\phi}\)) around Earth. We first analyse the speed and number density in the time series plot at Earth to compare the effect of CME1 in Run4. In the absence of CME1 (Run2), CME2 arrived at Earth \(\sim\) 3 hours later than observations. While, in the presence of CME1 (Run4), CME2 arrived at Earth \(\sim\) 3 hours earlier with respect to the observations, hence implying that it was sped up by \(\sim\) 6 hours by CME1. The passage of CME1 creates a low-density region ahead of CME2 which lets it expand faster, leading to a higher shock speed and a depletion in density inside the flux rope (the \(n_{p}\) peak is lower in Run4 than Run2). In Run4, the difference between the arrival Figure 10: Time series plot showing the comparison between the simulations with different orientations of CME2 modelled with FRi3D, at Earth - Run1 (blue) is simulated with tilt\({}_{0.1au}^{SEN}\) (deduced based on the orientation close to 1 R\({}_{\odot}\)) and Run2 (red) with tilt\({}_{0.1au}^{NWS}\) (similar to the tilt at 1 au). Both the orientations follow the same geometrical tilt derived from the 3D reconstruction at 0.1 au but with two different axial magnetic field orientations. 
From top to bottom: speed (v), proton number density (\(n_{p}\)), \(B_{x}\), \(B_{y}\), \(B_{z}\), magnetic field strength (\(|B|\)) and plasma beta (\(\beta\)). The solid line and the shaded regions show the profile at Earth and in the 5-10\({}^{\circ}\) latitudinal and longitudinal offset around Earth respectively. time of the CMEs is \(\sim\)13 hours. It is to be noted that ToA is the arrival time of the CME shock. The sheath and the magnetic ejecta following the shocks are extended structures in the radial direction (the ME alone can be up to 0.5 au in radial size at 1 au for very large or expanding events). So, even if CME2 shock arrives 16 hours after CME1 shock, this does not imply the absence of interaction between the two structures. The trailing part of ME1 is, in fact interacting with CME2 in this case. Second, we analyse the magnetic field signatures of CME2 in Run4 and highlight the additional features due to the presence of CME1. The first weak drop in \(\beta_{p}\) (proton plasma beta, i.e., \(\beta/2\)) at \(\sim\)2014-09-12 06:00 UT is modelled in Run4, while it was missing in Run2, hence reaffirming the short passage of ME1. The drop in \(\beta_{p}\) at \(\sim\)2014-09-12 19:00 UT in Run4 corresponds to the starting of ME2 and matches the observations well. The period when \(\beta_{p}<1\) continues beyond the actual observed end time of ME2 due to the over-expansion of the CMEs in the simulation. The \(B_{x}\) component is not affected much, however, additional structures are observed in the \(B_{y}\) and \(B_{z}\) components in the sheath region ahead of CME2 in Run4 as compared to Run2. A strong drop in \(B_{y}\) component up to \(-\)27 nT in the sheath at 2014-09-12 16:00 UT although overestimated, qualitatively corresponds to \(B_{y}=-13\) nT at \(\sim\)2014-09-12 18:20 UT in the in situ observations. It is followed by a short-lasting increase to positive \(B_{y}=9\) nT at \(\sim\) 2014-09-12 22:50 UT and a long-lasting negative \(B_{y}\) corresponding to ME2. The following feature, i.e. the transition of the positive \(B_{y}\) to long-lasting negative \(B_{y}\), is sharp in observations. The sheath and the magnetic ejecta of CME2 have been reasonably well reproduced in Run4. The sheath has an enhanced positive \(B_{z}=33\) nT at \(\sim\)2014-09-12 14:00 UT (28 nT at 2014-09-12 18:08 UT in situ) followed by a negative \(B_{z}=-5\) nT (\(-\)17 nT at 2014-09-12 20:57 UT in situ) which is clearly missing in Run2. Both simulations capture the prolonged positive \(B_{z}\) in ME2 similarly. The strength of the minimum negative \(B_{z}\) component in the sheath is underestimated at Earth. The virtual spacecraft at \(\sigma_{\theta,\phi}\)=\(\pm\)10\({}^{\circ}\) from Earth registered a minimum \(B_{z}=-\)11 nT around 2014-09-12 19:00 UT, that is closer to the in situ observations. Due to the overestimation of \(B_{y}\) in the sheath, the total magnetic field in the sheath is also overestimated. We have also provided the radial evolution plots for Run4 in Fig. 13, to understand the evolution of the magnetic ejecta (especially \(B_{z}\)) along the Sun-Earth line at different times during the propagation of both the CMEs. \(B_{clt}\) (the co-latitudinal component in the spherical coordinate system) is plotted in the equatorial and the meridional plane for Run4 in Fig. 14 for a 2D view of the process. \(B_{clt}\) is equivalent to \(-B_{z}\) on the equatorial plane. The red and blue spectra of the colour bar correspond to positive and negative \(B_{z}\) respectively. 
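When reading such radial profiles out of the simulation output, two conventions recur in Figs. 13-14 and in the discussion that follows: all quantities except the speed are rescaled by (D/1 au)\({}^{2}\), and magnetic-ejecta intervals are flagged with a plasma-beta threshold (\(\beta_{p}<1\), relaxed to \(\beta_{p}<5\) below 0.5 au in one panel). A minimal Python helper along these lines, with array names and the interface being our own rather than part of any released EUHFORIA tooling, could look like this:

```python
import numpy as np

def scale_and_flag(dist_au, fields, beta_p, beta_threshold=1.0):
    """Rescale quantities by (D / 1 au)^2 and flag magnetic-ejecta samples.

    dist_au : heliocentric distance of each sample along the Sun-Earth line (au)
    fields  : dict of arrays (e.g. {'Bz': ..., 'n_p': ...}) to be rescaled
    beta_p  : proton plasma beta of each sample
    """
    scale = np.asarray(dist_au) ** 2
    scaled = {name: np.asarray(arr) * scale for name, arr in fields.items()}
    is_ejecta = np.asarray(beta_p) < beta_threshold
    return scaled, is_ejecta

# example with made-up numbers: three samples between 0.5 and 1.5 au
scaled, mask = scale_and_flag([0.5, 1.0, 1.5],
                              {"Bz": np.array([-8.0, 20.0, 2.0])},
                              beta_p=[2.0, 0.4, 3.0])
```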
While referring to Fig. 14, we will provide the description in terms \(B_{z}\) instead of \(B_{clt}\) to discuss the phenomena. We discuss the results for two phases of the CMEs' propagation process: pre-interaction and interaction. _Pre-interaction_: Run4 gives similar results to Run3 in the pre-interaction phase until 2014-09-12 12:33 UT. In Fig. 14(a), the leading part of the magnetic ejecta associated with CME1 can be observed to propagate with a positive \(B_{z}\) component followed by a negative \(B_{z}\) component, and the same signatures are reproduced in situ when ME1 arrives at Earth (Fig. 12). _Interaction_: In Fig. 13, the vertical yellow lines corresponding to the shock of the CMEs (S1 and S2), and the other coloured vertical lines in the \(B_{z}\) panel marking different locations (radial distances) from the Sun are used for the Figure 11: Results of Run3 where CME1 is modelled with the spheromak model. The figure description is similar to that of Fig. 10. purpose of description in this section. The yellow and blue shaded areas mark the extent of magnetic ejecta of CME1 and CME2 respectively extracted using the criterion \(\beta_{p}<1\) in all figures except for Figure 13(a). The criterion of low \(n_{p}\), in combination with \(\beta_{p}<5\) is used to identify the CME1 extent below 0.5 au in Figure 13(a). It shows the phase where CME1 has propagated alone in the heliosphere with a dominant positive \(B_{z}\) component (\(>10\) nT), followed by a weak minimum negative \(B_{z}\) component (\(\sim-2\) nT), and has reached 0.65 au at 2014-09-10 20:13 UT. The extent of CME1 in the equatorial and meridional planes of the heliospheric domain can be observed in Fig. 14(a). In the next phase in Fig. 13(b), CME2 has entered the heliospheric domain and its shock has reached 0.35 au (marked by yellow lines). An enhancement of the negative \(B_{z}\) component interval of the ejecta (hereafter, compressed ejecta 1, CE1a) to \(\sim-4\) nT is observed in the CME2 sheath at 2014-09-11 08:13 UT just before 0.35 au (vertical blue line). This enhancement is present in the sheath region as the corresponding \(\beta_{p}>1\) and can be interpreted as the interval of trailing negative \(B_{z}\) component of CME1 being compressed by CME2. CME1 appears weaker than CME2 as it is the flank of the CME1 that hits Earth and the magnetic field strength of a flux rope becomes weaker away from its axis. In our simulations, we observe that CME1 expands rapidly in the heliosphere and hence, has a larger trailing part extending up to \(\sim 0.2\) au while the leading part has reached \(\sim 0.6\) au (Fig. 14(b)). Hence, CME2 (faster than CME1) could catch up with CME1 and start compressing it below 0.5 au. Fig. 14(c) shows the prominent development of CE1a at 2014-09-11 17:13 UT. Figure 13(c) shows CE1a (marked with magenta arrow in \(B_{z}\) panel) in the CME2 sheath being further compressed to \(B_{z}<-10\) nT as it reaches 0.75 au at 2014-09-12 02:13 UT while CME1 shock reaches \(\sim 1\) au (vertical red line). A strong positive \(B_{z}>10\) nT in the region of \(\beta<1\) up to 0.7 au corresponds to ME2 which seems to be pushing CE1a further. The next phase depicted in Fig. 13(d) is the development of an enhancement in the interval of the leading positive \(B_{z}\) component of CME1 ahead of CE1a (hereafter, compressed ejecta 2, CE1b, marked with green arrow in \(B_{z}\) panel) at 0.9 au (vertical magenta line) at 2014-09-12 08:13 UT. Through Fig. 
14(c) and (d), it can be inferred how CE1a compresses the interval of the leading positive \(B_{z}\) component of ME1 to create CE1b. It must be noted that these features are very thin and localised. Fig. 13(e) shows the further enhancement of CE1b at 1 au with a \(B_{z}>20\) nT which is more than the maximum \(B_{z}\sim 10\) nT inside ME1. CE1a starts weakening and has a lesser magnitude at 2014-09-12 14:13 UT as compared to the in situ observations. However, when CE1a was at 0.9 au the minimum negative \(B_{z}\) matched the 1 au observations better. In addition, it is evident from Fig. 14(d) that the most enhanced part of CE1a (i.e., minimum \(B_{z}\) component) is to the east of the Sun-Earth line and to the north of the ecliptic. The enhanced features are quite small compared to the prominent magnetic ejecta in the event and are localised enough to be missed easily while reading out the 3D data at a single point in the simulation. Fig. 13(f) shows the phase where CE1a has a very weak magnetic field strength upon reaching 1.2 au (vertical cyan line) at 2014-09-13 08:13 UT while CE1b is further compressed. CE1a seems to have been compressed between ME2 and CE1b. The propagation of the CME2 shock (the trailing vertical yellow line) towards the CME1 shock (the leading vertical yellow line) can be observed through Fig. 13(c-f). The CME2 shock moves across the \(\beta<1\) region associated with CME1, along the Sun-Earth line which can be inferred as a signature Figure 12: Results of Run2 (blue; CME2 NWS) and Run4 (red; CME1 SEN + CME2 NWS) are compared. CME1 and CME2 parameters in Run4 correspond to Run3 and Run2 respectively. Figure description is similar to Fig. 10. S1, the start of ME1, S2 and the start of ME2 are marked with blue, green, cyan and magenta vertical lines respectively in Run4. of interaction. Moreover, the \(\beta_{p}<1\) part associated with CME1 gets narrower in time, which points to the subsequent increase in compression by CME2. ### Parameters affecting the sheath formation The width and the duration of the compressed features in the sheath of CME2 (the CME that propagates behind CME1 and compresses it while catching up with it) depend on the relative speed between the two CMEs and on the physical properties of both CMEs, such as their shape and size (Russell & Mulligan, 2002). Additional simulations were performed (whose detailed results are not shown in this paper) to analyse the effect of these parameters on the formation of the features in the sheath ahead of CME2. The duration of a particular feature, for example, CE1b, depends on the compression induced by CE1a on the \(B_{z}\) profile (with intervals of negative and positive values) of CME1 during their propagation through the interplanetary space. If CME1 is slower (i.e., in the case of a smaller relative speed), CME2 can catch up with it earlier and can compress the interval with the negative \(B_{z}\) component of CME1 more, generating a thinner and diffused CE1b by the time they arrive at 1 au. We also find in our simulations that, for a faster CME1 (i.e., in the case of a higher relative speed), CE1a catches up with the interval with the positive \(B_{z}\) component of CME1 to form CE1b at a later time. In this case, the compression takes place beyond the Earth's orbit, and the simulation fails to capture the enhanced positive \(B_{z}\) signature in the sheath before the negative \(B_{z}\) signature. 
Given the short spatio-temporal nature of these distinct features, these dynamics can be easily missed if virtual spacecraft are not taken into consideration. The minimum \(B_{z}\) strength of CE1a at 1 au is indeed captured by the spacecraft at \(10^{\circ}\) longitudinal offset from Earth rather than the solid red line at Earth in Run4 (Fig. 12). The magnetic field strength of the compressed ejecta depends on the flux contained in CME1. Moreover, if the magnetic field configuration of CME1 was not modelled correctly, then the \(B_{y}\) and \(B_{z}\) features in the sheath would have had different signs and may have resulted in completely different features in the sheath. The shape and size of CME1 also play an important role in the morphology of the compressed ejecta. The flux and twist of CME2 not only influence its magnetic field strength obtained at 1 au but also its ability to compress the ejecta ahead. This event clearly depicts how the sheath region carries the history of the minute interactions between different magnetic ejecta, which might be challenging to predict based on remote-sensing observations. Although 3D MHD simulations help in modelling and understanding such interactions better, it can still be challenging to reproduce the observed features exactly, owing to the sensitivity of the simulations to the uncertainties in the initial parameters as discussed above. ## 7 Summary and conclusions In this study, we presented the evolution of two successive CMEs that erupted from the active region AR 12158 on September 8, 2014, and September 10, 2014, respectively. The motivation of this work was to investigate the misinterpretation of certain observational aspects related to the CMEs and to reproduce their in situ signatures at 1 au using MHD simulations. The first CME was not predicted to hit Earth and was not even recorded in the ICME catalogues. The second CME was predicted to be geoeffective based on the remote observations of the CME chirality and magnetic axis orientation during the eruption. However, unexpectedly, upon arrival at Earth its magnetic ejecta was dominated by a positive \(B_{z}\) component. Nonetheless, a short period of negative \(B_{z}\) component developed in the sheath of CME2 during its propagation in the heliosphere. This resulted in a geomagnetic storm with a Dst index of \(\sim-88\) nT at Earth, which is a moderate value and not an extreme value as was originally predicted. Hence, the geoeffectiveness of the various sub-structures involved in this event was gravely mispredicted. The study of this event is based on the observational investigation of the CMEs, and the 3D MHD modelling of their evolution using EUHFORIA. We performed an in-depth analysis of the two CMEs using remote sensing and in situ observations, and constrained the geometrical and magnetic field parameters successively, close to 1 R\({}_{\odot}\), close to 0.1 au and at 1 au. A discrepancy was observed in the axial magnetic field orientation of CME2 between 1 R\({}_{\odot}\) and 0.1 au. The numerical experiments for involving CME1 and CME2 individually (Run1, Run2, Run3) and the global EUHFORIA simulation including CME1 and CME2 (Run4) have the following rationales: * Run1: We first performed a simulation including only CME2 using the FRi3D model in EUHFORIA, with the magnetic field orientation obtained close to 1 R\({}_{\odot}\). However, the results did not match the in situ magnetic field observations at 1 au. 
* Run2: In order to determine the cause of this discrepancy, we performed another simulation with the initial magnetic field orientation consistent with in situ observations at 1 au using the FRi3D model in EUHFORIA. The inference of the new magnetic field orientation of CME2 is based on the 3D reconstruction (close to 0.1 au) and the assumption of anti-clockwise rotation of left-handed CME2 in the low corona (based on the relationship between the chirality and the rotation direction from Green et al. (2007); Lynch et al. (2009)). Run2 could reproduce the prolonged positive \(B_{z}\) component seen in the in situ observations and matched the observations much better than Run1. However, Run2 requires a significant \(\sim 180^{\circ}-270^{\circ}\) rotation of CME2 in the low corona which remains to be explained, as no rotation was observed during the heliospheric propagation in the simulations. * Run3: After obtaining the best simulation of CME2, we performed a simulation including just CME1 with the spheromak model. The magnetic field orientation of CME1 close to 1 R\({}_{\odot}\) was consistent with the orientation obtained close to 0.1 au and was used to obtain the best simulation of CME1 to reproduce the in situ signatures at 1 au. Figure 13: Radial evolution profile of the CME1 and CME2 along Sun-Earth line at different times of propagation and interaction extracted from Run4. Top to bottom (in each plot): speed (\(v\)), proton number density (\(n_{p}\)), z-component of magnetic field (\(B_{z}\)), total magnetic field (\(B\)), and proton plasma beta (\(\beta_{p}\)). The yellow and blue shaded areas depict the extent of magnetic ejecta of CME1 and CME2 respectively extracted using the criterion \(\beta_{p}<1\) (\(\beta_{p}<5\) in (a)). All physical quantities except the speed are scaled by (D/1 au)\({}^{2}\) where D is the radial distance from the Sun. The CME shocks (S1 and S2) are marked in yellow lines. The colourful vertical lines in the \(B_{z}\) panel correspond to different radial distances to explain various phases of Run4 (detailed in Section 6.3): (a) Propagation of CME1 alone in the heliosphere; (b) Formation of a compressed negative \(B_{z}\) ejecta CE1a (shown with magenta arrow); (c) Further compression of CE1a; (d) Development of a compressed positive \(B_{z}\) ejecta, CE1b (shown with green arrow), ahead of CE1a (e) Further compression of CE1b; and (f) Diffusion of CE1a (\(\sim 0\) nT) upon reaching 1.2 au while CE1b undergoes further enhanced. * Run4: Finally, we introduced CME1 and CME2 using the boundary conditions found in Run3 and Run2 to create the final global simulation (Run4). The kinematics and the magnetic field components of the CMEs were successfully modelled at Earth with the magnetised flux rope CME models in EUHFORIA. The interaction between CME1 and CME2 was found to produce the short interval of negative \(B_{z}\) component in the sheath ahead of CME2. With the 3D MHD EUHFORIA simulation, it was possible to understand the different phases of CME1-CME2 interaction forming coherent enhanced sub-structures (CE1a and CE1b) in the sheath region. In a nutshell, we investigated the reasons for the space weather misprediction of this event. We found it to be two-fold: first, the consideration of a low coronal rotation of CME2 was missed which led to predicting a different magnetic field topology heading towards Earth; second, the presence of CME1 was overlooked and hence the geoeffective feature formed by its interaction with CME2 was not predicted. 
EUHFORIA in its present version allows us to propagate the CMEs in the heliosphere. We have shown that a substantial rotation of CME does not take place in the heliosphere. Therefore, we suggest that the rotation of CME2 could have occurred in the corona. With this study, we also highlight the importance of observations in correctly constraining simulations for obtaining accurate space weather forecasting. Previous studies suggest that a significant amount of CME rotation, deflection, and deformation occurs in the low corona, followed by a self-similar propagation further away from the Sun in most CME events (Demoulin and Dasso, 2009; Isavnin et al., 2014; Balmaceda et al., 2020). Kliem et al. (2012) have highlighted the possibility of extensive low coronal rotation up to even more than \(100^{\circ}\) by the combination of twist and shear-driven rotation, the latter being dominant in the lower corona. It is not possible to verify the hypothesis of a substantial rotation (\(\sim 180^{\circ}-270^{\circ}\)) of CME2 in the low corona under the scope of this work as EUHFORIA does not account for the modelling of the initial CME evolution below 0.1 au. In addition, the lack of magnetic field observations in the corona restricts the observational verification of this hypothesis. This speculation, although a relatively strong assumption, provides an idea about the consistency of the CME2 orientation observed in the upper corona and at 1 au. Hence, the knowledge of the CME magnetic field is crucial for deriving the correct orientation of the emerging flux rope in the low corona, in order to propagate it further in the heliospheric models for space weather forecasting purposes. Although the white light coronagraph images help to Figure 14: Evolution of \(B_{z}\) in the heliosphere as simulated in Run4. The co-latitudinal component in the spherical coordinate system is \(B_{clt}\), which is equivalent to \(-B_{z}\) on the ecliptic plane. The red and blue spectra of the colour bar correspond to positive and negative \(B_{z}\), respectively. Each sub-figure shows the view of the equatorial (X-Y) and the meridional (X-Z) planes at a particular time mentioned at its top. (a) CME1 in the pre-interaction phase is identified schematically with a dashed ellipse; (b) CME2 is shown evolving behind CME1 in the early stage of interaction; (c) CME2 compressing the interval of the trailing negative \(B_{z}\) component of CME1 to create CE1a (darker blue region) in the sheath ahead of itself; and (d) CE1a is shown to be further compressing the interval of the leading positive \(B_{z}\) component of CME1 to create CE1b (darker red region) during the interaction phase of the event. The animation of this figure can be found in the online version of the paper. reconstruct the CMEs in the middle and upper corona, they are not sufficient to derive the magnetic field configuration. This leaves us to rely on the source region proxies for guessing the magnetic field configuration for prediction purposes, being agnostic to the dominant low coronal dynamics. Although the closest approach of Parker Solar Probe is in the upper corona, its trajectory does not act as a constant in situ monitoring point in the corona. The other forecasting limitation arises from the evolution of CME structures during their propagation, and their interaction with the solar wind and other CMEs. The interaction of CMEs may lead to severe geoeffective events, as demonstrated by Shen et al. (2018),Scolini et al. (2020) and Koehn et al. (2022). 
The lack of multiple in situ crossings through CME2 makes it challenging to predict its global behaviour through reconstruction using data from just a single point. Furthermore, for CME crossings with a high impact factor, single-point reconstruction techniques introduce greater uncertainty in the estimation of the flux rope orientation (Riley et al. 2004). The serendipitous alignment of spacecraft like PSP, Solar Orbiter, and Bepi Colombo, although helpful in obtaining information about the early phase of the CME, is also not feasible for constant monitoring. Without the knowledge of the global behaviour of the individual CMEs, it is even more challenging and non-trivial to predict the strength and configuration of the magnetic ejecta formed during CME-CME interaction. Hence, we advocate for a stronger observational infrastructure for the study of CME-CME interaction events from the perspective of space weather forecasting, in addition to MHD simulations. ## Acknowledgements We thank the anonymous referee for their comments and suggestions that led to improvements in the manuscript. This project (EUHFORIA 2.0) has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 870405. SP acknowledges support from the projects C14/19/089 (C1 project Internal Funds KU Leuven), G.0D07.19N (FWO-Vlaanderen), SIDC Data Exploitation (ESA Prodex-12), and Belspo project B2/191/P1/SWiM. The simulations were carried out at the VSC - Flemish Supercomputer Centre, funded by the Hercules Foundation and the Flemish Government - Department EWI. We are grateful to Dr. Nariaki Nitta and Dr. Tibor Torok for the valuable discussions that improved our understanding of the eruptions. We thank Dr Jasmina Magdalenic for the suggestions that facilitated the better 3D reconstruction of the halo CMEs. We also appreciate the availability of open-source data and catalogues used in this work: * CDAW LASCO catalogue: [https://cdaw.gsfc.nasa.gov/CME_list/](https://cdaw.gsfc.nasa.gov/CME_list/) * IPShock catalogue: [http://ipshocks.fi/](http://ipshocks.fi/) * Wind ICME catalogue: [https://wind.nasa.gov/ICME_catalog/ICME_catalog_viewer.php](https://wind.nasa.gov/ICME_catalog/ICME_catalog_viewer.php) * Richardson and Cane ICME catalogue: [https://izw1.caltech.edu/ACE/ASC/DATA/level3/icmetable2.htm](https://izw1.caltech.edu/ACE/ASC/DATA/level3/icmetable2.htm) * AIA images: SDO database [http://jsoc.stanford.edu/AIA/AIA_gallery.html](http://jsoc.stanford.edu/AIA/AIA_gallery.html) ## Appendix A FRi3D model flux calculation The toroidal flux (\(\phi_{t}\)) as a function of the poloidal flux (\(\phi_{p}\)) for a flux rope with Lundquist magnetic field configuration is given by (Gopalswamy et al., 2018): \[\phi_{t}=\phi_{p}\frac{2\pi R_{0}}{L}J_{1}(x_{01}) \tag{1}\] where \(R_{0}\) and \(L\)=-2.6 \(R_{tip}\) are the radius and length of flux rope respectively. \(R_{tip}\) is the leading edge of the flux rope. \(J_{1}\) is the first order Bessel function and \(x_{01}\) is the first zero of zeroth order Bessel function, \(J_{0}\). We modify above formula for FRi3D geometry by replacing \(R_{0}\) with the poloidal height of FRi3D (\(R_{p}\)) and \(L\) with the FRi3D axis length given by: \[L=\int_{-\phi_{hw}}^{b_{hw}}\left[r(\phi)^{2}+\left(\frac{dr(\phi)}{d\phi} \right)^{2}\right]^{\frac{1}{2}}d\phi \tag{2}\] where \(r(\phi)=R_{t}cos^{n}(a\phi)\) is the cross-section at a given \(\phi\) and \(a=(\pi/2)/\phi_{hw}\). 
\(R_{t}\) is the toroidal height and \(\phi_{hw}\) is the angular half-width. Further details of the FRi3D geometry and the parameters used here can be found in Maharana et al. (2022). ## Appendix B 3D reconstruction of the CMEs The geometrical and kinematic parameters of CME1 and CME2 are constrained from the 3D reconstruction using the GCS model (Thernisien, 2011) and the FRi3D model (Isavnin, 2016), respectively. As CME1 is modelled with the spheromak model in the EUHFORIA simulations, it is reconstructed with the GCS model (Fig. B.1) as done in Verbeke et al. (2019); Scolini et al. (2019). CME2 is reconstructed with the FRi3D model (Fig. B.2) and modelled with the same model in EUHFORIA.
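A numerical sketch of the Appendix A conversion is given below, using SciPy for the Bessel factor in Eq. (A.1) and a quadrature of Eq. (A.2) for the axis length. The function names and the example input values are ours and purely illustrative, and the trimming of the integration endpoints is only a numerical convenience for the weakly singular integrand when n < 1, not part of the published method.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1, jn_zeros

def fri3d_axis_length(r_t, phi_hw, n):
    """Arc length of the FRi3D axis, Eq. (A.2): r(phi) = R_t cos^n(a*phi), a = (pi/2)/phi_hw.
    phi_hw must be given in radians."""
    a = (np.pi / 2.0) / phi_hw
    r = lambda p: r_t * np.cos(a * p) ** n
    drdp = lambda p: -r_t * n * a * np.cos(a * p) ** (n - 1.0) * np.sin(a * p)
    integrand = lambda p: np.sqrt(r(p) ** 2 + drdp(p) ** 2)
    eps = 1e-6 * phi_hw  # stay just inside the endpoints, where the integrand diverges for n < 1
    length, _ = quad(integrand, -phi_hw + eps, phi_hw - eps, limit=200)
    return length

def toroidal_from_poloidal(phi_p, r_p, r_t, phi_hw, n):
    """Eq. (A.1) with R_0 replaced by the poloidal height R_p and L by the FRi3D axis length."""
    x01 = jn_zeros(0, 1)[0]                      # first zero of J_0
    length = fri3d_axis_length(r_t, phi_hw, n)
    return phi_p * (2.0 * np.pi * r_p / length) * j1(x01)

# illustrative call only: half-width 50 deg and flattening n = 0.5 as in Tables 3 and 5,
# with a made-up poloidal height (in R_sun) and poloidal flux (in Wb)
print(toroidal_from_poloidal(phi_p=8.2e13, r_p=5.0, r_t=13.6,
                             phi_hw=np.radians(50.0), n=0.5))
```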
2305.18467
Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs
This paper studies the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold, thus encoding geometric information. We consider convolutional MNNs and GNNs where the manifold and the graph convolutions are respectively defined in terms of the Laplace-Beltrami operator and the graph Laplacian. Using the appropriate kernels, we analyze both dense and moderately sparse graphs. We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold. As a byproduct of this analysis, we observe an important trade-off between the discriminability of graph filters and their ability to approximate the desired behavior of manifold filters. We then discuss how this trade-off is ameliorated in neural networks due to the frequency mixing property of nonlinearities. We further derive a transferability corollary for geometric graphs sampled from the same manifold. We validate our results numerically on a navigation control problem and a point cloud classification task.
Zhiyang Wang, Luana Ruiz, Alejandro Ribeiro
2023-05-29T08:27:17Z
http://arxiv.org/abs/2305.18467v2
# Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs ###### Abstract This paper studies the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold, thus encoding geometric information. We consider convolutional MNNs and GNNs where the manifold and the graph convolutions are respectively defined in terms of the Laplace-Beltrami operator and the graph Laplacian. Using the appropriate kernels, we analyze both dense and moderately sparse graphs. We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold. As a byproduct of this analysis, we observe an important trade-off between the discriminability of graph filters and exhibits to approximate the desired behavior of manifold filters. We then discuss how this trade-off is ameliorated in neural networks due to the frequency mixing property of nonlinearities. We further derive a transferability corollary for geometric graphs sampled from the same manifold. We validate our results numerically on a navigation control problem and a point cloud classification task. Graph Neural Networks, Manifold Filters, Manifold Neural Networks, Convergence Analysis, Discriminability ## I Introduction Geometric data, or data supported in non-Euclidean domains, is the object of much interest in modern information processing. It arises in a number of applications, including protein function prediction [3, 4], robot path planning [5, 6], 3D shape analysis [7, 8, 9] and wireless resource allocation [10, 11]. Graph convolutional filters [12, 13] and graph neural networks (GNNs) [14, 15], along with manifold convolutional filters [16] and manifold neural networks (MNNs) [17, 18, 19], are the standard tools for invariant information processing on these domains when they are discrete and continuous respectively. The convolution operation is implemented through information diffusion over the geometric structure, thus enabling invariant and stable representations [20, 21, 22, 23] and feature sharing. The cascading neural network architecture interleaves convolutions and nonlinearities, further expanding the model's expressiveness. Although there is a clear parallel between graphs and manifolds--the former can be seen as discretizations of the latter--, manifolds are infinite-dimensional continuous latent spaces which can only be accessed by discrete point sampling [7, 24, 25, 26]. In general, we have access to a set of sampling points from the manifold, and build a graph model to approximate the underlying continuous manifold while attempting to retain the local and global geometric structure [7, 11, 27]. GNNs have been shown to do well at processing information over the manifold both experimentally and theoretically [25, 26, 16]. Of particular note, conditions that guarantee asymptotic convergence of graph filters and GNNs to manifold filters and MNNs are known [16]. Asymptotic convergence is a minimal guarantee that can be enriched with non-asymptotic approximation error bounds. These bounds are unknown and they are the focus of this paper. These non-asymptotic approximation error bounds relating graph filters and GNNs to manifold filters and MNNs are important because they inform the practical design on graphs of information processing architectures that we want to deploy on manifolds. 
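To make the setting described above concrete, a graph built from points sampled from a manifold, with convolutions defined through the graph Laplacian, here is a schematic NumPy sketch. The Gaussian-kernel weights, the optional radius sparsification and the simple diffusion-based filter are generic illustrative choices; they are not claimed to match the specific kernels, normalizations or filter parameterization analyzed in the paper.

```python
import numpy as np
from scipy.linalg import expm

def geometric_graph_laplacian(X, epsilon, radius=None):
    """Graph Laplacian L = D - W for points X (n x d) sampled from a manifold.
    W uses a Gaussian kernel of bandwidth epsilon; if `radius` is given, edges
    longer than it are removed, giving a relatively sparse geometric graph."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / epsilon)
    np.fill_diagonal(W, 0.0)
    if radius is not None:
        W[d2 > radius ** 2] = 0.0
    return np.diag(W.sum(axis=1)) - W

def diffusion_filter(L, x, coeffs):
    """A diffusion-based graph convolution sum_k h_k exp(-k L) x (illustrative form only)."""
    return sum(h_k * expm(-k * L) @ x for k, h_k in enumerate(coeffs))

# toy example: points sampled from a circle (a 1-D manifold embedded in R^2)
theta = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)]
L = geometric_graph_laplacian(X, epsilon=0.05, radius=0.5)
y = diffusion_filter(L, np.sin(3 * theta), coeffs=[0.5, 0.3, 0.2])
```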
In addition, explicit finite-sample error bounds often reveal details about the convergence regime (e.g., rates of convergence and discriminability trade-offs) that are not revealed by their asymptotic counterparts. For example, the non-asymptotic convergence analysis of GNNs on graphs sampled from a graphon (also referred to as a _transferability_ analysis) gives a more precise characterization of the discriminability-convergence tradeoff that arises in these GNNs [29], which is not elucidated by the corresponding asymptotic convergence result [30]. **Contributions.** In this paper, we prove and analyze a non-asymptotic approximation error bound for GNNs on graphs sampled from a manifold, thus closing the gap between GNNs and MNNs with an explicit numerical relationship. We start by importing the definition of the manifold filter as a convolutional operation where the diffusions are exponentials of the Laplace-Beltrami (LB) operator \(\mathcal{L}\) of the manifold \(\mathcal{M}\subset\mathbb{R}^{N}\) [16]. Given a set of discrete sampling points from the manifold, we describe how to construct both dense and relatively sparse geometric graphs that approximate the underlying manifold in both the spatial and the spectral domains. Next, we import the concept of Frequency Difference Threshold (FDT) filters (Definition 3) [17] to overcome the challenge of dimensionality associated with the infinite-dimensional spectrum of the LB operator. We show that manifold filters exhibit a trade-off between their discriminability and their ability to be approximated by graph filters, which can be observed in the approximation error bounds of geometric graph filters in Theorems 1 and 2. The same analysis is conducted for GNNs by incorporating nonlinearities (Theorem 3), but in GNNs we hypothesize that the trade-off is alleviated, i.e., that we can recover discriminability, through the addition of these nonlinear operations (Section V). In other words, geometric GNNs can be both discriminative and close approximations of the underlying MNNs, which we verify empirically through numerical experiments (Section VI). Finally,
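To make the construction described above concrete, the following is a minimal sketch (ours, not the paper's implementation) of how a geometric graph and a diffusion-based graph filter can be built from points sampled from a manifold. The sampled manifold (a unit circle), the Gaussian-kernel bandwidth, the kernel normalization, and the filter taps are all illustrative assumptions rather than the paper's choices.

```python
import numpy as np
from scipy.linalg import expm

# Sample n points from a simple manifold (the unit circle embedded in R^2).
rng = np.random.default_rng(0)
n = 200
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Dense geometric graph: Gaussian-kernel weights computed from pairwise distances.
# The bandwidth eps and the overall normalization are illustrative choices.
eps = 0.05
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = np.exp(-D2 / (4.0 * eps)) / n
np.fill_diagonal(W, 0.0)

# Graph Laplacian, playing the role of a discrete Laplace-Beltrami operator.
L = np.diag(W.sum(axis=1)) - W

def graph_filter(L, x, taps):
    """Apply h(L) x = sum_k h_k exp(-k L) x, a filter built from heat diffusions,
    mirroring a manifold filter defined through exponentials of the LB operator."""
    y = np.zeros_like(x)
    z = x.copy()
    step = expm(-L)  # one unit of diffusion
    for h_k in taps:
        y += h_k * z
        z = step @ z
    return y

x = np.sin(3 * theta)  # a graph signal obtained by sampling a manifold signal
y = graph_filter(L, x, taps=[0.5, 0.3, 0.2])
```

Filters of this form, evaluated on denser and denser samplings (with the bandwidth shrunk accordingly), are the kind of objects whose approximation of their manifold counterparts the paper's error bounds quantify.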
2304.10312
Secret-Key-Agreement Advantage Distillation With Quantization Correction
We propose a novel advantage distillation strategy for physical layer-based secret-key-agreement (SKA). We consider a scenario where Alice and Bob aim at extracting a common bit sequence, which should remain secret to Eve, by quantizing a random number obtained from measurements at their communication channel. We propose an asymmetric advantage distillation protocol with two novel features: i) Alice quantizes her measurement and sends partial information on it over an authenticated public side channel, and ii) Bob quantizes his measurement by exploiting the partial information. The partial information on the position of the measurement in the quantization interval and its sharing allows Bob to obtain a quantized value closer to that of Alice. Both strategies increase the lower bound of the secret key rate.
Francesco Ardizzon, Francesco Giurisato, Stefano Tomasin
2023-04-20T13:43:39Z
http://arxiv.org/abs/2304.10312v1
# Secret-Key-Agreement Advantage Distillation ###### Abstract We propose a novel advantage distillation strategy for physical layer-based secret-key-agreement (SKA). We consider a scenario where Alice and Bob aim at extracting a common bit sequence, which should remain secret to Eve, by quantizing a random number obtained from measurements at their communication channel. We propose an asymmetric advantage distillation protocol with two novel features: i) Alice quantizes her measurement and sends partial information on it over an authenticated public side channel, and ii) Bob quantizes his measurement by exploiting the partial information. The partial information on the position of the measurement in the quantization interval and its sharing allows Bob to obtain a quantized value closer to that of Alice. Both strategies increase the lower bound of the secret key rate. Advantage distillation, secret-key-agreement, physical layer security. ## I Introduction Secret-key-agreement (SKA) is a security mechanism by which two users, namely Alice and Bob, agree on a common key while keeping it secret from any third malicious user, namely Eve. The secret key can then be used for other security services, e.g., for symmetric key encryption or authentication. Initially proposed by Maurer [1] and by Ahlswede and Csiszár [2], physical-layer-based SKA schemes are information-theoretically secure, and their security is based on the physical properties of the channel itself. A source-model SKA procedure involves four steps [3]: _channel probing_, where Alice and Bob transmit probing signals in turn and collect the channel measurements later used to extract the keys; _advantage distillation_, by which each agent extracts a bit sequence from his/her measurement; _information reconciliation_, where Alice and Bob exchange information with the aim of reducing the disagreement among the bit sequences; finally, _privacy amplification_, where each user extracts from the bit sequences a shorter one, typically by using universal hashing (for further details see surveys [4] and [5]). In this paper, we focus on the advantage distillation step. The basic approach requires quantizing the channel feature used for key extraction. A channel quantization scheme for multiple-input multiple-output (MIMO) channels is proposed in [6] and [7]. In particular, in the strategy of [6] Alice transmits a quantization correction to Bob, the observations have a (known) Gaussian distribution, and the quantizer thresholds are set to provide equiprobable bit sequences (with maximum entropy). However, Eve's observations are assumed to be independent of those of Bob. We consider here instead a more realistic scenario, where the features' distribution is not known a priori, and Eve's observations are statistically correlated to those of Alice and Bob. In [8] the quantization intervals are separated by guard bands and samples falling in these regions are discarded to reduce quantization mismatches between Alice and Bob. Indeed, this increases the probability of agreement between the bit sequences, at the expense of fewer extracted bits. A related approach is also proposed in [9], where the quantizer thresholds are set to ensure that each sequence is equiprobable, maximizing the output entropy. In both works, the legitimate users' and Eve's channels are assumed to be uncorrelated; thus, no information about the actual bit sequence can be collected by Eve. 
Recently, a technique to extract bits from electrocardiogram (ECG) signals for wireless body area networks (WBANs) has been proposed in [10]. The quantizer thresholds are optimized to maximize both the entropy and the matching rate of the extracted bits. Still, due to the particular nature of the channel, no information is leaked to Eve during the channel probing step. We consider instead the case wherein Eve is observing channels correlated to those of Alice and Bob, and Eve also overhears any public discussion between Alice and Bob. In this letter, we propose a novel advantage distillation strategy for a source-model SKA, where Alice and Bob each obtain a random number and optimize their quantizers to obtain bit sequences providing the highest secret key rate (through a lower bound). Then, they coordinate the quantization of the observed feature with a discussion over a public authenticated channel. In particular, Alice quantizes her measurement and sends the position of the measurement in the quantization interval over an authenticated public side channel. In turn, Bob (and Eve) quantizes his measurement by exploiting the partial information. We denote the described advantage distillation technique as advantage distillation with quantization correction (ADQC). We show that such a strategy allows the extraction of more secret bits from the channel measurements. Finally, with respect to the existing literature, we show that a careful design of the quantizers used during the advantage distillation and the transmission of quantization error correction over a public channel allow Alice and Bob to obtain a secret key, even in those harsh scenarios where Eve is close to one of the agents. The rest of the paper is organized as follows. Section II introduces the system model. Section III describes the steps of the proposed advantage distillation protocol. Section IV presents the numerical results. Section V draws the conclusions. ## II System Model We consider a scenario where Alice and Bob aim to agree on a common bit sequence, which has to stay secret from Eve. To this end, they use a source model SKA procedure [3]. First, they probe their channel, as shown in Fig. 1: Alice and Bob alternately send pilot signals through the connecting wireless channel to enable their partner to estimate the channel, so that Alice obtains the estimated channel \(h_{\mathrm{BA}}\) and Bob obtains the estimated channel \(h_{\mathrm{AB}}\). We assume Alice and Bob have already agreed on a feature selection and extraction function such that Alice extracts \(x\) from \(h_{\mathrm{BA}}\), while Bob extracts \(y\) from \(h_{\mathrm{AB}}\). We focus on the scalar case where \(x\) and \(y\) are real numbers, although the SKA will operate on sequences of \(x\) and \(y\), thus using longer observation sequences. We remark that, in general, the channels are only partially reciprocal; therefore, \(x\) and \(y\) will be strongly correlated but not identical. Eve is modeled as a passive attacker. From each exchange, she estimates channels \(h_{\mathrm{AE}}\) and \(h_{\mathrm{BE}}\), from Alice and Bob, respectively. We assume Eve has an extraction function that exploits (one or) both channels and retrieves the scalar real feature \(z\). Indeed, if Eve and Bob (or Alice) are in different positions, \(y\neq z\) and \(x\neq z\). Still, if Eve is not too far from Alice or Bob, there exists a non-negligible correlation between \(z\) and both \(x\) and \(y\). 
We assume that the statistics of \(x\), \(y\), and \(z\) are not known in closed form, but a dataset of measurements is available to all parties for the design of the SKA procedure. An authenticated public side channel is available, over which Alice and Bob can exchange information, while Eve overhears any communication. Channel coding is used on this side channel, allowing Bob to detect and correct, with arbitrarily small error probability, any error in the publicly exchanged information. ## III Advantage Distillation With Quantization Correction We now describe the ADQC technique. Let us introduce the binary space \(\mathcal{S}=\{0,1\}^{b}\) containing \(M=2^{b}\) different binary strings, each of \(b\) bits. Alice and Bob aim at drawing two sequences, \(\boldsymbol{s}_{\mathrm{A}}\in\mathcal{S}\) and \(\boldsymbol{s}_{\mathrm{B}}\in\mathcal{S}\), by processing the observed channel features \(x\) and \(y\), respectively. The problem of associating a real number (in this case, the feature measurement) with a binary sequence can be seen as a quantization problem that partitions the set of real numbers into \(M\) intervals so that the \(m\)-th interval is associated with the sequence \(\boldsymbol{s}_{m}\in\mathcal{S}\). A quantizer \(q\) provides the bit sequence \(\boldsymbol{s}=q(a)\) from the real number \(a\). First, note that the quantizers used by Alice, Bob, and Eve are chosen before the actual key agreement protocol, as will be detailed later. Moreover, we consider a worst-case scenario where all quantizers are publicly known. However, both the secrecy and the randomness of the scheme still lie in the extracted channel measurements. Now, we aim to make this extraction process such that \(\boldsymbol{s}_{\mathrm{A}}\) is as close as possible to \(\boldsymbol{s}_{\mathrm{B}}\) while remaining secret from Eve. We can write the observation at Bob as the observation at Alice corrupted by an error \(\epsilon\), i.e., \[y=x+\epsilon. \tag{1}\] Let \(q_{\mathrm{A}}(x)\) be the quantized value at Alice (corresponding to the \(m\)-th quantization interval), and let \(\eta=x-q_{\mathrm{A}}(x)\) be the quantization error at Alice. Then, from (1) we have \[y=q_{\mathrm{A}}(x)+\eta+\epsilon. \tag{2}\] In general, note that \(\eta\) and \(\epsilon\) are statistically dependent. However, ignoring this dependency, we see that \(y\) is displaced from the quantized value \(q_{\mathrm{A}}(x)\) by both errors \(\eta\) and \(\epsilon\). Thus, to improve the advantage distillation procedure, in ADQC Alice communicates over the public channel the value of the quantization error \(\eta\) so that Bob can compute \[y^{\prime}=y-\eta=q_{\mathrm{A}}(x)+\epsilon, \tag{3}\] and quantize \(y^{\prime}\) with quantizer \(q_{\mathrm{B}}(\cdot)\) to obtain his bit sequence. If Alice uses \(B\) bits to feed back \(\eta\) over the public channel, \(\eta\) must itself be quantized. To this end, each quantization interval \(\mathcal{I}_{m}^{(\mathrm{A})}\), \(m=1,\ldots,M\), is split into \(K=2^{B}\) sub-intervals of equal length, and a binary representation of the index of the sub-interval in which \(\eta\) falls is transmitted over the public channel. Then, Alice transmits \[\xi=\left\lceil\eta\frac{K}{L^{(\mathrm{A})}(x)}\right\rceil\, \tag{4}\] where \(L^{(\mathrm{A})}(x)\) is the length of the quantization interval of \(x\). This quantization procedure also avoids transmitting the value of \(\eta\) itself, which may partially reveal the interval \(\mathcal{I}_{m}\) to Eve, since quantization intervals may have different lengths. 
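As an illustration of Alice's side of this step, the following is a minimal sketch (ours, under stated assumptions, not the authors' code). We assume the quantized value \(q_{\mathrm{A}}(x)\) is represented by the left edge of the selected interval, so that \(\eta\in[0,L^{(\mathrm{A})}(x))\), and out-of-range samples are remapped to the closest interval.

```python
import numpy as np

def interval_index(a, thresholds):
    """Index m of the quantization interval [T_m, T_{m+1}) containing a.
    Out-of-range samples are remapped to the closest interval."""
    a = np.clip(a, thresholds[0], thresholds[-1] - 1e-12)
    return int(np.searchsorted(thresholds, a, side="right") - 1)

def alice_adqc(x, thresholds_A, B):
    """Alice's side of ADQC: quantize x and encode the position of x inside its
    quantization interval with B bits, following Eq. (4).
    Assumption: q_A(x) is represented by the interval's left edge."""
    K = 2 ** B
    m = interval_index(x, thresholds_A)
    L_A = thresholds_A[m + 1] - thresholds_A[m]   # interval length L^(A)(x)
    eta = x - thresholds_A[m]                     # quantization error, eta in [0, L_A)
    xi = int(np.ceil(eta * K / L_A))              # sub-interval index, Eq. (4)
    xi = min(max(xi, 1), K)                       # keep the index in 1..K
    return m, xi                                  # m -> bit string s_m; xi is sent publicly
```

Alice would then map the interval index \(m\) to the string \(\boldsymbol{s}_{m}\in\mathcal{S}\) and publish only \(\xi\) over the authenticated side channel.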
Upon reception of \(\xi\), Bob computes \[\eta^{\prime}=\xi\,L^{(\mathrm{B})}(y)\, \tag{5}\] where \(L^{(\mathrm{B})}(y)\) is the length of the quantization interval of \(y\). Then Bob uses \(\eta^{\prime}\) instead of \(\eta\) in (3) to quantize \(y^{\prime}\) with \(q_{\mathrm{B}}(\cdot)\). Indeed, it may happen that \(L^{(\mathrm{A})}(x)\neq L^{(\mathrm{B})}(y)\). Nonetheless, it is reasonable to assume that intervals close to each other have similar lengths. Eve can follow the same procedure as Bob, computing her own correction factor \(\eta^{\prime\prime}\) and applying it to her measurement \(z\) before quantizing it with \(q_{\mathrm{E}}(\cdot)\). However, there is a higher probability that \(z^{\prime}=z-\eta^{\prime\prime}\) falls in an interval different from that of \(x\); thus, for Eve the correction factor does not provide the same benefit that it provides to Bob's sequence extraction. Fig. 1: Scheme of a channel probing procedure. ### _Quantizer Design_ We are now left with the design of the Alice, Bob, and Eve quantizers, i.e., \(q_{\text{A}}\), \(q_{\text{B}}\), and \(q_{\text{E}}\), respectively. Now, note that a quantizer \(q\) with \(M\) quantization intervals is fully defined by the position of \(M+1\) thresholds, \(\mathcal{T}=\{T_{i},i=0,\ldots,M+1\}\), where however the saturation values \(T_{0}=T_{\min}\) and \(T_{M+1}=T_{\max}\) are set to match a predefined saturation probability.1 Let \(\mathcal{T}_{\text{A}}\), \(\mathcal{T}_{\text{B}}\), and \(\mathcal{T}_{\text{E}}\) be the sets of thresholds used for the three quantizers. The metric used for the design is the lower bound on the secret-key capacity for the source model [1, 3, Ch. 4], i.e., \[C_{\text{sk}}^{\text{low}}(\mathcal{T}_{\text{A}},\mathcal{T}_{\text{B}}, \mathcal{T}_{\text{E}})=I(\boldsymbol{s}_{\text{A}};\boldsymbol{s}_{\text{B}} )-\min\left\{I(\boldsymbol{s}_{\text{A}};\boldsymbol{s}_{\text{E}}),I( \boldsymbol{s}_{\text{B}};\boldsymbol{s}_{\text{E}})\right\}. \tag{6}\] where \(I(\boldsymbol{v}_{1};\boldsymbol{v}_{2})\) is the mutual information between random vectors \(\boldsymbol{v}_{1}\) and \(\boldsymbol{v}_{2}\). Footnote 1: Samples falling outside the region \([T_{\min},T_{\max}]\) are remapped to the closest interval. Alice and Bob aim at designing the quantizers \(q_{\text{A}}\) and \(q_{\text{B}}\) to increase \(C_{\text{sk}}^{\text{low}}\), i.e., by increasing the agreement between Alice's and Bob's extracted bit sequences, while limiting the amount of information revealed to Eve. Eve in turn aims at minimizing \(C_{\text{sk}}^{\text{low}}(\mathcal{T}_{\text{A}},\mathcal{T}_{\text{B}}, \mathcal{T}_{\text{E}})\) with a proper choice of her quantizer \(q_{\text{E}}\). To estimate the mutual information it is necessary to have the associated joint probability density function (PDF): this is either known a priori or estimated by using a dataset of observations \((x,y,z)\) as input to the quantizers. To design the quantizers we consider the following iterative procedure. Starting from uniform quantizers on a predefined range, at each iteration Eve optimizes her quantizer \[\hat{\mathcal{T}}_{\text{E}}=\operatorname*{arg\,min}_{\mathcal{T}_{\text{E} }}C_{\text{sk}}^{\text{low}}(\mathcal{T}_{\text{A}},\mathcal{T}_{\text{B}}, \mathcal{T}_{\text{E}})\, \tag{7}\] with \(\mathcal{T}_{\text{A}}\) and \(\mathcal{T}_{\text{B}}\) fixed. 
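Continuing the sketch above (ours, not the authors' code), Bob's correction and the design metric of Eq. (6) can be prototyped as follows. We rescale the received index by \(1/K\) so that \(\eta^{\prime}\) approximates \(\eta\), which is how we read Eqs. (4)-(5), and we estimate the mutual information terms from empirical joint frequencies of the quantized index sequences.

```python
import numpy as np

def interval_index(a, thresholds):
    """Index of the quantization interval containing a (out-of-range values clipped)."""
    a = np.clip(a, thresholds[0], thresholds[-1] - 1e-12)
    return int(np.searchsorted(thresholds, a, side="right") - 1)

def bob_adqc(y, xi, thresholds_B, B):
    """Bob's side of ADQC: undo the (quantized) error using the public index xi,
    cf. Eqs. (3) and (5), then quantize with his own quantizer."""
    K = 2 ** B
    m_y = interval_index(y, thresholds_B)
    L_B = thresholds_B[m_y + 1] - thresholds_B[m_y]  # interval length L^(B)(y)
    eta_prime = xi * L_B / K                         # reconstruction of eta (our 1/K convention)
    return interval_index(y - eta_prime, thresholds_B)

def mutual_information(a, b, M):
    """Empirical mutual information (bits) between two integer index sequences in 0..M-1."""
    joint = np.zeros((M, M))
    np.add.at(joint, (a, b), 1.0)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa * pb)[nz])))

def c_sk_low(sA, sB, sE, M):
    """Secret-key capacity lower bound of Eq. (6), from empirical index sequences."""
    return mutual_information(sA, sB, M) - min(mutual_information(sA, sE, M),
                                               mutual_information(sB, sE, M))
```

In the iterative design, this empirical estimate of \(C_{\text{sk}}^{\text{low}}\) is the objective that Eve minimizes and Alice and Bob maximize over their respective threshold sets.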
Next, Alice and Bob optimize their own thresholds \[[\hat{\mathcal{T}}_{\text{A}},\hat{\mathcal{T}}_{\text{B}}]=\operatorname*{ arg\,max}_{\mathcal{T}_{\text{A}},\mathcal{T}_{\text{B}}}C_{\text{sk}}^{ \text{low}}(\mathcal{T}_{\text{A}},\mathcal{T}_{\text{B}},\hat{\mathcal{T}}_{ \text{E}}). \tag{8}\] Finally, Alice, Bob, and Eve set the quantizers \(\hat{q}_{\text{A}}\), \(\hat{q}_{\text{B}}\), and \(\hat{q}_{\text{E}}\), from the new thresholds \(\hat{\mathcal{T}}_{\text{A}}\), \(\hat{\mathcal{T}}_{\text{B}}\), and \(\hat{\mathcal{T}}_{\text{E}}\). The optimizations are performed via numerical methods. The procedure is repeated either until convergence is reached or a maximum number of iterations has been performed. ### _Advantage Distillation vs Information Reconciliation with Limited-Rate Public Channel_ When the public channel has no rate limitations, a large value of \(B\) (number of bits describing the quantization error) is to be preferred to improve the agreement between the bit sequences extracted by Alice and Bob. However, in a scenario where the side-channel rate is limited, and it is used for both advantage distillation and information reconciliation, we must decide the number of bits to be used for both processes. For ADQC, we have seen that \(B\) bits are transmitted for each quantized sample. For the information reconciliation, a sequence of \(n>b\) bits obtained from the advantage distillation is considered an error-corrupted version of a codeword of a linear code \((k,n)\) as done, for instance, in [11]. Hence, during the reconciliation, Bob will share \(n-k\) bits over the public channel for \(n/b\) samples. The number of bits shared on the public channel for each bit of the extracted bit sequence is \(\beta\triangleq\frac{B}{b}\), with \(\beta=0\) when no information is shared during advantage distillation, in what we will denote as the no error correction (NEC) technique. Next, we observe that the code rate is related to the secret key capacity (after the advantage distillation) as follows \[C_{\text{AB}}=\frac{I(\boldsymbol{s}_{\text{A}};\boldsymbol{s}_{\text{B}})}{b} =\frac{k}{n}. \tag{9}\] We introduce now the cost function \(\gamma\) representing the ratio between the numbers of bits shared on the side channel for the ADQC and the NEC techniques. For the same number of measurements (thus for the same \(n\)), the ADQC and NEC techniques generate \(k^{\text{(ADQC)}}\) and \(k^{\text{(NEC)}}\) bits of the secret key, respectively. Then, \(\gamma\) is computed as \[\gamma\triangleq\frac{n-k^{\text{(ADQC)}}+\beta n}{n-k^{\text{( NEC)}}}=\frac{1+\beta-C_{\text{AB}}^{\text{(ADQC)}}}{1-C_{\text{AB}}^{\text{(NEC)}}}, \tag{10}\] where \(C_{\text{AB}}^{\text{(ADQC)}}\) and \(C_{\text{AB}}^{\text{(NEC)}}\) are the mutual information between Alice's and Bob's bit sequences for the ADQC and NEC techniques, respectively. ## IV Numerical Results In this section, we report the performance of the ADQC technique and compare it with both the NEC technique and the guard-band (GB) technique of [8]. We model the vector \(\boldsymbol{v}=[x\;y\;z]^{\text{T}}\) of Alice's, Bob's, and Eve's measurements as a jointly Gaussian vector having zero mean and covariance \[\boldsymbol{\Sigma}=\mathbb{E}[\boldsymbol{v}\boldsymbol{v}^{\text{T}}]= \begin{bmatrix}1&\rho_{\text{AB}}&0.8\\ \rho_{\text{AB}}&1&0.8\\ 0.8&0.8&1\end{bmatrix}\, \tag{11}\] where we fixed the correlation between the legitimate users' and Eve's features to \(\rho_{\text{AE}}=\rho_{\text{BE}}=0.8\). 
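A minimal sketch of this simulation setup and of the cost function \(\gamma\) of Eq. (10) could look as follows (ours; the sample size, seed, and \(\rho_{\mathrm{AB}}\) value are illustrative, and the \(C_{\mathrm{AB}}\) inputs are the normalized mutual informations of Eq. (9) obtained from the pipeline sketched earlier).

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_measurements(n, rho_ab, rho_e=0.8):
    """Draw n jointly Gaussian triples (x, y, z) with the covariance of Eq. (11)."""
    Sigma = np.array([[1.0,    rho_ab, rho_e],
                      [rho_ab, 1.0,    rho_e],
                      [rho_e,  rho_e,  1.0 ]])
    return rng.multivariate_normal(np.zeros(3), Sigma, size=n)

def gamma_cost(C_ab_adqc, C_ab_nec, B, b):
    """Side-channel cost ratio gamma of Eq. (10), with beta = B / b."""
    beta = B / b
    return (1.0 + beta - C_ab_adqc) / (1.0 - C_ab_nec)

V = sample_measurements(100_000, rho_ab=0.9)
x, y, z = V[:, 0], V[:, 1], V[:, 2]
```

Feeding these samples through the ADQC and NEC quantization pipelines and then evaluating `gamma_cost` reproduces the kind of comparison reported in the numerical results below.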
Next, we let \(\rho_{\text{AB}}\) vary in the interval \(\rho_{\text{AB}}\in[0.8,1]\). The saturation thresholds are set at \(T_{\max}=-T_{\min}=6\), ensuring a saturation probability \(P_{\text{sat}}\leq 2\cdot 10^{-9}\). For the ADQC technique we considered \(B=1\) and \(2\,\text{bit}\) of quantization error correction. For both ADQC and NEC techniques, quantizers are either optimized as described in the previous section or uniform, with \(M-1\) thresholds placed uniformly in \([T_{\min},T_{\max}]\). For the GB technique, the quantizer is uniform and guard bands are set to \(0.85\), to maximize the secret key capacity lower bound. Fig. 2 shows \(C_{\text{sk}}^{\text{low}}\) for the considered SKA techniques when extracting \(b=3\) bit per sample. We remark that the GB technique discards samples falling on the guard bands, reducing the observation rate (and in general the secret key rate). The best performance is in fact achieved by ADQC with optimized quantizers; thus, sharing information during the advantage distillation is advantageous. In particular, optimizing the quantizers and using ADQC yields on average a 60% improvement of the secrecy capacity, more than doubling it for low correlation values, i.e., when \(\rho_{\text{AB}}\approx\rho_{\text{AE}}=\rho_{\text{BE}}=0.8\). Note that even the NEC technique with optimized quantizers yields a higher \(C_{\mathrm{sk}}^{\mathrm{low}}\) with respect to both [8] and NEC with uniform quantizers. Table I shows the performance of the ADQC with \(B=2\,\mathrm{bit}\) used for quantization error correction and for several values of extracted bits per measurement, \(b=2\), \(3\), and \(4\,\mathrm{bit}\). Increasing the number of bits extracted from the channel yields a higher \(C_{\mathrm{sk}}^{\mathrm{low}}\), even when sharing just \(B=2\,\mathrm{bit}\) of error correction. We now consider the case of limited side-channel capacity, described in Section III-B, focusing on the NEC and ADQC techniques, both with optimized quantizers, to understand the overhead introduced on the side channel. Fig. 3 shows \(\gamma\) as a function of the correlation \(\rho_{\mathrm{AB}}\), with \(b=2\), \(3\), or \(4\,\mathrm{bit}\) and \(B=1\) or \(2\,\mathrm{bit}\). We first note that for \(B=1\) (thus a very limited side-channel overhead due to quantization error correction) the number of bits exchanged on the side channel is very close for both ADQC and NEC schemes (i.e., \(\gamma\approx 1\)). Indeed, for high values of \(\rho_{\mathrm{AB}}\) the ADQC technique requires even fewer bits than NEC (for \(b=3\) and \(4\)) since the extracted bit sequences are more similar and the information reconciliation part is less demanding. Instead, when we consider \(B=2\), we note that the data rate of the side channel increases by a factor of 3 (for highly correlated channels), which however yields a higher secrecy capacity, as shown in Fig. 2. ## V Conclusion We have proposed an advantage distillation technique for physical layer-based SKA, where Alice transmits over a public authenticated channel a correction that is exploited by Bob (and possibly by Eve) to correct their measurements. Numerical results show that both the quantizer optimization and the correction transmission allow Alice and Bob to achieve a higher lower bound of the secret key capacity, even when Eve optimizes her quantizers as well. 
Additionally, we showed that the lower bound of the secret key rate per bit shared on the public channel is higher when correction is used, revealing an efficient use of the public channel by this technique.
2306.11675
Insights of quantum time into quantum evolution
If time is emergent, quantum system is entangled with quantum time as it evolves. If the system contains entanglement within itself, which we can call internal entanglement to distinguish it from the "external" time-system entanglement, the speed of evolution is enhanced. In this paper, we show the correlation between the novel time-system entanglement and the conventional internal entanglement of a system that contains two entangled qubits. We consider two cases: (1) two initially entangled qubits that evolve under local dynamics; (2) two interacting qubits such that entanglement between them is generated over time. In the first case, we obtain the main result that increasing internal entanglement speeds up the evolution and makes the system more entangled with time. For both cases, we show the dependence of time-system entanglement entropy on the distance of evolution which is characterized by fidelity. The interacting system can evolve faster than the non-interacting system if the interaction is sufficiently strong, and thus it can be entangled with time more efficiently.
Ngo Phuc Duc Loc
2023-06-20T16:53:30Z
http://arxiv.org/abs/2306.11675v4
# Insights of quantum time into quantum evolution ###### Abstract If time is emergent, quantum system is entangled with quantum time as it evolves. If the system contains entanglement within itself, which we can call _internal entanglement_ to distinguish it from the "external" time-system entanglement, the speed of evolution is enhanced. In this paper, we explore the insights of quantum time for the evolution of a system that contains two entangled qubits. We consider two cases: (1) two initially entangled qubits that evolve under local dynamics; (2) two interacting qubits such that entanglement between them is generated over time. In the first case, we obtain the main result that increasing internal entanglement speeds up the evolution and makes the system more entangled with time. In the second case, we show the dependence of time-system entanglement entropy on the distance of evolution which is characterized by fidelity. We compare the two cases with each other and find that two interacting qubits can evolve faster than two non-interacting qubits if the interaction is sufficiently strong, and thus they become entangled with time more efficiently. These results could be useful to gain new insights of quantum time into black hole evaporation or cosmological perturbations in an expanding Universe, because we also have an evolving entangled bipartite system in those cases. ###### Contents * I. Introduction * II. Evolution with quantum time: Two non-interacting qubits * II.1 Speed of evolution * II.2 A single qubit clock * II.3 C Continuous quantum time * III Evolution with quantum time: Two interacting qubits * IV Conclusions ## I Introduction The nature of time is a big puzzle. Does time have a quantum structure? Or, in other words, is it emergent? The physics community recently is interested in "extracting space" from entanglement, but what about time? Given the fact that we have a fabric of spacetime (not space and time) in relativity theory, it is therefore not unreasonable to ask for the emergence of time, just like the emergence of space. Of course, unlike space, the arrow of time must somehow emerge as well. Entanglement is another mysterious feature of the quantum world. While the nature of entanglement is still an open question at the fundamental level, it is both interesting and important to understand different phenomenological aspects of entanglement. Entanglement could be generated by interaction between systems, or interaction between a system and an environment that is crucial to understand decoherence. Entanglement can also be "artificially felt" by different observers when a quantum state is Wigner rotated due to a Lorentz transformation [1; 2]. If time really has a quantum structure, another kind of entanglement appears: a quantum system will be entangled with time as it evolves [3; 4]. In this paper, we study how an entangled bipartite system is entangled with quantum time as it evolves. We call entanglement existing within the system _internal entanglement_ to distinguish it from the "external" time-system entanglement. We consider two cases: (1) two initially entangled qubits that evolve under some local dynamics; (2) two interacting qubits such that their internal entanglement is time-dependent. In both cases, the key message is that increasing internal entanglement speeds up the evolution and makes the system more entangled with time. This paper is organized as follows. In Sec. II, we consider the case of two initially entangled but non-interacting qubits. 
We first show that increasing internal entanglement can make the state vector travel further and faster. We then compute the time-system entanglement measures when there is a single qubit clock. We then generalize our results to the continuous limit. In Sec. III, we study time-system entanglement for a system containing two interacting qubits, focusing on computing the time-system entanglement entropy in the continuous limit. Conclusions and possible further developments are presented in Sec. IV. ## II Evolution with quantum time: two non-interacting qubits ### Speed of evolution Consider the initial quantum state of two entangled qubits: \[\left|\psi(0)\right\rangle=\alpha\left|00\right\rangle+\beta\left|11\right\rangle, \tag{1}\] where \(\alpha\) and \(\beta\) are complex numbers satisfying the normalization condition \(|\alpha|^{2}+|\beta|^{2}=1\). The basis vectors are energy eigenstates of the local Hamiltonians: \[H_{A}\left|0\right\rangle_{A}=0,\hskip 28.452756ptH_{A}\left|1 \right\rangle_{A}=\epsilon\left|1\right\rangle_{A}, \tag{2}\] \[H_{B}\left|0\right\rangle_{B}=0,\hskip 28.452756ptH_{B}\left|1 \right\rangle_{B}=\epsilon\left|1\right\rangle_{B}, \tag{3}\] where the subscripts indicate the corresponding subsystems, and \(\epsilon\) is a positive real number. The total Hamiltonian is given by \[H_{total}=H_{A}\otimes I_{B}+I_{A}\otimes H_{B}. \tag{4}\] This Hamiltonian acts locally on each subsystem and therefore entanglement measures are preserved throughout the state's evolution. The entanglement entropy of the quantum state in Eq. 1 is \[S(A)=-\text{Tr}(\rho_{A}\log_{2}\rho_{A})=-\left(|\alpha|^{2}\log_{2}|\alpha|^ {2}+(1-|\alpha|^{2})\log_{2}(1-|\alpha|^{2})\right). \tag{5}\] Another useful entanglement measure is the quadratic entanglement entropy determined from the purity of the state instead of its eigenvalues. It is given by \[S_{2}(A)=2(1-\text{Tr}(\rho_{A}^{2}))=4|\alpha|^{2}(1-|\alpha|^{2}). \tag{6}\] The state vector at time \(t\) is \[\left|\psi(t)\right\rangle=\alpha\left|00\right\rangle+\beta e^{-2i\epsilon t/ \hbar}\left|11\right\rangle. \tag{7}\] The fidelity (or overlap) between the initial and final states is \[\Delta\psi\equiv|\left\langle\psi(t)|\psi(0)\right\rangle|=\sqrt{1+2|\alpha|^{ 2}(1-|\alpha|^{2})\left(\cos\frac{2\epsilon t}{\hbar}-1\right)}. \tag{8}\] Smaller fidelity means larger distance, so it can also be thought of as a distance measure. We define \(\tau\) to be the amount of time for the state to travel through the distance \(\Delta\psi\). For convenience, it is given in units of \(\hbar/\epsilon\): \[\frac{\tau}{\hbar/\epsilon}=\frac{1}{2}\arccos\left(\frac{(\Delta\psi)^{2}-1} {2|\alpha|^{2}(1-|\alpha|^{2})}+1\right). \tag{9}\] Using Eqs. 5 and 9, we plot in the left panel of Fig. 1 the evolution time \(\tau\) as a function of \(S(A)\) for different values of \(\Delta\psi\). Similarly, using Eqs. 6 and 9, we plot in the right panel of Fig. 1 the evolution time \(\tau\) as a function of \(S_{2}(A)\) for different values of \(\Delta\psi\). From Fig. 1, we see that the more entanglement the system has, the further and faster its state vector can move. Figure 1: _Left panel:_ Evolution time \(\tau\) (in units of \(\hbar/\epsilon\)) as a function of entanglement entropy \(S(A)\) for different distances \(\Delta\psi\). _Right panel:_ Evolution time \(\tau\) (in units of \(\hbar/\epsilon\)) as a function of quadratic entanglement entropy \(S_{2}(A)\) for different distances \(\Delta\psi\). 
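As a quick numerical illustration of Eqs. (5), (8), and (9) (our own sketch; the parameter values are arbitrary), one can check directly that states with more internal entanglement reach a given fidelity in less time:

```python
import numpy as np

hbar, eps = 1.0, 1.0  # work in units where hbar = epsilon = 1

def entanglement_entropy(alpha2):
    """S(A) of Eq. (5) for |alpha|^2 = alpha2."""
    if alpha2 in (0.0, 1.0):
        return 0.0
    return -(alpha2 * np.log2(alpha2) + (1 - alpha2) * np.log2(1 - alpha2))

def fidelity(alpha2, t):
    """Overlap |<psi(t)|psi(0)>| of Eq. (8)."""
    return np.sqrt(1 + 2 * alpha2 * (1 - alpha2) * (np.cos(2 * eps * t / hbar) - 1))

def evolution_time(alpha2, dpsi):
    """Time tau (in units of hbar/eps) needed to reach fidelity dpsi, Eq. (9)."""
    arg = (dpsi ** 2 - 1) / (2 * alpha2 * (1 - alpha2)) + 1
    return 0.5 * np.arccos(arg)

# More internal entanglement (larger S(A)) gives a shorter evolution time to the same fidelity.
for a2 in (0.2, 1 / 3, 0.5):
    print(a2, entanglement_entropy(a2), evolution_time(a2, dpsi=0.8))
```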
### A single qubit clock To gain some initial intuition, we first consider the simplest case of a qubit clock that is entangled with the system as follows [5]: \[\left|\Psi\right\rangle\rangle=\frac{1}{\sqrt{2}}\left(\left|0\right\rangle_{T} \left|\psi_{0}\right\rangle_{S}+\left|1\right\rangle_{T}\left|\psi_{1}\right\rangle _{S}\right). \tag{10}\] The system evolves from the initial state \(\left|\psi_{0}\right\rangle_{S}\) to the final state \(\left|\psi_{1}\right\rangle_{S}\) while the qubit clock ticks from \(\left|0\right\rangle_{T}\) to \(\left|1\right\rangle_{T}\). The double ket notation of \(\left|\Psi\right\rangle\rangle\) is just to remind us that it is the static "global state" and not the evolving state of the system. The reduced density matrix of the system is \[\rho_{S}=\mathrm{Tr}_{T}(|\Psi\rangle\rangle\langle\langle\Psi|)=\frac{1}{2}(\left|\psi_{0}\right\rangle\left\langle\psi_ {0}\right|+\left|\psi_{1}\right\rangle\left\langle\psi_{1}\right|), \tag{11}\] whose nonzero eigenvalues are \[p_{\pm}=\frac{1\pm\left|\left\langle\psi_{1}|\psi_{0}\right\rangle\right|}{2}, \tag{12}\] which can also be obtained from Eq. 22 below by substituting \(N=2\). The time-system entanglement entropy is then \[E(T,S)=-\mathrm{Tr}(\rho_{S}\log_{2}\rho_{S})=-\sum_{k=\pm}p_{k}\log_{2}p_{k}. \tag{13}\] The quadratic time-system entanglement entropy is \[E_{2}(T,S)=2(1-\mathrm{Tr}(\rho_{S}^{2}))=4p_{+}p_{-}. \tag{14}\] From Eq. 12, we see that the more the system evolves, the more it becomes entangled with time. From Eq. 8, we see that, for a given state with a fixed \(|\alpha|^{2}\), the minimum time at which the fidelity is minimum is1 \[t_{*}=\frac{\pi\hbar}{2\epsilon}. \tag{15}\] This time corresponds to the maximum time-system entanglement. Footnote 1: We did not want to use the notation \(t_{min}\) since it may cause some confusion with the quantum speed limit (QSL) [6; 7]. The QSL is a bound on the minimum time required for a given state to evolve to an orthogonal state and it is obtained from the average and variance of energy. In our discussion, \(t_{*}\) is the actual time of evolution for the initial state to evolve to the furthest possible state. It is minimum in the sense that it is obtained from \((2n+1)\pi\hbar/2\epsilon\) \((n\in\mathbb{Z})\) by setting \(n=0\). Using Eqs. 13 and 5, we can plot \(E(T,S)\) as a function of \(S(A)\). Similarly, using Eqs. 14 and 6, we can plot \(E_{2}(T,S)\) as a function of \(S_{2}(A)\). These two plots are shown in Fig. 2. From Fig. 2, we have two remarks: * For a given time interval, we see that if the system has more entanglement within itself, then it becomes more entangled with time as it evolves. Combining this result with Fig. 1, we can now see the whole picture: The more internal entanglement the system has, the faster it can evolve. Therefore, during a fixed time interval, the state vector can travel through a larger distance in Hilbert space and thus becomes more entangled with time. * The patterns of \(E(T,S)\) and \(E_{2}(T,S)\) are quite similar, though the scale of the former is slightly greater than that of the latter. ### Continuous quantum time We now generalize our results to the continuous limit where there are infinitely many steps of evolution between the initial and final states. 
A useful procedure is to consider the global state [5] \[\ket{\Psi}=\frac{1}{\sqrt{N}}\sum_{t^{\prime}=0}^{N-1}\ket{t^{\prime}}_{T} \otimes\ket{\psi(t^{\prime})}_{S}, \tag{16}\] and then take the limit \(N\rightarrow\infty\) in the end. The normalization factor here indicates that each moment of time is equally likely to be occupied by the system's evolution. By taking the limit \(N\rightarrow\infty\), one can equally say that the system's evolution generates an infinite-dimensional Hilbert space of quantum time. Figure 2: **A single qubit clock**. _Left panel:_ Time-system entanglement entropy \(E(T,S)\) as a function of \(S(A)\) for different times. _Right panel:_ Quadratic time-system entanglement entropy \(E_{2}(T,S)\) as a function of \(S_{2}(A)\) for different times. The state vector at time \(t\) is \[\left|\psi(t)\right\rangle=\alpha\left|00\right\rangle+\beta e^{-i\theta_{t}} \left|11\right\rangle, \tag{17}\] where \(\theta_{t}\equiv 2\epsilon t/\hbar\) for brevity. Note that \(\left|\psi(t^{\prime})\right\rangle=\alpha\left|00\right\rangle+\beta e^{-i \theta_{t}\frac{t^{\prime}}{N-1}}\left|11\right\rangle\), so the state vector moves from the initial state \(\left|\psi(0)\right\rangle\) to the target state \(\left|\psi(t)\right\rangle\) in \(N-1\) steps. The reduced density operator of the system is \[\rho_{S}=\mathrm{Tr}_{T}(\ket{\Psi}\bra{\Psi})=|\alpha|^{2}\ket{00}\bra{00}+\gamma(t)\ket{00}\bra{11}+\gamma^{*}(t)\ket{11}\bra{00}+|\beta|^{2}\ket{11}\bra{11}, \tag{18}\] where \[\gamma(t)\equiv\alpha\beta^{*}\left\langle T_{1}|T_{0}\right\rangle, \tag{19}\] \[\left|T_{0}\right\rangle\equiv\frac{1}{\sqrt{N}}\sum_{t^{\prime}=0}^{N-1} \left|t^{\prime}\right\rangle, \tag{20}\] \[\left|T_{1}\right\rangle\equiv\frac{1}{\sqrt{N}}\sum_{t^{\prime}=0}^{N-1}e^{- i\theta_{t}\frac{t^{\prime}}{N-1}}\left|t^{\prime}\right\rangle. \tag{21}\] The nonzero eigenvalues of the reduced density matrix \(\rho_{S}\) are \[p_{\pm}=\frac{1\pm\sqrt{1-4(|\alpha|^{2}(1-|\alpha|^{2})-|\gamma(t)|^{2})}}{2}, \tag{22}\] where \[|\gamma(t)|^{2}=\frac{|\alpha|^{2}(1-|\alpha|^{2})}{N^{2}}\Bigg{|}\sum_{t^{ \prime}=0}^{N-1}e^{i\theta_{t}\frac{t^{\prime}}{(N-1)}}\Bigg{|}^{2}=\frac{| \alpha|^{2}(1-|\alpha|^{2})}{N^{2}}\frac{\cos\left(\frac{N\theta_{t}}{N-1} \right)-1}{\cos\left(\frac{\theta_{t}}{N-1}\right)-1}. \tag{23}\] In the continuous limit, this reduces to \[\lim_{N\rightarrow\infty}|\gamma(t)|^{2}=2|\alpha|^{2}(1-|\alpha|^{2})\frac{ 1-\cos\theta_{t}}{\theta_{t}^{2}}. \tag{24}\] The entanglement measures can then be calculated as usual by taking the continuous limit \(N\rightarrow\infty\): \[E(T,S)=\lim_{N\rightarrow\infty}\left(-\sum_{k=\pm}p_{k}\log_{2}p_{k}\right), \tag{25}\] \[E_{2}(T,S)=\lim_{N\rightarrow\infty}\left(4p_{+}p_{-}\right). \tag{26}\] Using Eqs. 25 and 5, we can plot \(E(T,S)\) as a function of \(S(A)\). Similarly, using Eqs. 26 and 6, we can plot \(E_{2}(T,S)\) as a function of \(S_{2}(A)\). These plots are shown in Fig. 3. From Fig. 3, we have two remarks: * The two remarks written at the end of Sec. II.2 continue to hold. Namely, increasing internal entanglement speeds up the evolution and makes the system more entangled with time. The scale of \(E_{2}(T,S)\) is slightly less than that of \(E(T,S)\). * In the continuous limit, we see that the time-system entanglement entropies are smaller than those of the single qubit clock case. In other words, if the state vector has to travel through more intermediate steps between the initial and final states, it becomes less entangled with time [5]. ## III Evolution with quantum time: two interacting qubits We now consider the case where entanglement between two qubits is generated by interaction between them. 
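Before working through the interacting case, here is a minimal numerical cross-check (ours, with illustrative parameter values) of the non-interacting results above: the single-qubit-clock entropies of Eqs. (12)-(14) and their continuous-limit counterparts obtained from Eqs. (22)-(26) with the closed form (24).

```python
import numpy as np

def entropies(p_plus, p_minus):
    """E(T,S) and E_2(T,S) from the two nonzero eigenvalues of rho_S."""
    E = -sum(p * np.log2(p) for p in (p_plus, p_minus) if p > 0)
    return E, 4 * p_plus * p_minus

def clock_entropies(alpha2, theta):
    """Single qubit clock, Eqs. (12)-(14); theta = 2*eps*t/hbar and the overlap is Eq. (8)."""
    overlap = np.sqrt(1 + 2 * alpha2 * (1 - alpha2) * (np.cos(theta) - 1))
    return entropies((1 + overlap) / 2, (1 - overlap) / 2)

def continuous_entropies(alpha2, theta):
    """Continuous quantum time, Eqs. (22)-(26), using the closed form (24) for |gamma|^2."""
    gamma2 = (alpha2 * (1 - alpha2) if theta == 0
              else 2 * alpha2 * (1 - alpha2) * (1 - np.cos(theta)) / theta ** 2)
    disc = np.sqrt(1 - 4 * (alpha2 * (1 - alpha2) - gamma2))
    return entropies((1 + disc) / 2, (1 - disc) / 2)

# At the time t_* of Eq. (15), theta = pi: for a maximally entangled system the qubit clock
# gives E = 1, while the continuous clock gives the smaller value E ~ 0.68.
print(clock_entropies(0.5, np.pi))
print(continuous_entropies(0.5, np.pi))
```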
We focus on computing the time-system entanglement entropy in the continuous limit. The Hamiltonian is given by \[H_{total}=H_{A}\otimes I_{B}+I_{A}\otimes H_{B}+H_{int}, \tag{27}\] where the local Hamiltonians \(H_{A}\) and \(H_{B}\) were defined in Eqs. 2 and 3, and the interacting Hamiltonian is \[H_{int}=-\lambda(\ket{1}\bra{0}\otimes\ket{1}\bra{0}+\ket{0}\bra{1}\otimes \ket{0}\bra{1}), \tag{28}\] where \(\lambda\) is a real coupling constant that has dimensions of energy. This interacting Hamiltonian is capable of generating entanglement between two qubits. One can also add a constant term to the total Hamiltonian but that will only introduce an irrelevant overall phase factor. Figure 3: **Continuous quantum time.** _Left panel:_ Time-system entanglement entropy \(E(T,S)\) as a function of \(S(A)\) for different times. _Right panel:_ Time-system quadratic entanglement entropy \(E_{2}(T,S)\) as a function of \(S_{2}(A)\) for different times. The initial state is a factorized state: \[\ket{\psi(0)}=\ket{00}. \tag{29}\] The state vector at time \(t\) is given by the formal solution to the Schrödinger equation: \[\ket{\psi(t)} =e^{-i\int H_{tot}dt/\hbar}\ket{00} \tag{30}\] \[=\cos\frac{\lambda t}{\hbar}\ket{00}+i\sin\frac{\lambda t}{\hbar }\ket{11} \tag{31}\] \[\equiv\cos\phi_{t}\ket{00}+i\sin\phi_{t}\ket{11}, \tag{32}\] where we used the fact that \(H_{A}\ket{0}_{A}=H_{B}\ket{0}_{B}=0\) and \(H_{int}\ket{00}=-\lambda\ket{11}\) and \(H_{int}\ket{11}=-\lambda\ket{00}\). We also defined \(\phi_{t}\equiv\lambda t/\hbar\) in the last line for brevity. The global state is \[\ket{\Psi}=\frac{1}{\sqrt{N}}\sum_{t^{\prime}=0}^{N-1}\ket{t^{\prime}}\otimes \ket{\psi(t^{\prime})}. \tag{33}\] Here \(\ket{\psi(t^{\prime})}=\cos\left(\phi_{t}\frac{t^{\prime}}{N-1}\right)\ket{00 }+i\sin\left(\phi_{t}\frac{t^{\prime}}{N-1}\right)\ket{11}\), so the state vector moves from the initial state \(\ket{\psi(0)}\) to the target state \(\ket{\psi(t)}\) in \(N-1\) steps. The reduced density operator of the system is \[\rho_{S}=\text{Tr}_{T}(\ket{\Psi}\bra{\Psi})=a(t)\ket{00} \bra{00}+c(t)\ket{00}\bra{11}+c^{*}(t)\ket{11}\bra{00}+b(t)\ket{11}\bra{11}, \tag{34}\] where \[a(t)\equiv\frac{1}{N}\sum_{t^{\prime}=0}^{N-1}\cos^{2}\left(\phi_{t}\frac{t^{ \prime}}{N-1}\right), \tag{35}\] \[b(t)\equiv\frac{1}{N}\sum_{t^{\prime}=0}^{N-1}\sin^{2}\left(\phi_{t}\frac{t^{ \prime}}{N-1}\right), \tag{36}\] \[c(t)\equiv-\frac{i}{2N}\sum_{t^{\prime}=0}^{N-1}\sin\left(2\phi_{t}\frac{t^{ \prime}}{N-1}\right). \tag{37}\] The nonzero eigenvalues of the system density operator \(\rho_{S}\) are \[p_{\pm}=\pm\frac{1}{4N}\csc^{2}\left(\frac{\phi_{t}}{N-1}\right)\left[\pm N \left(1-\cos\left(\frac{2\phi_{t}}{N-1}\right)\right)+2\sqrt{\sin^{2}\left( \frac{\phi_{t}}{N-1}\right)\sin^{2}\left(\frac{N\phi_{t}}{N-1}\right)}\right]. \tag{38}\] In the continuous limit, this reduces to \[\lim_{N\rightarrow\infty}p_{\pm}=\frac{1}{2}\left(1\pm\frac{\sin\phi_{t}}{\phi_{t }}\right). \tag{39}\] The time-system entanglement entropy is then \[E(T,S)=\lim_{N\rightarrow\infty}\left(-\sum_{k=\pm}p_{k}\log_{2}p_{k}\right). \tag{40}\] The fidelity between the initial and final states is \[\Delta\psi=\left|\left\langle\psi(t)|\psi(0)\right\rangle\right|=\cos\phi_{t}, \tag{41}\] and thus \[\phi_{t}=\arccos\Delta\psi. \tag{42}\] The minimum time required for the initial state to evolve to the orthogonal final state is \[t_{*}^{I}=\frac{\pi\hbar}{2\lambda}. \tag{43}\] Comparing Eq. 43 with Eq. 
15, we see that the speed of evolution of two interacting qubits is greater than that of two non-interacting qubits if the interaction is sufficiently strong compared to the energy scale of the individual subsystems: \(\lambda>\epsilon\). Using Eqs. 40 and 42, we can plot \(E(T,S)\) as a function of fidelity in Fig. 4. The interacting case is represented by the dashed black line. From Fig. 4, we see that \(E(T,S)\) increases when fidelity decreases. That is because the more the system evolves, the more it becomes entangled with time. When the state vector reaches its maximum distance at \(\Delta\psi=0\) (i.e., it evolves to an orthogonal state), the corresponding entropy it can acquire is \(E_{\perp}(T,S)\approx 0.68\). Also in Fig. 4, we plotted two solid colored lines to represent the case of two non-interacting qubits by using Eqs. 8 and 24. These lines are cut at the maximum distances that the corresponding state vectors can travel. Among the non-interacting cases, we see that increasing internal entanglement speeds up the evolution, since the state vector can travel further in a fixed time interval, and makes the system more entangled with time. If two non-interacting qubits are maximally entangled, their line coincides with the interacting line, though it should be noted that the speed of evolution in the two cases, in general, is different (see Eqs. 15 and 43). Figure 4: Time-system entanglement entropy E(T,S) as a function of fidelity \(\Delta\psi\). _Blue line:_ two non-interacting qubits with \(S(A)\approx 0.72\) corresponding to \(|\alpha|^{2}=1/5\) or \(|\alpha|^{2}=4/5\). _Red line:_ two non-interacting qubits with \(S(A)\approx 0.92\) corresponding to \(|\alpha|^{2}=1/3\) or \(|\alpha|^{2}=2/3\). _Dashed black line:_ two interacting qubits with time-dependent internal entanglement. The line of two maximally entangled but non-interacting qubits (\(S(A)=1\)) coincides with the interacting line, though it should be noted that the speed of evolution in the two cases is different (see Eqs. 15 and 43). ## IV Conclusions In this paper, we studied the entanglement between quantum time and a quantum system containing two entangled qubits. We considered the case of two initially entangled qubits evolving under local dynamics, as well as the case of two interacting qubits such that their internal entanglement is generated over time. In the first case, we obtained the main result that increasing internal entanglement speeds up the evolution and makes the system more entangled with time. In the second case, we showed the dependence of time-system entanglement entropy on the distance of evolution, which is characterized by the fidelity. We compared the two cases with each other and found that two interacting qubits can evolve faster than two non-interacting qubits if the interaction is sufficiently strong, and thus they become entangled with time more efficiently. Our results could be useful to gain new insights of quantum time into black hole evaporation or cosmological perturbations in an expanding Universe, since we also have an evolving entangled bipartite system in those cases. For a black hole, the total Hilbert space is decomposed on a spatial slice as \(\mathcal{H}_{tot}=\mathcal{H}_{in}\otimes\mathcal{H}_{out}\), where \(\mathcal{H}_{in}\) contains infalling degrees of freedom and \(\mathcal{H}_{out}\) contains outgoing Hawking quanta. 
For cosmological perturbations, the Hilbert space can be decomposed in momentum space as \(\mathcal{H}_{tot}=\mathcal{H}_{k>aH}\otimes\mathcal{H}_{k<aH}\), where \(\mathcal{H}_{k>aH}\) contains subhorizon modes and \(\mathcal{H}_{k<aH}\) contains superhorizon modes. In both cases, the system can be an evolving pure state that contains two entangled subsystems. It is therefore interesting to see how our idea can be applied to those cases and if quantum time can offer new insights into the information paradox in each case [8; 9]. We plan to investigate further along these lines.
2302.12853
Toric 2-group anomalies via cobordism
2-group symmetries arise in physics when a 0-form symmetry $G^{[0]}$ and a 1-form symmetry $H^{[1]}$ intertwine, forming a generalised group-like structure. Specialising to the case where both $G^{[0]}$ and $H^{[1]}$ are compact, connected, abelian groups (i.e. tori), we analyse anomalies in such `toric 2-group symmetries' using the cobordism classification. As a warm up example, we use cobordism to study various 't Hooft anomalies (and the phases to which they are dual) in Maxwell theory defined on non-spin manifolds. For our main example, we compute the 5th spin bordism group of $B|\mathbb{G}|$ where $\mathbb{G}$ is any 2-group whose 0-form and 1-form symmetry parts are both $\mathrm{U}(1)$, and $|\mathbb{G}|$ is the geometric realisation of the nerve of the 2-group $\mathbb{G}$. By leveraging a variety of algebraic methods, we show that $\Omega^{\mathrm{Spin}}_5(B|\mathbb{G}|) \cong \mathbb{Z}/m$ where $m$ is the modulus of the Postnikov class for $\mathbb{G}$, and we reproduce the expected physics result for anomalies in 2-group symmetries that appear in 4d QED. Moving down two dimensions, we recap that any (anomalous) $\mathrm{U}(1)$ global symmetry in 2d can be enhanced to a toric 2-group symmetry, before showing that its associated local anomaly reduces to at most an order 2 anomaly, when the theory is defined with a spin structure.
Joe Davighi, Nakarin Lohitsiri
2023-02-24T19:02:37Z
http://arxiv.org/abs/2302.12853v2
# Toric 2-group anomalies via cobordism ###### Abstract \(2\)-group symmetries arise in physics when a \(0\)-form symmetry \(G^{[0]}\) and a \(1\)-form symmetry \(H^{[1]}\) intertwine, forming a generalised group-like structure. Specialising to the case where both \(G^{[0]}\) and \(H^{[1]}\) are compact, connected, abelian groups (_i.e._ tori), we analyse anomalies in such 'toric \(2\)-group symmetries' using the cobordism classification. As a warm up example, we use cobordism to study various 't Hooft anomalies (and the phases to which they are dual) in Maxwell theory defined on non-spin manifolds. For our main example, we compute the \(5\)th spin bordism group of \(B|\mathbb{G}|\) where \(\mathbb{G}\) is any \(2\)-group whose \(0\)-form and \(1\)-form symmetry parts are both \(\mathrm{U}(1)\), and \(|\mathbb{G}|\) is the geometric realisation of the nerve of the \(2\)-group \(\mathbb{G}\). By leveraging a variety of algebraic methods, we show that \(\Omega_{5}^{\mathrm{Spin}}(B|\mathbb{G}|)\cong\mathbb{Z}/m\) where \(m\) is the modulus of the Postnikov class for \(\mathbb{G}\), and we reproduce the expected physics result for anomalies in \(2\)-group symmetries that appear in \(4\)d QED. Moving down two dimensions, we recap that any (anomalous) \(\mathrm{U}(1)\) global symmetry in \(2\)d can be enhanced to a toric \(2\)-group symmetry, before showing that its associated local anomaly reduces to at most an order \(2\) anomaly, when the theory is defined with a spin structure. ## 1 Introduction * 2 Review: 2-groups and their classifying spaces * 2.1 What is a 2-group? * 2.2 2-group symmetries in quantum field theory * 3 Cobordism with 2-group structure * 3.1 Background fields and their classifying spaces * 3.1.1 Elementary examples of \(B|\mathbb{G}|\) * 3.1.2 General \(B|\mathbb{G}|\) as a fibration * 3.2 Cobordism description of 2-group anomalies * 4 Maxwell revisited * 4.1 Cobordism classification of 1-form anomalies * 4.2 Phases of non-spin Maxwell theory * 4.3 Scalar QED and 1-form anomaly interplay * 5 QED anomalies revisited * 5.1 From the physics perspective * 5.2 From the bordism perspective * 5.2.1 Zero Postnikov class * 5.2.2 Even Postnikov class * 5.2.3 Odd Postnikov class * 5.2.4 The free part * 6 Abelian 2-group enhancement in two dimensions * 6.1 'Spin structure anomalies' in two dimensions * 6.2 Non-spin generalisation * 7 Conclusion and outlook * A Additional cohomology computations * A.1 Cohomology groups of Eilenberg-Maclane spaces * A.2 Mod 2 cohomology ring of the classifying space of an abelian 2-group B Additional bordism calculationsB.1 MaxwellB.2 Scalar QED with charge-2 bosonC Computation of a key differential ## 1 Introduction In quantum field theory, anomalies are loosely defined to be quantum obstructions to symmetries. More precisely, anomalies can themselves be identified with (special classes of) quantum field theories in one dimension higher than the original theory, via the idea of 'anomaly inflow'.1 This modern viewpoint led to an algebraic classification of anomalies via cobordism, which was made rigorous by Freed and Hopkins [3] following many important works (including [4; 5; 6; 7; 8]). The cobordism classification includes all known anomalies afflicting chiral symmetries of massless fermions in any number of dimensions, as well as other more subtle anomalies that involve discrete spacetime symmetries (see _e.g._[9; 10; 11]). 
Footnote 1: Mathematically, any anomalous theory in \(d\) dimensions can be described by a relative quantum field theory [1] between an extended field theory in one dimension higher (the ‘anomaly theory’), and the trivial extended field theory (see _e.g._[2]). The cobordism group that classifies anomalies, for \(d\) spacetime dimensions and symmetry type \(\mathcal{S}\),2 is the Anderson dual of bordism [3], Footnote 2: A given symmetry _type_\(\mathcal{S}\), as defined precisely in [3], includes both the spacetime symmetry and an internal symmetry group, as we as maps between them, extended to all dimensions. \[H^{d+2}_{I\mathbb{Z}}\left(MTS\right):=[MTS,\Sigma^{d+2}I\mathbb{Z}], \tag{1}\] where \(MTS\) denotes the Madsen-Tillman spectrum associated to symmetry type \(\mathcal{S}\), \(I\mathbb{Z}\) denotes the Anderson dual of the sphere spectrum (with \(\Sigma^{d+2}\) denoting the \(d+2\)-fold suspension), and \([X,Y]\) denotes the homotopy classes of maps between a pair of spectra \(X\) and \(Y\). In fact, this cobordism classification goes beyond what we would normally think of as 'anomalies' to cover all reflection positive invertible field theories (with or without fermions) with symmetry \(\mathcal{S}\) in \(d+1\)-dimensions. To unpack the meaning and significance of the cobordism group \(H^{d+2}_{I\mathbb{Z}}\left(MTS\right)\), it is helpful to recall that it fits inside a defining short exact sequence [3] \[\mathrm{Ext}^{1}_{\mathbb{Z}}\left(\pi_{d+1}(MTS),\mathbb{Z}\right)\hookrightarrow H ^{d+2}_{I\mathbb{Z}}\left(MTS\right)\twoheadrightarrow\mathrm{Hom}\left(\pi_{ d+2}(MTS),\mathbb{Z}\right), \tag{2}\] where \(\pi_{k}(X)\) is the \(k^{\rm th}\) stable homotopy group of a spectrum \(X\). For the special case of chiral fermion anomalies in \(d\) dimensions, the invertible field theory in question is precisely the exponentiated \(\eta\)-invariant [8] of Atiyah, Patodi and Singer [12; 13; 14] in dimension \(d+1\). In that case, integrating the gauge-invariant _anomaly polynomial_\(\Phi_{d+2}\) provides an element of the right factor \({\rm Hom}\,(\pi_{d+2}(MTS),\mathbb{Z})\). Theories with \(\Phi_{d+2}=0\), which are in the kernel of the right map and thus in the image of the left map, correspond to residual 'global anomalies' which are thus captured by the left-factor. For theories with symmetry type \(\mathcal{S}={\rm Spin}\times G\), where \(G\) is the internal symmetry group, we have that \({\rm Ext}^{1}_{\mathbb{Z}}\,(\pi_{d+1}(MT\mathcal{S}),\mathbb{Z})\cong{\rm Hom }\left({\rm Tor}\,\Omega^{\rm Spin}_{d+1}(BG),\mathbb{R}/\mathbb{Z}\right)\) classifies global anomalies, where \(\Omega^{\rm Spin}_{\bullet}\) denotes spin bordism. A straightforward corollary is that if the group \({\rm Tor}\,\Omega^{\rm Spin}_{d+1}(BG)\) vanishes there can be no global anomalies, which has been applied to various particle physics applications in recent years [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. Concurrent with this development of the cobordism classification of anomalies, in the past decade the notion of'symmetry' has been generalised beyond the action of groups on local operators, in various exciting directions (for recent reviews, see Ref. [33; 34]). This includes the notion of \(p\)_-form_ symmetries [35], which act not on local operators (the case \(p=0\)) but more generally on extended operators of dimension \(p\). These symmetries couple to background fields that are \((p+1)\)-form gauge fields. 
Such higher-form symmetries are found in a plethora of quantum field theories. The most well-known examples include a pair of U(1) 1-form symmetries in 4d Maxwell theory, and a \(\mathbb{Z}/N\) 1-form symmetry in 4d SU(\(N\)) Yang-Mills theory; the latter exhibits a mixed anomaly with charge-parity symmetry when the SU(\(N\)) \(\theta\)-angle equals \(\pi\), which led to new insights regarding the vacua of QCD [36]. Other important classes of generalised symmetry include _non-invertible_ and _categorical_ symmetries, as well as _subsystem_ symmetries. These shall play no role in the present paper. Soon after the notion of \(p\)-form symmetries was formalised in [35], it was realised that higher-form symmetries of different degrees can'mix', leading to the notion of \(p\)_-group_ symmetry. The most down-to-earth example of this symmetry structure, which in general is described using higher category theory (see _e.g._[37]), is the notion of a _2-group_ symmetry whereby a 1-form symmetry \(H^{[1]}\) and a 0-form symmetry \(G^{[0]}\) combine non-trivially. This occurs, for example [38; 39], when a bunch of Wilson lines (which are charged under a discrete 1-form centre symmetry) can be screened by a dynamical fermion (which is charged under a 0-form flavour symmetry). 2-group symmetries were first observed, in their gauged form, in string theory in the context of the Green-Schwarz mechanism, in which a U(1) 1-form symmetry combines non-trivially with a diffeomorphism [40; 41; 42; 43]. Their appearance as global symmetries in field theory was appreciated first by Sharpe [44], before many new examples were identified in Refs. [45; 46]. Further instances of 2-group symmetry have since been discovered and analysed in 4d [38; 47; 48; 49; 49], 5d [46; 48; 50; 51; 52], and 6d [53; 54; 55] quantum field theories. A 2-group symmetry structure has recently been studied in the Standard Model [56], in the limit of zero Yukawa couplings. In this paper we study how the cobordism classification of anomalies can be applied to 2-group symmetries. As a first step, we here specialise to 2-groups for which both \(G^{[0]}\) and \(H^{[1]}\) are compact, connected, abelian groups _ergo_ tori \(\cong\) U(1)\({}^{n}\); we refer to such a structure, at times, as a 'toric 2-group'. In this case, there are 1-form and 2-form Noether currents \(j_{G}^{(1)}\) and \(j_{H}^{(2)}\) associated with the 0-form and 1-form symmetries respectively, and the non-trivial 2-group'mixing' between these symmetries is reflected in a fusion rule for these currents, schematically \[j_{G}^{(1)}(x)j_{G}^{(1)}(0)\sim K(x)j_{H}^{(2)}(0)+\dots, \tag{3}\] for a known (singular) function \(K(x)\). Toric 2-group symmetries appear, for example, in chiral U(1) gauge theories in 4d when there are mixed 'operator-valued' anomalies between global chiral symmetry currents and the gauge current [45], as well as in 3d [57] and in hydrodynamics [58]. Toric 2-group symmetries thus naturally appear in fermionic systems, for which anomalies can be more fruitfully analysed using cobordism (compared to purely bosonic anomalies). For such a toric 2-group \(\mathbb{G}\), we study the bordism theory of manifolds equipped with 'tangential \(\mathbb{G}\)-structure'. This can be defined analogously to a tangential \(\mathcal{S}\)-structure for ordinary symmetry type \(\mathcal{S}\), given now the particular 2-group \(\mathbb{G}\) plus an appropriate form of spin structure. 
When the 2-group symmetry does not further mix with the spacetime symmetry, this reduces to studying the spin bordism groups of the topological space that classifies \(\mathbb{G}\)-2-bundles. Thankfully much is known about the classifying space of \(\mathbb{G}\)-2-bundles that will be of use to us, given the physics applications we have in mind. Quite generally, the classifying space of a 2-group \(\mathbb{G}\) can be computed as \(B|\mathbb{G}|\), where \(|\mathbb{G}|\) is the geometric realization of the nerve of the 2-group \(\mathbb{G}\) viewed as a category [59; 60], which is itself a topological _1_-group -- albeit often an infinite-dimensional one. This means that Freed and Hopkins' classification of invertible field theories with fixed symmetry type can be applied directly, taking the tangential structure to be \(\text{Spin}\times|\mathbb{G}|\). In the special case of a pure 1-form symmetry valued in an abelian group \(H^{[1]}=A\), this formula for the classifying space of the corresponding 2-group reduces to \(B(BA)\), recovering the well-known result for classifying abelian gerbes [61]. This special case, which is of significant physical interest, has already been used in the physics literature to study 1-form anomalies, for many examples with variously Spin, SO, and O structures in Ref. [18]. An extension of this to a pure 2-form symmetry and its anomalies in 6d gauge theories was studied via cobordism in Ref. [24]. More generally for any 2-group \(\mathbb{G}\), the classifying space \(B|\mathbb{G}|\) always sits in a 'defining fibration' \[B\left(BH^{[1]}\right)\to B|\mathbb{G}|\to BG^{[0]}\,, \tag{4}\] which allows one to leverage the usual spectral sequence methods to compute _e.g._ the cohomology and/or the (co)bordism groups of \(B|\mathbb{G}|\). For 'non-trivial' 2-groups, _i.e._ those for which the fibration (4) is non-trivial, there is to our knowledge only a scattering of calculations in the literature; in particular, the case \(B(B\mathbb{Z}/2^{[1]})\to B|\mathbb{G}|\to B\mathrm{O}^{[0]}\) was studied by Wan and Wang [18] (Section 4), and \(B(B\mathrm{U}(1)^{[1]})\to B|\mathbb{G}|\to B\mathrm{SU}(2)^{[0]}\) was studied by Lee and Tachikawa [24] (Appendix B.6). In this paper we extend these works with a dedicated cobordism analysis of theories with 2-group symmetry and their anomalies. The content of this paper is as follows. After reviewing 2-groups and their appearances in quantum field theory in SS2, we then describe the topological spaces \(B|\mathbb{G}|\) that classify 2-group background fields, and show how these spaces can be computed, in SS3. By considering appropriate tangential structures for 2-group symmetries, this then leads to a definition of the cobordism groups relevant for describing 2-group symmetries and their anomalies (SS3.2). We then apply this cobordism theory to study a small selection of examples in detail, focussing on toric 2-group symmetries as an especially tractable case. The main examples that we study in this paper are as follows. In SS4, which is a warm up example, we study anomalies involving the pair of 1-form symmetries in 4d Maxwell theory, defined on non-spin manifolds. As well as the \(\mathbb{Z}\)-valued 'local' mixed anomaly between the two 1-form symmetries [35], there is a trio of \(\mathbb{Z}/2\)-valued global anomalies (involving gravity) that only appear on non-spin manifolds. In SS5 we turn to our main example of toric 2-group symmetries, with non-trivial Postnikov class, occurring in 4d abelian chiral theories. 
We see, using cobordism, how turning on the Postnikov class transmutes a \(\mathbb{Z}\)-valued local anomaly into a torsion-valued global anomaly a la the Green-Schwarz mechanism - we precisely recover the physics results derived by Cordova, Dumitrescu and Intriligator in Ref [45]. Finally, in SS6 we drop down to two dimensions, where anomalous 0-form symmetries can be recast as 2-group symmetries by using the trivially conserved 1-form symmetry whose 2-form current is simply the volume form [44]. Our cobordism analysis highlights the prominent role played by spin structures; when the \(\mathrm{U}(1)^{[0]}\) anomaly coefficient is odd, the Green-Schwarz mechanism cannot in fact absorb the anomaly completely but leaves a \(\mathbb{Z}/2\)-valued anomaly, which we identify as an anomaly in the spin structure; we also show how this \(\mathbb{Z}/2\)-anomaly is trivialised upon passing to \(\text{Spin}_{c}\) bordism.3 Footnote 3: These examples emphasize that bordism computations are useful tools for keeping track of such subtle mod 2 effects, by automatically encoding various characteristic classes’ normalisation that can differ in spin _vs._ non-spin theories – for example, the class \(c_{1}^{2}\) is even on a spin manifold but needn’t be on a non-spin one. ## 2 Review: 2-groups and their classifying spaces In this Section we recall the notion of a 2-group \(\mathbb{G}\) (SS2.1), and briefly reprise their appearance as generalised symmetry structures in quantum field theory (SS2.2). ### What is a 2-group? We begin by sketching some equivalent definitions of a 2-group, for which our main reference is Ref. [60] by Baez and Stevenson. First and most concise is probably the definition of a 2-group \(\mathbb{G}\) as a higher category. Loosely, a 2-group \(\mathbb{G}\) is in this language a 2-category that contains a single object, with all 1-morphisms and 2-morphisms being invertible. (Slightly) less abstractly, a 2-group can also be described using 'ordinary' category theory, without recourse to higher categories, and this can be done in two ways; either as a groupoid in the category of groups, or as a group in the category of groupoids.4 Expanding on the former viewpoint, a 2-group is itself a category whose objects form a group \(\tilde{G}^{[0]}\), and whose morphisms also form a group, where the latter can be written as a semi-direct product between \(\tilde{G}^{[0]}\) and another group \(\tilde{H}^{[1]}\), _viz._ Footnote 4: It will be implicitly assumed that all groups and groupoids discussed carry a topology. \[\text{Obj}(\mathbb{G})=\tilde{G}^{[0]},\qquad\text{Mor}(\mathbb{G})=\tilde{H} ^{[1]}\rtimes\tilde{G}^{[0]}, \tag{1}\] such that all the various maps involved in the definition of a groupoid are continuous group homomorphisms (see SS3 of [60]). We will usually be interested in situations where both \(\tilde{G}^{[0]}\) and \(\tilde{H}^{[1]}\) are Lie groups. This definition involving a pair of groups \((\tilde{G}^{[0]},\tilde{H}^{[1]})\) is closely related to yet another, somewhat more practical, definition of a 2-group as a topological crossed module. 
The data specifying a topological crossed module consists of a quadruplet

\[\mathbb{G}=(\tilde{G}^{[0]},\tilde{H}^{[1]},t,\alpha), \tag{2}\]

where \(\tilde{G}^{[0]}\) and \(\tilde{H}^{[1]}\) can be identified with the groups that appear in (1), and \(t:\tilde{H}^{[1]}\to\tilde{G}^{[0]}\), \(\alpha:\tilde{G}^{[0]}\to\text{Aut}(\tilde{H}^{[1]})\) are continuous homomorphisms such that the pair of conditions

\[t\left(\alpha(g)(h)\right)=gt(h)g^{-1},\qquad\alpha(t(h))(h^{\prime})=hh^{\prime}h^{-1} \tag{3}\]

hold \(\forall g\in\tilde{G}^{[0]},\;h,\,h^{\prime}\in\tilde{H}^{[1]}\), which can be thought of as equivariance conditions on the maps. To define a 2-group from this data, one takes the group of objects \(\mathrm{Obj}(\mathbb{G})=\tilde{G}^{[0]}\) and the group of morphisms \(\mathrm{Mor}(\mathbb{G})=\tilde{H}^{[1]}\rtimes\tilde{G}^{[0]}\), where the semi-direct product is defined using the map \(\alpha\) as \((h,g)\cdot(h^{\prime},g^{\prime})=(h\alpha(g)(h^{\prime}),gg^{\prime})\), and the various source, target, and composition maps from \(\mathrm{Mor}(\mathbb{G})\), as well as identity maps from \(\mathrm{Obj}(\mathbb{G})\), are defined using simple formulae (for which we refer the interested reader to §3 of [60]). We can tie together this circle of definitions by linking this final crossed module definition of \(\mathbb{G}\) to the first definition of \(\mathbb{G}\) as a 2-category; in that picture, \(\tilde{G}^{[0]}\) is the group of 1-morphisms, and \(\tilde{H}^{[1]}\) is the group of 2-morphisms from the identity 1-morphism to all other 1-morphisms [62]. When \(\tilde{G}^{[0]}\) and \(\tilde{H}^{[1]}\) are both Lie, and when the maps \(t\) and \(\alpha\) are both smooth, \(\mathbb{G}\) is a Lie 2-group.

The final description of a 2-group \(\mathbb{G}\) that we just gave, as a topological crossed module, is still of somewhat limited use in physics because the groups \(\tilde{G}^{[0]}\) and \(\tilde{H}^{[1]}\) are not preserved under equivalence of 2-groups as 2-categories. Moreover, a physical system with a 2-group symmetry (possibly with associated background fields) is only sensitive to the equivalence class of the 2-group at long distance, not the 2-group itself, as described by Kapustin and Thorngren [62]. It is therefore more convenient to work with equivalence classes of 2-groups from the beginning, which we describe next. An equivalence class of 2-groups, with a particular 2-group \(\mathbb{G}=(\tilde{G}^{[0]},\tilde{H}^{[1]},t,\alpha)\) above as a representative, can also be described by a quadruplet \((G^{[0]},H^{[1]},\alpha,\beta)\) where [62]

\[G^{[0]}:=\mathrm{coker}\,t,\qquad H^{[1]}:=\ker t. \tag{4}\]

The homomorphism \(\alpha:G^{[0]}\to\mathrm{Aut}(H^{[1]})\) is so named because it descends from the homomorphism \(\alpha:\tilde{G}^{[0]}\to\mathrm{Aut}(\tilde{H}^{[1]})\) appearing in the crossed module (2). The new component is the so-called _Postnikov class_ \[\beta\in H^{3}(BG^{[0]};H^{[1]}).
\tag{5}\] More precisely, two crossed modules \(\mathbb{G}=(\tilde{G}^{[0]},\tilde{H}^{[1]},t,\alpha)\) and \(\mathbb{G}^{\prime}=(\tilde{G}^{\prime[0]},\tilde{H}^{\prime[1]},t^{\prime}, \alpha^{\prime})\) are said to be equivalent when \(\tilde{G}^{\prime[0]}\simeq\tilde{G}^{[0]}\), \(\tilde{H}^{\prime[1]}\simeq\tilde{H}^{[1]}\) and there are homomorphisms (not necessarily isomorphisms) \(h:\tilde{H}^{[1]}\to\tilde{H}^{\prime[1]}\) and \(g:\tilde{G}^{[0]}\to\tilde{G}^{\prime[0]}\) that makes the diagram \[\begin{CD}1@>{}>{}>H^{[1]}@>{}>{}>\tilde{H}^{[1]}@>{t}>{}>\tilde{G}^{[0]}@>{}>{ }>G^{[0]}@>{}>{}>1\\ @V{}V{h}V@V{}V{g}V{}V@V{}V{}V\\ 1@>{}>{}>H^{\prime[1]}@>{}>{}>\tilde{H}^{\prime[1]}@>{t^{\prime}}>{}>\tilde{G}^{ \prime[0]}@>{}>{}>G^{\prime[0]}@>{}>{}>1\end{CD} \tag{6}\] of two exact sequences commutative and compatible with the actions of \(\tilde{G}^{[0]}\) on \(\tilde{H}^{[1]}\) and \(\tilde{G}^{\prime[0]}\) on \(\tilde{H}^{\prime[1]}\). These equivalence classes are classified by the group cohomology \(\mathcal{H}^{3}(G^{[0]},H^{[1]})\)[63], which is isomorphic to the ordinary cohomology \(H^{3}(BG^{[0]};H^{[1]})\). For completeness, we also define the notion of _2-group homomorphisms_, in terms of ordinary group homomorphisms between different elements of the associated crossed modules [60]. Represent two 2-groups \(\mathbb{G}\) and \(\mathbb{G}^{\prime}\) by crossed modules \((\tilde{G}^{[0]},\tilde{H}^{[1]},t,\alpha)\) and \((\tilde{G}^{\prime[0]},\tilde{H}^{\prime[1]},t^{\prime},\alpha^{\prime})\). A 2-group homomorphism \(f:\mathbb{G}\to\mathbb{G}^{\prime}\), which is a functor such that \(f:\mathrm{Obj}(\mathbb{G})\to\mathrm{Obj}(\mathbb{G}^{\prime})\) and \(f:\mathrm{Mor}(\mathbb{G})\to\mathrm{Mor}(\mathbb{G}^{\prime})\) are continuous homomorphisms of topological groups, can be represented by a pair of maps \(h:\tilde{H}^{[1]}\to\tilde{H}^{\prime[1]}\) and \(g:\tilde{G}^{[0]}\to\tilde{G}^{\prime[0]}\) such that the diagram (7) is commutative and that \(h,g\) are compatible with \(\alpha,\alpha^{\prime}\): \[h(\alpha(a)(b))=\alpha^{\prime}(g(a))h(b), \tag{8}\] for all \(a\in\tilde{G}^{[0]}\), \(b\in\tilde{H}^{[1]}\). ### 2-group symmetries in quantum field theory Physically, an equivalence class \(\mathbb{G}=(G^{[0]},H^{[1]},\alpha,\beta)\) of 2-groups describes a symmetry structure appearing in quantum field theories which have both a 0-form symmetry group \(G^{[0]}\) and a 1-form symmetry group \(H^{[1]}\). When the two symmetries are not independent, the Postnikov class \(\beta\) of the corresponding 2-group symmetry is non-trivial, and the 0-form and 1-form parts cannot be analysed in isolation. While we will eventually specialise to the case of toric 2-groups in the bulk of this paper, we first recap how 2-groups appear in field theory more broadly. It is convenient to distinguish two broad categories of 2-groups which in general have different physical origin: 1. A _continuous 2-group_\(\mathbb{G}\) is one with a continuous 1-form symmetry group \(H^{[1]}\). This class of 2-groups arises in field theory when, for example, the gauge transformation law for the 1-form symmetry background gauge field is modified so that there is no operator-valued 't Hooft anomaly involving the background gauge field for the 0-form symmetry, in an analogue of the Green-Schwarz mechanism for global symmetries [45]. The 0-form symmetry can here be abelian or non-abelian, connected or disconnected, and compact or non-compact. 
In this paper we focus our attention on the special case where both the 0-form and 1-form symmetry are connected, compact, and abelian groups, which we refer to as a '_toric 2-group_'. 2. A _discrete 2-group_\(\mathbb{G}\) is one with a discrete 1-form symmetry group \(H^{[1]}\). This class of 2-group symmetries arises in gauge theories when the gauge Wilson lines, which are charged objects under the 1-form symmetry \(H^{[1]}\), are not completely screened in the presence of the background gauge field for the 0-form symmetry group \(G^{[0]}\). The local operator \(\mathcal{O}\) that screens the gauge Wilson lines when the 0-form background gauge field is turned off is also charged under the 0-form symmetry. Hence, when the background gauge field of \(G^{[0]}\) is turned on, \(\mathcal{O}\) transmutes a gauge Wilson line into a flavour Wilson line [39; 46; 47]. ## 3 Cobordism with 2-group structure In this Section we describe the background gauge fields on associated principal \(\mathbb{G}\)-2-bundles which play an important role in field theories with 2-group symmetries. We focus on describing the topological spaces \(B|\mathbb{G}|\) that classify these background fields, and show how to calculate such classifying spaces in elementary examples, before introducing the cobordism groups that are central to this paper in SS3.2. ### Background fields and their classifying spaces A theory with a 2-group symmetry \(\mathbb{G}\) can be coupled to a background gauge field (which a physicist might wish to decompose into components of an ordinary 1-form gauge field and a 2-form gauge field),5 which is a connection on a principal \(\mathbb{G}\)-2-bundle in the sense defined in [59]. Footnote 5: A more explicit description of the background fields will be sketched in §5 where we discuss the 2-group symmetries appearing in 4d abelian chiral gauge theory. Mathematically, Bartels moreover shows that such 2-bundles over a manifold \(X\) are classified by the Cech cohomology group \(\check{H}^{1}(X,\mathbb{G})\) with coefficients valued in the 2-group \(\mathbb{G}\). Baez and Stevenson prove [60] that there is a bijection \[\check{H}^{1}(X,\mathbb{G})\cong[X,B|\mathbb{G}|], \tag{10}\] where \([A,B]\) denotes the set of homotopy classes of maps from \(A\) to \(B\), and \(|\mathbb{G}|\) is the geometric realization of the nerve \(\mathcal{N}\mathbb{G}\) of the 2-group \(\mathbb{G}\) when viewed as a groupoid as in (1). The nerve \(\mathcal{N}\mathbb{G}\) of the category \(\mathbb{G}\) with \(\mathrm{Obj}(\mathbb{G})=G\) and \(\mathrm{Mor}(\mathbb{G})=H\rtimes G\) is a set of simplices that we can construct out of these objects and morphisms - we will see some explicit examples shortly. Quantum field theories with 2-group symmetry \(\mathbb{G}\) are thus defined on spacetime manifolds \(X\) equipped with maps to \(B|\mathbb{G}|\), just as a theory with an ordinary symmetry \(G\) is defined on spacetimes equipped with maps to \(BG\). #### 3.1.1 Elementary examples of \(B|\mathbb{G}|\) Since the classifying space \(B|\mathbb{G}|\) will play a central role in what follows, we pause to better acquaint the reader with \(B|\mathbb{G}|\) and how it can be computed by looking at some simple examples. The calculations in this subsection are purely pedagogical - readers who are familiar with (or not interested in) such constructions might wish to skip ahead to SS3.1.2. 
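Before turning to these examples, a quick and purely illustrative sanity check of the crossed-module conditions of §2.1 may be helpful. The short script below (written for this discussion; it does not correspond to any code referenced in the paper) verifies the two conditions \(t(\alpha(g)(h))=g\,t(h)\,g^{-1}\) and \(\alpha(t(h))(h^{\prime})=h\,h^{\prime}\,h^{-1}\) for the toy data \(\tilde{G}=\mathbb{Z}/2\), \(\tilde{H}=\mathbb{Z}/4\), \(t=\) reduction mod 2 and trivial \(\alpha\), and then extracts the invariant data \((G^{[0]},H^{[1]})=(\mathrm{coker}\,t,\ker t)=(0,\mathbb{Z}/2)\). Since the Postnikov class is necessarily trivial here, this crossed module presents the same 2-group as the pure 1-form symmetry \(\mathbb{Z}/2[1]\) discussed next, illustrating that the pair \((\tilde{G},\tilde{H})\) is not an invariant of the underlying 2-group.

```python
# Toy crossed module (illustrative choice, not taken from the text):
#   Gt = Z/2, Ht = Z/4 (both written additively), t = reduction mod 2, trivial alpha.
Gt, Ht = range(2), range(4)

def t(h):             # t : Ht -> Gt
    return h % 2

def alpha(g, h):      # alpha : Gt -> Aut(Ht), here the trivial action
    return h

# Equivariance condition: t(alpha(g)(h)) = g + t(h) - g  (all groups here are abelian)
cond1 = all(t(alpha(g, h)) == (g + t(h) - g) % 2 for g in Gt for h in Ht)
# Peiffer identity: alpha(t(h))(h') = h + h' - h
cond2 = all(alpha(t(h), hp) == (h + hp - h) % 4 for h in Ht for hp in Ht)
print(cond1, cond2)                     # True True

# Invariant data of the underlying 2-group:
print({t(h) for h in Ht} == set(Gt))    # True: t is surjective, so G^[0] = coker t = 0
print([h for h in Ht if t(h) == 0])     # [0, 2]: H^[1] = ker t is isomorphic to Z/2
```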
#### Pure 0-form symmetries

In the case that there is no 1-form symmetry, and the 2-group symmetry simply defines an ordinary 0-form symmetry \(G\), we expect to recover the usual classifying space of \(G\). In this case, the 2-group corresponds to the quadruplet \(\mathbb{G}=(G,0,0,0)\), to use the topological crossed module notation, and indeed \(B|(G,0,0,0)|=BG\). To see this from the nerve construction, we first view \(\mathbb{G}\) as a category, which is very simple in this case: \(\mathrm{Obj}(\mathbb{G})=G\) with only identity morphisms at each element. So there are no non-degenerate \(n\)-simplices when \(n>0\), while the set of 0-simplices is \(G\). Hence, the geometric realisation of the nerve \(|\mathbb{G}|\) is simply the group \(G\) itself, and its classifying space is \(BG\), as claimed.

#### Pure 1-form symmetries

The case of a 'pure 1-form symmetry', in which there is no 0-form symmetry at all, corresponds to 2-groups of the form \(\mathbb{G}=(0,H,0,0)=:H[1]\), where \(H\) is the (abelian) 1-form symmetry group. The corresponding classifying space is [59]

\[B|H[1]|=B(BH), \tag{3.2}\]

which we often denote simply \(B^{2}H\); this coincides with the well-known classifying space of an abelian gerbe [61]. For example, when \(H=\mathrm{U}(1)\), \(B|H[1]|\) is an Eilenberg-MacLane space \(K(\mathbb{Z},3)\).

An example: pure \(\mathbb{Z}/2\) 1-form symmetry. For an explicit example that illustrates how to actually calculate the geometric realization of the nerve of such a \(\mathbb{G}=H[1]\), we consider the simplest case where \(H=\mathbb{Z}/2\). Viewing this as a category as in (2.1), we have that \(\text{Obj}(\mathbb{Z}/2[1])\) consists of only one element because \(G\) is the trivial group, denoted simply by \(\bullet\), while the group \(\text{Mor}(\mathbb{Z}/2[1])\) is just isomorphic to \(H=\mathbb{Z}/2\). Diagrammatically, this structure can be represented as a single object \(\bullet\) with a pair of morphisms from \(\bullet\) to itself, labelled by \(+1\) (the identity) and \(-1\). The nerve of \(\mathbb{Z}/2[1]\) is then a simplicial set \(\mathcal{N}\mathbb{Z}/2[1]\) built out of these objects and morphisms, whose low dimensional components are given as follows.

\[(\mathcal{N}\mathbb{Z}/2[1])_{0}=\{\bullet\}\qquad(0\text{-simplex})\]
\[(\mathcal{N}\mathbb{Z}/2[1])_{1}=\left\{\bullet\stackrel{-1}{\longrightarrow}\bullet\right\}\qquad(1\text{-simplex})\]
\[(\mathcal{N}\mathbb{Z}/2[1])_{2}=\left\{\bullet\stackrel{-1}{\longrightarrow}\bullet\stackrel{-1}{\longrightarrow}\bullet\right\}\qquad(2\text{-simplex})\]

etc. This is enough information for us to work out the CW complex cell structure for its geometric realisation, which recall is the topological space that we denote \(|\mathbb{Z}/2[1]|\). The \(0\)-cell is just a point. The \(1\)-cell comes from the union of the \(0\)-cell and an interval, with the gluing rule given by the \(1\)-simplex \(\bullet\stackrel{-1}{\longrightarrow}\bullet\), _i.e._ identifying both ends of the interval with the \(0\)-cell. Hence, the \(1\)-cell has the topology of a \(1\)-sphere \(S^{1}\). Similarly, the form of the \(2\)-simplex, written more suggestively as a triangle two of whose edges carry the non-trivial morphism \(-1\) (the third edge, carrying their composite \(+1\), being degenerate), gives a \(2\)-cell attached to the \(1\)-skeleton \(S^{1}\) along a degree-\(2\) map, producing \(\mathbb{R}P^{2}\). Continuing in this way, with one cell in each dimension, one builds up \(|\mathbb{Z}/2[1]|\cong\mathbb{R}P^{\infty}\simeq B\mathbb{Z}/2\), whose classifying space is \(B^{2}\mathbb{Z}/2\), in agreement with (3.2).

#### 2-groups with a trivial map

Moving up in complexity, let us now consider a 2-group with both the constituent 1-groups \(\tilde{G}\) and \(\tilde{H}\) being non-trivial, but with at least one of the maps in the crossed module definition being trivial.
To wit, consider a particular 2-group \(\mathbb{G}=(\tilde{G},\tilde{H},t,\alpha)\), written using the notation of (2.2), where the map \(\alpha:\tilde{G}\to\operatorname{Aut}(\tilde{H})\) is trivial, _i.e._ \(\alpha(g)\) is the identity automorphism for any \(g\in\tilde{G}\).

An example: \(\tilde{G}=\mathbb{Z}/2\) and \(\tilde{H}=\mathbb{Z}/2\). Probably the simplest example of a 2-group of this kind is one where \(\tilde{G}\) and \(\tilde{H}\) are both \(\mathbb{Z}/2\). Since \(\operatorname{Aut}(\tilde{H})\cong\operatorname{Aut}(\mathbb{Z}/2)\) is trivial, \(\alpha\) is automatically trivial. If \(\tilde{G}\) and \(\tilde{H}\) do not interact at all, meaning that the map \(t\) is also trivial (the module is 'uncrossed'), then the 2-group factorises as a product of a 0-form and a 1-form symmetry:

\[|\mathbb{G}|\cong\mathbb{Z}/2\times B\mathbb{Z}/2,\]

whose classifying space is just \(B\mathbb{Z}/2\times B^{2}\mathbb{Z}/2\). Since \(t:\tilde{H}\to\tilde{G}\) must be a homomorphism, the only other option for \(t\) is the identity map \(t:-1\mapsto-1\). Setting \(\mathbb{G}=(\mathbb{Z}/2,\mathbb{Z}/2,\operatorname{id},0)\), then as a category \(\operatorname{Obj}(\mathbb{G})\cong\mathbb{Z}/2\) and \(\operatorname{Mor}(\mathbb{G})=G\times H\cong\mathbb{Z}/2\times\mathbb{Z}/2\), whose action can be fully captured as follows: the morphism \((-1,-1)\) maps the object \(-1\) to the object \(1\), the morphism \((-1,1)\) maps \(1\) to \(-1\), and the morphisms \((1,1)\) and \((1,-1)\) are loops at the objects \(1\) and \(-1\) respectively. Since these are all the morphisms in this category, we can identify the morphisms \((1,1)\) and \((1,-1)\) as the identity morphisms at the objects \(1\) and \(-1\), respectively. The components of the nerve \(\mathcal{NG}\) in low dimensions are given by

\[\mathcal{NG}_{0}=\{\bullet_{-1},\bullet_{1}\}\]
\[\mathcal{NG}_{1}=\left\{\bullet_{-1}\xrightarrow{(-1,-1)}\bullet_{1},\;\bullet_{1}\xrightarrow{(-1,1)}\bullet_{-1}\right\}\]
\[\mathcal{NG}_{2}=\{\bullet_{-1}\to\bullet_{1}\to\bullet_{-1},\;\bullet_{1}\to\bullet_{-1}\to\bullet_{1}\}\]

From these data, it is easy to see that the 0-cell is just a pair of points, the 1-cell is a circle \(S^{1}\), and the 2-cell is a 2-sphere \(S^{2}\), constructed by joining the two 2-simplices \(\bullet_{-1}\to\bullet_{1}\to\bullet_{-1}\) and \(\bullet_{1}\to\bullet_{-1}\to\bullet_{1}\) along their common boundary circle. (Continuing in this way one finds two cells in every dimension, so that \(|\mathbb{G}|\cong S^{\infty}\) is contractible; this reflects the fact that, since \(t\) is an isomorphism, this crossed module presents a 2-group equivalent to the trivial one.)

### 3.2 Cobordism groups for 2-group symmetries

For a \(d\)-dimensional fermionic theory whose 2-group symmetry \(\mathbb{G}\) does not mix with the spacetime symmetry, we take the symmetry type to be7 Footnote 7: If we do not require a spin structure, we could use SO instead of Spin in defining the symmetry type \(\mathcal{S}\) – as we do in §4.

\[\mathcal{S}=\text{Spin}\times|\mathbb{G}|\;. \tag{3.4}\]

Directly applying Freed-Hopkins' classification [3] of invertible, reflection-positive field theories to this symmetry type suggests that the cobordism group

\[H^{d+2}_{I\mathbb{Z}}(MT(\text{Spin}\times|\mathbb{G}|))\cong\text{Tors}\,\Omega^{\text{Spin}}_{d+1}(B|\mathbb{G}|)\times\text{Hom}\left(\Omega^{\text{Spin}}_{d+2}(B|\mathbb{G}|),\mathbb{Z}\right) \tag{3.5}\]

correctly classifies anomalies in \(d\)-dimensional fermionic theories with the 2-group symmetry \(\mathbb{G}\). Our detection of anomaly theories is therefore distilled into computing the spin bordism groups of \(B|\mathbb{G}|\), for which we can use the same methods as for an ordinary symmetry. In particular, our general strategy will be to build up the spin bordism groups of \(B|\mathbb{G}|\) by using the 'defining fibration' described above, \(B^{2}H^{[1]}\to B|\mathbb{G}|\to BG^{[0]}\), as follows. We first use the Serre spectral sequence (SSS) [66] to compute the cohomology of \(B|\mathbb{G}|\) from the fibration. Then, one can use this newly computed cohomology to compute the spin bordism groups of \(B|\mathbb{G}|\), for example via the Adams spectral sequence (ASS) [67].
Alternatively, we can convert the result to homology using the universal coefficient theorem, and use it as an input of the Atiyah-Hirzebruch spectral sequence (AHSS) [68] \[E^{2}_{p,q}=H_{p}(B|\mathbb{G}|;\Omega^{\text{Spin}}_{q}(\text{pt}))\;. \tag{3.6}\] for another fibration \(\text{pt}\to B|\mathbb{G}|\to B|\mathbb{G}|\). Kapustin and Thorngren already used the SSS associated to (3.3) in [62] to compute the cohomology of \(B|\mathbb{G}|\). Moreover, this approach has been used to calculate bordism groups relevant to 2-group symmetries with non-trivial Postnikov class; Wan and Wang did so using the ASS in Ref. [18], while Lee and Tachikawa used the AHSS in Ref [24]. We usually adopt the AHSS-based approach in this paper. ## 4 Maxwell revisited As a (rather lengthy) warm-up example, let us discuss Maxwell theory. We consider a 4d U(1) gauge theory without matter, and variants thereof in which we couple the action to various TQFTs without changing the underlying symmetry structure. The action for vanilla Maxwell theory is \[S_{\text{Maxwell}}=-\frac{1}{4e^{2}}\int_{M_{4}}\text{d}a\wedge\star\text{d}a\,, \tag{4.1}\] where \(a\) denotes the dynamical U(1) gauge field and \(\star\) denotes the Hodge dual, assuming a metric \(g\) on \(M_{4}\). This theory has two U(1) 1-form global symmetries [35], referred to as 'electric' and'magnetic' 1-form symmetries. Since each is associated with a continuous group, one can write down the corresponding conserved 2-form currents, which are \[j_{e}=\frac{2}{e^{2}}\mathrm{d}a,\qquad j_{m}=\star\frac{\mathrm{d}a}{2\pi}\,. \tag{4.2}\] The former is conserved (\(\mathrm{d}\star j_{e}=0\)) by Maxwell's equations \(\mathrm{d}\star\mathrm{d}a=0\), and the latter by \(\mathrm{d}^{2}=0\) (and using \(\star\star=1\)).8 Footnote 8: For Maxwell theory in \(d\) dimensions, \(j_{e}\) is always a 2-form current and so the electric symmetry is always 1-form, while \(j_{m}\) is more generally a \((d-2)\)-form, and so the magnetic symmetry is generally a \((d-3)\)-form symmetry. It was already observed in [35] that there is a mixed 't Hooft anomaly between the two 1-form symmetries, which from our perspective is a local anomaly that can be represented by an anomaly polynomial \(\mathrm{d}B_{e}\wedge\mathrm{d}B_{m}\), where \(B_{e}\) and \(B_{m}\) are background 2-form gauge fields for the electric and magnetic 1-form symmetries respectively. In Ref. [69], a variety of 5d SPT phases that have Maxwell-like theories on the 4d boundary, protected by electric and magnetic 1-form symmetries, were derived. Here we show how these results are simply captured by the cobordism classification. ### Cobordism classification of 1-form anomalies Maxwell theory has two global U(1) 1-form symmetries, and no 0-form global symmetries. The classifying space of this global symmetry structure \(\mathbb{G}\) is that of an abelian \(H\)-gerbe with \(H=\mathrm{U}(1)_{e}\times\mathrm{U}(1)_{m}\). From (3.2), \[B|\mathbb{G}|=B^{2}\mathrm{U}(1)_{e}\times B^{2}\mathrm{U}(1)_{m}\,. \tag{4.3}\] Since there are no fermions, we do not need a spin structure, and so can define pure Maxwell theory on any smooth, orientable 4d spacetime manifold.9 The 5d invertible field theories with this symmetry type are therefore classified by the generalized cohomology group Footnote 9: We do not account for any time-reversal symmetry, that would enable us to pass from oriented to unoriented bordism. 
\[\mathrm{Hom}\left(\mathrm{Tor}\,\Omega_{5}^{\mathrm{SO}}(B|\mathbb{G}|), \mathbb{R}/\mathbb{Z}\right)\hookrightarrow H_{I\mathbb{Z}}^{6}\left(MT( \mathrm{SO}\times|\mathbb{G}|)\right)\twoheadrightarrow\mathrm{Hom}\left( \Omega_{6}^{\mathrm{SO}}(B|\mathbb{G}|),\mathbb{Z}\right)\,, \tag{4.4}\] where this sequence splits, but not canonically. In terms of anomaly theories, the factor on the right of this SES captures local anomalies, and the factor on the left captures global anomalies; the abelian group that detects all anomalies is isomorphic to the direct product of the two. #### Local anomalies In Appendix B.1 we compute that \(\Omega_{6}^{\rm SO}\left((B^{2}{\rm U}(1))^{2}\right)=\mathbb{Z}\), and so \[\text{Hom}\left(\Omega_{6}^{\rm SO}(B^{2}{\rm U}(1)_{e}\times B^{2}{\rm U}(1)_{ m}),\mathbb{Z}\right)\cong\mathbb{Z}\,, \tag{4.5}\] corresponding to the integral of the degree-6 anomaly polynomial \(dB_{e}\cup dB_{m}\) on a generator of \(\Omega_{6}^{\rm SO}\left((B^{2}{\rm U}(1))^{2}\right)\). This detects the local mixed 't Hooft anomaly between the two 1-form symmetries discussed in [35]. #### Global anomalies In Appendix B.1 we also compute that \(\Omega_{5}^{\rm SO}\left((B^{2}{\rm U}(1))^{2}\right)=(\mathbb{Z}/2)^{3}\). Thus, the group classifying global anomalies is \[\text{Hom}\left(\text{Tor}\,\Omega_{5}^{\rm SO}(B^{2}{\rm U}(1)\times B^{2}{\rm U }(1)),\mathbb{R}/\mathbb{Z}\right)\cong(\mathbb{Z}/2)^{3}\,. \tag{4.6}\] In terms of characteristic classes, these three factors of \(\mathbb{Z}/2\) correspond to \[w_{2}\cup w_{3},\quad w_{2}\cup\tau_{e},\quad w_{2}\cup\tau_{m},\] where \(w_{2,3}\) denote the second and third Stiefel-Whitney classes of the tangent bundle, and \(\tau_{e,m}\) is a unique generator of \(H^{3}(B^{2}{\rm U}(1)_{e,m};\mathbb{Z}/2)\) (_c.f._ Appendix A.1) which can be thought of as a mod 2 reduction of an integral cohomology class represented by \(\frac{dB_{e,m}}{2\pi}\). ### Phases of non-spin Maxwell theory The \(w_{2}w_{3}\) invariant corresponds to a global gravitational anomaly seen on non-spin manifolds, related to that in [11]. The \(w_{2}\tau_{e,m}\) bordism invariants correspond loosely to 't Hooft anomalies for each of the 1-form symmetries, that prevents them from being gauged on certain (non-spin) gravitational backgrounds. We discuss these in more detail next. #### The \(w_{2}\cup\tau_{e}\) phase To see the global anomaly that is captured by the \(w_{2}\cup\tau_{e}\),10 we must go beyond the vanilla Maxwell theory described by (4.1), and couple 4d Maxwell to a topological term. The modified action is \[\begin{split} S&=-\frac{1}{4e^{2}}\int_{M_{4}}\mathrm{d} a\wedge\star\mathrm{d}a+\pi\mathrm{i}\int_{M_{4}}w_{2}(TM_{4})\cup\rho_{2}\left[\frac{ \mathrm{d}a}{2\pi}\right]_{\mathbb{Z}}\\ &=S_{\mathrm{Maxwell}}+\pi\mathrm{i}\int_{M_{4}}w_{2}(TM_{4}) \cup c_{1},\end{split} \tag{4.7}\] where \([\cdot]_{\mathbb{Z}}\) denotes an integral cohomology class and \(\rho_{2}\) is mod 2 reduction, and where we shall write \(c_{1}\) for both the first Chern class of the \(\mathrm{U}(1)\) gauge bundle and its mod 2 reduction. The topological term couples a background magnetic 2-form gauge field to the \(\mathbb{Z}/2\) part of the magnetic 1-form symmetry, and then equates it to the second Stiefel-Whitney class of the tangent bundle. The electric 1-form symmetry shifting \(a\mapsto a+\lambda\), where \(\lambda\) is a closed 1-form, remains intact. 
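As a small aside (spelling out a point left implicit above), the added coupling is a genuinely discrete, \(\mathbb{Z}/2\)-valued phase: on a closed oriented \(M_{4}\), both \(w_{2}(TM_{4})\) and the mod 2 reduction of \(c_{1}\) are \(\mathbb{Z}/2\)-valued cohomology classes, so

\[\pi\mathrm{i}\int_{M_{4}}w_{2}(TM_{4})\cup c_{1}\;=\;\pi\mathrm{i}\,\big\langle w_{2}\cup c_{1},[M_{4}]\big\rangle\;\in\;\{0,\pi\mathrm{i}\}\ \ (\text{mod }2\pi\mathrm{i})\,,\]

and the weight \(e^{-S}\) is sensitive to this term only through a sign; in particular, it cannot be removed by any continuous deformation of the couplings.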
This theory describes fermionic-monopole electrodynamics; the topological term \(\int w_{2}\cup c_{1}\) forces all monopoles to become fermionic [70; 71]. To see this, following [11], consider adding a magnetic monopole via an 't Hooft line operator of charge 1 along \(\ell\subset M_{4}\) that we excise from \(M_{4}\) with the boundary condition that \(\int_{S^{2}_{\ell}}c_{1}=1\) on a small 2-sphere \(S^{2}_{\ell}\) around \(\ell\). The theory is now defined on the complement \(M^{\prime}_{4}\) of \(\ell\) in the presence of the 't Hooft operator, which is a manifold with boundary \(\partial M^{\prime}_{4}\cong S^{2}_{\ell}\times\ell\). For the integral \(\int_{M^{\prime}_{4}}w_{2}(TM^{\prime}_{4})\cup c_{1}\) to make sense on a manifold with boundary, we need a trivialisation of the integrand on \(\partial M^{\prime}_{4}\).11 Now, since \(c_{1}\) is non-trivial on \(S^{2}_{\ell}\) due to the non-zero monopole charge, we have to trivialise the \(w_{2}(TM^{\prime}_{4})\) factor. A trivialisation of the second Stiefel-Whitney class is nothing but a spin structure. Since there is a unique spin structure on \(S^{2}\), a spin structure on \(S^{2}_{\ell}\times\ell\) is the same as a spin structure along \(\ell\). Therefore, to define the additional phase, we must choose a spin structure along the monopole worldline: the monopole is a fermion. Footnote 11: This is because for an \(n\)-manifold with boundary \(M^{\prime}\), the fundamental homology class \([M^{\prime}]\) is in \(H_{n}(M^{\prime},\partial M^{\prime})\). The integration of a cohomology class on \(M^{\prime}\) is in fact a pairing between a cohomology class and the fundamental class \([M^{\prime}]\), so when we write \(\int_{M^{\prime}}c\) for a cohomology class \(c\in H^{n}(M^{\prime})\), we need to first find a class in \(H^{n}(M^{\prime},\partial M^{\prime})\) that “corresponds” to \(c\) to make sense of the pairing. This is always possible because the long exact sequence in cohomology \[\ldots\to H^{n-1}(\partial M^{\prime})\to H^{n}(M^{\prime},\partial M^{\prime} )\to H^{n}(M^{\prime})\to 0\] implies that any class \(c\in H^{n}(M^{\prime})\) can always be lifted to a class \(C\in H^{n}(M^{\prime},\partial M^{\prime})\). The choice of this lift is the choice of trivialisation of \(c\) on \(\partial M^{\prime}\). In order to see the 't Hooft anomaly afflicting \(\mathrm{U}(1)_{e}\), we now couple the theory to a background electric 2-form gauge field \(B_{e}\), and promote the 1-form \(\mathrm{U}(1)_{e}\) global symmetry transformation above to a 'local' transformation by relaxing the condition that the 1-form parameter \(\lambda\) has to be closed. In fact, \(\lambda\) does not strictly need to be a 1-form on \(M_{4}\) - more precisely, we now consider shifting the gauge field \(a\) by any connection \(\lambda\). For a connection \(\lambda\) with non-zero curvature \(\mathrm{d}\lambda\), the action (4.7) shifts under \(a\mapsto a+\lambda\), by \[\delta S=\pi\mathrm{i}\int_{M_{4}}w_{2}\cup\rho_{2}\left[\frac{\mathrm{d} \lambda}{2\pi}\right]_{\mathbb{Z}}, \tag{4.8}\] which encodes the 't Hooft anomaly associated with certain 'large 2-form gauge transformations', and on certain gravitational backgrounds. For example, take \(M_{4}\) to be \(\mathbb{C}P^{2}\). As usual, we can parametrize \(\mathbb{C}P^{2}\) with three complex coordinates \(z_{1}\), \(z_{2}\), and \(z_{3}\), such that \(\sum_{i=1}^{3}z_{i}^{*}z_{i}=1\) and with the equivalence \(z_{i}\sim e^{i\alpha}z_{i}\) for \(\alpha\in\mathbb{R}/\mathbb{Z}\). 
Define the 2-form \(\omega:=\frac{\mathrm{i}}{2\pi}\partial\bar{\partial}\log\left(|z_{1}|^{2}+|z_{2}|^{2}\right)\), which is just the (normalised) volume form on the \(S^{2}\cong\mathbb{C}P^{1}\subset\mathbb{C}P^{2}\) submanifold defined by \(z_{3}=0\). Its cohomology class \([\omega]_{\mathbb{Z}}\) can be taken as a generator for \(H^{2}(\mathbb{C}P^{2};\mathbb{Z})\), and likewise \(a:=\rho_{2}[\omega]_{\mathbb{Z}}\) can be taken as a generator for \(H^{2}(\mathbb{C}P^{2};\mathbb{Z}/2)\). We also have that \(w_{2}(T\mathbb{C}P^{2})=a\). If we shift \(a\) by a connection \(\lambda\) with \(\left[\frac{\mathrm{d}\lambda}{2\pi}\right]_{\mathbb{Z}}=[n\omega]_{\mathbb{Z}}=n[\omega]_{\mathbb{Z}}\), the shift in the action is

\[\delta S=n\pi\mathrm{i}\,\langle[M_{4}],a^{2}\rangle=n\pi\mathrm{i} \tag{4.9}\]

where \(\langle\cdot,\cdot\rangle\) here denotes the pairing between mod 2 homology and cohomology, thus realising the \(\mathbb{Z}/2\)-valued global anomaly for odd values of \(n\). As usual, there is a dual description of this anomaly in terms of the 5d SPT phase that captures it via inflow to the 4d boundary. If \(M_{4}\) were nullbordant, the phase would here be \(S_{\mathrm{SPT}}=\pi\mathrm{i}\int_{X}w_{2}\cup\tau_{e}\) where recall \(\tau_{e}=\rho_{2}\left[\frac{dB_{e}}{2\pi}\right]_{\mathbb{Z}}\), for a 5-manifold \(X\) such that \(\partial X=M_{4}\) and to which the background fields are extended. To see this, first note that one could cancel the anomalous shift (4.8) by adding a 'counter-term' \(S_{\mathrm{ct.}}=-\pi\mathrm{i}\int_{M_{4}}\widehat{w_{2}}\wedge\frac{B_{e}}{2\pi}\) where \(\widehat{w_{2}}\) is a closed 2-form constructed from \(w_{2}\),12 recalling that the gauge transformation for \(B_{e}\) is \(B_{e}\mapsto B_{e}+\mathrm{d}\lambda\). However, this '4d action' is not properly quantised. (It is 'half-quantised', precisely because there is a \(\mathbb{Z}/2\) anomaly.) One must instead write it as the 5d action \(S_{\mathrm{ct.}}=-\pi\mathrm{i}\int_{X}w_{2}\cup\tau_{e}\). Thus the fermionic-monopole electrodynamics has exactly the right anomaly to be a boundary state of the SPT phase given by \(S_{\mathrm{SPT}}=\pi\mathrm{i}\int_{X}w_{2}\cup\tau_{e}\) (see also Ref. [69]).13 Footnote 12: To construct \(\widehat{w_{2}}\), first note that there is an integral lift \(W_{2}\in H^{2}(M_{4};\mathbb{Z})\) of \(w_{2}\in H^{2}(M_{4};\mathbb{Z}/2)\) because \(w_{3}=0\) for any orientable 4-manifold. (One sees this from the long exact sequence in cohomology associated to the coefficient sequence \(0\to\mathbb{Z}\xrightarrow{\times 2}\mathbb{Z}\xrightarrow{\rho_{2}}\mathbb{Z}/2\to 0\), for which the Bockstein connecting homomorphism \(\beta:H^{2}(\cdot;\mathbb{Z}/2)\to H^{3}(\cdot;\mathbb{Z}):w_{2}\mapsto\beta(w_{2})\) satisfies \(\rho_{2}\beta(w_{2})=w_{3}\).) Next, given the integral class \(W_{2}\), construct any complex line bundle over \(M_{4}\) with \(c_{1}=W_{2}\), and take \(\widehat{w_{2}}\) to be the curvature 2-form of that bundle.

Bordism generator for the \(\boldsymbol{w_{2}}\cup\boldsymbol{\tau_{e}}\) anomaly. Of course, it is important to note that \(\mathbb{C}P^{2}\) is _not_ nullbordant in \(\Omega_{4}^{\text{SO}}\), given that the signature \(\sigma=1\) is a 4d bordism invariant. Given we saw the anomaly explicitly on \(\mathbb{C}P^{2}\), one cannot in fact realise it via the counterterm \(S_{\text{ct.}}=-\pi\mathrm{i}\int_{X}w_{2}\cup\tau_{e}\) because there is no 5-manifold \(X\) bounded by \(M_{4}\).
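(As an aside, the two facts about \(\mathbb{C}P^{2}\) used above, namely \(w_{2}(T\mathbb{C}P^{2})=a\) and \(\langle a^{2},[\mathbb{C}P^{2}]\rangle=1\), follow from a standard characteristic class computation, recorded here for completeness:

\[c(T\mathbb{C}P^{2})=(1+[\omega]_{\mathbb{Z}})^{3}\;\Rightarrow\;w_{2}(T\mathbb{C}P^{2})=\rho_{2}\,c_{1}(T\mathbb{C}P^{2})=\rho_{2}\big(3[\omega]_{\mathbb{Z}}\big)=a\,,\qquad\langle a^{2},[\mathbb{C}P^{2}]\rangle=\int_{\mathbb{C}P^{2}}\omega\wedge\omega\ \mathrm{mod}\ 2=1\,,\]

using that \(w_{2}=\rho_{2}c_{1}\) on any complex manifold and that \([\omega]_{\mathbb{Z}}\) has unit self-intersection.)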
Figure 1: A map of the four possible ’t Hooft anomalies that can afflict a 4d theory with a pair of U(1) 1-form global symmetries, defined on orientable manifolds without a spin structure. Each ’t Hooft anomaly corresponds to a factor in the cobordism group \(H^{6}_{I\mathbb{Z}}\cong\mathbb{Z}\times(\mathbb{Z}/2)^{3}\), and is exhibited by Maxwell theory on its own (top, corresponding to the local \(\mathbb{Z}\)-valued anomaly) or coupled to a particular TQFT (left, bottom, and right, corresponding to the trio of global \(\mathbb{Z}/2\)-valued anomalies). The red arrows illustrate the relations between these four theories. For the global anomalies, these versions of 4d electromagnetism can also be realised as the boundary theories for 5d SPT phases.

But as is well-known [72], the phase of the partition function \(Z\) on such a non-nullbordant manifold suffers from an ambiguity, since it can always be shifted by a choice of generalized theta angle, _i.e._ a coupling to a non-trivial TQFT corresponding to an element in \(\text{Hom}(\Omega_{4}^{\text{SO}}(\cdot),\mathbb{R}/\mathbb{Z})\). Thus, one can fix \(\arg Z[M_{4},B_{e}]=\theta_{0}\) to a reference phase, and then \(\arg Z\) is uniquely defined on any other 4-manifold \((M_{4}^{\prime},B_{e}^{\prime})\) that is bordant to \((M_{4},B_{e})\). This phase can be calculated by constructing a 5d mapping torus \(\tilde{X}\) by taking a cylinder that interpolates between \((M_{4},B_{e})\) and \((M_{4}^{\prime},B_{e}^{\prime})\), and gluing its ends to make a closed 5-manifold. This \(\tilde{X}\) will be a representative of the generator of the \(\mathbb{Z}/2\) factor in \(\Omega_{5}^{\text{SO}}((B^{2}\text{U}(1))^{2})\) that we have claimed is dual to \(w_{2}\cup\tau_{e}\).

In more detail, take \(M_{4}=\mathbb{C}P^{2}\) and let \(B_{e}^{0}\) denote any reference choice of background 2-form gauge field for the electric 1-form symmetry on \(M_{4}\). Next take the product manifold \(\mathbb{C}P^{2}\times[0,2\pi]\) with a product metric. Over the interval \(I=[0,2\pi]\) one implements (for example linearly) the 'large' 2-form gauge transformation \(B_{e}\mapsto B^{\prime}_{e}:=B_{e}+n\omega\), \(n\in\mathbb{Z}\).14 Since \(B_{e}\) and \(B^{\prime}_{e}\) are gauge-equivalent, the manifold \(\mathbb{C}P^{2}\times[0,2\pi]\) can be glued at 0 and \(2\pi\) to make a closed 5-manifold with a U(1) 2-bundle: Footnote 14: We refer to this as a ‘large’ gauge transformation because the gauge parameter \(n\omega\) is closed but not exact, having a ‘winding number’ of \(n\).

\[\tilde{X}:=S^{1}\times\mathbb{C}P^{2},\qquad B_{e}(t,z_{i})=B_{e}^{0}+\frac{t}{2\pi}n\omega, \tag{4.10}\]

where \(t\in I\). To verify that this mapping torus can be taken as a generator of the relevant \(\mathbb{Z}/2\) factor in the bordism group \(\Omega_{5}^{\text{SO}}\left((B^{2}\text{U}(1))^{2}\right)\), it is enough to evaluate the bordism invariant 'anomaly theory' \(w_{2}\cup\tau_{e}\) on \([\tilde{X}]\) and find a non-trivial value for the phase. Physically, evaluating \(\langle w_{2}\cup\tau_{e},[\tilde{X}]\rangle\) computes the phase accrued by the partition function on \((M_{4},B_{e})\) upon undergoing the 2-form gauge transformation \(B_{e}\to B^{\prime}_{e}\).
Doing the computation, we have that \(\tau_{e}=\left[\frac{\text{d}t}{2\pi}\right]\cup[n\omega]=y\cup nx\in H^{3}(\tilde{X};\mathbb{Z}/2)\), where \(x\) is the non-trivial element of \(H^{2}(\tilde{X};\mathbb{Z}/2)\) obtained by pulling back \(a\) along the projection \(\tilde{X}\to\mathbb{C}P^{2}\), and \(y\) is the non-trivial element of \(H^{1}(\tilde{X};\mathbb{Z}/2)\) obtained by pulling back the generator of \(H^{1}(S^{1};\mathbb{Z}/2)\) along \(\tilde{X}\to S^{1}\). For \(n\) odd, we have that \(\tau_{e}=y\cup x\), the unique non-trivial element of \(H^{3}(\tilde{X};\mathbb{Z}/2)\). Thus \(w_{2}\cup\tau_{e}=x^{2}y\in H^{5}(\tilde{X};\mathbb{Z}/2)\) for \(n\) odd, and so the anomaly theory evaluates to 1 mod 2 on this mapping torus, while it is trivial for \(n\) even.

#### The \(w_{2}\cup\tau_{m}\) phase

An identical account can be given for the magnetic 1-form symmetry, if one replaces the topological coupling \(\pi\mathrm{i}\int\!w_{2}\cup c_{1}\) in (4.7) by \(\pi\mathrm{i}\int\!w_{2}\cup c_{1}^{\text{dual}}\), where \(c_{1}^{\text{dual}}\) denotes the first Chern class of the electromagnetic dual of the gauge field. In that case, the 4d theory suffers from a 't Hooft anomaly afflicting the magnetic 1-form symmetry, which obstructs U(1)\({}_{m}\)[1] from being gauged on non-spin manifolds such as \(\mathbb{C}P^{2}\). The 5d anomaly theory is \(\int_{M_{5}}w_{2}\cup\tau_{m}\). Physically, this same anomaly can be understood from a different perspective. Suppose one couples vanilla Maxwell theory (4.1) to a charge-1 fermion, defined on all orientable 4-manifolds using a \(\text{Spin}_{c}\) structure. The electric 1-form symmetry is explicitly broken, but one can still probe anomalies involving the magnetic 1-form symmetry and the gravitational background. The magnetic 1-form symmetry remains intact, even though we effectively have a 'charge-\(\frac{1}{2}\)' monopole due to the constraint \(c_{1}=\frac{w_{2}}{2}\text{ mod }1\) that follows from the definition of \(\text{Spin}_{c}\), because this monopole is not dynamical but rather acts as a constraint on the field configurations that are summed in the path integral. The symmetry type is now \(\text{Spin}_{c}\times\mathbb{G}_{m}\), with \(B|\mathbb{G}_{m}|=B^{2}\text{U}(1)_{m}\), for which the cobordism group \(H^{6}_{I\mathbb{Z}}(MT(\text{Spin}_{c}\times|\mathbb{G}_{m}|))\cong\mathbb{Z}/2\times\mathbb{Z}^{2}\).15 Of course, all the anomalies involving the electric 1-form symmetry are absent, as is the \(w_{2}w_{3}\) anomaly because the \(\text{Spin}_{c}\) requirement trivialises \(w_{3}\), but the \(w_{2}\cup\tau_{m}\) global anomaly remains. Footnote 15: The two factors of \(\mathbb{Z}\) arise simply because we have included the ‘U(1) gauge symmetry’ in our definition of the symmetry type, because it is now entangled with the spacetime symmetry. If one computed the reduced \(\text{Spin}_{c}\) bordism these factors would go away, leaving only the \(\mathbb{Z}/2\)-valued \(w_{2}\cup\tau_{m}\) ’t Hooft anomaly for the magnetic 1-form symmetry.

#### The \(w_{2}\cup w_{3}\) phase

The final \(\mathbb{Z}/2\)-valued global anomaly that is detected by (4.6) corresponds to the 5d SPT phase \(\int_{M_{5}}w_{2}\cup w_{3}\). To exhibit this anomaly, one starts from vanilla Maxwell theory and couples \(w_{2}(TM_{4})\) as a background gauge field to both the electric and magnetic U(1) 1-form symmetries. This SPT phase is purely gravitational and has been extensively analysed in the literature.
Its boundary states include the theory of all-fermion electrodynamics [11; 73] and a fermionic theory with an emergent SU(2) gauge symmetry [11]. ### Scalar QED and 1-form anomaly interplay Now let us couple Maxwell theory, defined as before with an SO structure, to a charge-2 boson. This coupling to matter breaks the electric 1-form symmetry down to a discrete remnant, \[\text{U}(1)_{e}[1]\to\mathbb{Z}/2_{e}[1]\,, \tag{4.11}\] while the full U(1)\({}_{m}\)[1] magnetic 1-form symmetry is preserved. This version of scalar QED therefore furnishes us with two global 1-form symmetries, one discrete and one continuous, and no 0-form symmetries. The classifying space of the global symmetry \(\mathbb{G}^{\prime}\) is therefore \[B|\mathbb{G}^{\prime}|=B^{2}\mathbb{Z}/2_{e}\times B^{2}\text{U}(1)_{m}\,, \tag{4.12}\] and the 5d invertible field theories with this symmetry type are classified by the sequence (4.4) but with \(|\mathbb{G}|\) replaced by \(|\mathbb{G}^{\prime}|\). In Appendix B.2 we compute the relevant bordism groups to stitch together this generalized cohomology group. Local anomalies:We compute from Appendix B.2 that \[\text{Hom}\left(\Omega_{6}^{\text{SO}}(B^{2}\mathbb{Z}/2_{e}\times B^{2} \text{U}(1)_{m}),\mathbb{Z}\right)=0\,, \tag{4.13}\] and so there are _no_ local anomalies; clearly, the mixed local 't Hooft anomaly '\(dB_{e}\cup dB_{m}\)' can no longer be realised now that one of the 1-form symmetries is broken to a discrete group, because the associated 2-form gauge field now has zero curvature. Global anomalies:On the other hand, we find that \(\Omega_{5}^{\rm SO}\left(B^{2}\mathbb{Z}/2\times B^{2}{\rm U}(1)\right)=(\mathbb{Z }/2)^{4}\). The group classifying global anomalies is now \[{\rm Hom}\left({\rm Tor}\,\Omega_{5}^{\rm SO}(B^{2}\mathbb{Z}/2_{e}\times B^{2} {\rm U}(1)_{m}),\mathbb{R}/\mathbb{Z}\right)\cong(\mathbb{Z}/2)^{4}\,, \tag{4.14}\] and we notice that there is an extra factor of \(\mathbb{Z}/2\) corresponding to an extra global anomaly. At the level of characteristic classes, we can represent these four global anomalies in terms of products of Stiefel-Whitney classes and cohomology classes of \(H^{\bullet}(B^{2}{\rm U}(1)\times B^{2}\mathbb{Z}/2;\mathbb{Z}/2)\) that are in degree-5. There are five of them in total: \[w_{2}\cup w_{3},\quad w_{2}\cup\tau_{m},\quad w_{2}\cup\ {\rm{Sq}^{1}}u_{2}, \quad u_{2}\cup\tau_{m},\quad\ {\rm{Sq}^{2}}\ {\rm{Sq}^{1}}u_{2},\] where \(u_{2}\) is the unique generator of \(H^{2}(B^{2}\mathbb{Z}/2;\mathbb{Z}/2)\) (see Appendix A.1). However, once pulled back to an orientable 5-manifold \(X\), Wu's relation [74] tells us that \(\ {\rm{Sq}^{2}}\ {\rm{Sq}^{1}}u_{2}=w_{2}(TX)\cup\ {\rm{Sq}^{1}}u_{2}\), so it is not an independent bordism invariant, and we end up with four invariants, as expected from the bordism group computation. #### The anomaly interplay Given an appropriate map \(\pi\) between two spectra \(MTH\) and \(MTH^{\prime}\), there is an induced pullback map between the corresponding cobordism theories, \(\pi^{*}:H^{d+2}_{I\mathbb{Z}}\left(MTH^{\prime}\right)\to H^{d+2}_{I \mathbb{Z}}\left(MTH\right)\), that can be used to relate anomalies between the two theories. 
This idea of 'anomaly interplay' has recently been used, for example, to relate local anomalies in 4d \({\rm U}(2)\) gauge theory to Witten's \({\rm SU}(2)\) anomaly [75], to study anomalies in \(\mathbb{Z}/k\) symmetries in 2d [76; 77] with applications to bootstrapping conformal field theories, and to derive anomalies in non-abelian finite group symmetries in 4d [31]. Physically, the idea of 'pulling back anomalies' from one symmetry to another is not new, but goes back to Elitzur and Nair's analysis of global anomalies [78], following Witten [79]. In all these cases the symmetry type takes the form \(\{{\rm Spin}\ {\rm or}\ {\rm SO}\}\times G^{[0]}\), where \(G^{[0]}\) is a 0-form symmetry. In that case, a map of spectra \(\pi\) is induced by any group homomorphism \(\pi:G\to G^{\prime}\). In the present case, where we have theories with 1-form global symmetries, it is straightforward to adapt this notion of anomaly interplay (which is just pullback in cobordism). Again, the crucial fact we use is that the _nerves_ associated with the 1-form symmetries are themselves just ordinary (topological) groups, between which we can define group homomorphisms. Letting \(\pi:\mathbb{Z}/2\rightarrow{\rm U}(1):(1,-1)\rightarrow(1,e^{\rm i\pi})\) denote the subgroup embedding, there is an associated map between symmetry types, \(\pi:{\rm SO}\times|\mathbb{G}^{\prime}|\to{\rm SO}\times|\mathbb{G}|\), and an induced pullback map between the cobordism theories,

\[\pi^{*}:H^{6}_{I\mathbb{Z}}(MT({\rm SO}\times|\mathbb{G}|))\longrightarrow H^{6}_{I\mathbb{Z}}(MT({\rm SO}\times|\mathbb{G}^{\prime}|))\,. \tag{4.15}\]

Note that, since \(H^{*}_{I\mathbb{Z}}\) is a contravariant functor, the map between anomaly theories goes in the opposite direction to the subgroup embedding that we started with. Moreover, there is a pullback diagram for the whole short exact sequence characterizing \(H^{6}_{I\mathbb{Z}}\), which encodes the notion of 'anomaly interplay':

\[\begin{CD}0@>{}>{}>(\mathbb{Z}/2)^{3}@>{}>{}>(\mathbb{Z}/2)^{3}\times\mathbb{Z}@>{}>{}>\mathbb{Z}@>{}>{}>0\\ @V{}V{}V@V{\pi^{*}}V{}V@V{}V{}V\\ 0@>{}>{}>(\mathbb{Z}/2)^{4}@>{}>{}>(\mathbb{Z}/2)^{4}@>{}>{}>0@>{}>{}>0\,.\end{CD} \tag{4.16}\]

This anomaly interplay diagram can be used to track 't Hooft anomalies in the 1-form global symmetries, through the 'integrating in' of a charge-2 boson. Since all the generators of the cobordism groups can be represented by characteristic classes, we can represent the pullback \(\pi^{*}\) by its action on the characteristic classes. The non-trivial action of \(\pi^{*}\) is encoded in

\[\pi^{*}: \tau_{e}\cup\tau_{m}\mapsto u_{2}\cup\tau_{m}, \tag{4.17}\]
\[w_{2}\cup\tau_{e}\mapsto w_{2}\cup\ {\rm{Sq}}^{1}u_{2}. \tag{4.18}\]

Trivially, \(\pi^{*}\) maps \(w_{2}\cup\tau_{m}\) and \(w_{2}\cup w_{3}\) to themselves. The most interesting pullback relation is Eq. (4.17), which says that a \(\mathbb{Z}\)-valued local anomaly pulls back to a \(\mathbb{Z}/2\)-valued global anomaly. This is somewhat analogous to the interplay studied in [23; 75] between \({\rm U}(2)\) local anomalies and \({\rm SU}(2)\) global anomalies for 4d 0-form symmetries, where the maps there corresponded to pulling back exponentiated \(\eta\)-invariants. To see how the interplay works in this example involving 1-form symmetries, which is arguably simpler than the story for chiral fermion anomalies, we follow the general methodology set out in Ref. [23].
To wit, we start with a 5-manifold \(M_{5}\) representative of a class in \(\Omega^{\rm SO}_{5}(B^{2}\mathbb{Z}/2_{e}\times B^{2}{\rm U}(1)_{m})\) that is dual to \(u_{2}\tau_{m}\) (_i.e._ on which \(u_{2}\tau_{m}\) evaluates to 1 mod 2). We choose

\[M_{5}=S_{e}^{2}\times S_{m}^{2}\times S_{\theta}^{1}, \tag{4.19}\]

equipped with a \(\mathbb{Z}/2\) background 2-bundle that has support only on \(S_{e}^{2}\), and a \({\rm U}(1)_{m}\) background 2-bundle that has support only on \(S_{m}^{2}\times S_{\theta}^{1}\). Specifically, the \(\mathbb{Z}/2\) 2-form connection has non-trivial 2-holonomy round \(S_{e}^{2}\), _viz._ \(\int_{S_{e}^{2}}u_{2}=1\) mod 2. The \({\rm U}(1)_{m}\) 2-form connection \(B_{m}\) on \(S^{2}_{m}\times S^{1}_{\theta}\) can be written \(B_{m}=B^{0}_{m}+\frac{\theta}{2\pi}k\omega_{m}\), \(k\in\mathbb{Z}\), where \(\omega_{m}\) is the volume form on \(S^{2}_{m}\) such that \(\int_{S^{2}_{m}}\omega_{m}=1\), \(B^{0}_{m}\) is any connection on the \(S^{2}_{m}\) factor, and \(\theta\in[0,2\pi)\) parametrizes \(S^{1}_{\theta}\). The important thing is the flux relation

\[\int_{S^{2}_{m}\times S^{1}_{\theta}}dB_{m}=k\int_{S^{1}_{\theta}}\frac{d\theta}{2\pi}\int_{S^{2}_{m}}\omega_{m}=k\,. \tag{4.20}\]

Recalling \(\tau_{m}=[dB_{m}/2\pi]_{\mathbb{Z}}\), integrating gives \(\langle u_{2}\cup\tau_{m},[M_{5}]\rangle=k\) mod 2. When \(k\) is odd, \(M_{5}\) is not nullbordant, and the background fields cannot be simultaneously extended to any 6-manifold that \(M_{5}\) bounds. Now, to show that this global anomaly is the pullback under \(\pi^{*}\) of the local anomaly \(\tau_{e}\cup\tau_{m}\), we simply use the subgroup embedding \(\pi:\mathbb{Z}/2\rightarrow\mathrm{U}(1)\) to embed the \(\mathbb{Z}/2\) 2-connection in a \(\mathrm{U}(1)\) 2-connection \(B_{e}\). One can take

\[B_{e}=\frac{n}{2}\omega_{e},\qquad n\in 2\mathbb{Z}+1,\qquad\int_{S^{2}_{e}}\omega_{e}=1. \tag{4.21}\]

The 5-manifold (4.19) equipped with these structures \(B_{e}\) and \(B_{m}\) can be regarded as the pushforward in bordism of the \(M_{5}\) that we started with, \(\pi_{*}M_{5}\). This is nullbordant in \(\Omega^{\mathrm{SO}}_{5}(B^{2}\mathrm{U}(1)_{e}\times B^{2}\mathrm{U}(1)_{m})\); it can be realised as the boundary of a six-manifold \(M_{6}=D^{3}_{e}\times S^{2}_{m}\times S^{1}_{\theta}\), where \(D^{3}_{e}\) is one half of a 3-sphere \(S^{3}\) that is bounded by \(S^{2}_{e}\), to which the electric 2-form connection (4.21) can now be extended with \(\int_{S^{3}}\frac{dB_{e}}{2\pi}=2n\).16 Using Stokes' theorem, we thus have Footnote 16: We emphasize that \(B_{e}\) is _no longer_ a flat connection when extended into the \(S^{3}\) bulk, even though it restricts to a flat connection on the boundary of \(D^{3}_{e}\).

\[\int_{S^{2}_{e}\times S^{2}_{m}\times S^{1}_{\theta}}\frac{B_{e}}{2\pi}\wedge\frac{dB_{m}}{2\pi}=\int_{D^{3}_{e}}\frac{dB_{e}}{2\pi}\int_{S^{2}_{m}\times S^{1}_{\theta}}\frac{dB_{m}}{2\pi}=nk\,, \tag{4.22}\]

obtaining the same phase from \(\tau_{e}\tau_{m}\) as we did from \(u_{2}\tau_{m}\). On the other hand, we only need to see that the characteristic class \(\tau_{e}\) reduces to the class \(\mathrm{\,Sq}^{1}u_{2}\) when we restrict to a flat 2-bundle with mod 2 2-holonomy, in order to show that \(w_{2}\cup\tau_{e}\) pulls back to \(w_{2}\cup\mathrm{\,Sq}^{1}u_{2}\). To see this most clearly, it is best to use the language of cochains instead of differential forms.
By identifying \(\mathrm{U}(1)\) with \(\mathbb{R}/\mathbb{Z}\), we represent the \(\mathrm{U}(1)\) 2-form gauge field \(B_{e}\) by a real 2-cochain \(b_{e}\). If the gauge field is flat, \(\delta b_{e}\) must be trivial as a cochain valued in \(\mathbb{R}/\mathbb{Z}\). In other words, \(\delta b_{e}\) is an integral 3-cochain, whose cohomology class can be identified with \(\tau_{e}\). To use this flat \(\mathrm{U}(1)\) 2-cochain to describe a \(\mathbb{Z}/2\) 2-form gauge field, we further impose that it must have mod 2 2-holonomy, _i.e._\(b_{e}\) must be half-integral-valued, not just any real cochain. Then \(\tilde{b}_{e}:=2b_{e}\) is an integral cochain whose mod 2 reduction defines a cohomology class in \(H^{2}(M_{5};\mathbb{Z}/2)\) that coincides with \(u_{2}\). As \(\,\mathrm{Sq}^{1}u_{2}=\beta_{2}(u_{2})\), where \(\beta_{2}\) is the Bockstein homomorphism in the long exact sequence for cohomology \[\ldots\to H^{n}(M_{5};\mathbb{Z}/2)\to H^{n}(M_{5};\mathbb{Z}/4)\to H^{n}(M_{5} ;\mathbb{Z}/2)\xrightarrow{\beta_{2}}H^{n+1}(M_{5};\mathbb{Z}/2)\to\ldots\] induced by the short exact sequence \(0\to\mathbb{Z}/2\to\mathbb{Z}/4\to\mathbb{Z}/2\to 0\), it can be represented by the mod 2 reduction of \(\frac{1}{2}\delta\tilde{b}_{e}\). But this is exactly the same as \(\delta b_{e}\) which represents \(\tau_{e}\). Therefore, the mod 2 reduction of \(\tau_{e}\) is \(\,\mathrm{Sq}^{1}u_{2}\) when we embed the \(\mathbb{Z}/2\) 1-form symmetry inside the U(1) 1-form symmetry. There will be no further examples of anomaly interplay in the present paper; in particular, we do not consider any examples with non-trivially fibred 2-groups (although see footnote 28 for some comments along these lines). ## 5 QED anomalies revisited The Maxwell examples in the previous Section exhibited only 1-form global symmetries. In this Section, we move on to theories with both 0-form and 1-form global symmetries that are fused together in a non-trivial 2-group structure. Quantum electrodynamics (QED) in 4d with certain fermion content furnishes us with such a theory. Here both the 0-form and 1-form symmetry groups are U(1), and the 2-group structure is non-trivial when there is a mixed 't Hooft anomaly between the global and gauged 0-form U(1) currents, as was discovered in [45]. There is a further possible 't Hooft anomaly in the 2-group structure that comes from the usual cubic U(1) anomaly for the 0-form symmetry, but, rather than being a \(\mathbb{Z}\)-valued local anomaly, the 2-group structure transmutes this cubic anomaly to a discrete, \(\mathbb{Z}/m\)-valued global anomaly [45], with \(m\) given by the modulus of the integral Postnikov class of the 2-group. After reviewing the physics arguments for these statements, we derive them from the cobordism perspective. Our spectral sequence calculations reproduce the order of the finite \(\mathbb{Z}/m\) anomaly. ### From the physics perspective Consider a system of Weyl fermions coupled to a U(1)\({}_{a}\) gauge group with dynamical gauge field \(a\). Assuming that a fermion with unit charge is present, the electric 1-form symmetry is broken completely. This leaves only the magnetic U(1)\({}^{[1]}\) 1-form symmetry, with 2-form current \(j^{m}=\star\frac{f}{2\pi}\) as in (4.2), where \(f=\mathrm{d}a\). Given enough Weyl fermions, one can find a global U(1)\({}^{[0]}\) 0-form symmetry that does not suffer from the ABJ anomaly. 
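For concreteness, here is a hypothetical charge assignment (constructed for this discussion; it is not the example of Ref. [45] nor one appearing elsewhere in the text) of four Weyl fermions with \(\mathrm{U}(1)_{a}\) gauge charges \(q_{i}\) and global \(\mathrm{U}(1)^{[0]}\) charges \(g_{i}\). The short script checks that the gauge anomalies vanish (\(\sum_{i}q_{i}^{3}=\sum_{i}q_{i}=0\)), that the would-be ABJ anomaly of the global current vanishes (\(\sum_{i}g_{i}q_{i}^{2}=0\)), and that the mixed coefficient \(\kappa\propto\sum_{i}q_{i}g_{i}^{2}\) appearing below is non-zero and even, while the residual coefficients \(\mathcal{A}_{3}=\sum_{i}g_{i}^{3}\) and \(\mathcal{A}_{\rm mixed}=\sum_{i}g_{i}\) discussed later remain.

```python
# Hypothetical charge assignment, for illustration only (not taken from the text).
q = [1, -1, 1, -1]    # U(1)_a gauge charges of four Weyl fermions
g = [1,  1, 1, -3]    # global U(1)^[0] charges

gauge_cubed = sum(qi**3 for qi in q)                    # U(1)_a^3 gauge anomaly
gauge_grav  = sum(q)                                    # U(1)_a-gravity anomaly
abj         = sum(gi * qi**2 for gi, qi in zip(g, q))   # global-U(1)_a^2 (ABJ) anomaly
kappa       = sum(qi * gi**2 for gi, qi in zip(g, q))   # U(1)_a-global^2: the 2-group coefficient
A3          = sum(gi**3 for gi in g)                    # global^3 't Hooft anomaly coefficient
A_mixed     = sum(g)                                    # global-gravity 't Hooft anomaly coefficient

print(gauge_cubed, gauge_grav, abj)   # 0 0 0  -> consistent gauging, global U(1) free of ABJ anomaly
print(kappa, kappa % 2)               # -8 0   -> non-trivial (and even) mixed coefficient
print(A3, A_mixed)                    # -24 0  -> residual 't Hooft anomaly coefficients
```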
It is possible, however, that upon coupling a background gauge field \(A\) to this global 0-form symmetry, there is an operator-valued mixed anomaly that is captured by the anomaly polynomial term \[\Phi_{6}\supset\frac{\kappa}{16\pi^{3}}f\wedge F\wedge F\,, \tag{5.1}\] where \(F=dA\) is the field strength for the background gauge field \(A\). It describes, via the usual descent procedure, the shift in the effective action under the background gauge transformation \(A\mapsto A+\mathrm{d}\lambda^{(0)}\), with \(\lambda^{(0)}\) a \(2\pi\)-periodic scalar, by \[\delta S=\frac{\mathrm{i}\kappa}{2}\int_{M_{4}}\lambda^{(0)}\frac{f\wedge F}{4\pi^{2}}\,. \tag{5.2}\] It was realised in Ref. [45] that we should not interpret this term as an anomaly, but rather as a non-trivial 2-group structure. To see why this is the case, we first couple a 2-form background gauge field \(B^{(2)}\), which satisfies the usual normalisation condition \[\int_{M_{3}}\frac{\mathrm{d}B^{(2)}}{2\pi}\in\mathbb{Z} \tag{5.3}\] on closed 3-manifolds, to the magnetic 1-form symmetry via the coupling \[S_{\text{coupling}}=\frac{\mathrm{i}}{2\pi}\int_{M_{4}}f\wedge B^{(2)}. \tag{5.4}\] Then the potential anomaly (5.2) can be cancelled by modifying the background gauge transformation for \(B^{(2)}\) from an ordinary 1-form gauge transformation \(B^{(2)}\mapsto B^{(2)}+\mathrm{d}\lambda^{(1)}\), that is independent of the 0-form gauge transformation of \(A\), to a 2-group gauge transformation \[A\mapsto A+\mathrm{d}\lambda^{(0)},\qquad B^{(2)}\mapsto B^{(2)}+\mathrm{d}\lambda^{(1)}+\frac{\hat{\kappa}}{2\pi}\lambda^{(0)}F, \tag{5.5}\] provided that we identify \(\hat{\kappa}\) with \(-\kappa/2\).17 Here the gauge transformation parameter \(\lambda^{(1)}\) is a properly normalised \(\mathrm{U}(1)\) gauge field. Footnote 17: This is well-defined because it can be shown that \(\kappa\) is always even. This modified transformation mixes the magnetic \(\mathrm{U}(1)^{[1]}\) 1-form symmetry with the \(\mathrm{U}(1)^{[0]}\) 0-form symmetry, and encodes the 2-group structure at the level of the background fields. In our quadruplet notation, the 2-group is \[\mathbb{G}=(\mathrm{U}(1),\mathrm{U}(1),0,\hat{\kappa}), \tag{5.6}\] with the Postnikov class \(\hat{\kappa}\in H^{3}(B{\rm U}(1);{\rm U}(1))\cong\mathbb{Z}\), where we use the fact that \(H^{3}(B{\rm U}(1);{\rm U}(1))\) with _continuous_ \({\rm U}(1)\) coefficients is isomorphic to \(H^{4}(B{\rm U}(1);\mathbb{Z})\cong\mathbb{Z}\) via the universal coefficient theorem. The nerve of such a 2-group is the extension \[|\mathbb{G}|={\rm U}(1)^{[0]}\times_{\hat{\kappa}}{\rm U}(1)^{[1]}. \tag{5.7}\] The non-trivial 2-group structure can also be seen without turning on the background gauge fields explicitly. If we write \(j\) for the 1-form current associated with \({\rm U}(1)^{[0]}\), then the Ward identity takes the non-trivial form \[\partial^{\mu}j_{\mu}(x)j_{\nu}(y)=\frac{\hat{\kappa}}{2\pi}\partial^{\lambda}\delta^{(4)}(x-y)j_{\nu\lambda}^{m}(y)\,, \tag{5.8}\] where \(j_{\nu\lambda}^{m}\) are the components of the magnetic 2-form current \(j^{m}\). This Ward identity shows how the 0-form and 1-form currents are fused, realising Eq. (3). The 2-group symmetry \({\rm U}(1)^{[0]}\times_{\hat{\kappa}}{\rm U}(1)^{[1]}\) can still suffer from an 't Hooft anomaly. 
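The evenness of \(\kappa\) asserted in footnote 17 can be seen from the elementary fact that \(n^{2}\equiv n\) mod 2 for every integer \(n\): with the standard triangle-coefficient expressions \(\sum_{i}q_{i}^{2}Q_{i}\) (ABJ) and \(\sum_{i}q_{i}Q_{i}^{2}\) (mixed) for gauge charges \(q_{i}\) and global charges \(Q_{i}\), the two sums agree mod 2, so ABJ-freedom forces the mixed coefficient to be even. A brute-force cross-check of this parity statement (assuming those coefficient expressions, which are not spelled out in the text):

```python
# Cross-check of footnote 17: whenever the ABJ coefficient sum(q_i^2 Q_i) vanishes,
# the mixed coefficient sum(q_i Q_i^2) is even, because n**2 and n have the same
# parity for every integer n. Brute force over small hypothetical charge lists.
from itertools import product

charge_range = range(-2, 3)
for q in product(charge_range, repeat=3):
    for Q in product(charge_range, repeat=3):
        if sum(x * x * y for x, y in zip(q, Q)) == 0:          # ABJ-free
            assert sum(x * y * y for x, y in zip(q, Q)) % 2 == 0
print("mixed coefficient even in every ABJ-free assignment scanned")
```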
Recall that, when \(\hat{\kappa}=0\), the anomaly polynomial for our system of Weyl fermions is \[\Phi_{6}=\frac{{\cal A}_{3}}{6}c_{1}(F)^{3}-\frac{{\cal A}_{\rm mixed}}{24}p_{1}(R)c_{1}(F), \tag{5.9}\] where \(c_{1}(F)=F/2\pi\) is the first Chern class of the \({\rm U}(1)^{[0]}\) bundle, and \(p_{1}(R)=\frac{1}{8\pi^{2}}{\rm Tr}\,R\wedge R\) is the first Pontryagin class of the tangent bundle, where \(R\) is the curvature 2-form. Before we continue, it is important to discuss the role of these anomaly coefficients in the (co)bordism context, still for the case \(\hat{\kappa}=0\). Naively, one might think that the cubic anomaly coefficient \({\cal A}_{3}\) and the mixed \({\rm U}(1)\)-gravitational anomaly coefficient \({\cal A}_{\rm mixed}\) are the two integers that classify anomalies according to cobordism, _i.e._ that \({\cal A}_{3}\) and \({\cal A}_{\rm mixed}\) can be chosen as generators of the group \({\rm Hom}\left(\Omega_{6}^{\rm Spin}\left(B{\rm U}(1)\right),\mathbb{Z}\right)\cong\mathbb{Z}\times\mathbb{Z}\). However, this is not the correct identification because \({\cal A}_{3}\) and \({\cal A}_{\rm mixed}\) are not quite independent. It can be shown that \[\alpha_{1}:=c_{1}(F)^{3}\qquad\mbox{and}\qquad\alpha_{2}:=\frac{c_{1}(F)}{6}\left(c_{1}(F)^{2}-\frac{1}{4}p_{1}(R)\right) \tag{5.10}\] are independent integral cohomology classes.18 In terms of these basis generators, we can write \(\Phi_{6}\) as Footnote 18: That \(\alpha_{1}\) is integral is evident from the definition of \(c_{1}(F)\). To see that \(\alpha_{2}\) is integral, we observe that it is the anomaly polynomial for a Weyl fermion with charge \(+1\), and then apply the Atiyah–Singer index theorem. \[\Phi_{6}=\frac{1}{6}\left({\cal A}_{3}-{\cal A}_{\rm mixed}\right)\alpha_{1}+{\cal A}_{\rm mixed}\alpha_{2}. \tag{5.11}\] Again, by the Atiyah-Singer index theorem, \(\Phi_{6}\) must be integral. Since \(\alpha_{1}\) and \(\alpha_{2}\) are integral basis generators, we can deduce that the two integers \((r,s)\) that label \(\text{Hom}\left(\Omega_{6}^{\text{Spin}}\left(B\text{U}(1)\right),\mathbb{Z}\right)\) are \[(r,s)=\left(\frac{1}{6}\left(\mathcal{A}_{3}-\mathcal{A}_{\text{mixed}}\right),\mathcal{A}_{\text{mixed}}\right). \tag{5.12}\] Continuing, let's switch the Postnikov class \(\hat{\kappa}\) back on, and couple the background 2-form gauge field \(B^{(2)}\) to the \(\text{U}(1)^{[1]}\) 1-form symmetry. We are free to add a Green-Schwarz counter-term \[S_{\text{GS}}=\frac{in}{2\pi}\int_{M_{4}}B^{(2)}\wedge F,\qquad n\in\mathbb{Z} \tag{5.13}\] to the action. Under the 2-group transformation (5.5), the effective action shifts if the anomaly coefficients \(\mathcal{A}_{3}\) and \(\mathcal{A}_{\text{mixed}}\) are non-zero, by \[\delta S=\text{i}\frac{\mathcal{A}_{3}+6n\hat{\kappa}}{6}\int_{M_{4}}\lambda^{(0)}\frac{F\wedge F}{4\pi^{2}}-\text{i}\frac{\mathcal{A}_{\text{mixed}}}{24}\int_{M_{4}}\lambda^{(0)}p_{1}(R). \tag{5.14}\] Since one may choose the counterterm coefficient \(n\) to be any integer, one sees that \(\mathcal{A}_{3}\) is only well-defined modulo \(6\hat{\kappa}\) [45]. It follows that the integer \(r=\frac{1}{6}\left(\mathcal{A}_{3}-\mathcal{A}_{\text{mixed}}\right)\) that we claim classifies the anomaly is not really valued in \(\mathbb{Z}\), being well-defined only modulo \(|\hat{\kappa}|\). This corresponds to a _global_ anomaly in the 2-group symmetry, that is valued in the cyclic group \[\mathbb{Z}/m,\qquad m=|\hat{\kappa}|. 
\tag{5.15}\] We emphasize that, even though \(\mathcal{A}_{3}\) is well-defined modulo \(6\hat{\kappa}\) (as was derived in [45]), the discrete group that classifies the global anomaly has order \(|\hat{\kappa}|\) (and not \(6|\hat{\kappa}|\)). The mixed gravitational anomaly remains a \(\mathbb{Z}\)-valued local anomaly. ### From the bordism perspective In this Section we show how the 't Hooft anomalies afflicting the 2-group global symmetry, that we have just described, can be precisely understood using cobordism. To do so, we need to compute the generalized cohomology groups \[H^{6}_{I\mathbb{Z}}\left(MT\left(\text{Spin}\times|\mathbb{G}|\right)\right), \qquad|\mathbb{G}|=U(1)^{[0]}\times_{\hat{\kappa}}U(1)^{[1]}\,, \tag{5.16}\] which are built from \(\Omega_{5}^{\text{Spin}}(B|\mathbb{G}|)\) and \(\Omega_{6}^{\text{Spin}}(B|\mathbb{G}|)\), for each value of the Postnikov class \(\hat{\kappa}\). These abelian groups detect and classify all possible anomalies for this 2-group symmetry type. To compute these bordism groups, we follow the general strategy outlined in SS3.2. We first apply the cohomological Serre spectral sequence to the fibration \[K(\mathbb{Z},3)\to B|\mathbb{G}|\to K(\mathbb{Z},2) \tag{111}\] to calculate the cohomology of \(B|\mathbb{G}|\) in relevant low degrees. Then we will convert the result into homology groups, which are fed into the Atiyah-Hirzebruch spectral sequence for the fibration \(\text{pt}\to B|\mathbb{G}|\to B|\mathbb{G}|\), for which the second page is \(E_{p,q}^{2}=H_{p}\left(B|\mathbb{G}|;\Omega_{q}^{\text{Spin}}(\text{pt})\right)\), to compute the spin bordism. So, to begin, the \(E_{2}\) page of the cohomological Serre spectral sequence for the fibration (111) is given by \[E_{2}^{p,q}=H^{p}\big{(}K(\mathbb{Z},2);H^{q}(K(\mathbb{Z},3);\mathbb{Z})\big{)}. \tag{112}\] Using \(H^{\bullet}(K(\mathbb{Z},3);\mathbb{Z})\cong\{\mathbb{Z},0,0,\mathbb{Z},0,0, \mathbb{Z}/2,0,\mathbb{Z}/3,\mathbb{Z}/2,\ldots\}\), given in Table 3 of Appendix A.1, we can construct the \(E_{2}\) page as shown in the left-hand side of Fig. 2. In fact, what is shown there is the \(E_{4}\) page, since the entries are sparse enough that there are no non-trivial differentials in the region we are interested in until page \(E_{4}\). The differentials \(\alpha\), \(\beta\), and \(\gamma\) shown on the left-hand diagram of Fig. 2 are linear Figure 2: Fourth and fifth pages of the Serre spectral sequence to compute the integral cohomology of the fibration \(K(\mathbb{Z},3)\to B|\mathbb{G}|\to K(\mathbb{Z},2)\). in the Postnikov class \(\hat{\kappa}\in\mathbb{Z}\) (see the Appendix of Ref. [62]). More precisely, we write \[H^{3}(K(\mathbb{Z},3);\mathbb{Z})\cong\text{Hom}(\mathbb{Z}^{[1]},\mathbb{Z}) \cong\mathbb{Z}\] using the universal coefficient theorem and the fact that \(K(\mathbb{Z},3)=B^{2}\text{U}(1)^{[1]}=B^{3}\mathbb{Z}^{[1]}\). Here, we include the superscript to emphasise that this \(\mathbb{Z}\) comes from our U(1) 1-form symmetry part. From this, we can write the entry \(E_{4}^{0,3}\) in the Serre spectral sequence as \(\text{Hom}(\mathbb{Z}^{[1]},\mathbb{Z})\). 
Then, the differential \(\alpha\) is given by 'contraction' (adopting the terminology of [62]) with the Postnikov class \[\begin{split}\alpha:\text{Hom}(\mathbb{Z}^{[1]},\mathbb{Z})& \to H^{4}(B\text{U}(1)^{[0]};\mathbb{Z})\\ x&\mapsto x\circ\hat{\kappa}\end{split} \tag{5.19}\] where we make use of the fact that the Postnikov class is a cohomology class \[\hat{\kappa}\in H^{3}(B\text{U}(1)^{[0]},\text{U}(1)^{[1]})\cong H^{4}(B\text {U}(1)^{[0]};\mathbb{Z}^{[1]})\;.\] Similar arguments apply for the differentials \(\beta\) and \(\gamma\). Thus, \(\alpha\), \(\beta\), and \(\gamma\) map 1 to \(\pm\hat{\kappa}\), resulting in the \(E_{5}\) page as shown, where \(m:=|\hat{\kappa}|\). We can then read off the integral cohomology groups to be \[H^{\bullet}(B|\mathbb{G}|;\mathbb{Z})\cong\{\mathbb{Z},0,\mathbb{Z},0, \mathbb{Z}/m,0,e(\mathbb{Z}/2,\mathbb{Z}/m),0,e(\mathbb{Z}/6,\mathbb{Z}/m),\ldots\} \tag{5.20}\] where the notation \(e(A,B)\) denotes an extension of \(A\) by \(B\), _viz._ a group that fits in the short exact sequence \(B\hookrightarrow e(A,B)\twoheadrightarrow A\). We compute the mod 2 cohomology by the same method in Appendix A.2. Heuristically, one can also argue for the form of \(\alpha\) and \(\beta\) as follows (_c.f._ Appendix B.6 of Ref. [24]). Let us start with a representative of a generator of the cohomology group \(H^{3}(K(\mathbb{Z},3),\mathbb{Z})\cong\mathbb{Z}\), and ask what it becomes when \(K(\mathbb{Z},3)\) is the fibre of \(B|\mathbb{G}|\). Given the normalisation condition (5.3), the 3-form \[\tilde{h}:=\text{d}B^{(2)}/2\pi \tag{5.21}\] represents the generator of \(H^{3}(K(\mathbb{Z},3);\mathbb{Z})\cong\mathbb{Z}\). This is true when \(K(\mathbb{Z},3)\) stands on its own. However, when we pass to the 2-group \(\mathbb{G}\) and take \(K(\mathbb{Z},3)\) to be the fibre of \(B|\mathbb{G}|\), then \(\tilde{h}\) is not gauge-invariant under (5.5) for a general Postnikov class, and cannot be a representative of any cohomology class for \(B|\mathbb{G}|\). We can remedy this by modifying the definition of \(\tilde{h}\) to \[h:=\tilde{h}-\frac{\hat{\kappa}}{2\pi}A\wedge F. \tag{109}\] The trade off is that the gauge-invariant \(h\) is no longer closed; instead \[\mathrm{d}h=-\frac{\hat{\kappa}}{4\pi^{2}}F\wedge F=-\hat{\kappa}c_{1}\cup c_{1}, \tag{110}\] where \(c_{1}:=\frac{F}{2\pi}\) is the first Chern class of the \(U(1)^{[0]}\) bundle, and \(c_{1}\cup c_{1}\) can be taken as the generator of \(H^{4}(K(\mathbb{Z},2);\mathbb{Z})\). The 2-group relation (110) implies that both the differentials \(\alpha\) and \(\beta\) in Fig. 2 map 1 to \(-\hat{\kappa}\).19 Footnote 19: The extra minus sign comes from the convention used to define the Postnikov class. From this cohomological starting point, we can proceed to compute the spin bordism groups. We find it is helpful to split the discussion into the cases where \(m\) is even or odd, for which the bordism group calculations are tackled using different tricks. As a warm up, we first consider the simplest case where \(m\) is zero, corresponding to a 0-form and 1-form symmetry that do not mix. #### 5.2.1 Zero Postnikov class We first consider the trivial toric 2-group where \(B|\mathbb{G}|\) is a simply a product space \(K(\mathbb{Z},2)\times K(\mathbb{Z},3)\). As \(B|\mathbb{G}|\) is a product, we can use the Kunneth theorem to determine the homology groups of \(K(\mathbb{Z},2)\times K(\mathbb{Z},3)\) from the homology groups of each factor, given by Eq. (104) and Table 3 in Appendix A.1. 
We obtain \[H_{\bullet}(K(\mathbb{Z},2)\times K(\mathbb{Z},3);\mathbb{Z})\cong\{\mathbb{Z},0,\mathbb{Z},\mathbb{Z},\mathbb{Z},\mathbb{Z}\times\mathbb{Z}/2,\mathbb{Z},\mathbb{Z}\times\mathbb{Z}/6,\mathbb{Z}\times\mathbb{Z}/2,\ldots\} \tag{5.24}\] and construct the second page of the AHSS as shown in Fig. 3, with non-trivial differentials on the \(E^{2}\) page indicated by coloured arrows. The differentials on the zeroth and the first rows are the composition \(\widetilde{\mathrm{Sq}^{2}}\circ\rho_{2}\) and \(\widetilde{\mathrm{Sq}^{2}}\), respectively, where \(\rho_{2}\) is reduction modulo 2. To compute the action of these differentials, we need to know how the Steenrod squares act on the mod 2 cohomology ring of the product \(K(\mathbb{Z},2)\times K(\mathbb{Z},3)\). From the mod 2 cohomology rings of \(K(\mathbb{Z},2)\) and \(K(\mathbb{Z},3)\) given by (A.3) and (A.4), we obtain \[H^{\bullet}(K(\mathbb{Z},2)\times K(\mathbb{Z},3);\mathbb{Z}/2)\cong\mathbb{Z}/2[c_{1},\tau_{3},\ \mathrm{Sq}^{2}\tau_{3},\ \mathrm{Sq}^{4}\ \mathrm{Sq}^{2}\tau_{3},\ldots], \tag{5.25}\] where \(c_{1}\) and \(\tau_{3}\) are the unique generators in degree 2 and 3, respectively. In our diagram, the red maps correspond to \(\mathrm{Sq}^{2}c_{1}=c_{1}^{2}\), the blue maps to \(\mathrm{Sq}^{2}\tau_{3}\) being a generator of \(H^{\bullet}(K(\mathbb{Z},3);\mathbb{Z}/2)\), and the magenta maps to \(\text{Sq}^{2}(c_{1}\tau_{3})=c_{1}^{2}\tau_{3}\). The differential depicted in the \(E^{3}\) page from \(E^{3}_{7,0}\) in the diagram must be non-trivial by comparing \(\Omega^{\text{Spin}}_{6}(K(\mathbb{Z},2)\times K(\mathbb{Z},3))\) with the result computed with the Adams spectral sequence. We can then read off the bordism groups in lower degrees, which we collect below in Table 1. We piece together the cobordism group \[H^{6}_{I\mathbb{Z}}(MT(\text{Spin}\times\text{U}(1)\times B\text{U}(1)))\cong\mathbb{Z}\times\mathbb{Z}, \tag{5.26}\] which classifies anomalies for this symmetry type. The pair of \(\mathbb{Z}\)-valued local anomalies just corresponds to the usual cubic \(c_{1}^{3}\) and mixed gravitational \(c_{1}p_{1}\) anomalies associated with the U(1) 0-form symmetry. The presence of the 1-form symmetry here plays no role in anomaly cancellation for this dimension.

#### 5.2.2 Even Postnikov class

Now we turn to the case where \(m=|\hat{\kappa}|\) is a non-zero even integer. Continuing from the cohomology calculation above, summarised in Eq. (5.20), there are two options for the extension \(e(\mathbb{Z}/2,\mathbb{Z}/m)\); either the trivial extension \(\mathbb{Z}/2\times\mathbb{Z}/m\) or the non-trivial one \(e(\mathbb{Z}/2,\mathbb{Z}/m)\cong\mathbb{Z}/(2m)\), which are non-isomorphic. By comparing the mod 2 cohomology calculated from applying the universal coefficient theorem to (5.20) and the one calculated directly from the Serre spectral sequence, one can show that the correct extension is the direct product \(\mathbb{Z}/2\times\mathbb{Z}/m\). The integral homology for \(B|\mathbb{G}|\) is then \[H_{\bullet}\left(B|\mathbb{G}|;\mathbb{Z}\right)\cong\{\mathbb{Z},0,\mathbb{Z},\mathbb{Z}/m,0,\mathbb{Z}/m\times\mathbb{Z}/2,0,e(\mathbb{Z}/6,\mathbb{Z}/m),\ldots\} \tag{5.27}\] and the mod 2 homology is \[H_{\bullet}(B|\mathbb{G}|;\mathbb{Z}/2)\cong\{\mathbb{Z}/2,0,\mathbb{Z}/2,\mathbb{Z}/2,\mathbb{Z}/2,\mathbb{Z}/2\times\mathbb{Z}/2,\ldots\}\,. \tag{5.28}\] The \(E^{2}\) page of the AHSS is then given by Fig. 4. 
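As a quick consistency check on the input to Fig. 4, the mod 2 homology (5.28) follows from the integral homology (5.27) via the universal coefficient theorem, \(H_{p}(X;\mathbb{Z}/2)\cong H_{p}(X;\mathbb{Z})\otimes\mathbb{Z}/2\,\oplus\,\mathrm{Tor}(H_{p-1}(X;\mathbb{Z}),\mathbb{Z}/2)\); these are precisely the entries populating the \(q=1,2\) rows of the \(E^{2}\) page. A minimal sketch (our own encoding of finitely generated abelian groups as lists of cyclic orders, with 0 standing for \(\mathbb{Z}\)):

```python
# A finitely generated abelian group is a list of cyclic orders; 0 stands for Z.
def tensor_Z2(G):
    """G (x) Z/2: each Z or Z/even summand contributes a Z/2, Z/odd contributes nothing."""
    return [2 for n in G if n % 2 == 0]

def tor_Z2(G):
    """Tor(G, Z/2): each Z/even summand contributes a Z/2; Z and Z/odd contribute nothing."""
    return [2 for n in G if n != 0 and n % 2 == 0]

def mod2_homology(H_int):
    """Universal coefficients: H_p(X;Z/2) = H_p(X;Z)(x)Z/2 + Tor(H_{p-1}(X;Z), Z/2)."""
    return [tensor_Z2(H_int[p]) + (tor_Z2(H_int[p - 1]) if p > 0 else [])
            for p in range(len(H_int))]

m = 4  # any non-zero even Postnikov class
H_int = [[0], [], [0], [m], [], [m, 2]]   # Eq. (5.27) up to degree 5
print(mod2_homology(H_int))
# -> [[2], [], [2], [2], [2], [2, 2]], reproducing Eq. (5.28)
```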
We observe that there are non-vanishing differentials already on the \(E^{2}\) page (which will not be the case when we turn to the case of odd \(m\)). These non-trivial \(E^{2}\) differentials make the spectral sequence easier to compute, as follows. The differentials \(\alpha\) and \(\beta\) on the second page are duals \(\widetilde{\text{Sq}^{2}}\) of the Steenrod square \(\text{Sq}^{2}\), while the differential \(\gamma\) is a mod 2 reduction followed by the dual of \(\text{Sq}^{2}\)[80]. The Steenrod square dual to \(\beta\) sends a unique generator in \(H^{2}(B|\mathbb{G}|;\mathbb{Z}/2)\) to a unique generator in \(H^{4}(B|\mathbb{G}|;\mathbb{Z}/2)\) (_c.f._ Appendix A.2), so \(\beta\) must be the non-trivial map from \(\mathbb{Z}/2\) to \(\mathbb{Z}/2\), killing off both factors. Similarly, the Steenrod square dual to \(\alpha\) acts on the unique generator of \(H^{3}(B|\mathbb{G}|;\mathbb{Z}/2)\) that comes from the generator \(\tau_{3}\in H^{3}(K(\mathbb{Z},3);\mathbb{Z}/2)\) of the fibre, which we will also label by \(\tau_{3}\). The image is a generator of \(H^{5}(B|\mathbb{G}|;\mathbb{Z}/2)\) that comes from \(\text{Sq}^{2}\tau_{3}\) of \(H^{5}(K(\mathbb{Z},3);\mathbb{Z}/2)\). It generates the \(\mathbb{Z}/2\) factor in the mod 2 cohomology that is a reduction from the \(\mathbb{Z}/2\) factor in the integral cohomology, and not the \(\mathbb{Z}/m\) factor since it is the \(\mathbb{Z}/2\) factor that arise from the fibre's contribution. Therefore, both \(\alpha\) and \(\gamma\) must be non-trivial, with \(\ker\gamma\cong\mathbb{Z}/m\), as indicated in the \(E^{3}\) page in Fig. 4. Before continuing, we pause here to emphasize the importance of using (spin) cobordism, rather than just cohomology, to study 't Hooft anomalies in these theories. At the level of cohomology, the existence of a non-trivial \(\mathbb{Z}/2\)-valued cohomology class \(\text{Sq}^{2}\tau_{3}\in H^{5}(K(\mathbb{Z},3);\mathbb{Z}/2)\) might suggest there is a non-trivial SPT phase on 5-manifolds \(M_{5}\), or equivalently a non-trivial anomaly theory for the corresponding 4d theory, with partition function \[Z=\exp\left(\pi\text{i}\int_{M_{5}}\text{Sq}^{2}\tau_{3}\right), \tag{101}\] where \(\text{Sq}^{2}\tau_{3}\) is pulled back to \(M_{5}\). But, one can use Wu's relation to trade the Steenrod square operation for a Stiefel-Whitney class [74], _viz._ \[Z=\exp\left(\pi\text{i}\int_{M_{5}}w_{2}(TM_{5})\cup\tau_{3}\right). \tag{102}\] If we now restrict to \(M_{5}\) being a spin manifold, then \(w_{2}(TM_{5})\) is trivial and we immediately learn that this SPT phase is trivial. Of course, this 'trivialisation' of the SPT phase corresponding to the cohomology class in \(H^{5}(K(\mathbb{Z},3);\mathbb{Z}/2)\) is automatically captured by the spectral sequence computation for spin bordism, by the non-triviality of the map \(\gamma\) on turning from the second to the third page of the AHSS. Continuing with the bordism computation, we now find that the entries in the range \(p+q\leqslant 5\) stabilise on this \(E^{3}\) page, whence we can read off the spin bordism groups for 2-group symmetry \(\mathbb{G}=\text{U}(1)^{[0]}\times_{\hat{\kappa}}\text{U}(1)^{[1]}\) with even Postnikov class \(\hat{\kappa}\) up to degree-5, which are listed in Table 1. There is still an undetermined group extension in \(\Omega_{3}^{\text{Spin}}(B|\mathbb{G}|)\) but we will argue from the physics point of view in Section 6 that it must be the non-trivial extension \(\mathbb{Z}/(2m)\). In summary, our results are those given in Table 1. 
#### 5.2.3 Odd Postnikov class We now turn to the case where \(m\) is odd, for which the spin bordism computation turns out to be rather more difficult, technically. Firstly, when \(m\) is odd, the extension \(e(\mathbb{Z}/2,\mathbb{Z}/m)\) is unambiguously \(\mathbb{Z}/m\times\mathbb{Z}/2\) (which is, for odd \(m\), isomorphic to \(\mathbb{Z}/(2m)\)). The integral homology of \(B|\mathbb{G}|\) is as written in Eq. (100), and the mod 2 homology groups are now \[H_{\bullet}\left(B|\mathbb{G}|;\mathbb{Z}/2\right)\cong\{\mathbb{Z}/2,0,\mathbb{Z}/ 2,0,0,\mathbb{Z}/2,\ldots\} \tag{5.31}\] by the universal coefficient theorem. Now we feed these results into the AHSS for the trivial point fibration \(\text{pt}\to B|\mathbb{G}|\to B|\mathbb{G}|\). The \(E^{2}\) has no non-trivial differentials in the range of interest. The \(E^{3}\) page is shown in Fig. 5, with a single potentially non-trivial differential in the range of interest, \(\text{d}_{3}:E^{3}_{5,0}\to E^{3}_{2,2}\). If this differential were trivial, then \(\Omega^{\text{Spin}}_{5}(B|\mathbb{G}|)\) would equal \(\mathbb{Z}/m\times\mathbb{Z}/2\), and \(\Omega^{\text{Spin}}_{4}(B|\mathbb{G}|)\) would equal \(\mathbb{Z}\times\mathbb{Z}/2\). The factor of \(\mathbb{Z}/m\) in \begin{table} \begin{tabular}{|c|c c c c c c|} \hline \(i\) & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \(\Omega^{\text{Spin}}_{i}(B|\mathbb{G}|)\), \(\hat{\kappa}\neq 0\) & \(\mathbb{Z}\) & \(\mathbb{Z}/2\) & \(\mathbb{Z}\times\mathbb{Z}/2\) & \(\mathbb{Z}/(2|\hat{\kappa}|)\) & \(\mathbb{Z}\) & \(\mathbb{Z}/|\hat{\kappa}|\) & \(\mathbb{Z}\times\text{Tors}\) \\ \(\Omega^{\text{Spin}}_{i}(B|\mathbb{G}|)\), \(\hat{\kappa}=0\) & \(\mathbb{Z}\) & \(\mathbb{Z}/2\) & \(\mathbb{Z}\times\mathbb{Z}/2\) & \(\mathbb{Z}\times\mathbb{Z}\) & \(\mathbb{Z}\times\mathbb{Z}\times\text{Tors}\) \\ \hline \end{tabular} \end{table} Table 1: Spin bordism groups for the twisted 2-group symmetry \(\mathbb{G}=\text{U}(1)^{[0]}\times_{\hat{\kappa}}\text{U}(1)^{[1]}\). Here, Tors denotes an undetermined pure torsion part. For comparison, we also include the bordism groups in the case \(\hat{\kappa}=0\), which corresponds to a direct product of 0-form and 1-form symmetries. One can thus see how, for both 2d and 4d quantum field theories, local anomalies in the untwisted case morph into global anomalies when the Postnikov class \(\hat{\kappa}\) is turned on, thanks to a Green–Schwarz-like mechanism. \(\Omega_{5}^{\rm Spin}(B|\mathbb{G}|)\) would correspond to the global anomaly that we described in the previous Section, which we know to be valued in \(\mathbb{Z}/m\) as in (5.15). The extra factor of \(\mathbb{Z}/2\) would presumably correspond to a further global anomaly that we have not seen so far by physics arguments. On the other hand, if this differential were the non-trivial map, then \(\Omega_{5}^{\rm Spin}(B|\mathbb{G}|)\) would equal \(\mathbb{Z}/m\), agreeing precisely with the physics account, and \(\Omega_{4}^{\rm Spin}(B|\mathbb{G}|)\) would just equal \(\mathbb{Z}\). It turns out that this all-important \(d_{3}\) differential is _non_-trivial. But being a differential on the third page, there are no straightforwardly-applicable formulae (analogous to the formulae in terms of Steenrod squares that are available on the second page) that we can use to compute it directly. Indeed, our usual AHSS plus ASS techniques are not sufficient to constrain this differential. This differential can nonetheless be evaluated using different arguments,20 which we present in Appendix C. 
The gist of the argument is as follows. Footnote 20: We are very grateful to Arun Debray for sharing this ingenious argument to compute this differential. Appendix C is written by Arun Debray. The central character is a long exact sequence in bordism groups,21 analogous to the Gysin long exact sequence in ordinary homology, of the form Footnote 21: See Appendix E of Ref [32] for a related theorem. The proof of the theorem used in this paper, and that of Ref [32], will appear in future work [81]. \[\cdots\to\Omega_{d}^{\rm Spin}(S(V))\to\Omega_{d}^{\rm Spin}(B|\mathbb{G}|) \to\Omega_{d-2}^{\rm Spin}((B|\mathbb{G}|)^{V-2})\to\Omega_{d}^{\rm Spin}(S( V))\to\ldots \tag{5.32}\] Here \(V\to B|\mathbb{G}|\) is the pullback bundle (which is rank 2) of the tautological line bundle \(L\to B{\rm U}(1)^{[0]}\) along the quotient map \(q:B|\mathbb{G}|\to B{\rm U}(1)^{[0]}\), \(S(V)\to B|\mathbb{G}|\) is the associated sphere bundle of \(V\), and \((B|\mathbb{G}|)^{V-2}\) is the Thom spectrum of the virtual bundle. This long exact sequence in bordism is extremely powerful: by proving that the groups \(\Omega_{5}^{\rm Spin}(S(V))\) and \(\Omega_{3}^{\rm Spin}((B|\mathbb{G}|)^{V-2})\) both lack free and 2-torsion summands, one learns that the group \(\Omega_{5}^{\rm Spin}(B|\mathbb{G}|)\) in the middle, which is our bordism group of interest, also lacks 2-torsion. The \(\mathbb{Z}/2\) factor discussed above is therefore absent (from which we learn that it must be killed by the differential \(d_{3}\) which is therefore non-trivial). Putting things together, we can thus extract all the spin bordism groups for 2-group symmetry \(\mathbb{G}={\rm U}(1)^{[0]}\times_{\hat{\kappa}}{\rm U}(1)^{[1]}\), with odd Postnikov class \(\hat{\kappa}\), up to degree-5. #### 5.2.4 The free part When \(m\) is non-trivial, regardless of its parity, we can also show that the free part of \(\Omega_{6}^{\rm Spin}(B|\mathbb{G}|)\) is given by \(\mathbb{Z}\) coming from the entry \(E_{2,4}^{r}\). First, observe that the free part of the \(\Omega_{6}^{\rm Spin}(B|\mathbb{G}|)\), if there is one at all, can only come from this entry as any other entry from the diagonal \(p+q=6\) is either trivial or pure torsion. Next, we need to show that \(E_{2,4}^{\infty}\) is non-trivial. The only differential that could kill it along the way is \(d_{5}:E_{7,0}^{5}\to E_{2,4}^{5}\). But this differential must be trivial because \(E_{7,0}^{5}\) is pure torsion (as \(E_{7,0}^{2}\), given by \(H_{7}(B|\mathbb{G}|;\mathbb{Z})\cong e(\mathbb{Z}/6,\mathbb{Z}/m)\) in (5.27), is pure torsion), and \(\text{Hom}(G,\mathbb{Z})=0\) if \(G\) is pure torsion. Therefore, there is one free factor in the diagonal \(E_{p,q}^{\infty}\) with \(p+q=6\), and we obtain \[\text{Hom}\left(\Omega_{6}^{\text{Spin}}(B|\mathbb{G}|),\mathbb{Z}\right) \cong\mathbb{Z}. \tag{5.33}\] From these results, we piece together the cobordism group that classifies anomalies for this global 2-group symmetry, valid for even and odd \(m\neq 0\): \[\boxed{H_{I\mathbb{Z}}^{6}\left(MT\left(\text{Spin}\times|\mathbb{G}|\right) \right)\cong\mathbb{Z}/m\times\mathbb{Z}\,.} \tag{5.34}\] In terms of the anomaly coefficients, the space of anomaly theories is classified by \[(r,s)=\left(\frac{1}{6}\left(\mathcal{A}_{3}-\mathcal{A}_{\text{mixed}} \right)\text{ mod }m,\mathcal{A}_{\text{mixed}}\right)\in\mathbb{Z}/m\times \mathbb{Z}\,,\qquad m=|\hat{\kappa}|\,, \tag{5.35}\] in agreement with the results of the previous Subsection. 
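Concretely, the labels in Eq. (5.35) can be computed directly from a \(\mathrm{U}(1)^{[0]}\) charge assignment. The sketch below uses illustrative charges of our own, taking \(\mathcal{A}_{3}=\sum_{i}q_{i}^{3}\) and \(\mathcal{A}_{\rm mixed}=\sum_{i}q_{i}\) in the standard normalisation; it checks that \(r=(\mathcal{A}_{3}-\mathcal{A}_{\rm mixed})/6\) is automatically an integer, since \(q^{3}-q\) is divisible by 6, and that the Green-Schwarz shift of (5.13)-(5.14) only moves \(r\) by multiples of \(\hat{\kappa}\).

```python
def anomaly_labels(charges, kappa_hat):
    """(r mod |kappa_hat|, s) as in Eq. (5.35), for hypothetical U(1)^[0] charges."""
    A3 = sum(q**3 for q in charges)          # cubic coefficient
    A_mixed = sum(charges)                   # mixed U(1)-gravitational coefficient
    assert (A3 - A_mixed) % 6 == 0           # q**3 - q = (q-1)q(q+1) is divisible by 6
    r, m = (A3 - A_mixed) // 6, abs(kappa_hat)
    return (r % m if m else r, A_mixed)

print(anomaly_labels([1, 2, -3], kappa_hat=5))   # A3 = -18, A_mixed = 0  ->  (2, 0)

# The Green-Schwarz counterterm (5.13) shifts A3 by 6*n*kappa_hat, i.e. r by
# n*kappa_hat, so only r mod |kappa_hat| is physical:
for n in range(-3, 4):
    assert ((-18 + 6 * n * 5 - 0) // 6) % 5 == 2
```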
To reiterate, the local anomaly associated with \(c_{1}^{3}\) (_i.e._ whose coefficient is the sum of \(U(1)^{[0]}\) charges cubed) becomes a global anomaly when the 0-form and 1-form symmetries are fused into a 2-group defined so that the mixed 't Hooft anomaly (\(\sim f\wedge F\wedge F\)) vanishes. The mixed gravitational anomaly associated with \(p_{1}c_{1}\) remains a \(\mathbb{Z}\)-valued local anomaly. The result for the spin bordism group in degree-3 in Table 1 is also interesting, implying the presence of an novel global anomaly structure for the corresponding symmetry in two dimensions. We discuss this from a physics perspective in SS6. ## 6 Abelian 2-group enhancement in two dimensions In this Section, we consider 2d avatars of the 4d 2-group anomalies that we have been discussing. In this lower-dimensional version we will find some interesting differences. In SS5.2, we computed the spin bordism groups for the 2-group symmetry \(\mathbb{G}=\text{U}(1)^{[0]}\times_{\hat{\kappa}}\text{U}(1)^{[1]}\). In particular, we obtained \[\Omega_{3}^{\text{Spin}}(B|\mathbb{G}|)\cong\mathbb{Z}/(2m)\] where \(m=|\hat{\kappa}|\). With the fourth bordism group given by \(\mathbb{Z}\), we can put things together to get the cobordism group \[H^{4}_{I\mathbb{Z}}\left(M\left(\text{Spin}\times\mathbb{G}\right)\right)\cong \mathbb{Z}/(2m)\times\mathbb{Z}\,. \tag{108}\] This is similar to the result for \(H^{6}_{I\mathbb{Z}}\) that was pertinent to anomalies in 4d, except for the fact that the global anomaly is here 'twice as fine' as the \(\mathbb{Z}/m\) anomaly in 4d.22 That extra division by 2 will correspond to a subtle new global anomaly associated with the spin structure, which will be our main interest in this Section. Footnote 22: The \(\mathbb{Z}\)-valued local anomaly here is simply the gravitational anomaly associated with \(-\frac{1}{24}p_{1}(R)\text{Tr}\,_{\mathbf{R}}1\subset\Phi_{4}\) in the degree-4 anomaly polynomial, which can always be cancelled by adding neutral fermions and so plays no further role in our discussion. Before we discuss the global \(\mathbb{Z}/(2m)\) anomaly in more depth, let us first discuss the physics interpretation of a 2-group symmetry \(\mathbb{G}\) in two dimensions, which is rather different to the 4d case. Recall that in 4d, the 1-form symmetry \(\text{U}(1)^{[1]}\) was identified with the magnetic 1-form symmetry, with 2-form current proportional to \(\star f\). But in general \(d\geqslant 3\) spacetime dimensions, the magnetic symmetry is a \((d-3)\)-form symmetry, and so there is no such symmetry in 2d. There is, however, a trivially conserved '1-form symmetry' \(\text{U}(1)^{[1]}_{\text{top}}\) whose 2-form current is simply \(j_{\text{top}}=\text{vol}_{2}\), the volume form. This symmetry does not act on any line operators in the theory and so one should not think of it as a physical 1-form symmetry - but it will play a role in what follows. Now suppose there is a global 0-form symmetry \(G^{[0]}\) with background gauge field \(A\) with curvature \(F\). If we first cancel the pure gravitational anomaly by adding neutral fermions, the local anomalies in \(G^{[0]}\) are captured by the degree-4 anomaly polynomial \[\Phi_{4}=\text{Tr}\,_{\mathbf{R}}\left(\frac{F\wedge F}{8\pi^{2}}\right)\,. \tag{109}\] The usual 't Hooftian interpretation of \(\Phi_{4}\neq 0\) would be that, if one tries to gauge \(G^{[0]}\), the anomaly 'breaks' the symmetry \(G^{[0]}\) in the quantum theory. 
However, Sharpe re-interprets \(\Phi_{4}\neq 0\) as indicating a weaker 2-group symmetry structure in the 2d quantum theory [44], corresponding to the extension \(B\text{U}(1)\hookrightarrow\mathbb{G}\twoheadrightarrow G^{[0]}\). In other words, in 2d one can invoke the auxiliary \(\text{U}(1)\) 1-form symmetry to trade an anomaly in a 0-form symmetry for a 2-group structure. This is of course just a simpler version of the 4d situation described in SS5.1, in which a mixed 't Hooft anomaly corresponding to a term \(\sim f\,\wedge\,F\,\wedge\,F\supset\Phi_{6}\) was re-interpreted as signalling a weaker 2-group symmetry. But in this Section we will see that, in the case that one needs a spin structure to define the field theory, it is _not always possible_ to completely absorb the 't Hooft anomaly associated with \(\Phi_{4}\) by a well-defined 2-group structure. Rather, there can be a residual \(\mathbb{Z}/2\)-valued anomaly left over, which one should interpret as an 't Hooft anomaly in the 2-group symmetry itself. ### 'Spin structure anomalies' in two dimensions To see how this works, let us now be more precise and set \(G^{[0]}=\mathrm{U}(1)^{[0]}\). Letting \(\{q_{i}\}\) denote the \(\mathrm{U}(1)^{[0]}\) charges of a set of (say, left-handed) chiral fermions, the anomaly polynomial is \[\Phi_{4}=\frac{1}{2}\mathcal{A}_{2}\,c_{1}\cup c_{1}\,,\qquad\mathcal{A}_{2}:= \sum_{i}q_{i}^{2}\,, \tag{110}\] where \(c_{1}=F/2\pi\) is the first Chern class of the \(\mathrm{U}(1)^{[0]}\) bundle. By inflow, one can use the associated Chern-Simons form \(I_{3}=\frac{\mathcal{A}_{2}}{8\pi^{2}}A\wedge F\) to compute the variation of the effective action under the background gauge transformation \(A\to A+d\lambda^{(0)}\), \[\delta S_{0}=-\mathrm{i}\frac{\mathcal{A}_{2}}{4\pi}\int_{M_{2}}\lambda^{(0)} F\,. \tag{111}\] Now, let us try to implement the philosophy above, and trivialise this anomaly by promoting \(\mathrm{U}(1)^{[0]}\) to a 2-group global symmetry by fusing with \(\mathrm{U}(1)^{[1]}_{\mathrm{top}}\). We turn on a background 2-form gauge field \(B^{(2)}\) that couples to the trivially conserved current \(j_{\mathrm{top}}=\mathrm{vol}_{2}\), via the coupling \[S_{\mathrm{coupling}}=\mathrm{i}\int_{M_{2}}\star j_{\mathrm{top}}B^{(2)}= \mathrm{i}\int_{M_{2}}B^{(2)}, \tag{112}\] because \(\star\mathrm{vol}_{2}=1\). We consider the \(\mathrm{U}(1)^{[0]}\) and \(\mathrm{U}(1)^{[1]}_{\mathrm{top}}\) symmetries to be fused via the toric 2-group structure \(\mathbb{G}=\mathrm{U}(1)^{[0]}\times_{\hat{\kappa}}U(1)^{[1]}_{\mathrm{top}}\), where \(\hat{\kappa}\in\mathbb{Z}\) is the Postnikov class. This prescribes the by-now-familiar transformation (112) on the background gauge fields, under which \[\delta S_{\mathrm{coupling}}=\mathrm{i}\frac{\hat{\kappa}}{2\pi}\int_{M_{2}} \lambda^{(0)}F\,. \tag{113}\] We see that the local anomaly associated with the \(F\wedge F\) term in \(\Phi_{4}\) is completely removed iff \[\hat{\kappa}=\frac{\mathcal{A}_{2}}{2}. \tag{114}\] But, since the Postnikov class \(\hat{\kappa}\) is neccessarily integral, this is possible only when \(\mathcal{A}_{2}\) is an even integer. Thus, there is an order 2 anomaly remaining when \(\mathcal{A}_{2}\) is odd. This chimes perfectly with our computation of the bordism group \(\Omega^{\mathrm{Spin}}_{3}(B|\mathbb{G}|)\) in SS5.2, that we quoted at the beginning of this Section, which implies that there is in general a mod \(2|\hat{\kappa}|\) global anomaly. 
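The counting here is simple enough to automate. In the sketch below (hypothetical charge assignments of our own), \(\mathcal{A}_{2}=\sum_{i}q_{i}^{2}\) is computed along with the would-be Postnikov class \(\mathcal{A}_{2}/2\); since \(q^{2}\equiv q\) mod 2, the obstruction to integrality is exactly the parity of the number \(N_{o}\) of odd charges.

```python
def residual_2d_anomaly(charges):
    """Hypothetical 2d chiral charges -> (A2, Postnikov class absorbing it, residual mod 2)."""
    A2 = sum(q * q for q in charges)
    N_odd = sum(q % 2 for q in charges)      # number of odd charges
    assert A2 % 2 == N_odd % 2               # q**2 and q always have the same parity
    if A2 % 2 == 0:
        return A2, A2 // 2, 0                # anomaly fully traded for 2-group structure
    return A2, None, 1                       # an order 2 anomaly necessarily remains

print(residual_2d_anomaly([1, 1]))   # A2 = 2: absorbed by the choice kappa_hat = 1
print(residual_2d_anomaly([1, 2]))   # A2 = 5: N_o is odd, an order 2 anomaly remains
```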
If one imagines fixing the 2-group structure, in other words fixing an integer \(\hat{\kappa}\), then one can vary the fermion content and ask whether there is a global 't Hooft anomaly for the 2-group \(\mathbb{G}\). We can think of the coupling term (6.5) as a Green-Schwarz counterterm, which effectively shifts the anomaly coefficient by \[\mathcal{A}_{2}\to\mathcal{A}_{2}+2n\hat{\kappa}\,,\qquad n\in\mathbb{Z}, \tag{6.8}\] and so the residual anomaly is clearly \[\mathcal{A}_{2}\ \text{mod}\ 2m\,,\qquad m=|\hat{\kappa}|. \tag{6.9}\] This is the \(\mathbb{Z}/(2m)\)-valued global anomaly in the 2-group symmetry that is detected by \(\Omega_{3}^{\text{Spin}}(B|\mathbb{G}|)\). Even if we choose the minimal value for \(\hat{\kappa}\), a mod 2 anomaly persists iff \[\sum_{i}q_{i}^{2}=1\ \text{mod}\ 2\implies N_{o}=1\ \text{mod}\ 2\,, \tag{6.10}\] where \(N_{o}\) is the number of fermions with odd \(\text{U}(1)^{[0]}\) charges. One can offer a different perspective on this anomaly by thinking about a generator for \(\Omega_{3}^{\text{Spin}}(B|\mathbb{G}|)\), which makes it more transparent how this mod 2 anomaly is related to the requirement of a spin structure. If we look back at the AHSS in Fig. 5, we see that the factor of \(\mathbb{Z}/2\) that ends up in \(\Omega_{3}^{\text{Spin}}(B|\mathbb{G}|)\) comes from the \(E_{2,1}^{2}\) element, which stabilizes straight away from the \(E^{2}\) page. Since \(E_{2,1}^{2}=H_{2}\left(B|\mathbb{G}|;\Omega_{1}^{\text{Spin}}(\text{pt})\right)\), this suggests a generator for this \(\mathbb{Z}/2\) factor can be taken to be a mapping torus \[M^{3}=S^{2}\times S^{1}, \tag{6.11}\] with \(c_{1}(F)\in 2\mathbb{Z}+1\) on the \(S^{2}\) factor, corresponding to an odd-charged monopole, and the non-bounding spin structure on the \(S^{1}\), corresponding to a non-trivial element of \(\Omega_{1}^{\text{Spin}}(\text{pt})\cong\mathbb{Z}/2\). This would suggest that the transformation by \((-1)^{F}\), which counts fermion zero modes, is anomalous on \(S^{2}\) with an odd-charged monopole configuration for the background gauge field.23 Footnote 23: We do not believe, however, that this anomaly can be detected from the torus (in this case, just a circle) Hilbert space, using the methods set out in Ref. [82]. This is because a system with one Weyl fermion with unit \(\text{U}(1)\) charge (together with a Weyl fermion of opposite chirality to cancel the gravitational anomaly) will have an even number of Majorana zero modes, which allows construction of a \(\mathbb{Z}/2\)-graded Hilbert space even in the sector twisted by \(-1\in\text{U}(1)\). (The failure to construct a \(\mathbb{Z}/2\)-graded Hilbert space is one sign of anomalies involving the spin structure, as studied in [82].) We can of course see this from an elementary calculation. For our set of left-handed chiral fermions with charges \(\{q_{i}\}\), the index of the Dirac operator on \(S^{2}\), which recall is \(\text{Ind}(i\not{D}_{2})=n_{L}-n_{R}\) the number of (LH minus RH) zero modes of the Dirac operator, is equal to \[\text{Ind}(i\not{D}_{2})=\int_{S^{2}}\Phi_{2}=\int_{S^{2}}\text{Tr}\,_{\mathbf{R} }\left(\frac{F}{2\pi}\right)=c_{1}(F)\sum_{i}q_{i} \tag{108}\] by the 2d Atiyah-Singer index theorem. Thus, the total number of zero modes \(N:=n_{L}+n_{R}\), which is congruent to \(n_{L}-n_{R}\) mod 2, satisfies \[N=c_{1}(F)\sum_{i}q_{i}\text{ mod }2\,. 
\tag{109}\] Choosing \(c_{1}(F)\) to be odd on \(S^{2}\), as we are free to do, we see that \((-1)^{F}\), which counts these zero modes, flips the sign of the partition function when \[\sum_{i}q_{i}=1\text{ mod }2\implies N_{o}=1\text{ mod }2. \tag{110}\] For such fermion content, the 2-group symmetry and \((-1)^{F}\) are therefore equivalently anomalous. (From the 2-group perspective, the anomalous transformation corresponds simply to the element \(e^{\text{i}\pi}\in\text{U}(1)^{[0]}\).24) Footnote 24: The existence of this global \(\mathbb{Z}/2\)-valued anomaly, despite the fact the anomalous U(1) gauge transformation is clearly connected to the identity, further evidences the fact that global anomalies do not require the existence of ‘large gauge transformations’ captured by, say, \(\pi_{d}(G)\) (when restricting the spacetime topology to be a sphere). This disagreement between homotopy- and bordism-based criteria for global anomalies was discussed in Ref. [25]. In this case, the global anomaly arises because an otherwise local anomaly is only incompletely dealt with by the Green–Schwarz mechanism. This should be contrasted with the situation in four dimensions that we examined in SS5.1. In that case, the corresponding formula for the number of fermion zero modes, on say \(M_{4}=S^{4}\), is \(\text{Ind}(i\not{D}_{4})=\frac{1}{2}c_{1}^{2}\sum_{i}q_{i}^{2}\). But, for a spin 4-manifold, the Atiyah-Singer index theorem implies that \(c_{1}^{2}\) is an _even_ integer, and moreover cancelling the local mixed gravitational anomaly \(\implies\sum_{i}q_{i}=0\implies\sum_{i}q_{i}^{2}=0\text{ mod }2\). The index and therefore the number of zero modes is then necessarily even, meaning there is no analogous anomaly in \((-1)^{F}\). This is why the spin bordism calculation exposes 'only' a mod \(|\hat{\kappa}|\) anomaly in the 4d case, but the finer mod \(2|\hat{\kappa}|\) in 2d.25 Footnote 25: A more mundane way to understand this difference is therefore simply as a consequence of the normalisation of the various characteristic classes on spin manifolds of the relevant dimension. ### Non-spin generalisation We have argued that the order 2 anomaly just described, for a 2d theory with 2-group symmetry \(\mathbb{G}=\text{U}(1)^{[0]}\times_{\hat{\kappa}}\text{U}(1)^{[1]}\), is intrinsically related to the requirement of a spin structure and the use of spin bordism. More broadly, all the 2-group anomalies that we study are sensitive to the full choice of tangential structure - this is especially the case when computing the order of a finite global anomaly. To develop this idea further, we consider a theory defined with the same 2-group structure, but now using a \(\mathrm{Spin}_{c}\) structure. Recall that in \(d\) dimensions the group \(\mathrm{Spin}_{c}(d)\) is \([\mathrm{Spin}(d)\times\mathrm{U}(1)]/\mathbb{Z}/2\), defined by identifying the element \((-1)^{F}\in\mathrm{Spin}(d)\) with the order 2 element of a \(\mathrm{U}(1)\) symmetry, that we here take to be an auxiliary global symmetry used to define spinors; using a \(\mathrm{Spin}_{c}\) rather than a \(\mathrm{Spin}\) structure allows us to define our fermionic theory on non-spin manifolds (indeed, all orientable manifolds up to and including dimension 4 admit a \(\mathrm{Spin}_{c}\) structure). 
Having already computed the integral homology (5.27) of \(B|\mathbb{G}|\) in §5.2, and using the well-known result for the \(\mathrm{Spin}_{c}\)-bordism groups of a point [83] (see the first row in Table 2), it is straightforward to use the AHSS \[E_{p,q}^{2}=H_{p}\big{(}B|\mathbb{G}|;\Omega_{q}^{\text{Spin}_{c}}(\text{pt})\big{)}\implies\Omega_{p+q}^{\text{Spin}_{c}}(B|\mathbb{G}|) \tag{6.15}\] to compute the \(\text{Spin}_{c}\) bordism groups of interest. The \(E^{2}\) page is shown in Fig. 6. Things could not be much simpler; in the region of interest, all differentials are vanishing and all the elements on \(E^{2}\) up to the \(p+q=6\) diagonal stabilize to the last page. The bordism groups are recorded in the second row of Table 2. There is an extension problem that we cannot resolve using the AHSS for \(\Omega_{5}\), but it is not relevant to our discussion here. It is curious that \(\Omega_{3}^{\text{Spin}_{c}}(B|\mathbb{G}|)\cong\mathbb{Z}/m\) compared to \(\Omega_{3}^{\text{Spin}}(B|\mathbb{G}|)\cong\mathbb{Z}/(2m)\). One might naively expect the same classification because the anomaly coefficient \(\mathcal{A}_{2}\) is shifted by \(2n\hat{\kappa}\) for any \(n\in\mathbb{Z}\) from the 2-group transformation of the Green-Schwarz counter-term \(\text{i}n\int_{M_{2}}B^{(2)}\). The puzzle can be resolved by correctly identifying the four integers in \(\Omega_{4}^{\text{Spin}_{c}}(B\text{U}(1))\cong\mathbb{Z}^{4}\) that properly classify all perturbative anomalies, before turning on the 2-group structure. Let \(F\) be the field strength for the background gauge field of the U(1) 0-form global symmetry as before, and let \(G\) denote the field strength associated to the \(\text{Spin}_{c}\) connection.26 Consider \(N\) left-moving fermions with charges \(\{q_{i}\}\) under the ordinary U(1) global symmetry and with charges \(\{g_{i}\}\) under the \(\text{Spin}_{c}\) connection. There is no constraint on the \(q_{i}\), but the \(g_{i}\) must all be odd integers. The anomaly polynomial associated to this matter content is given by Footnote 26: We use the convention for \(\text{Spin}_{c}\) connections, typical in the physics literature (see _e.g._ [84]), whereby \(2G\) corresponds to a properly normalised U(1) connection, and \(G\) itself can be ‘half-integral’. \[\Phi_{4}=\frac{1}{2}\sum_{i=1}^{N}q_{i}^{2}c_{1}(F)^{2}+\sum_{i=1}^{N}g_{i}q_{i}c_{1}(F)c_{1}(G)+\frac{1}{2}\sum_{i=1}^{N}g_{i}^{2}c_{1}(G)^{2}-N\frac{p_{1}(R)}{24}, \tag{6.16}\] where \(p_{1}(R)\) is the first Pontryagin class of the tangent bundle. The four anomaly coefficients that accompany each term in \(\Phi_{4}\) are not strictly independent from one another, so they cannot be taken as labels for each factor of \(\mathbb{Z}\) in \(\Omega_{4}^{\text{Spin}_{c}}(B\text{U}(1))\). To extract a set of correct integer labels, we first choose an integral basis for the anomaly polynomial in terms of linear combinations of \(c_{1}(F)^{2}\), \(c_{1}(F)c_{1}(G)\), \(c_{1}(G)^{2}\), and \(p_{1}(R)\). One option is \[\alpha_{1}=c_{1}(F)^{2},\quad\alpha_{2}=\frac{1}{2}c_{1}(F)^{2}+c_{1}(F)c_{1}(G),\quad\alpha_{3}=\frac{1}{2}c_{1}(G)^{2}-\frac{p_{1}(R)}{24},\quad\alpha_{4}=\frac{p_{1}(R)}{3}. \tag{6.17}\] We know that \(\alpha_{4}\) is integral because \(\int_{M_{4}}p_{1}(R)/3\) is just the signature of \(M_{4}\), which is integral (and takes a minimal value of 1, _e.g._ on \(M_{4}=\mathbb{C}P^{2}\)) for any orientable 4-manifold \(M_{4}\). 
To see the integrality of \(\alpha_{3}\), we apply the Atiyah-Singer index theorem for a \(\text{Spin}_{c}\) connection coupled to a single left-moving fermion with \(q=0\) and \(g=1\). Next, applying the same index theorem to a single left-moving fermion with \(q=1\) and \(g=1\) tells us that \(\alpha_{2}\) is an integer. Finally, the class \(\alpha_{1}\) is integral by definition, and indeed it takes a minimal value of 1. To see this, take \(M_{4}=\mathbb{C}P^{2}\) with \(F\) such that \(\int\!F/2\pi=1\) on a \(\mathbb{C}P^{1}\) subspace. Hence \(c_{1}(F)\) is the generator \(x\) of \(H^{2}(\mathbb{C}P^{2};\mathbb{Z})\), so \(c_{1}(F)^{2}=x^{2}=1\in H^{4}(\mathbb{C}P^{2};\mathbb{Z})\) generates the top cohomology, which evaluates to 1 on the fundamental class \([\mathbb{C}P^{2}]\). In terms of these basis elements, the anomaly polynomial can be written as \(\Phi_{4}=\sum_{i=1}^{4}n_{i}\alpha_{i}\), where the variables \(n_{i}\) are \[n_{1}=\frac{1}{2}\sum_{i=1}^{N}q_{i}(q_{i}-g_{i}),\quad n_{2}=\sum_{i=1}^{N}q_ {i}g_{i},\quad n_{3}=\sum_{i=1}^{N}g_{i}^{2},\quad n_{4}=\frac{1}{8}\sum_{i=1 }^{N}(g_{i}^{2}-1), \tag{111}\] and our arguments above tell us that each \(n_{i}\) must be an integer. This is clearly the case for \(n_{2}\) and \(n_{3}\). To see that \(n_{1}\) is an integer, recall that \(g_{i}\) are all odd. So when \(q_{i}\) is odd, \((q_{i}-g_{i})\) is even, and vice versa. So their product must be even. Similarly, to see that \(n_{4}\) is an integer, we observe that \(g_{i}-1\) and \(g_{i}+1\) are both even and differ by 2, so one of them must be divisible by 4. Hence their product is divisible by 8. When we turn the 2-group structure back on, the shift in anomaly coefficients induced by the Green-Schwarz \(S\supset\text{i}n\int_{M_{2}}B^{(2)}\) counter-term amounts to a shift \[n_{1}\mapsto n_{1}+n\hat{\kappa}. \tag{112}\] It is now obvious that the local anomaly labelled by the integer \(n_{1}\) is truncated down to a mod \(|\hat{\kappa}|\) anomaly. If one first takes care to cancel the anomalies involving the \(\text{Spin}_{c}\) connection, which amounts to choosing charges \(\{g_{i}\}\) such that \(n_{2}\), \(n_{3}\), and \(n_{4}\) vanish, then we find that the 2-group structure can indeed be used to completely capture the remaining 't Hooft anomaly in the ordinary \(\text{U}(1)^{(0)}\) 0-form global symmetry, in the sense proposed by Sharpe [44]. This should be contrasted with the residual order 2 anomaly (108) that cannot be cancelled in this way for the theory defined with a spin structure.27 Footnote 27: At the level of (108), in the \(\text{Spin}_{c}\) case we can think of the charges \(q_{i}\) for the ordinary \(\text{U}(1)^{(0)}\) 0-form symmetry as being ‘twisted’ by the \(\text{Spin}_{c}\) charges \(\{g_{i}\}\), which are necessarily non-zero. This makes the ‘effective’ anomaly coefficient \(2n_{1}\), analogous to \(\mathcal{A}_{2}\) appearing in (108), be always even. Conclusion and outlook In this paper, we explored anomalies in 2-group global symmetries \(\mathbb{G}\) in field theories in \(d=4\) and \(d=2\) dimensions using the cobordism classification. We compute the bordism groups \(\Omega_{d+1}^{H}(B|\mathbb{G}|)\) and \(\Omega_{d+2}^{H}(B|\mathbb{G}|)\) of the classifying space \(B|\mathbb{G}|\) of the nerve of \(\mathbb{G}\), with the tangential structure \(H=\mathrm{SO}\) or Spin. The torsion part of the former group gives us the global anomaly while the free part of the latter gives the perturbative anomaly. 
Throughout the work, we focus on 2-groups \(\mathbb{G}\) whose 0-form and 1-form symmetry parts are compact, connected abelian groups _ergo_ tori, which we refer to as 'toric 2-groups'. As a warm-up example, we first looked at Maxwell's theory in 4d, on a general orientable manifold (not necessarily spin). The 2-group symmetry in this theory contains only the 1-form symmetry part, being a product of the \(\mathrm{U}(1)\) electric 1-form symmetry and the \(\mathrm{U}(1)\) magnetic 1-form symmetry. The familiar mixed anomaly [35] between these two \(\mathrm{U}(1)\) factor can be given in terms of a bordism invariant in the free part of \(\Omega_{6}\). There are also three global anomalies, probed by the torsion subgroup of \(\Omega_{5}\), which can only be seen on non-spin manifolds and can be interpreted as mixed anomalies between the 1-form symmetries and gravity. We also studied anomalies when one of the \(\mathrm{U}(1)\) factor breaks explicitly to its cyclic subgroup due to the presence of a charged scalar, and related them to anomalies in pure Maxwell's theory via anomaly interplay. When both the 0-form and 1-form symmetry constituents of \(\mathbb{G}\) are \(\mathrm{U}(1)\), with the Postnikov class given by \(\hat{\kappa}\in\mathbb{Z}\), we used a variety of techniques to prove that the relevant spin bordism groups are \[\Omega_{5}^{\mathrm{Spin}}(B|\mathbb{G}|)\cong\mathbb{Z}/|\hat{\kappa}|,\quad \mathrm{Hom}\left(\Omega_{6}^{\mathrm{Spin}}(B|\mathbb{G}|),\mathbb{Z}\right) \cong\mathbb{Z}\;. \tag{110}\] The free part of the sixth bordism group signals the existence of a perturbative anomaly in 4d, which is just the mixed anomaly between the \(\mathrm{U}(1)\) 0-form symmetry and gravity. On the other hand, the perturbative cubic anomaly for \(\mathrm{U}(1)\) 0-form symmetry that would be there were \(\hat{\kappa}\) to vanish now transmutes into a discrete global anomaly captured by the torsion part of the 5th bordism group. These cobordism computations are in perfect agreement with the physics argument put forward by Cordova, Dumitrescu, and Intriligator [45]. We next discussed the fate of anomalies for a \(\mathrm{U}(1)\) 0-form symmetry in 2d, for both spin and non-spin manifolds. We revisited the construction, originally due to Sharpe [44], whereby this 0-form symmetry is enhanced to a 2-group symmetry by mixing it with a \(\mathrm{U}(1)\) 1-form symmetry whose current is given by the top form on the underlying manifold. When the anomaly coefficient \(\mathcal{A}_{2}\) for the pure \(\mathrm{U}(1)\) anomaly is even, we found that, by choosing the Postnikov class of the enhanced 2-group to be \(\hat{\kappa}={\cal A}_{2}/2\), the anomaly can be fully absorbed away. On the other hand, in the presence of the spin structure, when \({\cal A}_{2}\) is odd and restricting to spin-manifolds, the most one can do is reduce the anomaly down to an order 2 discrete anomaly, that we intrepret as an anomaly in the spin structure. There are many future directions to pursue that are natural extensions of this work. One obvious alley is to look at more general classes of 2-group beyond our 'toric' assumption, _e.g._ 2-groups whose 0-form symmetry is a non-abelian Lie group, and/or 2-groups whose 0-form or 1-form symmetry is a finite group [49; 50; 51; 52; 54; 55; 56; 57; 58; 55]. One can rigorously analyse anomalies in these 2-group symmetries using the cobordism classification. 
Doing so involves an extension of the mathematical machinery developed in this paper, and would offer a non-trivial check on other methods of seeing the anomalies that have been explored recently in the literature [86]. One can also study how these anomalies relate to the anomalies studied in this paper through a generalised version of anomaly interplay.28 Footnote 28: A version of anomaly interplay valid for theories with 2-group symmetry can be sketched as follows. In principle, one ought to start by defining a smooth 2-homorphism between a pair of 2-groups \(\mathbb{G}\) and \(\mathbb{G}^{\prime}\), which would induce a map between the corresponding bordism spectra. But our description of tangential structures in §3.2 suggests a shortcut: it is good enough to start with an ordinary (1-)homomorphism between the topological groups \(\pi:|\mathbb{G}|\to|\mathbb{G}^{\prime}|\), discarding all information not captured by the nerves. This induces a map between cobordism groups going the other way. These ideas will be developed in future work. Another avenue to pursue concerns a higher-dimensional generalisation of the 2d 2-groups discussed in SS 6, as follows. We saw that one can cancel (almost) all pure U(1) anomalies in 2d by enhancing the U(1) 0-form symmetry to a 2-group by making use of the top-form U(1) current. One then wonders whether there is an analogous story for 'top-group' symmetry structures in higher dimensions, _e.g._ 4-group symmetry in 4d, in which a 'topological 3-form symmetry' with \(j_{4}=\mathrm{vol}_{4}\) is fused with the ordinary 0-form symmetry to kill the cubic anomaly. Moreover, the remnant order 2 anomaly in the 2d case is a direct result of the non-trivial role played by the spin structure. It remains to be seen whether such an interplay between 'top-group' symmetries and spin-structure anomalies persists in higher dimensions. For all these anomalies, there is of course a dual interpretation concerning symmetry protected topological (SPT) phases of matter that are protected by these generalised symmetry types, which are rigorously classified by cobordism. Furthermore, it is conceivable that by extending the bordism computations to more generalised symmetries one could uncover novel anomalies/phases, and suggest corresponding new constraints on the dynamics of theories with such symmetry. ## Acknowledgments We are deeply grateful to Arun Debray for several stimulating discussions, and for supplying a mathematical argument that enabled us to complete this paper. We thank Pietro Benetti Genolini, Philip Boyle Smith, and Ben Gripaios for helpful discussions. We are also grateful to Inaki Garcia Etxebarria and Sakura Schafer-Nameki for asking questions that partly motivated this project. This work was partially carried out while both authors were at DAMTP, University of Cambridge, in which time NL was supported by David Tong's Simons Investigator Award, and we were both supported by the STFC consolidated grant ST/P000681/1. JD has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement 833280 (FLAY), and by the Swiss National Science Foundation (SNF) under contract 200020-204428. NL is supported by the Royal Society of London and by the STFC consolidated grant in 'Particles, Strings and Cosmology' number ST/T000708/1. ## Appendix A Additional cohomology computations In this Appendix, we collect results and computations for various cohomology groups. 
### Cohomology groups of Eilenberg-Maclane spaces In this Subsection, we collect classic results for cohomology groups of various Eilenberg-Maclane spaces \(K(G,n)\). The mod 2 cohomology results below are obtained by Serre in the classic paper [87]. \(\underline{K(\mathbb{Z}/2,2)}\) The mod 2 cohomology ring of an Eilenberg-Maclane space \(B^{2}\mathbb{Z}/2=K(\mathbb{Z}/2,2)\) is given by \[H^{\bullet}(K(\mathbb{Z}/2,2);\mathbb{Z}/2)\cong\mathbb{Z}/2[u_{2},\ \mathrm{Sq}^{1}u_{2},\ \mathrm{Sq}^{2}\ \mathrm{Sq}^{1}u_{2},\ldots,\ \mathrm{Sq}^{2^{k}}\ \mathrm{Sq}^{2^{k-1}}\ldots\ \mathrm{Sq}^{2}\ \mathrm{Sq}^{1}u_{2},\ldots],\] (A.1) where \(u_{2}\) is the unique generator in \(H^{2}\). \(\underline{K(\mathbb{Z},2)}\) The classifying space \(B\mathrm{U}(1)\) of \(\mathrm{U}(1)\) is a \(K(\mathbb{Z},2)\) space. We have \[H^{\bullet}(K(\mathbb{Z},2);\mathbb{Z})=H^{\bullet}(B\mathrm{U}(1);\mathbb{Z} )\cong\mathbb{Z}[c_{1}],\] (A.2) where \(c_{1}\) is the unique generator in \(H^{2}\) called the universal first Chern class. The mod 2 version is given simply by \[H^{\bullet}(K(\mathbb{Z},2);\mathbb{Z}/2)\cong\mathbb{Z}/2[c_{1}], \tag{110}\] where \(c_{1}\) is now the mod 2 reduction of the universal first Chern class. The action of the Steenrod squares are given by \(\text{Sq}^{1}c_{1}=0,\ \text{Sq}^{2}c_{1}=c_{1}^{2}\) which follows directly from the axioms. \(K(\mathbb{Z},3)\) The mod 2 cohomology ring of an Eilenberg-Maclane space \(K(\mathbb{Z},3)\) is given by \[H^{\bullet}\left(K(\mathbb{Z},3);\mathbb{Z}/2\right)\cong\mathbb{Z}/2[\tau_{3},\text{Sq}^{2}\tau_{3},\text{Sq}^{4}\text{Sq}^{2}\tau_{3},\text{Sq}^{8}\text{Sq }^{4}\text{Sq}^{2}\tau_{3},\ldots], \tag{111}\] where \(\tau_{3}\) is the unique generator in \(H^{3}(K(\mathbb{Z},3);\mathbb{Z}/2)\). Integral cohomology groups of \(K(\mathbb{Z},3)\) can be obtained by applying the universal coefficient theorem to the homology groups computed in Ref. [88]. The resulting cohomology groups are shown in Table 3 below. ### Mod 2 cohomology ring of the classifying space of an abelian 2-group In this Subsection, we use the Serre spectral sequence to calculate the mod 2 cohomlogy ring of \(B|\mathbb{G}|\) when \(\mathbb{G}\) is the abelian 2-group \(\text{U}(1)^{[0]}\times_{\hat{\kappa}}\text{U}(1)^{[1]}\) studied in SS5.2 with the Postnikov class \(\hat{\kappa}\). The cohomological Serre spectral sequence induced by the fibration \(K(\mathbb{Z},3)\to B|\mathbb{G}|\to K(\mathbb{Z},2)\) is \[E_{2}^{p,q}=H^{p}\left(K(\mathbb{Z},2);H^{q}\left(K(\mathbb{Z},3);\mathbb{Z}/2 \right)\right)\Rightarrow H^{p+q}\left(B|\mathbb{G}|;\mathbb{Z}/2\right). \tag{112}\] The entries on the \(E_{2}\) page can be computed from the results given in SSA.1. There is no non-trivial differentials on the \(E_{2}\) or \(E_{3}\) pages in the range of degrees we are interested in, and the \(E_{2}\) entries in this range propagate to the \(E_{4}\) page, shown in Fig. 7. As explained in SS5.2, the differentials \(\alpha\) and \(\beta\) on the \(E_{4}\) page are given by contraction with the Postnikov class \(\hat{\kappa}\). Thus, they are non-trivial if and only if \(\hat{\kappa}\) is odd. In this case, the resulting \(E_{5}\) page is shown on the right-hand side of Fig. 7. 
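The graded dimensions used on the spectral sequence pages above follow from these ring presentations by counting monomials. A small counting sketch (ours, not part of the original computation), with the generators of \(H^{\bullet}(K(\mathbb{Z},3);\mathbb{Z}/2)\) placed in degrees \(3,5,9,\ldots\) as listed above:

```python
def graded_dims(gen_degrees, top):
    """Dimensions of the polynomial algebra F_2[x_1, x_2, ...] with the given
    generator degrees, in degrees 0..top (a simple monomial count)."""
    dims = [1] + [0] * top
    for d in gen_degrees:
        for n in range(d, top + 1):
            dims[n] += dims[n - d]
    return dims

print(graded_dims([2], 8))        # H^*(K(Z,2);Z/2) = Z/2[c_1]:      [1,0,1,0,1,0,1,0,1]
print(graded_dims([3, 5, 9], 8))  # H^*(K(Z,3);Z/2), generators in
                                  # degrees 3, 5, 9:                 [1,0,0,1,0,1,1,0,1]
```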
The entries in the range of interest stabilise on this page. On the other hand, when \(\hat{\kappa}\) is even, the entries stabilise already on the \(E_{4}\) page. In either case, we can read off the mod 2 cohomology groups of \(B|\mathbb{G}|\) in this range. \begin{table} \begin{tabular}{|c|c c c c c c c c c|} \hline \(n\) & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \(H^{n}(K(\mathbb{Z},3);\mathbb{Z})\) & \(\mathbb{Z}\) & 0 & 0 & \(\mathbb{Z}\) & 0 & 0 & \(\mathbb{Z}/2\) & 0 & \(\mathbb{Z}/3\) \\ \hline \end{tabular} \end{table} Table 3: Integral cohomology groups up to degree 8 of \(K(\mathbb{Z},3)\). On the pages of the Serre spectral sequence, there are non-trivial \(\mathrm{Sq}^{2}\) actions on the mod 2 cohomology rings of the fibre \(K(\mathbb{Z},3)\) and the base \(K(\mathbb{Z},2)\) that converge to \(\mathrm{Sq}^{2}\) actions on \(H^{\bullet}\left(B|\mathbb{G}|;\mathbb{Z}/2\right)\) (Theorem 6.15 of [89]). In particular, in the even \(\hat{\kappa}\) case, we have \[\begin{split}\mathrm{Sq}^{2}:H^{2}\left(B|\mathbb{G}|;\mathbb{Z}/2\right)&\to H^{4}\left(B|\mathbb{G}|;\mathbb{Z}/2\right)\\ c_{1}&\mapsto c_{1}^{2}\end{split}\] (A.6) and \[\begin{split}\mathrm{Sq}^{2}:H^{3}\left(B|\mathbb{G}|;\mathbb{Z}/2\right)&\to H^{5}\left(B|\mathbb{G}|;\mathbb{Z}/2\right)\\ \tau_{3}&\mapsto\mathrm{Sq}^{2}\tau_{3}\end{split}\] (A.7) where \(c_{1}\), \(\tau_{3}\), \(c_{1}^{2}\), and \(\mathrm{Sq}^{2}\tau_{3}\) are now taken to be the unique generators for \(H^{i}(B|\mathbb{G}|;\mathbb{Z}/2)\), \(i=2,3,4,5\), respectively. ## Appendix B Additional bordism calculations ### Maxwell Here we compute the bordism groups for the symmetry type \[H={\rm SO}\times\mathbb{G},\qquad B|\mathbb{G}|=B^{2}{\rm U}(1)_{e}\times B^{2}{\rm U}(1)_{m}\,,\] (B.1) relevant to 'free Maxwell theory', _i.e._ U(1) gauge group with no dynamical matter, which has intact electric and magnetic U(1) 1-form global symmetries, discussed in §4. Since \(\mathrm{U}(1)\simeq S^{1}\simeq B\mathbb{Z}\), the classifying space for a \(\mathrm{U}(1)\) 1-form symmetry, \(B^{2}{\rm U}(1)\simeq K(\mathbb{Z},3)\), is an Eilenberg-MacLane space whose cohomology is known. Using the Künneth theorem we find the integral homology of \(B|\mathbb{G}|=B^{2}{\rm U}(1)\times B^{2}{\rm U}(1)\), as recorded in Table 5. We also need the oriented bordism groups of a point [90], which are recorded in Table 6. We next write down the AHSS for oriented bordism associated to the trivial fibration of \(|\mathbb{G}|\) by a point, for which the second page is given by \[E_{p,q}^{2}=H_{p}\left((B^{2}{\rm U}(1))^{2};\Omega_{q}^{\rm SO}({\rm pt})\right)\,.\] (B.2) This is shown in Fig. 8. We want to read off the fifth and sixth bordism groups, which receive contributions from the \(p+q=5\) (blue) and \(p+q=6\) (orange) diagonals respectively. For the \begin{table} \begin{tabular}{|c|c c c c c c c c c|} \hline \(i\) & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \(\Omega_{i}^{\rm SO}({\rm pt})\) & \(\mathbb{Z}\) & 0 & 0 & 0 & \(\mathbb{Z}\) & \(\mathbb{Z}/2\) & 0 & 0 & \(\mathbb{Z}^{2}\) \\ \hline \end{tabular} \end{table} Table 6: Oriented bordism groups for a point [90]. 
\begin{table} \begin{tabular}{|c|c c c c c c c c|} \hline \(i\) & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \(H_{i}(K(\mathbb{Z},3);\mathbb{Z})\) & \(\mathbb{Z}\) & 0 & 0 & \(\mathbb{Z}\) & 0 & \(\mathbb{Z}/2\) & 0 & \(\mathbb{Z}/3\) \\ \(H_{i}(K(\mathbb{Z},3);\mathbb{Z}/2)\) & \(\mathbb{Z}/2\) & 0 & 0 & \(\mathbb{Z}/2\) & 0 & \(\mathbb{Z}/2\) & \(\mathbb{Z}/2\) & 0 \\ \hline \(H_{i}((B^{2}{\rm U}(1))^{2};\mathbb{Z})\) & \(\mathbb{Z}\) & 0 & 0 & \(\mathbb{Z}^{2}\) & 0 & \((\mathbb{Z}/2)^{2}\) & \(\mathbb{Z}\) & \((\mathbb{Z}/3)^{2}\) \\ \(H_{i}((B^{2}{\rm U}(1))^{2};\mathbb{Z}/2)\) & \(\mathbb{Z}/2\) & 0 & 0 & \((\mathbb{Z}/2)^{2}\) & 0 & \((\mathbb{Z}/2)^{2}\) & \((\mathbb{Z}/2)^{3}\) & 0 \\ \hline \end{tabular} \end{table} Table 5: Integral and mod 2 homology groups of \((B^{2}{\rm U}(1))^{2}=(K(\mathbb{Z},3))^{2}\), relevant to Maxwell theory. sixth bordism group, there is a single factor \(E_{6,0}^{2}=\mathbb{Z}\) to consider. The only possible differential is \(d^{5}:E_{6,0}^{5}\to E_{0,5}^{5}\cong\mathbb{Z}/2\), whose kernel is obviously isomorphic to \(\mathbb{Z}\), meaning that \(E_{6,0}^{\infty}=\mathbb{Z}\). Thus, \[\Omega_{6}^{\rm SO}\left((B^{2}{\rm U}(1))^{2}\right)=\mathbb{Z}\,. \tag{104}\] For the fifth bordism group, the \(E_{5,0}^{2}=(\mathbb{Z}/2)^{2}\) element stabilizes to the last page, because the only possible differential is \(d^{4}:E_{5,0}^{4}\to E_{0,4}^{4}:\mathbb{Z}/2\mapsto\mathbb{Z}\) which is therefore the zero map. For the \(E_{0,5}^{2}\) element, it is again the \(d^{5}:E_{6,0}^{5}\to E_{0,5}^{5}\) map that is important. We can in fact deduce that this differential must be zero simply because it hits the zeroth column which records the bordism groups of a point, and since there is a canonical split \(\Omega_{\bullet}^{\rm SO}(X)\cong\Omega_{\bullet}^{\rm SO}(\text{pt})\oplus \widetilde{\Omega}_{\bullet}^{\rm SO}(X)\) for \(X\) connected, where the second factor is the reduced bordism of \(X\), the \(E_{0,5}\) factor must stabilize to the last page.29 This also makes it obvious that there can be no non-trivial extension, and so we can read off \[\Omega_{5}^{\rm SO}\left(\left(B^{2}{\rm U}(1)\right)^{2}\right)=\left(\mathbb{Z} /2\right)^{3}.\] (B.4) It is easy to read off the lower-degree bordism groups too, and we summarize the results in Table 7. ### Scalar QED with charge-2 boson Next we compute the bordism groups for \[H={\rm SO}\times\mathbb{G}^{\prime},\qquad B|\mathbb{G}^{\prime}|=B^{2} \mathbb{Z}/2_{e}\times B^{2}{\rm U}(1)_{m}\,,\] (B.5) relevant to QED with a charge-2 boson, which has the effect of breaking the \({\rm U}(1)_{e}\) 1-form symmetry down to a discrete \(\mathbb{Z}/2_{e}\) subgroup. Using the Kunneth theorem we find the integral homology of \(B^{2}\mathbb{Z}/2\times B^{2}{\rm U}(1)\), as recorded in Table 8. We next write down the AHSS for oriented bordism associated to the trivial fibration \({\rm pt}\to B^{2}\mathbb{Z}/2\times B^{2}{\rm U}(1)\to B^{2}\mathbb{Z}/2 \times B^{2}{\rm U}(1)\), for which the second page is given by (see Fig. 9), \[E_{p,q}^{2}=H_{p}\left(B^{2}\mathbb{Z}/2\times B^{2}{\rm U}(1);\Omega_{q}^{ \rm SO}({\rm pt})\right)\,.\] (B.6) We want to read off \(\Omega_{5}\) and \(\Omega_{6}\) from the stabilization of elements on the \(p+q=5\) and \(p+q=6\) diagonals respectively. 
For the fifth bordism group, very similar arguments to those of B.1 tell us that \(E_{5,0}^{\infty}=(\mathbb{Z}/2)^{3}\) and \(E_{0,5}^{\infty}=\mathbb{Z}/2\), and the canonical split \(\Omega_{\bullet}^{\rm SO}(X)\cong\Omega_{\bullet}^{\rm SO}({\rm pt})\oplus \widetilde{\Omega}_{\bullet}^{\rm SO}(X)\) again tells us that the extension problem is trivial, giving: \[\Omega_{5}^{\rm SO}\left(B^{2}\mathbb{Z}/2\times B^{2}{\rm U}(1)\right)=( \mathbb{Z}/2)^{4}\,.\] (B.7) \begin{table} \begin{tabular}{|c|c c c c c c c|} \hline \(i\) & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \(\Omega_{i}^{\rm SO}\left(\left(B^{2}U(1)\right)^{2}\right)\) & \(\mathbb{Z}\) & 0 & 0 & \(\mathbb{Z}^{2}\) & \(\mathbb{Z}\) & \((\mathbb{Z}/2)^{3}\) & \(\mathbb{Z}\) \\ \hline \end{tabular} \end{table} Table 7: Oriented bordism groups up to degree 6, for a pair of \({\rm U}(1)\) 1-form symmetries. \begin{table} \begin{tabular}{|c|c c c c c c c c|} \hline \(i\) & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \(H_{i}(B^{2}\mathbb{Z}/2;\mathbb{Z})\) & \(\mathbb{Z}\) & 0 & \(\mathbb{Z}/2\) & 0 & \(\mathbb{Z}/4\) & \(\mathbb{Z}/2\) & \(\mathbb{Z}/2\) & \(\mathbb{Z}/2\) \\ \(H_{i}(B^{2}\mathbb{Z}/2;\mathbb{Z}/2)\) & \(\mathbb{Z}/2\) & 0 & \(\mathbb{Z}/2\) & \(\mathbb{Z}/2\) & \(\mathbb{Z}/2\) & \((\mathbb{Z}/2)^{2}\) & \((\mathbb{Z}/2)^{2}\) & \((\mathbb{Z}/2)^{2}\) \\ \hline \(H_{i}(B^{2}\mathbb{Z}/2\times B^{2}{\rm U}(1);\mathbb{Z})\) & \(\mathbb{Z}\) & 0 & \(\mathbb{Z}/2\) & \(\mathbb{Z}\) & \(\mathbb{Z}/4\) & \((\mathbb{Z}/2)^{3}\) & \(\mathbb{Z}/2\) & \((\mathbb{Z}/2)^{2\times\mathbb{Z}/3\times\mathbb{Z}/4}\) \\ \(H_{i}(B^{2}\mathbb{Z}/2\times B^{2}{\rm U}(1);\mathbb{Z}/2)\) & \(\mathbb{Z}/2\) & 0 & \(\mathbb{Z}/2\) & \((\mathbb{Z}/2)^{2}\) & \(\mathbb{Z}/2\) & \((\mathbb{Z}/2)^{4}\) & \((\mathbb{Z}/2)^{4}\) & \((\mathbb{Z}/2)^{4}\) \\ \hline \end{tabular} \end{table} Table 8: Integral and mod 2 homology groups of \(B^{2}\mathbb{Z}/2\) and \(B^{2}\mathbb{Z}/2\times B^{2}{\rm U}(1)\). Recall the homology of \(B^{2}{\rm U}(1)\) is recorded in the first two lines of Table 5. For \(\Omega_{6}\), there is a potentially non-vanishing differential on the fifth page, \(d^{5}:E_{7,0}\to E_{2,5}:(\mathbb{Z}/2)^{2}\times\mathbb{Z}/3\times\mathbb{Z}/4 \mapsto\mathbb{Z}/2\). Without knowing this differential, one cannot read off \(\Omega_{6}\) - the most we can conclude is that it equals \(\mathbb{Z}/2\), \(\mathbb{Z}/4\), or \(\mathbb{Z}/2\times\mathbb{Z}/2\). In any case, we have that \(\Omega_{6}^{\mathrm{SO}}\left(B^{2}\mathbb{Z}/2\times B^{2}\mathrm{U}(1)\right)\) is pure torsion, which is all that's relevant for discussing the corresponding anomalies in 4d. ## Appendix C Computation of a key differential This Appendix is written by Arun Debray. In this appendix, we finish the computation from SS5.2.3, showing that if \(\mathbb{G}\) is a 2-group with 0- and 1-form parts both \(\mathrm{U}(1)\) and \(k\)-invariant equal to \(m\) times the generator of \(H^{4}(B\mathrm{U}(1);\mathbb{Z})\cong\mathbb{Z}\), where \(m\) is odd, then \(\Omega_{5}^{\mathrm{Spin}}(B|\mathbb{G}|)\cong\mathbb{Z}/m\). After the calculation in SS5.2.3, the only ambiguity is in the 2-torsion, which could be either 0 or \(\mathbb{Z}/2\), depending on the value of a \(d_{3}\) in the Atiyah-Hirzebruch spectral sequence in Fig. 5. This differential resisted several of our standard techniques to address it, as did an analogous differential in the Adams spectral sequence computing the 2-completion of \(\Omega_{5}^{\mathrm{Spin}}(B|\mathbb{G}|)\). 
Here we finish the computation of the 2-torsion subgroup of \(\Omega_{5}^{\mathrm{Spin}}(B|\mathbb{G}|)\) in a different way, resolving the differentials only implicitly. Specifically, we fit \(\Omega_{\bullet}^{\mathrm{Spin}}(B|\mathbb{G}|)\) into a long exact sequence (C.2) and compute enough other terms in the long exact sequence to determine \(\Omega_{5}^{\mathrm{Spin}}(B|\mathbb{G}|)\). This long exact sequence is a version of the Gysin sequence for spin bordism, and one of its homomorphisms has a geometric interpretation as a _Smith homomorphism_, a map between bordism groups defined by sending a manifold \(M\) to a smooth representative of the Poincare dual to a characteristic class of \(M\). The general version of (C.2) was written down in [32] to study Smith homomorphisms and their applications to anomalies of quantum field theories; see there and [91; 92] for more on Smith homomorphisms and their role in physics. **Theorem C.1** ([32]).: _Let \(V\to X\) be a vector bundle of rank \(r\) over a CW complex \(X\), \(\pi\colon S(V)\to X\) be the sphere bundle of \(V\), \(X^{V-r}\) be the Thom spectrum of the rank-zero virtual vector bundle \(V-\underline{\mathbb{R}}^{r}\), and \(E_{\bullet}\) be a generalised homology theory. Then there is a long exact sequence_ \[\cdots\longrightarrow E_{n}(S(V))\xrightarrow{\pi*}E_{n}(X)\xrightarrow{\psi }E_{n-r}(X^{V-r})\longrightarrow E_{n-1}(S(V))\longrightarrow\ldots\] (C.1) The map \(\psi\) is the Smith homomorphism, but we will not need that fact; see [32] for more information. Proof.: We will show that the Thom space \(\mathrm{Th}(X,V)\) of \(V\) is the homotopy cofibre of \(\pi\), so there is the Puppe long exact sequence \(\cdots\to E_{n}(S(V))\to E_{n}(X)\to\widetilde{E}_{n}(\mathrm{Th}(X,V))\to E _{n-1}(S(V))\to\cdots\). Once this is established, (C.1) follows: \(X^{V-r}\simeq\Sigma^{-r}\Sigma_{+}^{\infty}\mathrm{Th}(X,V)\), and \(\Sigma_{+}^{\infty}\) induces an isomorphism for any generalised homology theory and \(\Sigma^{-r}\) shifts generalised homology down in grading by \(r\), yielding (C.1).30 Footnote 30: The long exact sequence (C.1), and indeed this whole appendix, could have been done with \(\mathrm{Th}(X,V)\), but once we apply bordism, it is conceptually helpful to replace \(\mathrm{Th}(X,V)\) by \(X^{V-r}\) — \(\widetilde{\Omega}_{k}^{\mathrm{Spin}}(\mathrm{Th}(X,V))\) is a bordism group of \((k-r)\)-dimensional manifolds, and \(\Omega_{k}^{\mathrm{Spin}}(X^{V-r})\) is a bordism group of \(k\)-manifolds. The degree shift must occur somewhere, and we believe our choice is clearest. The homotopy cofibre of an inclusion \(A\hookrightarrow X\) of CW complexes such that the image of \(A\) is a union of cells is the quotient \(X/A\); in general, one can compute the homotopy cofibre of a map \(f\colon A\to X\) between CW complexes by changing \(f\) and \(X\) by a homotopy so that \(f\) is indeed a cellular inclusion. For \(\pi\colon S(V)\to X\) from the theorem statement, replace \(\pi\) with the inclusion \(i\colon S(V)\hookrightarrow D(V)\) of \(S(V)\) into the disc bundle \(D(V)\) of \(V\to X\); the map \(\pi^{\prime}\colon D(V)\to X\) is a homotopy equivalence and \(\pi^{\prime}\circ i=\pi\), and the CW structures on \(X\) and the standard CW structure on \(D^{n}\) can be used to put CW structures on \(S(V)\) and \(D(V)\) such that the image of \(i\) is a union of cells. Thus the homotopy cofibre of \(\pi\colon S(V)\to V\) is the quotient \(D(V)/S(V)\), which is by definition \(\operatorname{Th}(X,V)\). 
We are interested in \(E_{\bullet}=\Omega_{\bullet}^{\operatorname{Spin}}\), and specifically in 2-torsion. To avoid worrying about torsion for other primes, we _localise at \(2\)_, meaning we tensor with the ring \(\mathbb{Z}_{(2)}\) of rational numbers whose denominators in lowest terms are odd. If \(A\) is a finitely generated abelian group, so that \(A\) is a direct sum of a free abelian group \(\mathbb{Z}^{r}\) and cyclic groups \(\mathbb{Z}/p_{i}^{e_{i}}\) of prime-power order, localising at \(2\) has the effect of replacing \(\mathbb{Z}^{r}\) with \((\mathbb{Z}_{(2)})^{r}\), preserving all \(2^{e_{i}}\)-torsion, and sending all odd-prime-power torsion to zero. Since we are trying to determine the 2-torsion subgroup of \(\Omega_{5}^{\operatorname{Spin}}(B|\mathbb{G}|)\), which is a finitely generated abelian group, we lose no relevant information by localising at \(2\). Moreover, since \(\mathbb{Z}_{(2)}\) is a flat ring, tensoring with \(\mathbb{Z}_{(2)}\) preserves long exact sequences (_i.e._ 2-localisation is exact), so we can still apply (C.1) after 2-localising. For any 2-group \(\mathbb{H}\), the classifying space \(B|\mathbb{H}|\) has a CW structure (_e.g._ because it is the geometric realization of a simplicial space), so we can invoke Theorem C.1 for \(X=B|\mathbb{G}|\). Quotienting \(\mathbb{G}\) by its 1-form symmetry defines a map to the 0-form part of \(\mathbb{G}\), which on classifying spaces induces a map \(q\colon B|\mathbb{G}|\to B\text{U}(1)\). Let \(V\to B|\mathbb{G}|\) be the pullback of the tautological complex line bundle \(L\to B\text{U}(1)\) by \(q\). Then Theorem C.1, together with exactness of 2-localisation, gives us a long exact sequence \[\dots\to\Omega_{5}^{\operatorname{Spin}}(S(V))\otimes\mathbb{Z}_{(2)}\to \Omega_{5}^{\operatorname{Spin}}(B|\mathbb{G}|)\otimes\mathbb{Z}_{(2)}\to \Omega_{3}^{\operatorname{Spin}}((B|\mathbb{G}|)^{V-2})\otimes\mathbb{Z}_{(2)}\to\dots\] (C.2) We will show in Theorem C.3 that \(\Omega_{5}^{\operatorname{Spin}}(S(V))\otimes\mathbb{Z}_{(2)}\) and \(\Omega_{3}^{\operatorname{Spin}}((B|\mathbb{G}|)^{V-2})\otimes\mathbb{Z}_{(2)}\) both vanish; exactness then implies \(\Omega_{5}^{\operatorname{Spin}}(B|\mathbb{G}|)\otimes\mathbb{Z}_{(2)}=0\) as well, meaning \(\Omega_{5}^{\operatorname{Spin}}(B|\mathbb{G}|)\) has no 2-torsion, which is what we want to prove in this appendix. Before doing so, we first need to figure out what \(S(V)\) is: **Lemma C.2**.: _The sphere bundle \(S(V)\) is homotopy equivalent to \(K(\mathbb{Z},3)\), and this homotopy equivalence identifies the bundle map \(S(V)\to B|\mathbb{G}|\) with the map \(K(\mathbb{Z},3)\to B|\mathbb{G}|\) given by inclusion of the 1-form U\((1)\) symmetry._ Proof.: Let \(q\colon B|\mathbb{G}|\to B\text{U}(1)\) be the classifying map for \(V\). Then the sphere bundle of \(V\) is the pullback of the sphere bundle of the tautological line bundle \(L\to B\text{U}(1)\), _i.e._ there is a homotopy pullback square \[\begin{CD}S(V)@>{}>{}>S(L)\\ @V{}V{}V@V{}V{}V\\ B|\mathbb{G}|@>{q}>{}>B\text{U}(1)\end{CD}\] (C.3) The sphere bundle of a complex line bundle is the associated principal \(\mathrm{U}(1)\)-bundle; for \(L\to B\mathrm{U}(1)\), this is the tautological bundle \(E\mathrm{U}(1)\to B\mathrm{U}(1)\), meaning \(S(L)\) is contractible. 
In general, a homotopy pullback in which one of the two legs is contractible is the homotopy fibre (denoted \(\mathrm{hofb}(\text{-})\)) of the other leg, meaning the map \(S(V)\to B|\mathbb{G}|\) can be identified with the canonical map \(\mathrm{hofb}(q)\to B|\mathbb{G}|\). Recall that \(q\) arose from a short exact sequence of topological groups \[1\longrightarrow G\overset{i}{\longrightarrow}H\overset{p}{\longrightarrow}K\longrightarrow 1 \tag{104}\] by applying the classifying space functor; specifically, \(G=|\mathrm{U}(1)[1]|\), \(H=|\mathbb{G}|\), and \(K=\mathrm{U}(1)\), and \(q=Bp\). The classifying space functor turns short exact sequences of groups into fibre sequences of spaces, meaning that for (104) the homotopy fibre of \(Bp\) is homotopy equivalent to \(BG\), and this homotopy equivalence identifies the canonical map \(\mathrm{hofb}(Bp)\to BH\) with \(Bi\colon BG\to BH\). For our specific choices of \(G\), \(H\), and \(K\), this tells us that up to homotopy equivalence, the map \(S(V)\to B|\mathbb{G}|\) is the map \(B|\mathrm{U}(1)[1]|\simeq K(\mathbb{Z},3)\to B|\mathbb{G}|\) as claimed in the theorem statement. **Lemma C.3**.: \(\Omega_{5}^{\text{Spin}}(K(\mathbb{Z},3))\otimes\mathbb{Z}_{(2)}\) _and \(\Omega_{3}^{\text{Spin}}((B|\mathbb{G}|)^{V-2})\otimes\mathbb{Z}_{(2)}\) both vanish._ Proof.: We check both of these using the Adams spectral sequence; the relevant differentials and extension questions are trivial for degree reasons, so these computations are straightforward analogues of Beaudry and Campbell's computations in Ref. [93]. We direct the reader to [93] for more on our proof strategy and notation. The Adams spectral sequence computes 2-completed spin bordism, not 2-localised spin bordism, but because \(\Omega_{5}^{\text{Spin}}(K(\mathbb{Z},3))\) and \(\Omega_{3}^{\text{Spin}}((B|\mathbb{G}|)^{V-2})\) are finitely generated abelian groups, this distinction is not important: the 2-localisation of a finitely generated abelian group \(A\) vanishes if and only if the 2-completion of \(A\) vanishes, if and only if \(A\) lacks both free and 2-torsion summands. For \(K(\mathbb{Z},3)\), we need as input the \(\mathcal{A}(1)\)-module structure on \(H^{\bullet}(K(\mathbb{Z},3);\mathbb{Z}/2)\) in degrees 6 and below. This was computed by Serre [87, SS10]: \(H^{\bullet}(K(\mathbb{Z},3);\mathbb{Z}/2)\) is a polynomial algebra on generators \(\tau_{3}\) in degree 3, \(\mathrm{Sq}^{2}\tau_{3}\) in degree 5, and other generators in degrees too high to matter to us. Using this, one can compute that if \(\hat{\mathcal{O}}\) denotes the \(\mathcal{A}(1)\)-module \(\mathcal{A}(1)/(\mathrm{Sq}^{1},\mathrm{Sq}^{2}\mathrm{Sq}^{3})\), then there is an isomorphism of \(\mathcal{A}(1)\)-modules \[H^{\bullet}(K(\mathbb{Z},3);\mathbb{Z}/2)\cong\mathbb{Z}/2\oplus\Sigma^{3} \hat{\mathcal{O}}\oplus P, \tag{105}\] where \(P\) is concentrated in degrees 7 and above, hence is irrelevant for our computation. We draw this decomposition in Fig. 10, left. Liulevicius [94, Theorem 3] computes \(\mathrm{Ext}_{\mathcal{A}(1)}(\mathbb{Z}/2)\) and Beaudry-Campbell [93, Figure 29] compute \(\mathrm{Ext}_{\mathcal{A}(1)}(\hat{\mathcal{O}})\), so we can draw the \(E_{2}\)-page of the Adams spectral sequence in Fig. 10, right. The \(E_{2}\)-page vanishes in topological degree 5, _i.e._ for \(t-s=5\), so the \(E_{\infty}\)-page must also vanish in that degree, and we conclude \(\Omega_{5}^{\text{Spin}}(K(\mathbb{Z},3))\otimes\mathbb{Z}_{(2)}=0\). 
For \((B|\mathbb{G}|)^{V-2}\), using the description of the \(\mathcal{A}(1)\)-module structure on \(H^{\bullet}(B|\mathbb{G}|;\mathbb{Z}/2)\) in low degrees from §A.2 and the way Stiefel-Whitney classes twist the Steenrod squares of a Thom spectrum (see [93, §3.3]), we obtain an isomorphism of \(\mathcal{A}(1)\)-modules \[H^{\bullet}((B|\mathbb{G}|)^{V-2};\mathbb{Z}/2)\cong C\eta\oplus P^{\prime},\] (C.6) where \(C\eta:=\Sigma^{-2}\widetilde{H}^{\bullet}(\mathbb{CP}^{2};\mathbb{Z}/2)\) and \(P^{\prime}\) is concentrated in degrees 5 and above, hence is irrelevant to our computation. We draw this isomorphism in Fig. 11, left. \(\mathrm{Ext}_{\mathcal{A}(1)}(C\eta)\) is computed in [93, Example 4.5.6 and Figure 22], so we can draw the \(E_{2}\)-page of the Adams spectral sequence in Fig. 11, right. The \(E_{2}\)-page is empty in topological degree 3, so the \(E_{\infty}\)-page is also empty, and we conclude \(\Omega_{3}^{\rm Spin}((B|\mathbb{G}|)^{V-2})\otimes\mathbb{Z}_{(2)}=0\).
2302.03786
Analyzing the Performance of Deep Encoder-Decoder Networks as Surrogates for a Diffusion Equation
Neural networks (NNs) have proven to be a viable alternative to traditional direct numerical algorithms, with the potential to accelerate computational time by several orders of magnitude. In the present paper we study the use of encoder-decoder convolutional neural network (CNN) as surrogates for steady-state diffusion solvers. The construction of such surrogates requires the selection of an appropriate task, network architecture, training set structure and size, loss function, and training algorithm hyperparameters. It is well known that each of these factors can have a significant impact on the performance of the resultant model. Our approach employs an encoder-decoder CNN architecture, which we posit is particularly well-suited for this task due to its ability to effectively transform data, as opposed to merely compressing it. We systematically evaluate a range of loss functions, hyperparameters, and training set sizes. Our results indicate that increasing the size of the training set has a substantial effect on reducing performance fluctuations and overall error. Additionally, we observe that the performance of the model exhibits a logarithmic dependence on the training set size. Furthermore, we investigate the effect on model performance by using different subsets of data with varying features. Our results highlight the importance of sampling the configurational space in an optimal manner, as this can have a significant impact on the performance of the model and the required training time. In conclusion, our results suggest that training a model with a pre-determined error performance bound is not a viable approach, as it does not guarantee that edge cases with errors larger than the bound do not exist. Furthermore, as most surrogate tasks involve a high dimensional landscape, an ever increasing training set size is, in principle, needed, however it is not a practical solution.
J. Quetzalcoatl Toledo-Marin, James A. Glazier, Geoffrey Fox
2023-02-07T22:53:19Z
http://arxiv.org/abs/2302.03786v1
# Analyzing the Performance of Deep Encoder-Decoder Networks as Surrogates for a Diffusion Equation ###### Abstract Neural networks (NNs) have been demonstrated to be a viable alternative to traditional direct numerical evaluation algorithms, with the potential to accelerate computational time by several orders of magnitude. In the present paper we study the use of encoder-decoder convolutional neural network (CNN) algorithms as surrogates for steady-state diffusion solvers. The construction of such surrogates requires the selection of an appropriate task, network architecture, training set structure and size, loss function, and training algorithm hyperparameters. It is well known that each of these factors can have a significant impact on the performance of the resultant model. Our approach employs an encoder-decoder CNN architecture, which we posit is particularly well-suited for this task due to its ability to effectively transform data, as opposed to merely compressing it. We systematically evaluate a range of loss functions, hyperparameters, and training set sizes. Our results indicate that increasing the size of the training set has a substantial effect on reducing performance fluctuations and overall error. Additionally, we observe that the performance of the model exhibits a logarithmic dependence on the training set size. Furthermore, we investigate the effect on model performance by using different subsets of data with varying features. Our results highlight the importance of sampling the configurational space in an optimal manner, as this can have a significant impact on the performance of the model and the required training time. In conclusion, our results suggest that training a model with a pre-determined error performance bound is not a viable approach, as it does not guarantee that edge cases with errors larger than the bound do not exist. Furthermore, as most surrogate tasks involve a high dimensional landscape, an ever increasing training set size is, in principle, needed, however it is not a practical solution. Machine Learning Deep Learning Diffusion Surrogate Encoder-decoder Neural Networks ## 1 Introduction Diffusion is ubiquitous in physical, biological and engineered systems. In mechanistic computer simulations of the dynamics of such systems, solving the steady state and time-varying diffusion equations with multiple sources and sinks is often the most computationally expensive part of the calculation, especially in cases with multiple diffusing species with diffusion constants differing by multiple orders of magnitude. In real-world problems, the number of sources and sinks, their shape, boundary fluxes and positions differ from instance to instance and may change in time. Boundary conditions may also be complicated and diffusion constants may be anisotropic or vary in space. The resulting lack of symmetry means that many high-speed implicit and frequency-domain diffusion-solver approaches do not work effectively, requiring the use of simpler but slower forward solvers [1]. Deep learning surrogates to solve either the steady-state field or the time-dependent field for a given set of sources and sinks subject to diffusion can increase the speed of such simulations by several orders of magnitude compared to the use of direct numerical solvers. Deep learning techniques have been successfully applied to a wide range of problems by means of solving the corresponding sets of partial differential equations. 
This has been extensively studied in literature, with references such as [2, 3, 4, 5, 6, 7, 8]. See Ref. [9] for a thorough review. Furthermore, embedding neural networks in differential equations has also shown promising results in the area of Neural-ODEs. Modeling a physical process via an ordinary differential equation (ODE) typically involves equating the rate of change of a quantity, such as a concentration, to an operator applied to that quantity and any additional quantities, known as inhomogeneities or external fields. The ODE is then solved and compared to experimental data to validate the model or fit parameters. The operator in the ODE is traditionally chosen based on the symmetries of the physical process. However, in the case of Neural-ODEs, the operator is replaced with a neural network, which is trained by solving the Neural-ODE and comparing it to experimental data [10, 11]. Diffusion models are an example of Neural-ODEs [12]. Additionally, Physics-Informed Neural Networks (PINNs) have been proposed as a means of addressing forward and inverse problems by embedding physical information into the neural network. This can be achieved by embedding the ODE, initial conditions and boundary conditions into the loss function used to train the neural network [13]. These methods have been successful in various fields, such as computational molecular dynamics [14, 15, 16]. However, the complexity of multiscale systems, which often exhibit different characteristic length and time scales differing by orders of magnitude, in addition to the lack of symmetries, can make the solution of these systems challenging. AI-based surrogates using deep learning methods can be used to accelerate computation by replacing specific classical solvers, while preserving the interpretability of mechanistic models. In the field of multiscale modeling, AI-based surrogates can provide a computationally efficient alternative to traditional approaches such as Monte Carlo methods and molecular dynamics. The development of effective deep neural network (NN) surrogates for diffusion-solver problems is a challenging task. This is due to several factors, such as the high dimensionality of the problem specification, which includes an arbitrary pattern of sources and sinks, varying boundary conditions for each source and sink, and spatially variable or anisotropic diffusivities. Additionally, there are a number of design decisions that must be made, such as the selection of the network architecture, the structure and size of the training set, the choice of loss function, and the selection of hyperparameters for training algorithms. Furthermore, it is essential to define accurate metrics for evaluating the task-specific performance of the trained network, as this may differ from the loss function used during training. In prior work [17], we demonstrated the utility of deep convolutional neural networks in predicting the stationary solution for a diffusion problem on a \(100\times 100\) lattice with two circular random sources of radius 5 units and random fixed flux, located in random positions, under absorbing boundary conditions. Here, we extend this approach by increasing the number of sources and the lattice size to \(512\times 512\). The problem specification is as follows: we assume absorbing boundary conditions and the sources are fully contained within the domain. 
Each source imposes a constant value on the diffusing field within the source and at its boundary, with one of the sources having a value of \(1\) and the other sources having values randomly selected from a uniform distribution between \((0,1]\) (as depicted in Figure 1(a)). Outside of the sources, the field diffuses with a constant diffusion constant (\(D\)) and linearly decays with a constant decay rate (\(\gamma\)). This simple geometry could represent the diffusion and uptake of oxygen in a volume of tissue between parallel blood vessels of different diameters. Although reflecting or periodic boundary conditions may better represent a portion of a larger tissue, we use the simpler absorbing boundary conditions here. The steady-state field in this case is highly dependent on the distance between the sources, the distance between the sources and the boundary, both relative to the diffusion length (\(l_{D}=(D/\gamma)^{1/2}\)) and on the sources' field strengths. In practice, the steady-state diffusion equation maps an input image consisting of \(512\times 512\) pixels with a value of \(0\) outside the sources and constant values between \(0\) and \(1\) inside the sources, to an output image of the same size, which is a superposition of exponentially decaying functions, with its maximum at the center of each source (as illustrated in Figure 1(c)). In order to evaluate the performance of a neural network (NN) in approximating the steady-state diffusion field for configurations of sources that it had not previously encountered, we trained the NN on explicit numerical solutions of the steady-state diffusion field for \(16,000\) examples per number of sources. It is well-established that the diffusion kernel convolution used in the direct solution of the time-dependent diffusion equation, such as finite-element methods, is a type of convolutional neural network (CNN) [1]. Therefore, in our study, we chose to utilize deep convolutional neural networks (DCNNs) as the architecture. However, there are various types of CNNs that can be employed. In this study, we considered a deep convolutional neural network with the archetype of an encoder-decoder [18]. We argue that the encoder-decoder CNN architecture is particularly well-suited for this task, as it effectively transforms data, rather than compressing it, akin to the Fourier transform. Additionally, we evaluated different loss functions and prefactors to weigh in the source regions in the lattice, which are statistically undersampled in the training set. We also assessed and discussed the impact of different hyperparameters and training set sizes on the model performance. However, in order to accurately evaluate the performance, it is essential to define a suitable metric. We discussed different metrics to evaluate the different models trained. Our results suggest that increasing the size of the training set has the most significant effect on reducing performance fluctuations and overall error. Furthermore, we studied the performance of models versus training set size and found that as the training set size increases, the Figure 1: Snapshot of **a)** initial condition, **b)** stationary state solution and **c)** the prediction of one of our trained models. **a)** We placed \(19\) random value sources of radius \(5\)\(voxels\) in random positions fully within a \(512\times 512pixel\) lattice and used this configuration as the input to the NN. 
**b)** Stationary solution to the diffusion equation with absorbing boundary conditions for the initial conditions in **a)** where the sources have a constant random flux. The stationary solution **b)** is the target for the NN, while the prediction **c)** is the output of the NN given **a)** as input. We fixed the diffusion constant to \(D=1\)\(voxels^{2}/s\) and the decay rate to \(\gamma=1/400s^{-1}\), which yields a diffusion length equal to \(\sqrt{D/\gamma}\)\(voxels=20voxels\). performance fluctuations decrease, and the computational time for training increases linearly. In addition, the model performance has a logarithmic dependence on the training set size. It is crucial to acknowledge that our findings do not indicate the existence of an optimal training set size. Rather, our results suggest that training a model with a pre-determined error performance bound is not a viable approach, as it does not guarantee that edge cases with errors larger than the bound do not exist. Furthermore, as most surrogate tasks involve a high dimensional landscape, an ever increasing training set size is, in principle, needed, however it is not a practical solution. ## 2 Methods The steady state of the diffusion equation is precisely the convolution of the equation's associated Green's function and the initial state. Hence a naive approach would consider a deep convolutional NN (CNN) where the height and width in each layer is kept fixed and equal to the input. CNNs are good at detecting edges and as the number of convolution layers increases, the CNN can detect more sophisticated elements. So can a CNN predict the stationary solution? We argue that it would require a large number of convolution layers such that each convolution would be proportional to a timestep. Adding many convolutions would lead to a rather harder and slower training, as it would increase the chances of under or overflow in the gradient. Furthermore, the number of convolution layers would need to be variable is some form or fashion in order to mimic the timesteps required to reach the stationary solution given an initial condition. There are ways to address each of the previous setbacks but there's also a trade-off for every decision (_e.g._, increasing the number of convolutional layers increases the model's instability which can be addressed by adding more regularization layers which makes the model larger and requires more time to train). Experience tells us that deep convolutional NNs where the height and width in each layer is kept fixed and equal to the input are not good for our task [17]. We designed and built an encoder-decoder CNN (ED-CNN) as our architecture. Fig. 2 provides a sketch of our NN architecture. We denote by \(|x\rangle\) and \(|\hat{y}\rangle\) the input and output images, that is the initial condition layout of the source cells and the predicted stationary solution of the diffusion equation, respectively. The input \(|x\rangle\) goes through convolutional blocks depicted in Fig. 2 and generates the prediction \(|\hat{y}\rangle\) which we then compare with the ground truth \(|y\rangle\). The core feature of an encoder-decoder is the cone-like structure whereby the information is being compressed when it reaches the bottle neck and then it's decompressed. Consider the following: A CNN can be seen as a set of operators \(\{O_{i}\}\) acting on the input, _i.e._, \[|\hat{y}\rangle=\prod_{t=1}^{n}O_{i}|x\rangle\, \tag{1}\] where the \(i\) tags the convolutional layer. 
Notice that in the particular case where \(\{O_{i}\}\) are unitary linear operators and all the same \(O=\mathbf{1}-Ad\,t\), Eq. (1) becomes \[|\hat{y}\rangle=(\mathbf{1}-Ad\,t)^{n}|x\rangle. \tag{2}\] Notice that the previous Eq. is akin to a ResNet architecture [19]. By defining \(dt=t/n\), where \(t\) is time, and taking the limit \(n\rightarrow\infty\) leads to \(|\hat{y}\rangle=\exp(At)|x\rangle\). Hence, when \(|\hat{y}\rangle\) equals the ground truth \(|y\rangle\), the CNN has learnt the operator \(\exp(At)\). Notice that taking the derivative with respect to time leads to the diffusion equation: \[\frac{\partial|\hat{y}\rangle}{\partial t}=A|\hat{y}\rangle. \tag{3}\] Typically, numerical methods for searching for the stationary solution of a diffusion equation operate by comparing the solution at time \(t\) with the solution at time \(t+\Delta t\), and halting the process when the difference between \(|y(t)\rangle\) and \(|y(t+\Delta t)\rangle\) is smaller in magnitude than some pre-determined error value. However, the time \(t\) at which the stationary solution is reached depends on the initial conditions and the inhomogeneity term in the diffusion equation. This analogy suggests that, similar to traditional numerical methods, CNNs also require a flexible number of layers to reach the stationary solution, rather than an infinite number of layers. There have been recent developments in addressing the issue of a variable number of layers. For example, in reference [20], the authors propose a method in which the dropout parameters are trained in conjunction with the model parameters. In the context of ED-CNNs, it is a widely held belief that the cone-like architecture leads to improved performance due to the compression in the bottleneck. However, we argue that this is not the sole, if any, contributing factor. It is important to note that the ED-CNN architecture involves summing over the lattice and introducing a new variable, the depth. Additionally, past the bottleneck, the ED-CNN projects back to the original space. In this sense, we say the ED-CNN learns a propagator in a reciprocal space similar to methods such as Laplace and Fourier transform. In Table 1 we specify the neural network by specifying the type of operation and output shape through the sequence of blocks in the NN. Notice that the bottleneck block is \(2048*16*16\) which is twice the size of the input. To generate representative \(\sigma\)-source initial conditions and paired steady-state diffusion fields, we considered a two-dimensional lattice of size \(512\times 512\) units\({}^{2}\). We generated \(400\)k initial configurations with one to \(20\) sources randomly placed in the lattice, such that there are \(20\)k per fixed number of sources (see Fig. 4). Each source has a \(5\)-unit radius and for any given configuration one source has a constant flux value equal to \(1\), while the other sources have a constant flux value between \(0\) and \(1\) randomly assigned using a uniform distribution. Everywhere else the field value is zero. Each initial configuration served as the input for the NN \(|x\rangle\). Then we calculated the stationary solution to the diffusion equation with absorbing boundary conditions for each initial condition using the _Differential Equation_ package in Julia [22]. The Julia-calculated stationary solution is the target or ground truth image \(|y\rangle\) for the NN. In Figs. 1(a) and 1(c) we show an initial condition and the stationary solution, respectively. 
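For illustration only, the data-generation step described above can be mimicked with a short Python/NumPy sketch. The paper itself computes the ground truth with the _DifferentialEquations_ package in Julia [22], so the Jacobi relaxation scheme, the function names, and the iteration count below are our own choices rather than the authors' implementation.

```python
import numpy as np

def random_sources(L=512, n_sources=5, radius=5, rng=None):
    """Input |x>: one source clamped to 1, the others uniform in (0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros((L, L))
    yy, xx = np.mgrid[0:L, 0:L]
    values = np.concatenate(([1.0], 1.0 - rng.uniform(0.0, 1.0, n_sources - 1)))
    for v in values:
        cy, cx = rng.integers(radius, L - radius, size=2)  # keep the disc fully inside the lattice
        x[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = v
    return x

def steady_state(x, D=1.0, gamma=1.0 / 400.0, n_iter=20_000):
    """Ground truth |y>: Jacobi relaxation of D*lap(u) - gamma*u = 0 with
    absorbing boundaries, clamping the field to its prescribed source values."""
    u = x.copy()
    src = x > 0
    for _ in range(n_iter):
        nb = np.zeros_like(u)
        nb[1:-1, 1:-1] = u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
        u = D * nb / (4.0 * D + gamma)                # unit grid spacing assumed
        u[src] = x[src]                               # sources impose a constant value
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0  # absorbing boundary conditions
    return u

# Small demo; a production run would use the full 512x512 grid and more iterations
x = random_sources(L=128, n_sources=5)
y = steady_state(x, n_iter=5_000)
```

A sparse direct solve of the same linear system would be far faster for generating the full data set; the fixed-point iteration above is simply the most transparent way to state the boundary-value problem being solved.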
We set the diffusion constant to \(D=1\) units\({}^{2}\)/s and the decay rate to \(\gamma=1/400\) s\({}^{-1}\), which yield a diffusion length \(l_{D}=\sqrt{D/\gamma}=20\) units. Notice that this length is \(4\) times the radius of the sources and \(\sim 1/25\) the lattice linear dimension. As \(\gamma\) increases and as \(D\) decreases, this length decreases. As this length decreases, the field gradient also decreases [23]. The source code to generate the data and train the NN can be found in Ref. [24]. We built and trained the NN in PyTorch. We used ADAM as the optimizer [25]. Deciding on a loss function is a critical choice in the creation of the surrogate. The loss function determines the types of error the surrogate's approximation will make compared to the direct calculation, and the acceptability of these errors will depend on the specific application. The mean squared error (_MSE_) is a standard choice. However, it is more sensitive to larger absolute errors and therefore tolerates large relative errors at pixels with small values. In most biological contexts we want to have a small absolute error for small values and a small relative error for large values. We explored the NN's performance when trained using a set of different loss functions with different hyperparameters. Figure 2: Sketch of the NN’s architecture. The first \(6\) layers perform a meanpool operation that reduces the height and width by half after each layer, with the image dimensions following the sequence \(\{512^{2},256^{2},128^{2},64^{2},32^{2},16^{2},16^{2}\}\) while adding channels after each layer following the sequence \(\{1,64,128,256,512,1024,2048\}\). Then, the following \(7\) blocks of layers reverse the process, reducing the number of channels following the sequence \(\{1024,512,256,128,64,32,1\}\) while increasing the height and width following the sequence \(\{16^{2},32^{2},64^{2},128^{2},256^{2},512^{2},512^{2}\}\). This sketch only defines the kind of layers used. For details about the activation functions used in each layer, see Table 1. The set of different loss functions we used were _MAE_, _MSE_, the Huber loss [26] and the inverse Huber loss, defined as: \[IH_{\delta}(x)=\begin{cases}\frac{1}{2}x^{2},&|x|>\delta\,,\\ \delta(|x|-\frac{1}{2}\delta),&|x|\leq\delta\end{cases}\quad. \tag{4}\] The highest and lowest values in the input and output images are 1 and 0, respectively. The former only occurs in sources and their vicinity. Given the configurations of the sources, the fraction of pixels in the image with values near 1 is \(\sim\pi R^{2}/L^{2}\approx 0.03-0.6\%\), as shown in Fig. 3(h). Thus, pixels with small values are much more common than pixels with large values, and because the loss function is an average over the field, high field values tend to get washed out. To account for this imbalance between the frequency of occurrence of low and high values, we considered prefactors to assign different weights to the error per pixel. One of these was an exponential weight on the pixels. We modulate this exponential weight through a scalar hyperparameter \(w(\geq 0)\), such that the \(i\)th lattice position's contribution to the loss function yields: \[\mathcal{L}_{E,i\beta}^{(\alpha)}=\exp(-(\langle i|\mathbf{1}\rangle-\langle i|y_{\beta}\rangle)/w)\cdot\left(\langle i|\hat{y}_{\beta}\rangle-\langle i|y_{\beta}\rangle\right)^{\alpha}\,, \tag{5}\] where \(\alpha\) is 1 or 2 for MAE or MSE, respectively, and \(\beta\) tags the tuple in the data set (input and target). 
Here \(\langle|\rangle\) denotes the inner product and \(|i\rangle\) is a unit vector with the same size as \(|y_{\beta}\rangle\), with all components equal to zero except the element in position \(i\), which is equal to one. \(|\mathbf{1}\rangle\) is a vector with all components equal to 1 and with size equal to that of \(|y_{\beta}\rangle\). Then \(\langle i|y_{\beta}\rangle\) is a scalar corresponding to the pixel value at the \(i\)th position in \(|y_{\beta}\rangle\), whereas \(\langle i|\mathbf{1}\rangle=1\) for all \(i\). Notice that high and low pixel values will have an exponential weight \(\approx 1\) and \(\approx\exp(-1/w)\), respectively. This implies that the error associated with high pixels will carry a larger weight than that for low pixels. Another prefactor we considered was a step-wise function, _viz._ \[\mathcal{L}_{S,i\beta}^{(\alpha)}=\left(1+a\tanh(b\langle i|y_{\beta}\rangle)\right)\cdot\left(\langle i|\hat{y}_{\beta}\rangle-\langle i|y_{\beta}\rangle\right)^{\alpha}\,, \tag{6}\] where \(a(\geq 0)\) and \(b(\geq 0)\) are hyperparameters. As mentioned earlier, we also trained models by replacing the metric term (MSE or MAE) with the Huber loss or the inverse Huber loss with hyperparameter \(\delta\). The loss function \(\mathcal{L}_{\chi}^{(\alpha)}\) is the mean value over all pixels (\(i\)) and the data set (\(\beta\)), with \(\chi=\{E,S\}\) for the exponential weight and the step weight, respectively, _i.e._, \[\mathcal{L}_{\chi}^{(\alpha)}=\langle\mathcal{L}_{\chi,i\beta}^{(\alpha)}\rangle\,. \tag{7}\] Here \(\langle\bullet\rangle\) denotes the average over pixels and data set. Each model was trained for up to 100 epochs unless stated otherwise, and we saved the model with the lowest loss function value over the corresponding test set. We used 80% and 20% of the data set for the training and test sets, respectively (see Fig. 4). We trained different models and studied the effects that different loss functions, data set sizes and data features have on the model performance. In the next section we describe these and present the results. ## 3 Results ### Performance for different loss functions and data set sizes In this section we present the performance results of each of the models we trained. To this end, we use the MAE between prediction and target for each model. In general, models with good overall performance may not necessarily perform well in predicting sources and/or the field close to sources. In addition to computing the MAE on the whole lattice, we also measured the MAE in different regions of the lattice, as shown in Fig. 3. Filtering field and sources can be done using the initial state, while regions _R1_, _R2_ and _R3_ correspond to pixel values in \([0.2,1]\), \([0.1,0.2]\) and \([0.05,0.1]\), respectively. We compare the effects of different loss functions and different data set sizes. We validate all of our models using the same test set composed of \(80\,k\) tuples (_i.e._, input and target). In general, there are three main factors that contribute to the performance difference between different models, namely, the data set size, the loss function and the pseudo-stochasticity when training the model (weight initialization, minibatch parsing, etc.). We found that the factor that contributes the most to performance is the data set size. For clarity purposes, we present our results by clustering models by the data set size they were trained on. We trained models using \(1.25\%\), \(2.5\%\), \(5\%\), \(12.5\%\), \(25\%\), \(50\%\) and \(100\%\) of the training set. 
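Purely for reference, the weighted objectives of Eqs. (4)-(7) used by the models discussed in this section can be written compactly in PyTorch. The sketch below is ours rather than the paper's implementation: function and argument names are invented, the absolute value in the \(\alpha=1\) branch is our reading of the MAE case, and the default values of \(w\), \(a\) and \(b\) follow the hyperparameter tables below.

```python
import torch

def weighted_loss(y_hat, y, alpha=1, prefactor="exp", w=1.0, a=4000.0, b=10.0):
    """Eqs. (5)-(7): per-pixel error |y_hat - y|**alpha, weighted either by
    exp(-(1 - y)/w) ('exp') or by 1 + a*tanh(b*y) ('step'), then averaged."""
    err = (y_hat - y).abs() ** alpha
    if prefactor == "exp":
        weight = torch.exp(-(1.0 - y) / w)
    elif prefactor == "step":
        weight = 1.0 + a * torch.tanh(b * y)
    else:
        weight = torch.ones_like(y)
    return (weight * err).mean()  # Eq. (7): mean over pixels and data set

def inverse_huber(y_hat, y, delta=0.15):
    """Eq. (4): quadratic above delta, linear below (a drop-in metric term)."""
    x = (y_hat - y).abs()
    return torch.where(x > delta, 0.5 * x ** 2, delta * (x - 0.5 * delta)).mean()
```

For the Huber and inverse Huber variants, the metric term inside `weighted_loss` would be swapped for the corresponding function while keeping the same prefactor, mirroring the substitution described after Eq. (6).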
In every training set partition we selected an equal number of tuples per number of sources. Additionally, the data set with \(1.25\%\) is a subset of the data set with \(2.5\%\) which is a subset of the data set with \(5\%\) and hence forth as depicted in Fig. 4. After training each model, we evaluated the performance on the test set per number of sources (see Fig. 4). We tagged each model with a code which references the data set size used, the prefactor used, whether it was trained with MAE, MSE, Huber or inverse Huber and, in some case, we reference in the naming code the hyperparameters used. For instance, model 40E1 was trained with \(1/40=2.5\%\) of the data set, an exponential prefactor and MAE while 80EIH was trained using \(1/80=1.25\%\) of the data set, exponential prefactor and inverse Huber. In Fig. 5 we show the MAE for models trained with \(1.25\%\) of the data set. The hyperparameter selected for each model are displayed in Table 2. There are a number of features to highlight from Fig. 5. First, the largest absolute error occurs at the sources and decreases as one moves away from the sources. In addition, the error increases monotonically with the number of sources. Interestingly, as the number of sources increases, the error at the sources decreases rapidly and then increases slow and steadily. Most models performed quite similar, but overall model 80E1 performed best. However, the difference in performance is not significant as we will discuss next. To assess the amplitude of fluctuations in performance due to stochasticity from initialization and training of the models, we trained three sets of five models each. All models were trained using the same hyperparameters. Each Figure 3: Shown in white the **a)** field, **b)** sources, **c)** region 1, **d)** region 2 and **e)** region 3 regions in lattice given **f)** the input and **g)** the stationary state solution for 19 randomly placed sources. **h)** Average area fraction in lattice covered by R1,R2,R3 and sources _vs_ number of sources. \begin{table} \begin{tabular}{|c c c c c c c c|} \hline Codenname & w & a & b & \(\delta\) & Loss Function & Epochs & App Trans \\ \hline \hline 80E1 & 1 & - & - & - & Exp-MAE & 100 & - \\ \hline 80E2 & 1 & - & - & - & Exp-MSE & 100 & - \\ \hline 80E1H & 1 & - & - & - & Exp-IH & 100 & - \\ \hline 80S1\_a4000\_b10\_sqrtt & - & 4000 & 10 & - & Step-MAE & 100 & sqrt \\ \hline 80S1\_a6000\_b1 & - & 6000 & 1 & - & Step-MAE & 100 & sqrt \\ \hline 80S1\_a4000\_b10 & - & 4000 & 10 & - & Step-MAE & 100 & - \\ \hline 80S1\_a6000\_b1 & - & 6000 & 1 & - & Step-MAE & 100 & - \\ \hline 80S1\_a8000\_b1 & - & 8000 & 1 & - & Step-MAE & 100 & - \\ \hline 80S1\_a6000\_b10 & - & 6000 & 10 & - & Step-MAE & 100 & - \\ \hline 80S2 & - & 6000 & 10 & - & Step-MSE & 100 & - \\ \hline 80SH & - & 6000 & 1 & - & Step-Huber & 100 & - \\ \hline 80SH\_sqrtt & - & 4000 & 10 & - & Step-Huber & 100 & sqrt \\ \hline \end{tabular} \end{table} Table 2: Models trained with 1.25% of the training data set. During training of model 80S1_a6000_b1, we used a learning rate equal to 0.00002. Models 80S1_a4000_b10_sqrtt, 80S1_a6000_b1_sqrtt and 80SH_sqrt were trained by applying square root to the training data. Figure 4: Diagram of data set sizes. The data set is composed by 400k tuples (input and ground truth). We separate it in 20% for test set (40k) and 80% for training set (360k). We further take subsets \(S_{i}\) of the training and test sets as depicted in the figure (50%, 25%, 12.5%, 5%, 2.5% and 1.25%). 
Each of these subsets contain equal number of samples with 1 to 20 sources. Each subset is contained in the immediate next, i.e., \(S_{1.25\%}\subset S_{2.5\%}\subset S_{5\%}\subset S_{12.5\%}\subset S _{25\%}\subset S_{50\%}\subset S_{100\%}\). set was trained with a 1.25% data set, such that there was no overlap between the data sets. In Fig. 6 we show the average performance and standard deviation (in shadow) per set. Notice that there is no significant improvement between sets. Furthermore, the fluctuations per set are large enough to conclude that the different performances between the different models in Fig. 5 can be attributed to fluctuations. In Fig. 7 we show the MAE for models trained with 2.5% of the data set. The hyperparameter selected for each model are displayed in Table 3. Model 40E1, which was trained using the exponential weight MAE loss function, performed best in all the lattice regions and for all number of sources. However, notice that the model with the second best performance, model 40S1, was trained using the step-weighed MAE, while what the two worst two models have in common is that both were trained using MSE. Furthermore, the largest difference in performance between the best and the worst model, which happens in the sources, relative to the error is \(\approx\) 0.2. This suggests the different performance in models can be attributed to fluctuations and not necessarily the effect of the loss function. On the other hand, it was found that, in general, the performance decreases as the number of sources increases. As noted previously, in the case of the sources the performance first decreases, reaching a minimum around 5 sources and it then increases with the number of sources. The appearance of a minimum in performance is a feature that we will come across throughout the present paper and we leave the discussion for later. In Fig. 8 we show the MAE for models trained with 5% of the data set. The hyperparameter selected for each model are displayed in Table 4. Notice that the same general trends discussed previously also holds in this case. In particular, models 20EIH-# performed best, which were trained using the exponential weight inverse Huber loss function. On the other hand, model 20E2 had the worst performance. To assess the amplitude of fluctuations in performance due to stochasticity from initialization and training of the models, we trained two sets of five models Figure 5: MAE _vs_ number of sources. Each curve correspond to different models (see legend) trained with 1.25% of the data. Each plot correspond to a specific region in the lattice (see Fig. 3) \begin{table} \begin{tabular}{|c c c c c c c|} \hline ID & w & a & b & \(\delta\) & Loss Function & Epochs & App Trans \\ \hline \hline 40E1 & 1 & - & - & - & Exp-MAE & 100 & - \\ \hline 40E2 & 1 & - & - & - & Exp-MSE & 100 & - \\ \hline 40S1 & - & 4000 & 10 & - & Step-MAE & 100 & - \\ \hline 40S2 & - & 4000 & 10 & - & Step-MSE & 100 & - \\ \hline \end{tabular} \end{table} Table 3: Models trained with 2.5% of the training data set. Figure 6: MAE _vs_ number of sources. Each curve correspond to the average (\(\pm\) standard deviation) of a sample (generically tagged as A, B or C) composed by five models trained with the same 1.25% data set but different initialization parameters and seeds. Each plot correspond to a specific region in the lattice (see Fig. 
3) \begin{table} \begin{tabular}{|c c c c c c c|} \hline ID & w & a & b & \(\delta\) & Loss Function & Epochs & App Trans \\ \hline \hline 20E1 & 1 & - & - & - & Exp-MAE & 100 & - \\ \hline 20E2 & 1 & - & - & - & Exp-MSE & 100 & - \\ \hline 20EIH\_\_60.5 & 1 & - & - & 0.5 & Exp-Inv-Huber & 100 & - \\ \hline 20EIH\_\_60.05 & 1 & - & - & 0.05 & Exp-Inv-Huber & 100 & - \\ \hline 20EIH\_\_60.15 & 1 & - & - & 0.15 & Exp-Inv-Huber & 100 & - \\ \hline 20S1 & - & 4000 & 10 & - & Step-MAE & 100 & - \\ \hline 20S2 & - & 4000 & 10 & - & Step-MSE & 100 & - \\ \hline \end{tabular} \end{table} Table 4: Models trained with 5.0% of the training data set. Figure 7: MAE _vs_ number of sources. Each curve correspond to different models trained with 2.5% of the data. Each plot correspond to specific regions in the lattice (see Fig. 3) Figure 8: MAE _vs_ number of sources. Each curve correspond to different models trained with 5% of the data. Each plot correspond to specific regions in the lattice (see Fig. 3) \begin{table} \begin{tabular}{|l c c c c c c c|} \hline ID & w & a & b & \(\delta\) & Loss Function & Epochs & App Trans \\ \hline \hline 8E1 & 1 & - & - & - & Exp-MAE & 100 & - \\ \hline 8E2 & 1 & - & - & - & Exp-MSE & 100 & - \\ \hline 8S1 & - & 4000 & 10 & - & Step-MAE & 100 & - \\ \hline 8S2 & - & 4000 & 10 & - & Step-MSE & 100 & - \\ \hline \end{tabular} \end{table} Table 5: Models trained with 12.5% of the training data set. Figure 9: MAE _vs_ number of sources. Each curve correspond to the average (\(\pm\) standard deviation) of a sample composed by three or five models (see legend) trained with 5.0% size training set (generically called \(\Lambda\) or B, such that \(A\cap B=\varphi\)). We used the inverse Huber loss function in all 16 models with \(\delta\) fixed to \(\{0.05,0.15,0.5\}\). The remaining hyperparameters were the same per curve but different initialization parameters and seeds. Each plot correspond to a specific region in the lattice (see Fig. 3). \begin{table} \begin{tabular}{|c c c c c c c c|} \hline ID & w & a & b & \(\delta\) & Loss Function & Epochs & App Trans \\ \hline \hline 4E1 & 1 & - & - & - & Exp-MAE & 100 & - \\ \hline 4E2 & 1 & - & - & - & Exp-MSE & 89 & - \\ \hline 4R & 1 & 4000 & 10 & - & MAE & 84 & - \\ \hline 4S1 & - & 4000 & 10 & - & Step-MAE & 100 & - \\ \hline 4SH & - & 4000 & 10 & - & Step-Huber & 100 & - \\ \hline 4T-1 & - & 4000 & 10 & - & Step-MSE & 100 & - \\ \hline 4T-5 & - & 4000 & 10 & - & Step-MSE & 100 & - \\ \hline \end{tabular} \end{table} Table 6: Models trained with 25% of the training data set. Model 4E2-2 was trained adding Gaussian noise w/ standard deviation equal to 0.01 on the groundtruth. In training model 4R, each epoch selected either step or exp prefactor randomly with probability 0.8 for the former and 0.2 for the latter. In training model 4T-1, the loss function toggled between step and exp prefactor, starting with exp. In training model 4T-5, the loss function toggled between step and exp prefactor every 5 epochs. \begin{table} \begin{tabular}{|c c c c c c c|} \hline ID & w & a & b & \(\delta\) & Loss Function & Epochs & App Trans \\ \hline \hline 2E1 & 1 & - & - & - & Exp-MAE & 64 & - \\ \hline 2S1 & - & 4000 & 10 & - & Step-MAE & 81 & - \\ \hline \end{tabular} \end{table} Table 8: Models trained with 100% of the training data set. Figure 10: MAE _vs_ number of sources. Each curve correspond to different models (see legend) trained with 12.5% of the data. Each plot correspond to specific regions in the lattice (see Fig. 
3)) \begin{table} \begin{tabular}{|c c c c c c c|} \hline ID & w & a & b & \(\delta\) & Loss Function & Epochs & App Trans \\ \hline \hline 2E1 & 1 & - & - & - & Exp-MAE & 56 & - \\ \hline 1S1 & - & 4000 & 10 & - & Step-MAE & 40 & - \\ \hline \end{tabular} \end{table} Table 7: Models trained with 50% of the training data set. Figure 11: MAE _vs_ number of sources. Each curve correspond to different models (see legend) trained with 25% of the data. Each plot correspond to specific regions in the lattice (see Fig. 3)) Figure 12: MAE _vs_ number of sources. Each curve correspond to different models (see legend) trained with 50% of the data. Each plot correspond to specific regions in the lattice (see Fig. 3)) each. All models were trained using the same hyperparameters. Each set was trained with a 5% data set, such that there was no overlap between the training sets. We also trained two sets more composed by 3 models each, using different \(\delta\)-hyperparameter. In Fig. 9 we show the average performance and standard deviation (in shadow) per set. Notice that there is no significant improvement between sets. However, the fluctuations per set have reduced quite substantially compared to the 1.25% case shown in Fig. 6 and discussed previously. In this regard, the different performances between the different models in Fig. 8 can be attributed to some extent to fluctuations, however notice that model 20E2 can be taken as an outlier. We trained multiple models using 12%, 25%, 50% and 100% of the data set. The models hyperparameters are specified in Tables 5, 6, 7 and 8, respectively. The performances are shown in Figs. 10, 11, 12 and 13, respectively. A repeating feature is that the error increases with the number of sources in all cases, as discussed previously. Moreover, models trained using the exp prefactor typically outperform the rest. We also trained a model, 4R, that each epoch randomly selects between exp-prefactor and step-prefactor with probability 0.2 and 0.8, respectively. We also trained two models, 4T-1 and 4T-5, that toggle between step- and exp-prefactor every 1 and 5 epochs, respectively. Notice that models 4R and 4T-5 outperform the rest, including 4E1. The rationale behind models 4T- and 4R is that perturbing the model over the loss function landscape, via toggling or randomly choosing between the loss functions, can help the model find a lower local minimum. In the case of 50% and 100% data set size, we only trained two models per data set size as we wanted to compare performance between step- and exp-prefactor. The exponential weighing prefactor show better performance in all regions of the lattice, as shown in Figs. 12 and 13. In the previous the different performances for different data set size and different loss functions were shown and discussed. We showed that increasing the data set size decreases the performance fluctuations. We also showed that, in general, different loss functions yield similar performance. However, increasing the data set size improves the model performance. In Fig. 14 we show the performance of models trained with different data set size and with the same loss function, namely, exponential-weighed MAE. Notice the performance has a logarithmic dependence on the training set size. 
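The per-epoch loss selection used for models 4R, 4T-1 and 4T-5 can be summarised by a small helper such as the sketch below. The function name and structure are ours; the selection probabilities (0.8 step, 0.2 exp) and the toggle periods (every 1 or 5 epochs, starting with exp) are those described in Table 6.

```python
import random

def select_prefactor(epoch, mode, period=5, p_step=0.8):
    """Choose the weighting prefactor for this epoch.

    mode == "random": step with probability 0.8, exp with probability 0.2 (model 4R).
    mode == "toggle": alternate exp/step every `period` epochs, starting with exp
                      (period=1 for model 4T-1, period=5 for model 4T-5).
    """
    if mode == "random":
        return "step" if random.random() < p_step else "exp"
    return "exp" if (epoch // period) % 2 == 0 else "step"
```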
The main takeaways are: first, models trained with the exponential prefactor and MAE consistently show better performance; second, changing loss functions between epochs can have a positive effect on performance, as seen with models 4R and 4T-5; third, as expected, the error decreases with the training set size; fourth, as the training set size increases the performance improves, but at a decreasing rate. The latter is surprising, because the high dimensionality of the problem means that even our largest training set significantly undersamples the space of possible configurations. This saturation points to a possible fundamental limit in the ability of the encoder-decoder network to solve the diffusion problem and suggests that other networks might be more effective.

Figure 13: MAE _vs_ number of sources. Each curve corresponds to a different model (see legend) trained with 100% of the data. Each plot corresponds to a specific region in the lattice (see Fig. 3).

### Inference

In this section we present and discuss the results for prediction by inference. While in the previous section we focused on performance for different data set sizes, here we study the performance of models trained on a data set with a fixed number of sources. In this sense, we look at how well a model performs when given an input with an arbitrary number of sources different from the number of sources the model was trained on. This is particularly relevant since, by brute estimation, the configurational space is considerably large and exponentially dependent on the number of sources; however, by taking into account the symmetries of the configurational space, one could construct a training set where the number of samples depends on the number of sources. We found that there is an optimum, _i.e._, models trained on different fixed numbers of sources perform differently. We trained 20 models, one on each of the 20 data sets. Each model was trained for 100 epochs using the exponentially weighted MAE as the loss function. To measure the inference prediction, we test the prediction of each model over each of the 20 test sets. Fig. 16 shows a density plot of the normalized MAE for different regions in the lattice. The normalization is per row, _i.e._, for a fixed number of sources tested on (\(Y\) axis), each row yields the normalized MAE corresponding to each of the 19 models trained using a fixed number of sources (we have not included the model trained using 1 source as its MAE is an order of magnitude larger). These plots show that no single model outperforms the rest for all test sets; rather, models trained with \(\sim 10-12\) sources perform better. We have averaged each of the non-normalized rows in Fig. 16 and plotted the values in Fig. 17. The model that yields the minimum error depends on the region in the lattice we are considering, yet overall the optimum lies in models trained with between 10 and 13 sources. The previous results suggest that the different configurations obtained from considering 10 to 13 sources in a lattice are sufficient to extrapolate to the different configurations that can arise from considering more sources. We can understand this in the following manner. Notice that what the model is learning is the field generated by the superposition of sources at different relative distances. In this regard, a model trained using a data set with a small number of sources will not have a large number of sources clustered and, hence, will never learn the field generated from a large number of sources.
On the other hand, a data set with a Figure 14: MAE averaged over number of sources \(\imath\)s % of training set size. Each point correspond to the models that performed best per amount of data (see legend). Each plot correspond to specific regions in the lattice (see Fig. 3)) large number of sources will most likely have a large number of clustered sources and lack small clusters of sources and, thus, a model trained using this data set will not learn the field generated from a few close sources. The data sets with number of sources around 12 are such that contain different clusters as depicted in Fig. 15. The results in Figs. 16 and 17 are in fact suggesting redundancy in the data set, _i.e.,_ a better curated data set would be comprised by different data set sizes dependent upon the number of sources. Figure 16: MAE of models trained on a fixed number of sources \(P\) (X axis) and tested on a fixed number of sources \(M\) (Y axis). Each row is normalized by dividing by the maximum value per row. Each row shows the MAE per model tested on a fixed number of sources. Each plot correspond to specific regions in the lattice (see Fig. 3)) Figure 15: Two to nine sources clustered. Models trained using a data set with a small number of sources will not learn the to predict the steady-state of a large number of sources clustered. Conversely, models trained using a data set with a large number of sources will not learn the to predict the steady-state of a small number of sources clustered. Models trained using a data set capable of generating these configurations will yield better performance. ## 4 Discussion Deep diffusion surrogates can aid in obtaining the steady-state solution for multiscale modeling. In this paper we looked at 20 sources randomly placed in 2D lattice. It's still far less complex than the real problem, _e.g._, simulation of a vascular system. Nevertheless, this is a step forward in that direction. We have shown that increasing the number of sources already pose a number of challenges in different aspects. We showed how the network architecture, the training set structure and size, the loss function, the hyperparameters for training algorithms and defining metrics to evaluate the task-specific performance of the trained network (which may differ from the loss function used in training) are all aspects that affect the final product. In the case of the NN architecture we argued that the encoder-decoder CNN architecture performs well not due to data compression, as believed by many, rather to data transformation akin to Fourier transform. However a more rigorous proof is, both required and desired. That's not to say that other architecture should not be able to perform well in predicting the steady-state solution. In fact, in prior work [17] we combined an ED-CNN and a CNN for a similar task and found that the CNN improved the performance by reinforcing the sources in the prediction. The results shown in this paper highlight that the largest absolute error occurs at and near the sources. Therefore, we reckon the architecture such as a UNet [27] as a good candidate for this task and we leave it for future work. We considered different loss functions and compared the different performances due to the different loss functions. The wide numeric range for input and output of neural networks makes the analysis very sensitive to choices in the loss function. Our results suggest that the loss function can have a significant effect on the model's performance. 
However, we showed that the data set size has a greater effect in the model's performance and, furthermore, the performance associated to the loss function depends on the data set size. We also showed how a large enough data set reduces the performance fluctuations in the test set. In a real problem the landscape of possibilities is unknown, which implies that the model fluctuations are, at least, unknown. This hints the difficulty in bounding the performance error for any unseen configuration. The naive solution is to increase the training data set. Increasing the training set arbitrarily will lead to an increase in training time. So care must be taken in such approach. One needs to take the best of both worlds, i.e., models trained on sufficiently large data sets to reduce fluctuations but small enough in order be able to train in a fashionable time. A better curated data set where configuration redundancy is kept at a minimum can lead to better performance. Another approach that seem promising is active Figure 17: Mean average error for a test set with containing samples with all number of sources for a network trained with a different fixed number of sources \(P\). Each data point corresponds to a network trained with a fixed number of sources \(P\) (x-axis). learning [28] whereby data is ranked by the magnitude of the performance error and data with the largest error is then fed into the training. Defining the right metrics in deep learning is highly challenging. Partly because quantifying the degree of success can be difficult, whereas it is fairly easy agreeing in the ideal success. In other words, quantifying _good enough_ is not straightforward and requires bench-marking different approaches for comparison. A thorough discussion on benchmark suite can be found in [29]. ## 5 Conclusions When selecting a NN for a specific task, it is important to consider the function and requirements of the task at hand. There is currently no consensus on which NN is optimal for a given task, primarily due to the large number of NN options available, the rapidly evolving nature of the field, and the lack of a comprehensive deep learning theory. This leads to a reliance on empirical results. Our paper is an important step at establishing best practices for this type of problems. we focused on randomly placed sources with random fluxes which yield large variations in the field. Our method can be generalized for different diffusion equations. As part of future steps, we will increase the complexity of the problem being solved by considering conditions closer to real-problems, _i.e._, by considering less symmetrical sources, different diffusivities and different boundary conditions. In addition, further design is required for these models to be used in a production environment in a reliable way, _i.e._, how to deal with error performance edge cases on-the-fly? Ultimately, to be able to deploy the model for production, one requires a method to keep the performance error below a predefined bound. For instance, one can train an additional NN that takes the predicted stationary solution and predicts the initial condition, which is then compared with the ground truth initial condition. This framework allows a comparison between ground truth input and predicted input without requiring the ground truth steady-state solution. However, this approach would only be reliable if the input error is always proportional to the steady-state error and hence requires further investigation. 
A perhaps simpler approach consists of sampling the NN's output for the same input with a _drop-out_ feature enabled: if the fluctuations across the samples are small enough, then the NN's prediction can be considered robust. Both cases require benchmark design. We have developed a process in this paper that can easily be replicated for more complicated problems of this type, and provided a variety of benchmarks.

## 6 Acknowledgements

JQTM acknowledges a Mitacs Postdoctoral Fellowship. GCF acknowledges partial support from DOE DE-SC0023452 and NSF OAC-2204115.
2307.06020
Representing Vineyard Modules
Time-series of persistence diagrams, known as vineyards, have been shown to be useful in diverse applications. A natural algebraic version of vineyards is a time series of persistence modules equipped with interleaving maps between the persistence modules at different time values. We call this a vineyard module. In this paper we set up the framework for representing vineyard modules via families of matrices and outline an algorithmic way to change the bases of the persistence modules at each time step within the vineyard module to make the matrices in this representation as simple as possible. With some reasonable assumptions on the vineyard modules, this simplified representation of the vineyard module can be completely described (up to isomorphism) by the underlying vineyard and a vector of finite length. We first must set up a number of preliminary results about changes of bases for persistence modules where we are given $\epsilon$-interleaving maps for sufficiently close $\epsilon$. While this vector representation is not in general guaranteed to be unique, we can prove that it will always be zero when the vineyard module is isomorphic to a direct sum of vine modules. This new perspective on vineyards provides an interesting and yet tractable case study within multi-parameter persistence.
Katharine Turner
2023-07-12T09:02:53Z
http://arxiv.org/abs/2307.06020v1
# Representing Vineyard Modules ###### Abstract Time-series of persistence diagrams, known as vineyards, have shown to be useful in diverse applications. A natural algebraic version of vineyards is a time series of persistence modules equipped with interleaving maps between the persistence modules at different time values. We call this a vineyard module. In this paper we will set up the framework for representing vineyards modules via families of matrices and outline an algorithmic way to change the bases of the persistence modules at each time step within the vineyard module to make the matrices in this representation as simple as possible. With some reasonable assumptions on the vineyard modules, this simplified representation of the vineyard module can be completely described (up to isomorphism) by the underlying vineyard and a vector of finite length. We first must set up a lot of preliminary results about changes of bases for persistence modules where we are given \(\epsilon\)-interleaving maps for sufficiently close \(\epsilon\). While this vector representation is not in general guaranteed to be unique we can prove that it will be always zero when the vineyard module is isomorphic to the direct sum of vine modules. This new perspective on vineyards provides an interesting and yet tractable case study within multi-parameter persistence. ## 1 Introduction Vineyards ([11, 10, 3]) are established within the topological data analysis literature as a way to studying time varying data with applications including music classification [2], detecting dynamical regime change [4], and EEG dynamics [14], and studying fMRI data [13]. Historically vineyards are defined as a continuous map from a finite real interval to the space of persistence diagrams. Informally, there is an intuitive decomposition of vineyards into paths of points within the persistence diagrams which are called vines. However, to formally and rigorously decompose vineyards we need to view them as algebraic objects. In turn this requires informative ways to represent this algebraic information. To view vineyards as an algebraic object we first need to consider them as a continuous map from the unit interval to the space of persistence modules instead of persistence diagrams. There is now an important choice - do we merely require the existence of appropriate interleaving maps (which means we have no more information that the continuous map into the space of persistence diagrams) or do we incorporate the interleaving maps as part of the algebraic object? To distinguish these situations we use the term _vineyard_ for a continuous map from a closed interval \([t_{1},t_{2}]\) to the space of persistence diagrams, and _vineyard module_ to denote the algebraic object consisting of both the parameterised family of the original persistence modules alongside interleaving maps which are required to commute with the transition maps between the persistence modules. It turns out that there is a dramatic difference in the types of indecomposable vineyards under these two different paradigms. Vineyard modules contain strictly more information than vineyards. Different vineyard modules can become isomorphic as vineyards (when we forget the interleaving maps). Vineyards decompose naturally into paths of points within the plane. If no persistence diagram has any points with higher multiplicity, then this decomposition is guaranteed to be unique by continuity. This paths are called _vines_ in the existing literature. 
We can define vine modules as vineyard modules whose corresponding vineyard is a vine. The definition of an indecomposable vineyard module stems naturally from the definitions of morphisms and direct sums of vineyard modules. Morphisms between vineyard modules are a family of morphisms, one for each time value, which commute appropriately with interleaving maps between the persistence modules. Direct sums can constructed by taking direct sums for the persistence modules at each time value and constructing the interleaving maps as direct sums of interleaving maps for each summand. At the end we illustrate an example of a vineyard module which is provably not isomorphic to the direct sum of two vine modules. A complete characterisation of indecomposable vineyard modules is beyond the scope of this paper. Here we focus on the first prerequisite step of creating a framework to represent vineyard modules. We define a vine and matrix representation of a vineyard module which consists of an index set of vines and a family of matrices. We show that two vineyard modules with the same vine and matrix representation are isomorphic. If all the matrices in this vine and matrix representation satisfy a common block diagonal form then we automatically can split the vineyard module as a direct sum of vineyard modules constructed over each of the blocks. However, there are many different vine and matrix representations for the same vineyard module as it is very dependent on the choices of basis made for the persistence modules at each time step. To counteract this ambiguity we outline an algorithmic way to change the bases of the persistence modules at each time step within the vineyard module to make the matrices in this representation as simple as possible. With some reasonable assumptions on the vineyard modules, this simplified representation of the vineyard module can be described (up to isomorphism) by the underlying vineyard and a vector of finite length. We first must set up a lot of preliminary results about changes of bases for persistence modules where we are given \(\epsilon\)-interleaving maps for sufficiently close \(\epsilon\). While we cannot show that this representation is guaranteed to be unique, we can prove that it will be always zero when the vineyard modules is isomorphic to the direct sum of vine modules. As such it provides an algorithmic method of determining when a vineyard module is trivial. There are many potential directions of research studying these vineyard modules, with this paper providing a framework for studying them. This new perspective on vineyards provides an interesting new case study within multi-parameter persistence. More complicated than 1-parameter persistence and yet much more tractable than ladders of persistence modules (which are effectively two persistence modules with a morphism between them), let alone persistent homology of bi-filtrations. Related work includes the characterisation of the space of bases for a persistence module and the isomorphism classes and matrix representations of ladders ([8, 6, 1]). Other related work includes algorithms for updating persistence diagrams along vineyards such as in [3, 5]. There is potential for efficient computation of representations of vineyard modules. ## 2 Introducing vineyard modules and our simplifying assumptions This paper will be assuming readers are familiar with algebra of persistence modules, including interleaving maps, interleaving distance and bottleneck distance. 
This section will instead focus on the various simplifying assumptions we will make. These assumptions relate to finiteness and genericity and will be reasonable in many applications. Recall that a _persistence module_ \(\mathcal{X}\) is a collection of vector spaces \(\{X_{t}\}_{t\in\mathbb{R}}\) along with transition maps \(\psi_{s}^{t}:X_{s}\to X_{t}\) for all \(s\leq t\) such that \(\psi_{s}^{s}\) is the identity for all \(s\) and \(\psi_{s}^{t}\circ\psi_{r}^{s}=\psi_{r}^{t}\) whenever \(r\leq s\leq t\). A _morphism_ between persistence modules \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) is a parameterised family of linear maps \(\alpha_{r}:X_{r}\to Y_{r}\) which commute with the transition maps of both \(\mathcal{X}\) and \(\mathcal{Y}\). An isomorphism between persistence modules is an invertible morphism. In the representation theory of persistence modules the building blocks are interval modules. An _interval module_ over the interval \([b,d)\) is a persistence module with \(X_{t}\) a copy of the field for \(t\in[b,d)\) and \(0\) otherwise. The transition maps are the identity when \(s,t\in[b,d)\) and \(0\) otherwise. We denote the interval module over the interval \([b,d)\) by \(\mathcal{I}[b,d)\). **Assumption 2.1**.: _Throughout this paper we assume that every persistence module is isomorphic to \(\bigoplus\limits_{i=1}^{N}\mathcal{I}[b_{i},d_{i})\), with \(b_{i}\in\mathbb{R}\) finite, \(N\) finite and \([b_{i},d_{i})\neq[b_{j},d_{j})\) for all \(i\neq j\)._ We know that up to permuting the order of the intervals this decomposition is unique. In full generality there are four types of intervals, i.e. open-open \((\mathbf{b},\mathbf{d})\), open-closed \((\mathbf{b},\mathbf{d}]\), closed-open \([\mathbf{b},\mathbf{d})\), and closed-closed \([\mathbf{b},\mathbf{d}]\), which may appear in the decomposition of persistence modules, but we will assume no intervals of these other forms appear. Note that this restriction naturally occurs in virtually all applications. By considering each bar in the persistence barcode as a point in \(\mathbb{R}^{2}\) with first coordinate \(b_{i}\) and second coordinate \(d_{i}\), we obtain the _persistence diagram_. We refer to \(b_{i}\) as the birth time and \(d_{i}\) as the death time. We will denote the space of persistence diagrams by \(\mathcal{PD}\). Note that by our simplifying assumptions all our persistence diagrams contain only finitely many off-diagonal points. The space of persistence diagrams is equipped with many metrics. In this paper we will only consider the bottleneck distance. The bottleneck distance is a form of optimal transport metric. For \(X\) and \(Y\) persistence diagrams with off-diagonal points \(\{x_{i}=(a_{i},b_{i})\}\) and \(\{y_{j}=(c_{j},d_{j})\}\) respectively, a transportation plan between \(X\) and \(Y\) is a subset \(M\subset X\times Y\) such that each \(x_{i}\) and \(y_{j}\) appears in at most one pair. Let \(U(X)\subset X\) be the set of \(x_{i}\) not appearing in any pair in \(M\) and \(U(Y)\subset Y\) the set of \(y_{j}\) not appearing in any pair in \(M\). The cost associated to \(M\) is \[\mathrm{cost}(M)=\max\{\sup_{(x_{i},y_{j})\in M}(\max(|a_{i}-c_{j}|,|b_{i}-d_{j}|)),\sup_{x_{i}\in U(X)}(|a_{i}-b_{i}|/2),\sup_{y_{j}\in U(Y)}(|c_{j}-d_{j}|/2)\}\] The _bottleneck distance_ is defined as the infimum of the costs over all transportation plans.
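For example, consider the diagrams \(X=\{(0,4),(1,2)\}\) and \(Y=\{(0.5,4.5)\}\). The transportation plan \(M=\{((0,4),(0.5,4.5))\}\) leaves \((1,2)\) unmatched and has \(\mathrm{cost}(M)=\max\{\max(0.5,0.5),|1-2|/2\}=0.5\), while the empty plan has cost \(\max\{4/2,1/2,4/2\}=2\). Since any plan not matching \((0,4)\) incurs cost at least \(2\), the bottleneck distance between \(X\) and \(Y\) is \(0.5\).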
One of the fundamental shifts in perspective required for topology to be useful in applications is the ability to quantify how far from isomorphic two objects are. To do this we loosen the definition of morphism, which we call an \(\epsilon\)-morphism to incorporate \(\epsilon\) worth of wiggle room. A \(\epsilon\)_-morphism_ between persistence modules \(\alpha:\mathcal{X}\to\mathcal{Y}\) is a parameterised family of linear maps \(\alpha_{r}:X_{r}\to Y_{r+\epsilon}\) which commute with the transitions maps of both \(\mathcal{X}\) and \(\mathcal{Y}\). Note that a \(0\)-morphism is just a morphism. In this wiggle-room universe the analogous concept for an isomorphism is an \(\epsilon\)-interleaving. This consists of a pair of \(\epsilon\)-morphisms \(\alpha:\mathcal{X}\to\mathcal{Y}\) and \(\beta:\mathcal{Y}\to\mathcal{X}\) such that all the maps (the \(\alpha_{t}\), \(\beta_{t}\) and the transition maps in \(\mathcal{X}\) and \(\mathcal{Y}\)) commute appropriately. We can use interleaving maps to define the _interleaving distance_ between persistence modules; \[d_{int}(\mathcal{X},\mathcal{Y})=\inf\{\epsilon\geq 0\mid\text{there exists an $\epsilon-\text{interleaving between $\mathcal{X}$ and $\mathcal{Y}$}}\}.\] It is well known that the interleaving distance between two persistence modules is the same as the bottleneck distance between their respective diagrams [9]. For a vineyard module the persistence modules at each time value are given so we will be using interleaving distances. We are now ready to define vineyards and then vineyard modules. **Definition 2.2**.: A _vineyard_ is a map from \([t_{1},t_{2}]\) to \(\mathcal{PD}\) which is continuous with respect to the bottleneck distance. We can define the domain of a vineyard as the set of times where the corresponding persistence diagram contains at least one off-diagonal point. **Definition 2.3**.: Given a vineyard \(\mathscr{X}=\{\mathcal{X}_{s}\}\) the _support_ of \(\mathscr{X}\) is the set of \(s\) such that \(\mathcal{X}_{s}\) contains an off-diagonal point. A _vine_ is a map from a compact interval to \(\mathcal{PD}\) which is continuous with respect to the bottleneck distance, such that the support is a non-empty interval and the number of off-diagonal points is at most one. Every vineyard can be written as the union of a finite number of vines. If no persistence diagram has any points of multiplicity greater than or equal to \(2\) then this union of vines is unique up to reindexing. This is the generic case. Whenever there are a multiplicity of points in the persistence diagrams there will be a combinatorial explosion of different potential decompositions of the vineyard into vines. Our simplifying assumptions will imply all the vineyards we study are generic (as in never points of multiplicity in any persistence diagram), that every vineyard will contain only a finite number of vines, and that two critical values can coincide (as in only one pair of birth values, one pair of death values of one pair of a birth and a death values can be the same). These genericity assumptions are analogous to those found within Cerf theory which is studies one-parameter families of Morse functions. **Definition 2.4** (Vineyard module).: Let \([t_{1},t_{2}]\subset\mathbb{R}\) and \(f:[t_{1},t_{2}]\rightarrow(0,\infty)\) be a bounded continuous function. For \(s,t\in[t_{1},t_{2}]\) set \(F(s,t)=\big{|}\int_{s}^{t}f(x)\,dx\big{|}\). 
A _vineyard module_ over with respect to \(f\) is a family of persistence modules \(\{V_{t}\mid t_{1}\leq t\leq t_{2}\}\), alongside for each \(s<t\) an \(F(s,t)\)-interleaving \(\alpha_{s}^{t}:V_{s}\to V_{t}\) and \(\beta_{t}^{*}:V_{t}\to V_{s}\) such that all the interleaving maps commute with each other and the transition maps within the individual persistence modules. We call these \(\alpha_{s}^{t}\) and \(\beta_{t}^{*}\) the _interleaving maps_ within the vineyard module as they interleaving between the persistence modules at different time values. For the sake of clarity we will from now on make the simplifying assumption that \(f:[t_{1},t_{2}]\rightarrow\mathbb{R}\) is the constant function with value \(1\). Such vineyard modules and their corresponding vineyards are called \(1\)_-Lipschitz_. This will imply that the scaling function is \(F(s,t)=|s-t|\). One potential way to extend the results from \(1\)-Lipschitz to more general vineyard modules would be to explore rescaling the time parameter via the value of \(f\) at each point in time. There are complications when multiple vineyard modules which have different scales for the the interleaving maps which has great potential for confusion. For this reason we will restrict here to the simpler case where everything is \(1\)-Lipschitz and leave generalising to all vineyard modules to future research. **Assumption 2.5**.: _We will assume that all vineyard modules are \(1\)-Lipschitz._ Given a vineyard module we can forget the interleaving maps and consider the underlying vineyard. There can be many different vineyard modules that have the same underlying vineyard but are not isomorphic. We can define the vines within a vineyard module as the vines within its corresponding vineyard. As we have two-dimensions to consider we will use different terminology to help discriminate. We will refer to the parameter along to vineyard which references which persistence module we are in as the _time_ and the parameter within a single persistence persistence module as the _height_. A height within a persistence module is _critical_ if it is birth or death time of some interval within the interval decomposition. A time is _critical_ if the birth and death values within the corresponding persistence module are not all distinct. **Assumption 2.6**.: _We will assume that all vineyard modules contain only finitely many vines and only finitely many critical times. We also will assume at the critical times that no more that at most two critical heights can coincide (as in only one pair of birth values, one pair of death values of one pair of a birth and a death values can be the same)._ Our simplifying assumptions assure that the set of vines within a vineyard module is uniquely determined as there are no points with higher multiplicity than \(1\) in any of the persistence diagrams. We can use the decomposition of a vineyard into vines to give a consistent labelling of the basis elements within the persistence modules of a vineyard module. This will substantially ease the bookkeeping required. Given a vine we can construct its _vine module_ which is a vineyard module whose persistence modules \(\mathcal{X}_{t}\) are interval modules \(\mathcal{I}[\mathbf{b}(\gamma(t)),\mathbf{d}(\gamma(t)))\) for all \(t\in\operatorname{supp}(\gamma)\) and the zero persistence module otherwise. 
We also have \(|s-t|\)-morphisms between \(\mathcal{X}_{s}=(X_{a}^{s})_{a\in\mathbb{R}}\) and \(\mathcal{X}_{t}=(X_{a}^{t})_{a\in\mathbb{R}}\); with \(\alpha_{a}:X_{a}^{s}\to X_{a+|s-t|}^{t}\) are the identity for \(\mathbf{b}(\gamma(s))\leq a<\mathbf{d}(\gamma(t))-|s-t|\), and otherwise the zero map, and \(\beta::X_{a}^{t}\to X_{a+|s-t|}^{s}\) defined symmetrically. **Definition 2.7**.: A _morphism_ between vineyard modules \(\mathscr{A}=\{\mathcal{A}_{t}\}\) and \(\mathscr{B}=\{\mathcal{B}_{t}\}\) is a family of morphisms \(\alpha_{t}:\mathcal{A}_{t}\rightarrow\mathcal{B}_{t}\) which commute with all the appropriate interleaving and transition maps. Once we have a notion of morphism we can define, submodules, indecomposable modules, simple modules and a decomposition into submodules. There are many directions of theory development of the relevant homological algebra. However we will leave this for future work. Given two vineyard modules we can consider their direct sum. **Definition 2.8**.: Let \(\mathscr{V}=(\{V_{t}\},\{\alpha_{V}^{s\to t}\},\{\beta_{V}^{t \to s}\}\) and \(\mathscr{W}=(\{W_{t}\},\{\alpha_{W}^{s\to t}\},\{\beta_{W}^{t \to s}\})\) be vineyard modules. Their _direct sum_\(\mathscr{V}\oplus\mathscr{W}\) is the vineyard module with persistence modules \(\{V_{t}\oplus W_{t}\}\) and interleaving maps \(\{\alpha_{V}^{s\to t}\oplus\alpha_{W}^{s\to t}\}\) and \(\{\beta_{V}^{t\to s}\oplus\beta_{W}^{t\to s}\}\). We know that every vineyard module is isomorphic to a direct sum of indecomposable vineyard modules. In this decomposition, each vine must be fully contained in a single summand. **Proposition 2.9**.: Let \(\mathscr{X}\) be a vineyard module which is the direct sum of vineyard modules of vineyard modules \(\mathscr{V}=\{V_{t},\alpha_{V},\beta_{V}\}\) and \(\mathscr{W}=\{W_{t},\alpha_{W},\beta_{W}\}\). Let \(\gamma\) be a vine of \(V\). Then either \([\mathbf{b}(\gamma(t)),\mathbf{d}(\gamma(t)))\) is an interval in the interval decomposition of \(V_{t}\) for all \(t\) in the support of \(\gamma\), or \([\mathbf{b}(\gamma(t)),\mathbf{d}(\gamma(t)))\) is an interval in the interval decomposition of \(W_{t}\) for all \(t\) in the support of \(\gamma\). Proof.: Since \(\mathscr{X}=\mathscr{V}\oplus\mathscr{W}\) we also have \(X_{t}=V_{t}\oplus W_{t}\) for all \(t\). For each \(t\in\operatorname{supp}(\gamma)\), \([\mathbf{b}(\gamma(t)),\mathbf{d}(\gamma(t)))\) is an interval in the interval decomposition of \(X_{t}\) so it must either be an interval in \(V_{t}\) or an interval in \(W_{t}\). Let \(A^{V},A^{W}\subset\operatorname{supp}(\gamma)\) be the sets of values of \(t\) where \([\mathbf{b}(\gamma(t)),\mathbf{d}(\gamma(t)))\) is an interval in the interval decomposition of \(V_{t}\) and \(W_{t}\) respectively. If either of these sets is empty we are done. Suppose neither set is empty. Since \(\operatorname{supp}(\gamma)\) is a connected interval which is open in \((s_{0},s_{1})\) without loss of generality (swapping the roles of \(\mathscr{V}\) and \(\mathscr{W}\) if necessary) there exists a value \(t\in A^{V}\) and a sequence \(\{t_{n}\}\) in \(A^{W}\) which converges to \(t\). Let \(\epsilon>0\) be the minimum distance from \([\mathbf{b}(\gamma(t)),\mathbf{d}(\gamma(t)))\) to any interval in \(X_{t}\) or the diagonal. This \(\epsilon\) is non-zero by our genericity assumption that there are no intervals of higher multiplicity. There is an element of \(s\in\{t_{n}\}\) with distance less than \(\epsilon/2\) from \(t\). 
As \(\mathscr{V}\) is a vineyard module the bottleneck distance between \(V_{t}\) and \(V_{s}\) is bounded by \(|s-t|\). However, since \([\mathbf{b}(\gamma(t)),\mathbf{d}(\gamma(t)))\) is not an interval in the interval decomposition of \(V_{s}\) there is no interval within \(V_{s}\) suitable to pair with \([\mathbf{b}(\gamma(t)),\mathbf{d}(\gamma(t)))\) in \(V_{t}\). This causes a contradiction. From Proposition 2.9 we know that each when decomposing a vineyard module into submodules that each vine that this decomposition will also partition the vines. **Corollary 2.10**.: _Let \(\mathscr{V}\) be a vineyard modules with vines \(\{\gamma_{1},\gamma_{2},\ldots\gamma_{N}\}\). Let \(\bigoplus_{n=1}^{k}\mathscr{V}_{i}\) be a decomposition of \(\mathscr{V}\) into (non-zero) indecomposable submodules. Then \(k\leq N\) and there exists a partition \(P=\sqcup_{i=1}^{k}P_{i}\) of \(\{1,\ldots N\}\) such that the the vineyard of \(\mathscr{V}_{i}\) consists of the union of the vines \(\{\gamma_{j}\mid j\in P_{i}\}\)._ First observe that by Proposition 2.9 we know that each vine must entirely contained in the vineyards of \(\mathscr{V}\) for exactly one \(i\). We know that \(k\leq N\) as whenever the corresponding vineyard of a vineyard module has no vines it must have come from the zero vineyard module. **Definition 2.11**.: Given an underling vineyard \(\mathbb{V}\) with vines \(\{\gamma_{1},\gamma_{2},\ldots,\gamma_{K}\}\), the _trivial_ vineyard module is the direct sum of the vine modules \(\mathcal{I}[\gamma_{i}]\). ## 3 Matrix representations of \(\epsilon\)-morphisms Throughout we will be exploiting matrix representation of \(\epsilon\)-morphisms between persistence modules which first requires understanding what a basis is. Given a persistence module there can be many possibles choices of basis. The space of bases is more complicated than in the situation of vector spaces. For looking at the space of all possible bases in a persistence module in great detail please see [8]. Here we will use much more condensed notation. For the purposes of this paper we will use the following description of a basis. Note that this description does require the assumption our persistence modules are in the form \(\oplus_{i=1}^{m}\mathcal{I}[b_{i},d_{i})\) and other definitions would need to be used if we were considering intervals with different choices of closed/open endpoints. Before we define a basis we must first define the birth and death time of an element within a persistence module. **Definition 3.1**.: Let \(\mathcal{X}=(X_{t},\phi_{s}^{t})\) be a persistence module. We say that \(x\in X_{t}\) is _born_ at \(t\), denoted \(\mathbf{b}(x)=t\), if \(x\) is not in the image of \(\phi_{s}^{t}(X_{s})\) for any \(s<t\). We define the _death_ of \(x\), denoted \(\mathbf{d}(x)\), to be \(\inf\{s>\mathbf{b}(x)\mid\phi_{\mathbf{b}(x)}^{s}(x_{t})=0\}\). **Definition 3.2**.: Suppose \(\mathcal{X}=(X_{t},\phi_{s}^{t})\) is a persistence module with interval decomposition \(\oplus_{i=1}^{N}\mathcal{I}[b_{i},d_{i})\) such that no intervals appear with multiplicity greater than \(1\). The set \[\{x_{1},x_{2},\ldots,x_{N}\mid x_{i}\in X_{b_{i}}\}\] is called a _basis_ for \(\mathcal{X}\) if, \(\mathbf{b}(x_{i})=b_{i}\), \(\mathbf{d}(x_{i})=d_{i}\) and for each \(t\in\mathbb{R}\), the set \(\{\phi_{\mathbf{b}(x_{i})}^{t}(x_{i})\mid\mathbf{b}(x_{i})\leq t<\mathbf{d}(x_ {i})\}\) is a basis of \(X_{t}\). 
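For example, if \(\mathcal{X}\cong\mathcal{I}[0,3)\oplus\mathcal{I}[1,5)\) then a basis consists of elements \(x_{1}\in X_{0}\) and \(x_{2}\in X_{1}\) with \(\mathbf{b}(x_{1})=0\), \(\mathbf{d}(x_{1})=3\), \(\mathbf{b}(x_{2})=1\) and \(\mathbf{d}(x_{2})=5\); for each \(t\in[1,3)\) the vectors \(\phi_{0}^{t}(x_{1})\) and \(\phi_{1}^{t}(x_{2})\) must be linearly independent in \(X_{t}\), while for \(t\in[0,1)\) and \(t\in[3,5)\) a single one of them spans \(X_{t}\).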
Once we have fixed a choice of basis for \(\mathcal{X}=(\{X_{t}\},\{\phi_{s}^{t}\})\) and \(\mathcal{Y}=(\{Y_{t}\},\{\psi_{s}^{t}\})\) we can consider the matrix representations for any \(\epsilon\)-morphism \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) with respect to this basis. Using the index order for the basis elements (\(B_{X}=\{x_{i}\}\) generators of \(\mathcal{X}\) and \(B_{Y}=\{y_{j}\}\) generators for \(\mathcal{Y}\)), we can construct matrix \(Mat_{B_{X}}^{B_{Y}}(\alpha)\) by requiring \[\alpha_{\mathbf{b}(x_{i})}(x_{i})=\sum_{\{j|\mathbf{b}(x_{i})+\epsilon\in[ \mathbf{b}(y_{j}),\mathbf{d}(y_{j}))\}}Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)\psi_{ \mathbf{b}(y_{j})}^{\mathbf{b}(x_{i})+\epsilon}(y_{j}).\] and setting \(Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)=0\) whenever \(\mathbf{b}(x_{i})+\epsilon\notin[\mathbf{b}(y_{j}),\mathbf{d}(y_{j}))\). This is well defined as each vector space \(Y_{t}\in\mathcal{Y}\) has \(\{\psi_{\mathbf{b}(y_{j})}^{t}(y_{j})\mid\mathbf{b}(y_{j})\leq t<\mathbf{d}( y_{j})\}\) as a basis. **Lemma 3.3**.: _For fixed bases \(B_{X}\) and \(B_{Y}\) of persistence modules \(\mathcal{X}=(\{X_{t}\},\{\phi_{s}^{t}\})\) and \(\mathcal{Y}=(\{Y_{t},\psi_{s}^{t}\})\), the matrix \(Mat_{B_{X}}^{B_{Y}}(\alpha)\) completely determines \(\alpha\). Furthermore, if \(Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)\neq 0\) then_ \[\mathbf{b}(y_{j})\leq\mathbf{b}(x_{i})+\epsilon<\mathbf{d}(y_{j})\leq\mathbf{ d}(x_{i})+\epsilon\] Proof.: We can write each of the linear maps \(\alpha_{t}\) via \(Mat_{B_{X}}^{B_{Y}}(\alpha)\) and the transition maps \(\phi\) and \(\psi\). \[\alpha_{s}\left(\sum_{\mathbf{b}(x_{i})\leq s<\mathbf{d}(x_{i})} \lambda_{i}\phi_{\mathbf{b}(x_{i})}^{s}(x_{i})\right) =\sum_{\mathbf{b}(x_{i})\leq s<\mathbf{d}(x_{i})}\lambda_{i} \alpha_{s}(\phi_{\mathbf{b}(x_{i})}^{s}(x_{i}))\] \[=\sum_{\mathbf{b}(x_{i})\leq s<\mathbf{d}(x_{i})}\lambda_{i} \psi_{\mathbf{b}(x_{i})+\epsilon}^{s+\epsilon}(\alpha_{\mathbf{b}(x_{i})}(x_{ i}))\] \[=\sum_{\mathbf{b}(x_{i})\leq s<\mathbf{d}(x_{i})}\lambda_{i} \psi_{\mathbf{b}(x_{i})+\epsilon}^{s+\epsilon}\left(\sum_{\mathbf{b}(y_{j}) \leq\mathbf{b}(x_{i})+\epsilon<\mathbf{d}(y_{j})}Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)\psi_{\mathbf{b}(y_{j})}^{\mathbf{b}(x_{i})+\epsilon}(y_{j})\right)\] \[=\sum_{\{i|\mathbf{b}(x_{i})\leq s<\mathbf{d}(x_{i})\}}\lambda_{i }\sum_{\{j|\mathbf{b}(y_{j})\leq\mathbf{b}(x_{i})+\epsilon<\mathbf{d}(y_{j})\} }Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)\psi_{\mathbf{b}(y_{j})}^{s+\epsilon}(y_{j})\] Since the \(\alpha_{r}\) must commute with the transition maps if \(Mat(\alpha)(j,i)\) is non-zero then \(\mathbf{d}(x_{i})\geq\mathbf{d}(y_{j})-\epsilon\). Instead of using change of basis matrices (such as explored in [8]) we will instead represent each change of basis as a linear transformation of the previous basis. That is, we wish to write the new basis elements as a linear combination of the old basis elements. This will reduce the linear algebra calculations needed later and avoid the issue of using inverses (which are not well-defined when using extended basis later). **Definition 3.4**.: Let \(\mathcal{X}\) be a persistence module with interval decomposition \(\oplus_{i=1}^{N}\mathcal{I}[b_{i},d_{i})\) such that no intervals appear with multiplicity greater than \(1\). We say that an \(N\times N\) matrix \(A=(a_{ij})\) is a _basis transformation matrix_ for \(\mathcal{X}\) if \(a_{ii}\neq 0\) for all \(i\) and whenever \(a_{ji}\neq 0\) then \(\mathbf{b}(x_{j})\leq\mathbf{b}(x_{i})\) and \(\mathbf{d}(x_{j})\leq\mathbf{d}(x_{i})\). 
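For instance, continuing the example above with \(\mathcal{X}\cong\mathcal{I}[0,3)\oplus\mathcal{I}[1,5)\) and basis \(\{x_{1},x_{2}\}\), the \(2\times 2\) matrix \(A\) with \(a_{11}=a_{22}=1\), \(a_{21}=0\) and \(a_{12}=\lambda\) is a basis transformation matrix for any \(\lambda\), since \(\mathbf{b}(x_{1})=0\leq 1=\mathbf{b}(x_{2})\) and \(\mathbf{d}(x_{1})=3\leq 5=\mathbf{d}(x_{2})\); by contrast an entry \(a_{21}\neq 0\) would not be allowed, as \(\mathbf{b}(x_{2})=1>0=\mathbf{b}(x_{1})\).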
The following lemma is effectively proved in [8] but with such vastly different notation and perspective that we include the proof here. **Lemma 3.5**.: _Let \(\mathcal{X}=(\{X_{t}\},\{\phi_{s}^{t}\})\) be a persistence module with interval decomposition \(\oplus_{i=1}^{N}\mathcal{I}[b_{i},d_{i})\) such that no intervals appear with multiplicity greater than \(1\). Fix a basis \(B=\{x_{1},\ldots x_{N}\}\) for \(\mathcal{X}\). If \(A=(a_{ji})\) is a basis transformation matrix then the set \(B^{new}:=\{x_{1}^{new},x_{2}^{new},\ldots,x_{N}^{new}\}\) forms a basis for \(\mathcal{X}\) where_ \[x_{i}^{new}=\sum a_{ji}\phi_{\mathbf{b}(x_{j})}^{\mathbf{b}(x_{i})}(x_{j})\in X _{\mathbf{b}(x_{i})}.\] _With a slight abuse of notation we write \(B_{Y}^{new}=A(B_{Y})\). For this new basis we have \(\mathbf{b}(x_{i}^{new})=\mathbf{b}(x_{i})\) and \(\mathbf{d}(x_{i}^{new})=\mathbf{d}(x_{i})\)._ Proof.: Let \(x_{i}^{new}=\sum a_{ji}\phi_{\mathbf{b}(x_{j})}^{\mathbf{b}(x_{i})}(x_{j})\) which by construction is an element of \(X_{\mathbf{b}(x_{i})}\). Fix a sufficiently small \(\delta>0\) so that no births of deaths events occur within \([\mathbf{b}(x_{i})-\delta,\mathbf{b}(x_{i}))\). As \(B\) is a basis, \(\phi_{\mathbf{b}(x_{i})}^{\mathbf{d}(x_{i})}(x_{i})=0\). Furthermore, by assumption, \(\phi_{\mathbf{b}(x_{j})}^{\mathbf{d}(x_{i})}(x_{j})=0\) whenever \(a_{ji}\neq 0\). Together these imply \[\phi_{\mathbf{b}(x_{i})}^{\mathbf{d}(x_{i})}(x_{i}^{new})=a_{ii}\phi_{\mathbf{ b}(x_{i})}^{\mathbf{d}(x_{i})}(x_{i})+\sum_{\{j|\mathbf{b}(x_{j})<\mathbf{b}(i)\}}a_ {ji}\phi_{\mathbf{b}(x_{j})}^{\mathbf{d}(x_{i})}(x_{j})=0.\] For \(t\in[\mathbf{b}(x_{i}),\mathbf{d}(x_{i}))\) we know that \(\{\phi_{\mathbf{b}(x_{j})}^{t}(x_{j})\mid\mathbf{b}(x_{j})\leq t<\mathbf{d}(x _{j})\}\) is a basis of \(X_{t}\). This implies that \(\phi_{\mathbf{b}(x_{i})}^{t}(x_{i})\) is linearly independent to \(\{\phi_{\mathbf{b}(x_{j})}^{t}(x_{j})\mid\mathbf{b}(x_{j})\leq t<\mathbf{d}(x _{j}),j\neq i\}\) and \(\phi_{\mathbf{b}(x_{i})}^{t}(x_{i}^{n}ew)=a_{ii}\phi_{\mathbf{b}(x_{i})}^{t}(x _{i})+\sum_{\{j|\mathbf{b}(x_{j})<\mathbf{b}(i)\}}a_{ji}\phi_{\mathbf{b}(x_{j })}^{t}(x_{j})\neq 0\). We have now shown that \(\mathbf{b}(x_{i}^{new})=\mathbf{b}(x_{i})\) and \(\mathbf{d}(x_{i}^{new})=\mathbf{d}(x_{i})\) for all \(i\). We need to show that the set \(\{\phi_{\mathbf{b}(x_{i}^{new})}^{t}(x_{i})\mid\mathbf{b}(x_{i}^{new})\leq t< \mathbf{d}(x_{i}^{new})\}\) is a basis of \(X_{t}\). Fix a \(t\) and let \(S=\{i\mid\mathbf{b}(x_{i})\leq t<\mathbf{d}(x_{i})\}\). set \(A_{t}\) to be the matrix \(A\) restricted to the columns and rows with indices in \(S\). Without loss of generality, rearrange the order of the indices in \(S\) and the corresponding rows and columns within \(A_{S}\) such that \(b_{j}\leq b_{i}\) whenever \(j\leq i\). Our assumptions on the entries \(a_{ji}\) imply that \(A_{t}\) is an upper triangular matrix with non-zero diagonal entries. This implies \(A_{t}\) is always invertible. As vectors in \(X_{t}\) we have \(x_{i}^{new}=A_{t}x_{i}\) for each \(i\in S\). Since \(\{x_{i}\mid i\in S\}\) is a basis of \(X_{t}\) and \(A_{t}\) is invertible we also have \(\{x_{i}^{new}\mid i\in S\}\) is a basis for \(X_{t}\). Note that if \(\mathbf{b}(x_{i})>\mathbf{d}(x_{j})\) then \(\phi_{\mathbf{b}(x_{j})}^{\mathbf{b}(x_{i})}(x_{j})=0\). This means that more than one basis transformation matrix can create the same new basis. 
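When manipulating these representations by computer it can be convenient to check Definition 3.4 directly from the interval endpoints. The following is a minimal sketch of such a check; the function name and indexing conventions are illustrative only.

```python
def is_basis_transformation(A, intervals):
    """Check Definition 3.4 for a square matrix A = (a_ji), with A[j][i] the entry
    in row j and column i, and intervals[i] = (b_i, d_i) the interval of x_i."""
    n = len(intervals)
    # Diagonal entries must be non-zero.
    if any(A[i][i] == 0 for i in range(n)):
        return False
    # Off-diagonal entries a_ji != 0 require b(x_j) <= b(x_i) and d(x_j) <= d(x_i).
    for i in range(n):
        for j in range(n):
            if A[j][i] != 0:
                bj, dj = intervals[j]
                bi, di = intervals[i]
                if not (bj <= bi and dj <= di):
                    return False
    return True
```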
Here we are only considering basis transformations which retain the same indexing with respect to some interval decomposition. It would be possible to generalise to allow for permutations of the indexing of the intervals. However in the context of vineyard modules this is unnecessary and a potential source of confusion. We now want to understand how the matrices of \(\epsilon\)-morphisms change when we transform the basis. This will be analogous to matrix theory but some care needs to be made. We will use \(\mathbb{I}\) to denote the identity matrix. **Lemma 3.6**.: _Consider an \(\epsilon\)-morphism \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) where \(B_{X}\) is a basis for \(\mathcal{X}\) and \(B_{Y}^{old}\) is a basis for \(\mathcal{Y}\) such that \(|\mathbf{b}(x_{i})-\mathbf{b}(y_{i})|<\epsilon\) and \(|\mathbf{d}(x_{i})-\mathbf{d}(y_{i})|<\epsilon\) and all intervals are of length greater than \(2\epsilon\). If \(Mat_{B_{X}^{\mathcal{Y}^{old}}}^{B_{Y}^{old}}(\alpha)\) is a basis transformation matrix for \(\mathcal{Y}\) and \(B_{Y}^{new}=Mat_{B_{X}^{\mathcal{Y}^{old}}}^{B_{Y}^{old}}(\alpha)(B_{Y}^{old})\) is the corresponding transformed basis then \(Mat_{B_{X}^{\mathcal{Y}^{new}}}^{B_{Y}^{new}}(\alpha)=\mathbb{I}\)._ Proof.: Fix \(i\). Since \(Mat_{B_{X}^{\mathcal{Y}}}^{B_{Y}^{old}}(\alpha)\) is a basis transformation we know that whenever \(Mat_{B_{X}^{X}}^{B_{Y}^{old}}(\alpha)(j,i)\neq 0\) we have \(\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{i}^{old})\). This means we can rewrite each of the \(\psi_{\mathbf{b}(y_{j}^{old})}^{\mathbf{b}(x_{i})+\epsilon}\) as the composition of \(\psi_{\mathbf{b}(y_{i}^{old})}^{\mathbf{b}(x_{i})+\epsilon}\) and \(\psi_{\mathbf{b}(y_{j}^{old})}^{\mathbf{b}(y_{j}^{old})}\). \[\alpha_{\mathbf{b}(x_{i})}(x_{i}) =\sum_{\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(x_{i})+\epsilon}Mat_{ B_{X}^{X}}^{B_{Y}^{old}}(\alpha)(j,i)\psi_{\mathbf{b}(y_{j}^{old})}^{\mathbf{b}(x_{i})+ \epsilon}(y_{j}^{old})\] \[=\psi_{\mathbf{b}(y_{j}^{old})}^{\mathbf{b}(x_{i})+\epsilon}\left( \sum_{\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(x_{i})+\epsilon}Mat_{B_{X}^{X}}^{B_{Y }^{old}}(\alpha)(j,i)\psi_{\mathbf{b}(y_{j}^{old})}^{\mathbf{b}(y_{j}^{old})}(y _{j}^{old})\right)\] \[=\psi_{\mathbf{b}(y_{i}^{new})}^{\mathbf{b}(x_{i})+\epsilon}(y_{j} ^{new}).\] Note that \(\mathbf{b}(y_{i}^{new})=\mathbf{b}(y_{i}^{old})\) by definition. Slightly more complication but of high importance later is the case where \(Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)e_{lk}^{\mu}\) is a basis transformation, where \(e_{lk}^{\mu}\) is the elementary matrix with \[e_{lk}^{\mu}(i,j)=\begin{cases}1&\text{if }i=j\\ \mu&\text{if }(i,j)=(k,l)\\ 0&\text{otherwise.}\end{cases}\] The function \(A\mapsto Ae_{lk}^{\mu}\) corresponds to the standard elementary column operation of adding \(\mu\) times column \(l\) to column \(k\). **Lemma 3.7**.: _Consider an \(\epsilon\)-morphism \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) where \(B_{X}\) is a basis for \(\mathcal{X}\) and \(B_{Y}^{old}\) is a basis for \(\mathcal{Y}\) such that \(|\mathbf{b}(x_{i})-\mathbf{b}(y_{i}^{old})|<\epsilon\) and \(|\mathbf{d}(x_{i})-\mathbf{d}(y_{i}^{old})|<\epsilon\) and all intervals are of length greater than \(2\epsilon\). Further assume that \(\mathbf{b}(x_{l})+\epsilon<\mathbf{d}(y_{l}^{old})\)._ _If \(Mat_{B_{X}}^{p_{X}^{old}}(\alpha)e_{lk}^{-\lambda}\) is a basis transformation for \(\mathcal{Y}\) and \(B_{Y}^{new}\) is the basis for \(\mathcal{Y}\) and after this basis transformation. 
Then \(Mat_{B_{X}}^{B_{Y}^{new}}(\alpha)=e_{lk}^{\lambda}\)._ Proof.: First consider \(i\neq k\). Under the basis transformation \(Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)e_{lk}^{-\lambda}\) we have \[y_{i}^{new}=\sum_{j}\left(Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)e_{lk}^{-\lambda} \right)(j,i)\psi_{\mathbf{b}(y_{i}^{old})}^{\mathbf{b}(y_{i}^{old})}(y_{j}^{old })=\sum_{j}Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)(j,i)\psi_{\mathbf{b}(y_{j}^{old}) }^{\mathbf{b}(y_{j}^{old})}(y_{j}^{old}).\] With the new basis we have \[\alpha_{\mathbf{b}(x_{i})}(x_{i}) =\sum_{\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(x_{i})+\epsilon}Mat_{ B_{X}}^{B_{Y}^{old}}(\alpha)(j,i)\psi_{\mathbf{b}(y_{j}^{old})}^{\mathbf{b}(x_{i})+ \epsilon}(y_{j}^{old})\] \[=\psi_{\mathbf{b}(y_{i}^{old})}^{\mathbf{b}(x_{i})+\epsilon}\left( \sum_{\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(x_{i})+\epsilon}Mat_{B_{X}}^{B_{Y }^{old}}(\alpha)(j,i)\psi_{\mathbf{b}(y_{j}^{old})}^{\mathbf{b}(y_{j}^{old})} (y_{j}^{old})\right)\] \[=\psi_{\mathbf{b}(y_{i}^{new})}^{\mathbf{b}(x_{i})+\epsilon}(y_{ i}^{new})(y_{i}^{new})\] Note that \(\mathbf{b}(y_{i}^{new})=\mathbf{b}(y_{i}^{old})\) by definition. As the \((j,l)\) entry of \(Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)\) and \(Mat_{B_{X}}^{B_{X}^{old}}(\alpha)e_{lk}^{-\lambda}\) agree, if \(Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)(j,l)\neq 0\) then \(\mathbf{b}(y_{j})\leq\mathbf{b}(y_{l}^{old})\). If \(Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)(j,k)-\lambda Mat_{B_{X}^{X}}^{B_{Y}^{old}}( \alpha)(j,l)\neq 0\) then \(\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{k}^{old})\) as by assumption as \(Mat_{B_{X}^{old}}^{B_{Y}^{old}}(\alpha)e_{lk}^{-\lambda}\) is a basis transformation for \(\mathcal{Y}\). We can use these facts to rewrite the summations in the following calculation. \[\alpha_{\mathbf{b}(x_{k})}(x_{k}) =\sum_{\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(x_{k})+\epsilon}Mat_{ B_{X}}^{B_{Y}^{old}}(\alpha)(j,k)\psi_{\mathbf{b}(y_{j}^{old})}^{\mathbf{b}(x_{k})+ \epsilon}(y_{j}^{old})\] \[=\sum_{\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(x_{k})+\epsilon}(Mat _{B_{X}}^{B_{Y}^{old}}(\alpha)(j,k)-\lambda Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)(j,l ))+\lambda Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)(j,l))\psi_{\mathbf{b}(y_{j}^{old}) }^{\mathbf{b}(x_{k})+\epsilon}(y_{j}^{old})\] \[=\psi_{\mathbf{b}(y_{k}^{old})+\epsilon}^{\mathbf{b}(x_{k})+ \epsilon}\left(\sum_{\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{k}^{old})}(Mat_{ B_{X}}^{B_{Y}^{old}}(\alpha)(j,k)-\lambda Mat_{B_{X}}^{B_{Y}^{old}}(\alpha)(j,l))\psi_{ \mathbf{b}(y_{j}^{old})}^{\mathbf{b}(y_{k}^{old})}(y_{j}^{old})\right)\] \[\qquad+\psi_{\mathbf{b}(y_{l}^{old})}^{\mathbf{b}(x_{k})+\epsilon} \lambda\left(\sum_{\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{j}^{old})}Mat_{B_{X} }^{B_{Y}^{old}}(\alpha)(j,l)\psi_{\mathbf{b}(y_{j}^{old})}^{\mathbf{b}(y_{j}^{ old})}(y_{j}^{old})\right)\] \[=\psi_{\mathbf{b}(y_{k}^{new})}^{\mathbf{b}(x_{k})+\epsilon}(y_{ k}^{new})+\lambda\psi_{\mathbf{b}(y_{l}^{new})}^{\mathbf{b}(x_{k})+\epsilon}(y_{l}^{new})\] From our assumptions about the lengths of intervals and the pairing of critical values we know that \(\mathbf{b}(x_{i})+\epsilon<\mathbf{d}(y_{i}^{new})\) for all \(i\). We also assumed that \(\mathbf{b}(x_{l})+\epsilon<\mathbf{d}(y_{k}^{new})\). Since the \(\{y_{j}^{new}\}\) form a basis we can conclude that \(Mat_{B_{X}}^{B^{new}_{X}}(\alpha)=e_{lk}^{\lambda}\). To make the bookkeeping easier later we will want to have the same number of basis elements throughout the time period of a vineyard. It will be helpful to generalise our notion of basis to allow for extra zero elements. 
To do this we will introduce the definition of an extended basis and transformation of an extended basis. **Definition 3.8**.: Given a persistence module \(\mathcal{X}\) we say that an _extended basis_ of \(\mathcal{X}\) is a multiset \(B^{\prime}\) consisting of the union of a basis \(B\) of \(\mathcal{X}\) and an indexed set of zero elements. Note that within an extended basis the order of the indices of the zero and non-zero elements may be mixed up. When we wish to pull out the basis contained in an extended basis we will be restricting to appropriate subset of indices. The notions of the matrix of a morphism and basis transformations naturally extend to extended basis. To extend the definition of the matrix of a morphism we merely add in rows and columns of zeros for the indices of the extended basis which are zero. To extend the notion of a basis transformation we also add rows and columns for the zero elements of the different extended basis. If we restrict the extended basis transformation matrix to the indices of the contained bases then we will have a (non-extended) basis transformation matrix. ## 4 Simplifying the matrix for an \(\epsilon\)-interleaving This section is devoted to understanding when \(Mat_{B_{X}}^{B_{Y}}(\alpha)\) is a basis transformation matrix for \(\mathcal{Y}\) when \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) is part of an interleaving of sufficiently close persistence modules. Firstly we will establish a useful lemma for calculations. **Lemma 4.1**.: _Let \(B_{X}\) and \(B_{Y}\) be bases for persistence modules \(\mathcal{X}=(X_{t},\phi_{s}^{t})\) and \(\mathcal{Y}=(Y_{t},\psi_{s}^{t})\). Let \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) and \(\beta:\mathcal{Y}\rightarrow\mathcal{X}\) form an \(\epsilon\)-interleaving. Then for each \(i\) we have_ \[\beta_{\mathbf{b}(x_{i})+\epsilon}(\alpha_{\mathbf{b}(x_{i})}(x_{i}))=\sum_{j, k}Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)Mat_{B_{Y}}^{B_{X}}(\beta)(k,j)\phi_{ \mathbf{b}(x_{k})}^{\mathbf{b}(x_{i})+2\epsilon}(x_{k}).\] Proof.: \[\beta_{\mathbf{b}(x_{i})+\epsilon}(\alpha_{\mathbf{b}(x_{i})}(x_{ i})) =\beta_{\mathbf{b}(x_{i})+\epsilon}\left(\sum_{j}Mat_{B_{X}}^{B_{Y}}( \alpha)(j,i)\psi_{\mathbf{b}(y_{j})}^{\mathbf{b}(x_{i})+\epsilon}(y_{j})\right)\] \[=\sum_{j}Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)\beta_{\mathbf{b}(x_{i}) +\epsilon}(\psi_{\mathbf{b}(y_{j})}^{\mathbf{b}(x_{i})+\epsilon}(y_{j}))\] \[=\sum_{j}Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)\phi_{\mathbf{b}(y_{j})+ \epsilon}^{\mathbf{b}(x_{i})+2\epsilon}(\beta_{\mathbf{b}(y_{j})}(y_{j}))\] \[=\sum_{j}Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)\phi_{\mathbf{b}(y_{j})+ \epsilon}^{\mathbf{b}(x_{i})+2\epsilon}(\sum_{k}Mat_{B_{Y}}^{B_{X}}(\beta)(k, j)\phi_{\mathbf{b}(x_{k})}^{\mathbf{b}(y_{j})+\epsilon})\] \[=\sum_{j,k}Mat_{B_{X}}^{B_{Y}}(\alpha)(j,i)Mat_{B_{Y}}^{B_{X}}( \beta)(k,j)\phi_{\mathbf{b}(x_{k})}^{\mathbf{b}(x_{i})+2\epsilon}(x_{k})\] We want to relate the \(\epsilon\)-morphisms within an interleaving (for sufficiently small \(\epsilon\)) to basis transformation matrices. The main consideration is how the natural ordering amoungst the intervals changes. There is a natural partial order on \(\mathbb{R}^{2}\) with \((b_{1},d_{1})\leq(b_{2},d_{2})\) whenever \(b_{1}\leq b_{2}\) and \(d_{1}\leq d_{2}\). This partial order induces a partial order on the set of intervals within a barcode and from this we have a natural partial order on the basis elements associated to each of the intervals. **Definition 4.2**.: Let \(x_{i},x_{j}\) be basis elements of persistence module \(\mathcal{X}\). 
We say \(x_{j}\leq x_{i}\) if \(\mathbf{b}(x_{i})\leq\mathbf{b}(x_{j})\) and \(\mathbf{d}(x_{j})\leq\mathbf{d}(x_{i})\). We start with the (boring) case where the order of the critical values do not change and later we will consider what can happen when critical values coincide. Here the partial order stays the same, even with some \(\epsilon\) wiggle room. **Proposition 4.3**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be persistence modules where all critical values are distinct and the difference between pairs of critical values within a persistence module is greater than \(2\epsilon\), and \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) and \(\beta:\mathcal{Y}\rightarrow\mathcal{X}\) form an \(\epsilon\)-interleaving. This implies there must be the same number of intervals \(\mathcal{X}\) and \(\mathcal{Y}\) and we can pair them up so that the births and deaths vary by at most \(\epsilon\)._ _For any choice of basis \(B_{X}=\{x_{i}\}\) for \(\mathcal{X}\) and \(B^{\text{old}}_{Y}=\{y_{i}\}\) for \(\mathcal{Y}\) such that \(|\mathbf{b}(x_{i})-\mathbf{b}(y_{i})|<\epsilon\) and \(|\mathbf{d}(x_{i})-\mathbf{d}(y_{i})|<\epsilon\) for all \(i\), we have \(Mat^{B^{\text{old}}_{Y}}_{B^{\mathcal{Y}}}(\alpha)\) is a basis transformation matrix for \(\mathcal{Y}\)._ _Let \(B^{new}_{Y}=Mat^{B^{\text{old}}_{X}}_{B_{X}}(\alpha)B_{Y}\) be the new basis for \(\mathcal{Y}\). Then both \(Mat^{B^{\text{New}}_{X}}_{B_{X}}(\alpha)\) and \(Mat^{B^{\text{X}}_{X}}_{B^{new}_{Y}}(\beta)\) are the identity matrix._ Proof.: Suppose that \(Mat^{B^{\text{old}}_{X}}_{B_{X}}(\alpha)(j,i)\neq 0\). We know \(\mathbf{d}(y_{j})\leq\mathbf{d}(x_{i})+\epsilon\). Combined with our assumption that \(|\mathbf{d}(x_{j})-\mathbf{d}(y_{j})|<\epsilon\) we have \(\mathbf{d}(y_{j})\leq\mathbf{d}(y_{i})+2\epsilon\). Our assumption that every pair of critical values is at least \(2\epsilon\) apart strengthens \(\mathbf{d}(y_{j})\leq\mathbf{d}(y_{i})+2\epsilon\) to \(\mathbf{d}(y_{j})\leq\mathbf{d}(y_{i})\). The same argument can be applied to conclude that \(\mathbf{b}(y_{j})\leq\mathbf{b}(y_{i})\) for all \((j,i)\) with \(Mat^{B^{\text{old}}_{X}}_{B_{X}}(\alpha)(j,i)\neq 0\). By Lemma 4.1\(\beta_{\mathbf{b}(x_{i})+\epsilon}(\alpha_{\mathbf{b}(x_{i})}(x_{i}))=\sum_{j,k }Mat^{B^{\text{Old}}_{Y}}_{B_{X}}(\alpha)(j,i)Mat^{B^{X}_{X}}_{B^{\text{old}} _{Y}}(\beta)(k,j)\phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{k})}.\) Since \(\alpha\) and \(\beta\) form an \(\epsilon\)-interleaving we have \(\beta_{\mathbf{b}(x_{i})+\epsilon}(\alpha_{\mathbf{b}(x_{i})}(x_{i}))=\phi^{ \mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{i})}.\) As the distances between every pair of critical values within \(\mathcal{X}\) are greater than \(2\epsilon\) we know that \(\{\phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{k})}(x_{k})\}\) forms a basis for \(X_{\mathbf{b}(x_{i})+2\epsilon}\) and thus \[\sum_{j}Mat^{B^{\text{old}}_{Y}}_{B_{X}}(\alpha)(j,i)Mat^{B_{X}}_{B^{\text{old }}_{Y}}(\beta)(i,j)=1.\] Since the order of the critical values in \(\mathcal{X}\) and \(\mathcal{Y}\) are the same, we know that for \(j\neq i\) that at least one of \(Mat^{B^{\text{old}}_{X}}_{B_{X}}(\alpha)(j,i)=0\) or \(Mat^{B^{\text{old}}_{X}}_{B_{Y}}(\beta)(i,j)=0\). This implies that \(Mat^{B^{\text{old}}_{X}}_{B_{X}}(\alpha)(i,i)Mat^{B^{\text{old}}_{X}}_{B_{Y}}( \beta)(i,i)=1\) and hence \(Mat^{B^{\text{old}}_{Y}}_{B_{X}}(\alpha)(i,i)\neq 0\). We have now shown that \(Mat^{B^{\text{old}}_{X}}_{B_{X}}(\alpha)\) is a basis transformation matrix for \(\mathcal{Y}\). 
By Lemma 3.6, \(Mat^{B^{new}_{Y}}_{B_{X}}(\alpha)\) is the identity. Substituting this into the equation in Lemma 4.1 we see for each \(i\) that \[\phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{i})}(x_{i})=\beta_{\mathbf{b}(x_{i})+\epsilon}(\alpha_{\mathbf{b}(x_{i})}(x_{i}))=\sum_{k}Mat^{B_{X}}_{B^{new}_{Y}}(\beta)(k,i)\phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{k})}(x_{k}).\] Again using that, for each \(i\), we know \(\{\phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{k})}(x_{k})\}\) forms a basis of \(X_{\mathbf{b}(x_{i})+2\epsilon}\), and that no critical values occur in \((\mathbf{b}(x_{i}),\mathbf{b}(x_{i})+2\epsilon]\), we can conclude that \(Mat^{B_{X}}_{B^{new}_{Y}}(\beta)\) is the identity matrix. 

There are many different cases of segments to consider separately, which are illustrated in Table 1 and Table 2. Our simplifying assumptions do reduce the number of cases to consider. For sufficiently close time values where we have the same number of intervals, there is either no change in the ordering of critical values, or a single change between all critical values being distinct and all being distinct except for a single coincident pair. In the following table we present the different options. We will want to consider the effect of fixing the basis in \(\mathcal{X}\) and allowing the basis of \(\mathcal{Y}\) to vary. This means that the roles of \(\mathcal{X}\) and \(\mathcal{Y}\) are not symmetric. The indexing throughout this section will use \(\gamma_{k}\) and \(\gamma_{l}\) for the two vines where a potential change in the order of birth and death times occurs, and we only depict these intervals within the table. It turns out that we can apply the same proof from Proposition 4.3 to cover all of the non-special cases without much need for amendment. 

\begin{table} \begin{tabular}{c|c|c} Case & \(\mathcal{X}\) & \(\mathcal{Y}\) \\ \hline 1 & & \\ 2 & & \\ 3 & & \\ 4 & & \\ 5 & & \\ 6 & & \\ 7 & & \\ 8 & & \\ 9 & & \\ 10 & & \\ \end{tabular} \end{table} Table 1: The different possible cases of critical values coinciding in \(\mathcal{X}\) or \(\mathcal{Y}\) such that, whenever we are given an \(\epsilon\)-interleaving \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) and \(\beta:\mathcal{Y}\rightarrow\mathcal{X}\) and a basis \(B_{X}\) of \(\mathcal{X}\), we can find a basis of \(\mathcal{Y}\) so that the matrices of the interleaving maps are the identity. 

**Proposition 4.4**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be persistence modules such that the changes in the order of the critical values fit one of cases 1-10 in Table 1, with the depicted vines involved in the change in the order of critical values being \(\gamma_{k}\) and \(\gamma_{l}\). Further assume that all pairwise differences of critical values are at least \(2\epsilon\) except for the following:_ * \(|\mathbf{b}(x_{k})-\mathbf{b}(x_{l})|\) _and_ \(|\mathbf{b}(y_{k})-\mathbf{b}(y_{l})|\) _in cases_ \(1\)_,_ \(2\)_, and_ \(3\)_,_
This implies there must be the same number of intervals \(\mathcal{X}\) and \(\mathcal{Y}\) and we have paired them up so that the births and deaths vary by at most \(\epsilon\)._ _For any choice of basis \(B_{X}=\{x_{i}\}\) for \(\mathcal{X}\) and \(B_{Y}^{old}=\{y_{i}^{old}\}\) for \(\mathcal{Y}\) such that \(|\mathbf{b}(x_{i})-\mathbf{b}(y_{i}^{old})|<\epsilon\) and \(|\mathbf{d}(x_{i})-\mathbf{d}(y_{i}^{old})|<\epsilon\) for all \(i\), we have \(Mat^{B_{X}}_{B_{Y}^{old}}(\beta)\) is a basis transformation matrix for \(\mathcal{Y}\)._ _Let \(B_{Y}^{new}=Mat^{B_{Y}^{old}}_{B_{X}}(\alpha)B_{Y}^{old}\) be the new basis for \(\mathcal{Y}\). Then both \(Mat^{B_{Y}^{new}}_{B_{X}}(\alpha)\) and \(Mat^{B_{X}}_{B_{Y}^{new}}(\beta)\) are the identity matrix._ Proof.: We will show that \(Mat^{B_{Y}^{old}}_{B_{X}}(\alpha)(j,i)\neq 0\) implies \(\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{i}^{old})<\mathbf{d}(y_{j}^{old})\leq \mathbf{d}(y_{j}^{old})\). To do this we will split into different options for \((j,i)\). Suppose that \((j,i)\) is neither \((k,l)\) nor \((l,k)\). If \(Mat^{B_{X}^{old}}_{B_{X}}(\alpha)(j,i)\neq 0\) then by definition that \[\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(x_{i})+\epsilon<\mathbf{d}(y_{j}^{old}) \leq\mathbf{d}(x_{i})+\epsilon.\] Our pairing of intervals tells us that \(|\mathbf{b}(x_{i})-\mathbf{b}(y_{i}^{old})|<\epsilon\) and \(|\mathbf{d}(x_{i})-\mathbf{d}(y_{i}^{old})|<\epsilon\). Together these inequalities imply \(\mathbf{d}(y_{j}^{old})\leq\mathbf{d}(y_{i}^{old})+2\epsilon\) and \(\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{i}^{old})+2\epsilon\). We have assumed that \(|\mathbf{d}(y_{j}^{old})-\mathbf{d}(y_{i}^{old})|>2\epsilon\) and \(|\mathbf{b}(y_{j}^{old})-\mathbf{b}(y_{i}^{old})|>2\epsilon\). These strengthen \(\mathbf{d}(y_{j}^{old})\leq\mathbf{d}(y_{i}^{old})+2\epsilon\) to \(\mathbf{d}(y_{j}^{old})\leq\mathbf{d}(y_{i}^{old})\) and \(\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{i}^{old})+2\epsilon\) to \(\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{i}^{old})\). Thus \(Mat^{B_{Y}^{old}}_{B_{X}}(\alpha)(j,i)\neq 0\) implies \(\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{i}^{old})<\mathbf{d}(y_{j}^{old}) \leq\mathbf{d}(y_{j}^{old})\). Now consider \((j,i)=(l,k)\) In all cases we have \(\mathbf{b}(y_{l}^{old})\leq\mathbf{b}(y_{k}^{old})\) and \(\mathbf{d}(y_{l}^{old})\leq\mathbf{d}(y_{k}^{old})\) so whether \(Mat^{B_{Y}^{old}}_{B_{X}}(\alpha)(k,l)\) is non-zero or not causes no obstruction for \(Mat^{B_{Y}^{old}}_{B_{X}}(\alpha)\) being a basis transformation matrix for \(\mathcal{Y}\). Finally consider \((j,i)=(k,l)\). Here \(Mat^{B_{Y}^{old}}_{B_{X}}(\alpha)(k,l)\) is always zero. The reasoning in each case is as follows. In cases 1, 2 and 3 we have \(\mathbf{b}(y_{k}^{old})>\mathbf{b}(y_{l}^{old})+2\epsilon\) so \(\mathbf{b}(y_{k}^{old})>\mathbf{b}(x_{l}^{old})+\epsilon\). In cases 4, 5 and 6 we have \(\mathbf{d}(y_{k}^{old})>\mathbf{d}(y_{l}^{old})+2\epsilon\) so \(\mathbf{d}(y_{k}^{old})>\mathbf{d}(x_{l}^{old})+\epsilon\). In cases 7 and 8 we have \(\mathbf{b}(y_{k}^{old})=\mathbf{d}(y_{l}^{old})\) which implies \(\mathbf{b}(x_{k}^{old})+\epsilon>\mathbf{d}(y_{l}^{old})\). In cases 9 and 10 we have \(\mathbf{b}(x_{k}^{old})=\mathbf{d}(x_{l}^{old})\) which implies \(\mathbf{b}(x_{k}^{old})+\epsilon>\mathbf{d}(y_{l}^{old})\). Having covered all the cases we can state that \(Mat^{B_{X}^{old}}_{B_{X}}(\alpha)(j,i)\neq 0\) implies \(\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{i}^{old})\) and \(\mathbf{d}(y_{j}^{old})\leq\mathbf{d}(y_{j}^{old})\) for all \((j,i)\). 
From Lemma 4.1 \[\beta_{\mathbf{b}(x_{i})+\epsilon}(\alpha_{\mathbf{b}(x_{i})}(x_{i}))=\sum_{j,k }Mat^{B_{Y}^{old}}_{B_{X}}(\alpha)(j,i)Mat^{B_{X}}_{B_{Y}^{old}}(\beta)(k,j) \phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{k})}(x_{k}).\] Since \(\beta_{\mathbf{b}(x_{i})+\epsilon}(\alpha_{\mathbf{b}(x_{i})}(x_{i}))=\phi^{ \mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{i})}\) and the \(\{\phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{k})}(x_{k})|\mathbf{b}(x_{k} )\leq\mathbf{b}(x_{i})+2\epsilon<\mathbf{d}(x_{k})\}\) form a basis for \(X_{\mathbf{b}(x_{i})+2\epsilon}\) we know that \(\sum_{j}Mat^{B_{X}^{old}}_{B_{X}}(\alpha)(j,i)Mat^{B_{X}^{old}}_{B_{Y}^{old}}( \beta)(k,j)=1\) We thus have shown that \(Mat^{B_{X}^{old}}_{B_{X}}(\alpha)\) is a basis transformation matrix for \(\mathcal{Y}\). Furthermore, by Lemma 3.6 we automatically have \(Mat^{B_{X}^{new}}_{B_{X}}(\alpha)\) is the identity matrix. We now wish to show that \(Mat^{B_{X}^{new}}_{B_{X}^{new}}(\beta)\) is also the identity matrix. Substituting \(Mat^{B_{Y}^{new}}_{B_{X}}(\alpha)=\mathbb{I}\) into the equation in Lemma 4.1 we see for each \(i\) that \[\phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{i})}(x_{i})=\beta_{\mathbf{b}(x _{i})+\epsilon}(\alpha_{\mathbf{b}(x_{i})}(x_{i}))=\sum_{j}Mat^{B_{X}^{new}}_{B_{ Y}^{new}}(\beta)(j,i)\phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{j})}(x_{j}).\] For each \(i\), we know \(\{\phi^{\mathbf{b}(x_{i})+2\epsilon}_{\mathbf{b}(x_{j})}(x_{j})|\mathbf{b}(x_{j} )\leq\mathbf{b}(x_{i})+2\epsilon<\mathbf{d}(x_{j})\}\) forms a basis of \(X_{\mathbf{b}(x_{i})+2\epsilon}\). This implies that \(Mat^{B_{X}^{new}}_{B_{Y}^{new}}(\beta)(i,i)=1\) and if \(Mat^{B_{X}^{new}}_{B_{Y}^{new}}(\beta)(j,i)\neq 0\), for some \(j\neq i\), then \[\mathbf{d}(x_{i})\leq\mathbf{b}(x_{j})+2\epsilon.\] By definition \(Mat^{B_{X}^{ww}}_{B_{Y}^{ww}}(\beta)(j,i)\neq 0\) also implies that \(\mathbf{b}(y_{i}^{new})+\epsilon<\mathbf{d}(x_{j})\). As \(|\mathbf{b}(y_{i}^{new})-\mathbf{b}(x_{i})|<\epsilon\) we conclude that \(\mathbf{d}(x_{j})\in(\mathbf{b}(x_{i}),\mathbf{b}(x_{i})+2\epsilon)\). Given our assumptions the only case where this could occur is case 10 with \(j=k\), and here \(\mathbf{d}(y_{j})=\mathbf{b}(y_{j})\). However this implies \(\mathbf{b}(y_{i}^{new})+\epsilon<\mathbf{d}(x_{j})\) as \(|\mathbf{b}(y_{i}^{new})-\mathbf{b}(x_{i})|<\epsilon\). This is a contradiction. We thus have shown that \(Mat^{B_{X}^{ww}}_{B_{Y}^{ww}}(\beta)=\mathbb{I}\). The remaining two cases are the ones which stop the automatic decomposition of vineyard modules into a sum of vine modules. These are illustrated in Table 2. **Proposition 4.5**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be persistence modules with bases \(B_{X}\) and \(B_{Y}\) and that \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) and \(\beta:\mathcal{Y}\rightarrow\mathcal{X}\) form an \(\epsilon\)-interleaving. Suppose that \(x_{l}\leq x_{k}\) but \(y_{l}\nleq y_{k}\) and that all other elements of the preorder remain the same. Further suppose that either_ 1. \(\mathbf{d}(x_{k})=\mathbf{d}(x_{l})\) _and_ \(|\mathbf{d}(y_{k})-\mathbf{d}(y_{l})|<\epsilon\) _and that all other pairwise differences of critical values are at least_ \(2\epsilon\)_, or_ 2. 
\(\mathbf{b}(x_{k})=\mathbf{b}(x_{l})\) _and_ \(|\mathbf{b}(y_{k})-\mathbf{b}(y_{l})|<\epsilon\) _and that all other pairwise differences of critical values are at least_ \(2\epsilon\)_.._ _Let \(\lambda=Mat^{B_{Y}}_{B_{X}}(\alpha)(l,k)/Mat^{B_{Y}}_{B_{X}}(\alpha)(l,l)\) and \(e^{\lambda}_{lk}\) be the elementary matrix with \(\lambda\) in the \((l,k)\) entry. Then \(Mat^{B_{Y}}_{B_{X}}(\alpha)e^{-\lambda}_{lk}\) is a basis transformation matrix for \(\mathcal{Y}\)._ _Furthermore under the new basis \(B_{Y}^{new}\) we have \(Mat^{B_{Y}^{new}}_{B_{X}}(\alpha)=e^{\lambda}_{lk}\) and \(Mat^{B_{X}^{ww}}_{B_{Y}^{ww}}(\beta)=e^{-\lambda}_{lk}\)._ Proof.: We will prove for \((a)\) (case 11 in Table 2) and omit the proof for \((b)\) (case 12 in Table 2) as it is highly analogous where we only need to switch the roles of births and deaths. For \(i\neq k\), \(\left(Mat^{B_{Y}^{yld}}_{B_{X}}(\alpha)e^{-\lambda}_{lk}\right)(j,i)=Mat^{B_{ X}^{yld}}_{B_{X}}(\alpha)(j,i)\). If \(Mat^{B_{Y}^{yld}}_{B_{X}}(\alpha)(j,i)\neq 0\) then \[\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(x_{i})+\epsilon<\mathbf{d}(y_{j}^{old}) \leq\mathbf{d}(x_{i})+\epsilon.\] Our pairing of intervals tells us that \(|\mathbf{b}(x_{i})-\mathbf{b}(y_{i}^{old})|<\epsilon\) and \(|\mathbf{d}(x_{i})-\mathbf{d}(y_{i}^{old})|<\epsilon\). Together these inequalities imply \(\mathbf{d}(y_{j}^{old})\leq\mathbf{d}(y_{i}^{old})+2\epsilon\), \(\mathbf{b}(y_{j}^{old})\leq\mathbf{b}(y_{i}^{old})+2\epsilon\) and \(\mathbf{b}(y_{i}^{old})<\mathbf{d}(y_{j}^{old})\). By construction of \(e^{-\lambda}_{lk}\) we have \[\left(Mat^{B_{Y}^{yld}}_{B_{X}}(\alpha)e^{-\lambda}_{lk}\right)(j,k) =Mat^{B_{X}^{yld}}_{B_{X}}(\alpha)(j,k)-\lambda Mat^{B_{X}^{yld}}_{ B_{X}}(\alpha)(j,l)\] \[=Mat^{B_{Y}^{yld}}_{B_{X}}(\alpha)(j,k)-\left(Mat^{B_{Y}}_{B_{X}}( \alpha)(l,k)/Mat^{B_{Y}}_{B_{X}}(\alpha)(l,l)\right)Mat^{B_{X}^{yld}}_{B_{X}}( \alpha)(j,l)\] In particular \(\left(Mat^{B_{Y}^{yld}}_{B_{X}}(\alpha)e^{-\lambda}_{lk}\right)(l,k)=0\). \begin{table} \begin{tabular}{c|c|c} Case & \(\mathcal{X}\) & \(\mathcal{Y}\) \\ \hline 11 & \(\mathcal{Y}\) \\ 12 & \(\mathcal{Y}\) \\ \end{tabular} \end{table} Table 2: The cases when, for a fixed basis of \(\mathcal{X}\), we can’t guarantee to find a basis of \(\mathcal{Y}\) so that the matrices of the interleaving maps are the identity. If the first of the two intervals corresponds to basis elements \(x_{k}\) and \(y_{k}\) and the second interval to \(x_{l}\) and \(y_{l}\) then we have \(x_{l}\leq x_{k}\) but \(y_{l}\nleq y_{k}\). Suppose that \(i=k\) but \(j\neq l\). We have assumed that \(|{\bf d}(y_{j}^{old})-{\bf d}(y_{l}^{old})|>2\epsilon\) and \(|{\bf b}(y_{j}^{old})-{\bf b}(y_{i}^{old})|>2\epsilon\). These strengthen \({\bf d}(y_{j}^{old})\leq{\bf d}(y_{i}^{old})+2\epsilon\) to \({\bf d}(y_{j}^{old})\leq{\bf d}(y_{i}^{old})\) and \({\bf b}(y_{j}^{old})\leq{\bf b}(y_{i}^{old})+2\epsilon\) to \({\bf b}(y_{j}^{old})\leq{\bf b}(y_{i}^{old})\). Thus \(Mat_{B_{X}^{\prime\prime}}^{B_{X}^{\prime\prime}}(\alpha)(j,i)\neq 0\) implies \({\bf b}(y_{j}^{old})\leq{\bf b}(y_{i}^{old})<{\bf d}(y_{j}^{old})\leq{\bf d}(y _{j}^{old})\). The same reasoning in Proposition 4.4 applies to show that \(Mat_{B_{X}^{\prime\prime}}^{B_{X}^{\prime\prime}}(\alpha)(i,i)\neq 0\) for all \(i\). We thus have shown that \(\left(Mat_{B_{X}^{\prime}}^{B_{X}^{\prime}}(\alpha)e_{lk}^{-\lambda}\right)\) is a basis transformation matrix for \(\mathcal{Y}\). 
By Lemma 3.7 we know that if for \[B_{Y}^{new}=\left(Mat_{B_{X}^{\prime}}^{B_{Y}^{old}}(\alpha)e_{lk}^{-\lambda} \right)(B_{Y}^{old})\] we have \(Mat_{B_{X}^{\prime\prime}}^{B_{X}^{new}}(\alpha)=e_{lk}^{\lambda}\). We now wish to show that \(Mat_{B_{Y}^{new}}^{B_{X}^{new}}(\beta)=e_{lk}^{-\lambda}\). Substituting \(Mat_{B_{X}^{\prime}}^{B_{Y}^{new}}(\alpha)=e_{lk}^{\lambda}\) into the equation in Lemma 4.1 we see for each \(i\neq k\) that \[\phi_{{\bf b}(x_{i})}^{{\bf b}(x_{i})+2\epsilon}(x_{i})=\beta_{{\bf b}(x_{i})+ \epsilon}(\alpha_{{\bf b}(x_{i})}(x_{i}))=\sum_{j}Mat_{B_{Y}^{new}}^{B_{X}}( \beta)(j,i)\phi_{{\bf b}(x_{j})}^{{\bf b}(x_{i})+2\epsilon}(x_{j}) \tag{1}\] and \(\{\phi_{{\bf b}(x_{i})}^{{\bf b}(x_{i})+2\epsilon}(x_{j})|{\bf b}(x_{j})\leq{ \bf b}(x_{i})+2\epsilon<{\bf d}(x_{j})\}\) forms a basis of \(X_{{\bf b}(x_{i})+2\epsilon}\). Immediately this implies that \(Mat_{B_{Y}^{new}}^{B_{X}}(\beta)(i,i)=1\). If \(Mat_{B_{Y}^{new}}^{B_{X}}(\beta)(j,i)\neq 0\), for some \(j\neq i\), then both \({\bf d}(x_{i})\leq{\bf b}(x_{j})+2\epsilon\) (by equation (1)) and \({\bf d}(y_{i})>{\bf b}(y_{j})+\epsilon\) (by definition of matrix of an epsilon morphism) which this contradicts our assumptions so \(Mat_{B_{Y}^{new}}^{B_{X}}(\beta)(j,i)=0\) for all \(j\neq i\). For \(i=k\) Lemma 4.1 combined with the above says \[\phi_{{\bf b}(x_{k})}^{{\bf b}(x_{k})+2\epsilon}(x_{k})=\beta_{{ \bf b}(x_{k})+\epsilon}(\alpha_{{\bf b}(x_{k})}(x_{k})) =\sum_{j}(Mat_{B_{Y}^{new}}^{B_{X}}(\beta)(j,k)+\lambda Mat_{B_{Y }^{new}}^{B_{X}}(\beta)(j,l))\phi_{{\bf b}(x_{j})}^{{\bf b}(x_{k})+2\epsilon}( x_{j})\] \[=\lambda\phi_{{\bf b}(x_{l})}^{{\bf b}(x_{k})+2\epsilon}(x_{l})+ \sum_{j}(Mat_{B_{Y}^{new}}^{B_{X}}(\beta)(j,k)\phi_{{\bf b}(x_{j})}^{{\bf b}(x _{k})+2\epsilon}(x_{j})\] This implies that \(Mat_{B_{Y}^{new}}^{B_{X}}(\beta)(k,k)=1\) and \(Mat_{B_{Y}^{new}}^{B_{X}}(\beta)(k,l)=-\lambda\). The remaining \(Mat_{B_{Y}^{new}}^{B_{X}}(\beta)(j,k)=0\) by the same contradiction argument above where \(i\neq k\). There are in fact two more cases we need to consider which is when the number of intervals changes. We will have a different number of basis elements in \(\mathcal{X}\) and \(\mathcal{Y}\). In terms of vineyards these correspond to the situation where a vine moves in or out of the diagonal. We cannot expect the matrices of our interleaving maps to be the identity but we can get the next best thing which is the projection map onto the common set for intervals. This will happen because the interleaving maps will naturally split into a direct sum of morphisms - one over the common intervals and one for the interval present in only one of the persistence modules. **Proposition 4.6**.: Let \(\hat{\mathcal{X}}\), \(\mathcal{Y}\) be persistence modules such that all the critical values are distinct and the pairwise distance between any pair of critical values within the same persistence module is greater than \(2\epsilon\). Let \(N\) denote the number of intervals in \(\mathcal{Y}\). Set \(\mathcal{X}=\hat{\mathcal{X}}\oplus\mathcal{I}[b,d)\) where \(d-b<2\epsilon\) and the distance from \(b\) or \(d\) to any critical value of \(\mathcal{A}\) is at least \(2\epsilon\). Suppose that \(\alpha:\mathcal{X}\rightarrow\mathcal{Y}\) and \(\beta:\mathcal{Y}\rightarrow\mathcal{X}\) form an \(\epsilon\)-interleaving. Any basis \(B_{X}\) of \(\mathcal{X}\) will partition into a basis for \(\hat{\mathcal{X}}\) (which we will denote \(B_{\hat{X}}\)) plus one other element. 
Without loss of generality order the basis elements of \(\mathcal{X}\) so that those in \(\hat{\mathcal{X}}\) appear first. Choose an extended basis \(B_{Y}\) of \(\mathcal{Y}\) consisting of a basis \(\hat{B}_{Y}\) of \(\mathcal{Y}\) alongside a single \(0\) element appearing last in index order. Then \(Mat_{B_{X}}^{B_{Y}}(\alpha)\) is an extended basis transformation matrix for \(B_{Y}\). Under the new extended basis \(B_{Y}^{new}\) we have \[Mat_{B_{X}}^{B_{Y}^{new}}(\alpha)=\text{diag}(1,1,\ldots,1,0)=Mat_{B_{Y}^{new}}^{B_{X}}(\beta).\] We also have that the restriction of \(Mat_{B_{Y}}^{B_{X}}(\beta)\) to the first \(N\) columns and \(N\) rows is a basis transformation matrix for \(B_{\hat{X}}\), and the block diagonal matrix formed from this restriction together with the \(1\times 1\) matrix \((1)\) is a basis transformation matrix for \(B_{X}\). Under this new basis \(B_{X}^{new}\), \[Mat_{B_{X}^{new}}^{B_{Y}}(\alpha)=\text{diag}(1,1,\ldots,1,0)=Mat_{B_{Y}}^{B_{X}^{new}}(\beta).\] Proof.: Our assumption that the distance from the end points of \([b,d)\) to any critical value of \(\hat{\mathcal{X}}\) is at least \(2\epsilon\), alongside the length of \([b,d)\) being smaller than \(2\epsilon\), implies that for every interval in \(\mathcal{Y}\), either \([b,d)\) is contained in that interval or it is disjoint from it. This implies that \(Mat_{B_{X}}^{B_{Y}}(\alpha)(N+1,i)=0=Mat_{B_{X}}^{B_{Y}}(\alpha)(i,N+1)\) for all \(i\). As the \((N+1)\)-th element in the extended basis \(B_{Y}\) is a zero element we have by definition \(Mat_{B_{Y}}^{B_{X}}(\beta)(N+1,i)=0=Mat_{B_{Y}}^{B_{X}}(\beta)(i,N+1)\) for all \(i\). When we restrict to the first \(N\) elements of \(B_{X}\) and \(B_{Y}\) then we are in the same case as in Proposition 4.3 and the same argument can be applied here to complete the proof. 

## 5 Vine and Matrix representations of vineyard modules 

In order to explore this decomposition further we will need to find nice ways to represent vineyard modules. For this we will define a vine, basis and matrix representation. 

**Definition 5.1**.: Let \(\mathscr{V}=(\mathcal{V}_{t},\alpha_{s}^{t},\beta_{t}^{s})\) be a vineyard module over the time interval \([s_{0},s_{1}]\). A _vine and matrix_ representation of \(\mathscr{V}=(\mathcal{V}_{t},\alpha_{s}^{t},\beta_{t}^{s})\) consists of a set of vines \(\{\gamma_{1},\gamma_{2},\ldots\gamma_{N}\}\), each of which is defined over a connected subset of \([s_{0},s_{1}]\), alongside families of matrices \(\{M(\alpha)^{s\to t}\mid s_{0}\leq s<t\leq s_{1}\}\) and \(\{M(\beta)^{t\to s}\mid s_{0}\leq s<t\leq s_{1}\}\), such that there exists a family of extended bases \(\{B_{t}\}\) (where \(B_{t}\) is a basis for \(\mathcal{V}_{t}\) and respects the order of the vines) such that \(M(\alpha)^{s\to t}=Mat_{B_{s}}^{B_{t}}(\alpha^{s\to t})\) and \(M(\beta)^{t\to s}=Mat_{B_{t}}^{B_{s}}(\beta^{t\to s})\). We call \(\{B_{t}\}\) an associated family of bases for that representation. 

Notably this vine and matrix representation is not unique as it depends on the choice of bases. Given a vine and matrix representation there may also be many potential associated families of bases. However, we do at least know that if the vine and matrix representations agree then the vineyard modules are isomorphic. 

**Proposition 5.2**.: Let \(\mathscr{V}=(\mathcal{V}_{t},\alpha_{V}^{s\to t},\beta_{V}^{t\to s})\) and \(\mathscr{W}=(\mathcal{W}_{t},\alpha_{W}^{s\to t},\beta_{W}^{t\to s})\) be vineyard modules over the same vineyard with vines \(\{\gamma_{i}\}\).
Suppose that \((\{\gamma_{i}\},\{M(\alpha)^{s\to t}\},\{M(\beta)^{t\to s}\})\) is a vine and matrix representation of both \(\mathscr{V}\) and \(\mathscr{W}\). Then \(\mathscr{V}\) and \(\mathscr{W}\) are isomorphic as vineyard modules. 

Proof.: There must exist bases \(B_{t}^{V}\) of \(\mathcal{V}_{t}\) and \(B_{t}^{W}\) of \(\mathcal{W}_{t}\) such that \[Mat_{B_{s}^{V}}^{B_{t}^{V}}(\alpha_{V}^{s\to t})=M(\alpha)^{s\to t}=Mat_{B_{s}^{W}}^{B_{t}^{W}}(\alpha_{W}^{s\to t})\quad\text{ and }\quad Mat_{B_{t}^{V}}^{B_{s}^{V}}(\beta_{V}^{t\to s})=M(\beta)^{t\to s}=Mat_{B_{t}^{W}}^{B_{s}^{W}}(\beta_{W}^{t\to s})\] for all \(s<t\). Set \(\rho_{t}:\mathcal{V}_{t}\rightarrow\mathcal{W}_{t}\) by \(Mat_{B_{t}^{V}}^{B_{t}^{W}}(\rho_{t})=\pi_{t}\) where \(\pi_{t}\) is the diagonal matrix with \(1\) at the \((i,i)\) entry when \(t\in\text{supp}(\gamma_{i})\) and \(0\) otherwise. Observe that trivially we have that \(\rho_{t}\) commutes appropriately with all the interleaving and transition maps and determines a morphism \(\rho:\mathscr{V}\rightarrow\mathscr{W}\). This vineyard module morphism \(\rho\) is also clearly invertible with a symmetric construction of the inverse. 

Given the vine and matrix representations of a finite number of vineyard modules there is an obvious construction of a vine and matrix representation of the vineyard module of their direct sum via block matrices. Being able to write the matrices in block diagonal form provides an easy sufficient condition for when a vineyard module decomposes. 

**Lemma 5.3**.: _Let \(\mathscr{X}=(\mathcal{X}_{t},\alpha_{s}^{t},\beta_{t}^{s})\) be a vineyard module with vine and matrix representation \((\{\gamma_{i}\},\{M(\alpha^{s\to t})\},\{M(\beta^{t\to s})\})\). Suppose that for all \(s_{0}\leq s<t\leq s_{1}\) both \(M(\alpha^{s\to t})\) and \(M(\beta^{t\to s})\) are block diagonal with block index sets \(S_{1},S_{2},\ldots,S_{m}\). Then we can construct vineyard modules \(\mathscr{X}_{1},\mathscr{X}_{2},\ldots,\mathscr{X}_{m}\) with \(\mathscr{X}\cong\oplus_{j=1}^{m}\mathscr{X}_{j}\), where \(\mathscr{X}_{j}\) has vine and matrix representation_ \[(\{\gamma_{i}\mid i\in S_{j}\},\{\pi_{S_{j}}(M(\alpha^{s\to t}))\},\{\pi_{S_{j}}(M(\beta^{t\to s}))\}).\] _Here \(\pi_{S_{j}}(A)\) is the restriction of matrix \(A\) to the coordinates in \(S_{j}\)._ 

Finding necessary conditions for decomposition in terms of the vine and matrix representation is much harder. Depending on the choice of associated bases, we cannot expect that a vineyard module which is the direct sum of vine modules will necessarily have matrices that split up into a block diagonal form. We will need to find ways to transform the bases of the persistence modules over the different \(t\) so that the matrices are of a nice form. We now wish to use these basis transformations to simplify the matrices of the interleaving maps within a vineyard module. The plan is to fix the basis at \(t_{0}\) and then transform the bases in a forward or backward direction. This is complicated by the vines within a vineyard module having different supports. Let \(\pi_{S}\) denote the projection matrix onto the coordinates in set \(S\), that is, the diagonal matrix with \(1\) for each index in set \(S\) and \(0\) otherwise. Given the structure of the vineyard modules we only need to prescribe how to change the basis over smaller segments.
To construct these segments we need to consider the locations where birth and/or death values coincide, or where a new interval appears/disappears (philosophically its own birth and death values coincide). We define these time values as _critical_. 

**Definition 5.4**.: A vineyard module _segment_ is the restriction of a vineyard module to a time interval \([T_{0},T_{1}]\) such that there are no critical times in \((T_{0},T_{1})\) and one of the following conditions holds: 
* neither \(T_{0}\) nor \(T_{1}\) is a critical time and \(|T_{0}-T_{1}|\) is bounded above by \(\epsilon/4\) where \(\epsilon\) is the smallest distance between the distinct birth and death values at \(T_{0}\) or at \(T_{1}\), 
* one of \(T_{0}\) or \(T_{1}\) is critical (label this \(T_{i}\)) and \(|T_{0}-T_{1}|\) is bounded above by \(\epsilon/4\) where \(\epsilon\) is the smallest distance between the distinct birth and death values at \(T_{i}\). 

Note that our simplifying assumptions guarantee that the number of times at which the endpoints of the vines \(\{\gamma_{i}\}\) are not all distinct is finite. In the case where an interval appears or disappears we consider the limiting value as one of the distinct birth/death values. The partition of a vineyard module into segments is dependent only on the set of vines and not on the interleaving maps. 

We wish to simplify the matrix representatives of the transition maps within a vineyard module by progressively changing the basis of the persistence modules going forwards or going backwards. There will be time values where we cannot guarantee that these simplifications result in diagonal matrices. This leads to the definition of forwards and backwards incompatibility. 

**Definition 5.5**.: Let \(\mathscr{V}\) be a vineyard module. We say that \(\mathscr{V}\) is _forwards incompatible_ at \(s\) by vines \((\gamma_{k},\gamma_{l})\) if \(\gamma_{l}(s)\leq\gamma_{k}(s)\) but \(\gamma_{l}(t)\nleq\gamma_{k}(t)\) for all \(t\in(s,s+\delta)\) for \(\delta>0\) sufficiently small. We say that \(s\) is _forwards compatible_ if it is not forwards incompatible. 

The forwards incompatible cases are shown in Table 2 with \(\mathcal{X}=\mathcal{V}_{t}\) and \(\mathcal{Y}=\mathcal{V}_{s}\) for \(t>s\) sufficiently close, for \(s\) forwards incompatible. The definition of backwards compatible and backwards incompatible is completely symmetric, traversing the vineyard in the opposite direction. 

**Definition 5.6**.: We say \(\mathscr{V}\) is _backwards incompatible_ at \(t\) by vines \((\gamma_{k},\gamma_{l})\) if \(\gamma_{l}(t)\leq\gamma_{k}(t)\) but \(\gamma_{l}(s)\nleq\gamma_{k}(s)\) for all \(s\in(t-\delta,t)\) for \(\delta>0\) sufficiently small. We say that \(t\) is _backwards compatible_ if it is not backwards incompatible. 

Note that for a segment \([T_{m},T_{m+1}]\) the only potentially forwards incompatible value is \(T_{m}\). 

**Definition 5.7**.: Let \(\mathscr{V}=(\mathcal{V}_{t},\{\alpha^{s\to t}\},\{\beta^{t\to s}\})\) be a vineyard module segment over \([T_{m},T_{m+1}]\), and let \(\{B^{old}_{t}\}\) be an initial choice of basis for each \(\mathcal{V}_{t}\).
Let \(A:=Mat_{B^{new}_{T_{m}}}(\alpha^{T_{m}\to T_{m+1}})\) and \(\tilde{A}\) the matrix \(A\) with \(1\) added to any non-zero diagonal element.We say \(\{B^{new}_{t}\}\) is a _forwards simplified_ family of bases if * \(B^{new}_{T_{m}}=B^{old}_{T_{m}}\), * \(B^{new}_{T_{m+1}}=\tilde{A}(B^{old}_{T_{m+1}})\) and * \(Mat_{B^{new}_{T_{m+1}}}^{B^{new}_{new}}(\alpha^{s\to t})=\pi_{S_{s} \cap S_{t}}\) for all \(t>s\) sufficiently close, when \(T_{m}\) is forwards compatible, and if \(T_{m}\) forwards incompatible by vines \((\gamma_{k},\gamma_{l})\) and \(\lambda=A(l,k)/A(l,l)\) then * \(B^{new}_{T_{m}}=B^{old}_{T_{m}}\) * \(B^{new}_{T_{m+1}}=(Ae^{-\lambda}_{lk})(B^{old}_{T_{m+1}})\) * \(Mat_{B^{new}_{T_{m}}}^{B^{new}_{new}}(\alpha^{s\to t})=\pi_{S_{s} \cap S_{t}}\) for all \(T_{m}<s<t\) with \(s,t\) sufficiently close, and * \(Mat_{B^{new}_{T_{m}}}^{B^{new}_{new}}(\alpha^{T_{m}\to t})=e^{\lambda}_{lk}\pi_ {S_{T_{m}}}\) for all \(t>T_{m}\) sufficiently close. The definition of backwards simplified is symmetric. Note that for a segment \([T_{m},T_{m+1}]\) the only potentially backwards incompatible value is \(T_{m+1}\). **Definition 5.8**.: Let \(\mathscr{V}=(\mathcal{V}_{t},\{\alpha^{s\to t}\},\{\beta^{t\to s}\})\) be a vineyard module segment over \([T_{m},T_{m+1}]\). And \(\{B^{old}_{t}\}\) an initial choice of basis for each \(\mathcal{V}_{t}\). Let \(A=Mat_{B^{2t}_{T_{m+1}}}^{B^{old}_{T_{m}}}(\beta^{T_{m+1}\to T_{m}})\) and \(\tilde{A}\) the matrix \(A\) with \(1\) added to any non-zero diagonal element. We say \(\{B^{new}_{t}\}\) is a _backwards simplified_ family of bases if * \(B^{new}_{T_{m+1}}=B^{old}_{T_{m+1}}\), * \(B^{new}_{T_{m}}=\tilde{A}(B^{old}_{T_{m}})\) and * \(Mat_{B^{new}_{t}}^{B^{new}_{new}}(\beta^{t\to s})=\pi_{S_{s}\cap S_{t}}\) for all \(t>s\) sufficiently close, when \(T_{m+1}\) is backwards compatible, and if \(T_{m+1}\) backwards incompatible by vines \((\gamma_{k},\gamma_{l})\) and \(\lambda=A(l,k)/A(l,l)\) then * \(B^{new}_{T_{m+1}}=B^{old}_{T_{m+1}}\), * \(B^{new}_{T_{m}}=(Ae^{-\lambda}_{lk})(B^{old}_{T_{m}})\) * \(Mat_{B^{new}_{t}}^{B^{new}_{new}}(\beta^{t\to s})=\pi_{S_{s}\cap S_{t}}\) for all \(s<t<T_{m+1}\) with \(s,t\) sufficiently close, and * \(Mat_{B^{new}_{T_{m+1}}}^{B^{new}_{new}}(\beta^{T_{m+1}\to t})=e^{\lambda}_{lk} \pi_{S_{T_{m+1}}}\) for all \(t<T_{m+1}\) sufficiently close. Given a vineyard module we can partition it into segments \(\{[T_{m},T_{m+1}]\}_{m=1}^{M-1}\). We wish to forward simplify progressively over the segments from \([T_{0},T_{1}]\) through to \([T_{M-1},T_{M}]\). We then can backwards simplify back again starting with \([T_{M-1},T_{M}]\) and progressively back to \([T_{0},T_{1}]\). The final family of bases will be call _forwards and then backwards simplified_. Given the symmetry in the definitions of forward and backward simplification it will be sufficient to show it is always possible to forward simplify a segment. **Proposition 5.9**.: Let \(\mathscr{V}=(\{\mathcal{V}_{t}\},\{\alpha^{s\to t}\},\{\beta^{t\to s}\})\) be a vineyard module segment over \([T_{m},T_{m+1}]\). Then we can forward simplify \(\mathscr{V}\). Proof.: We can split the proof into the different cases depending on whether \(T_{m}\) is critical and forward compatible, \(T_{m}\) is critical and forwards incompatible, \(T_{m+1}\) is critical, or neither \(T_{m}\) nor \(T_{m+1}\) is critical. 
We will omit the vineyard parameter from the transition maps within the persistence modules (denoting all by \(\phi\)) as we already have an overwhelming abundance of indices and which persistence module the transition module is within can always be inferred from context using the location of the input. Denote by \(\{B_{t}^{old}\}\) the choice of basis for each \(\mathcal{V}_{t}\) before forwards simplifying. Let \(A=Mat_{B_{T_{m}}^{new}}^{B_{T_{m+1}}^{old}}(\alpha^{T_{m}\to T_{m+1}})\) and let \(\tilde{A}\) be the matrix \(A\) with \(1\) added to any non-zero diagonal element. #### Case where neither \(T_{m}\) nor \(T_{m+1}\) is critical: If neither \(T_{m}\) nor \(T_{m+1}\) are critical then for all \(t\in[T_{m},T_{m+1}]\) the critical values are all distinct. Observe that \(S_{t}\) is the same for all \(t\in[T_{m},T_{m+1}]\). Set \(B_{T_{m}}^{new}=B_{T_{m}}^{old}\). Both \(\mathcal{V}_{T_{m}}\) and \(\mathcal{V}_{t}\) are persistence modules whose critical values are distinct and the difference between pairs of critical values within a persistence module is greater than \(4|T_{m}-t|\). Furthermore, \(\alpha^{T_{m}\to t}\) and \(\beta^{t\to T_{m}}\) form an \(|t-T_{m}|\) interleaving. This means that we can apply Proposition 4.3 to say that if we can set \(B_{t}^{new}\) to be \(Mat_{B_{T_{m}}^{new}}^{B_{t}^{old}}(\alpha^{T_{m}\to t})B_{t}^{old}\) as the new basis for \(\mathcal{V}_{t}\) then both \(Mat_{B_{T_{m}}^{new}}^{B_{t}^{new}}(\alpha^{T_{m}\to t})\) and \(Mat_{B_{t}^{new}}^{B_{t}^{new}}(\beta^{t\to T_{m}})\) are the identity when restricted to the vines in \(S_{T_{m}}\). In particular for \(t=T_{m}\) we have \(B_{T_{m+1}}^{new}=A(B_{T_{m+1}}^{old})\). Since the support of the vines is the same throughout the segment \(A(B_{T_{m+1}}^{old})=\tilde{A}(B_{T_{m+1}}^{old})\). It remains to show that for \(s<t\) that \(Mat_{B_{s}^{new}}^{B_{t}^{new}}(\alpha^{s\to t})=\pi_{T_{m}}=Mat_{B_{t }^{new}}^{B_{t}^{new}}(\beta^{t\to s})\). Denote the basis elements in \(B_{t}^{new}\) by \(\{x_{i}^{t}\}\). It is sufficient to show that \(\alpha_{\mathbf{b}(x_{i}^{t})}^{s\to t}(x_{i}^{s})=\phi_{\mathbf{b}(x_{i}^{t}) +|s-t|}^{\mathbf{b}(x_{i}^{t})+|s-t|}(x_{i}^{t})\). Let \(s,t\in(T_{m},T_{m+1}]\) with \(s<t\). Let \(x_{i}^{s}\) be a non-zero basis element in \(B_{s}^{new}\). Diagram chasing we can show that \[\phi_{\mathbf{b}(x_{i}^{s})+|s-T_{m}|+|t-T_{m}|}^{\mathbf{b}(x_{ i}^{s\to t})}(x_{i}^{s}) =\alpha_{\mathbf{b}(x_{i}^{s})+|s-T_{m}|}^{T_{m}\to t}(\beta_{ \mathbf{b}(x_{i}^{s})}^{s\to s_{n}}(x_{i}^{s}))\] \[=\alpha_{\mathbf{b}(x_{i}^{s})+|s-T_{m}|}^{T_{m}\to t}(\phi_{ \mathbf{b}(x_{i}^{s})}^{\mathbf{b}(x_{i}^{s})+|s-s_{i}|}(x_{i}^{s_{n}}))\] \[=\phi_{\mathbf{b}(x_{i}^{s})+|s-T_{m}|+|t-T_{m}|}^{\mathbf{b}(x_{ i}^{s})+|s-t|}(\alpha_{\mathbf{b}(x_{i}^{s})}^{T_{m}\to t}(x_{i}^{T_{m}}))\] \[=\phi_{\mathbf{b}(x_{i}^{s})+|s-T_{m}|+|t-T_{m}|}^{\mathbf{b}(x_ {i}^{s})+|t-T_{m}|}(\phi_{\mathbf{b}(x_{i}^{s})}^{\mathbf{b}(x_{i}^{s})+|s-t|}( x_{i}^{t}))\] \[=\phi_{\mathbf{b}(x_{i}^{s})+|s-T_{m}|+|t-T_{m}|}^{\mathbf{b}(x_ {i}^{s})+|s-t|}(x_{i}^{t}))\] By assumption there are no critical heights of \(\mathcal{V}_{t}\) within the interval \[[\mathbf{b}(x_{i}^{s})+|s-t|,\mathbf{b}(x_{i}^{s})+|s-T_{m}|+|t-T_{m}|]\subset( \mathbf{b}(x_{i}^{t}),\mathbf{b}(x_{i}^{t})+\delta)\] and so we can infer that \(\alpha_{\mathbf{b}(x_{i}^{s})}^{s\to t}(x_{i}^{s})=\phi_{\mathbf{b}(x_{i}^{s}) +|s-t|}^{\mathbf{b}(x_{i}^{t})}(x_{i}^{t})\). 
Since this holds for all \(i\in S_{T_{m}}\) we conclude that \(Mat_{B_{s}^{new}}^{B_{t}^{new}}(\alpha^{s\to t})=\pi_{S_{T_{m}}}\). #### Case where \(T_{m+1}\) is critical: Observe that \(S_{t}\) is the same for all \(t\in[T_{m},T_{m+1})\) and that \(S_{T_{m+1}}\cap S_{t}=S_{T_{m+1}}\) for all \(t\in[T_{m},T_{m+1}]\). This is because the only potential change in support can be from the disappearance of a vine at time \(T_{m+1}\). Set \(B_{T_{m}}^{new}=B_{T_{m}}^{old}\). By Proposition 4.6 (if an interval disappears at \(T_{m+1}\)) or Proposition 4.4 (otherwise) we can set \(B_{T_{m+1}}^{new}\) to be \(A(B_{T_{m+1}}^{old})\) (noting this is the same as \(\tilde{A}(B_{T_{m+1}}^{old})\)), and that under this new basis \[Mat_{B_{T_{m}}^{new}}^{B_{t}^{new}}(\alpha^{T_{m}\to T_{m+1}})=\pi_{S_{T_{m}} \cap S_{T_{m+1}}}=Mat_{B_{T_{m}}^{new}}^{B_{T_{m}}^{new}}(\beta^{T_{m+1}\to T_{M}}).\] Define the function \(f:[T_{m},T_{m+1}]\rightarrow[0,\infty)\) by \(f(t)\) as the smallest distance between any pair of critical values in \(\mathcal{V}_{t}\). Note that \(f\) is \(2\)-Lipschitz as the vineyard is assumed to be \(1\)-Lipschitz, \(f(T_{m+1})=0\) and \(f(t)>0\) for \(t\neq T_{m+1}\). In particular this implies \(0<f(t)<2|T_{m+1}-t|\) and for any \(t\in[T_{m},T_{m+1})\) we have \(t+f(t)/4\in[T_{m},T_{m+1})\). Construct the strictly increasing sequence \(\{s_{n}\}\subset[T_{m},T_{m+1})\) with \(s_{0}:=T_{m}\) and \(s_{n}:=s_{n-1}+f(s_{n-1})/4\). As \(\{s_{n}\}\) is a bounded increasing sequence it must converge to some limit which we will denote \(L\in[T_{m},T_{m+1}]\). Suppose that \(L<T_{m+1}\) which by assumption implies \(f(L)>0\). Choose \(k\) such that \(s_{k}>L-f(L)/4\). Since \(|f(s_{k})-f(L)|<2|L-s_{k}|\) we have \(f(s_{k})>f(L)/2\). This implies that \[s_{k+1}=s_{k}+f(s_{k})/4>L-f(L)/16+f(L)/8=L+f(L)/16>L\] which is a contradiction as \(\{s_{n}\}\) is increasing. We conclude that \(\lim_{n\rightarrow\infty}s_{n}=T_{m+1}\). Thus every \(s\in[T_{m},T_{m+1})\) will satisfy \(s\in[s_{n},s_{s+1})\) for some \(n\). We will consider \(s<t\) to be sufficiently close if they lie in the same or adjacent subintervals. We can define \(B_{t}^{new}\) for \(t\in(s_{n},s_{n+1}]\) inductively over \(n\), using the same arguments in to case where neither \(T_{m}\) nor \(T_{m-1}\) are critical as we can note that \([s_{n},s_{n+1}]\) is satisfies the definition of segment by construction. This implies that by the previous case that \(Mat_{B_{t}^{new}}^{B_{t}^{new}}(\alpha^{s\to t})=\pi_{T_{m}}=Mat_{B_{t}^{ new}}^{B_{t}^{new}}(\beta^{t\to s})\) for \(s<t\) and both in \([s_{n},s_{n+1}]\). Now suppose that \(s\in(s_{n-1},s_{n}]\) and \(t\in(s_{n},s_{n+1}]\). We have already shown \(\alpha_{\mathbf{b}(x_{i}^{j})}^{s\to s_{n}}(x_{i}^{s})=\phi_{\mathbf{b}(x_{i} ^{j})}^{\mathbf{b}(x_{i}^{j})+|s-s_{n}|}(x_{i}^{s_{n}})\) and \(\alpha_{\mathbf{b}(x_{i}^{j_{n}})}^{s_{n}\to t}(x_{i}^{s_{n}})=\phi_{ \mathbf{b}(x_{i}^{j})}^{\mathbf{b}(x_{i}^{j_{n}})+|t-s_{n}|}(x_{i}^{t})\).As the interleaving maps commute we combine to say \[\alpha_{\mathbf{b}(x_{i}^{j})}^{s\to t}(x_{i}^{s})=\alpha_{\mathbf{b}(x_{i} ^{j})+|s_{n}-s|}^{s_{n}\to t}(\alpha_{\mathbf{b}(x_{i}^{j})}^{s\to s_{n}}(x_{i }^{s}))=\phi_{\mathbf{b}(x_{i}^{j})}^{\mathbf{b}(x_{i}^{j})+|s-t|}(x_{i}^{t}).\] Note that by construction of our sequence \(\{s_{n}\}\) we have \(\mathbf{d}(x_{i}^{t})>\mathbf{b}(x_{i}^{s})+|s-t|\). 
As this holds for all vines \(\gamma_{i}\) with \(i\in S_{t}\) we conclude \(Mat_{B_{t}^{new}}^{B_{t}^{new}}(\alpha^{s\to t})=\pi_{S_{t}}\) for \(s<t<T_{m+1}\) sufficiently close. We want to show that \(Mat_{B_{t}^{new}}^{B_{t}^{new}}(\alpha^{s\to T_{m+1}})=\pi_{S_{T_{m}}}\) for all \(s\in[T_{m},T_{m+1}]\). We prove this inductively over \(n\) for \(s\in(s_{n-1},s_{n}]\) with the base case of \(n=0\) the singleton \(\{T_{m}\}\) true by construction. Let \(\gamma_{i}\in S_{T_{m+1}}\) and thus \(\mathbf{d}(\gamma_{i}^{t})-\mathbf{b}(\gamma_{i}^{t})>|T_{m}-T_{m+1}\) for all \(t\in[T_{m},T_{m+1}]\). As the interleaving maps commute we know \[\alpha_{\mathbf{b}(x_{i}^{j_{n}})}^{s_{n}\to T_{m+1}}(x_{i}^{s_{n}})= \alpha_{\mathbf{b}(x_{i}^{j_{n}})+|s-s_{n}|}^{s\to T_{m+1}}(\alpha_{ \mathbf{b}(x_{i}^{j_{n}})}^{s_{n}\to s}(x_{i}^{s_{n}}))\] and thus \[\phi_{\mathbf{b}(x_{i}^{j_{n}})+|T_{m+1}-s_{n}|}^{\mathbf{b}(x_{i} ^{j_{n}})+|s-s_{n}|}(\alpha_{\mathbf{b}(x_{i}^{j_{n}})+|s-s_{n}|}^{\mathbf{b}( x_{i}^{j_{n}})+|s-s_{n}|}(x_{i}^{s}))\] \[=\sum_{j}Mat_{B_{t}^{new}}^{B_{t}^{new}}(\alpha^{s\to T_{m+1}})(j,i) \phi_{\mathbf{b}(x_{j}^{j_{n}})}^{\mathbf{b}(x_{j}^{j_{n}})+|T_{m+1}-s_{n}|}(x_ {j}^{T_{m+1}})\] As no critical values in \(\mathcal{V}_{T_{m+1}}\) occur in the height range of \((\mathbf{b}(x_{i}^{T_{m+1}}),\mathbf{b}(x_{i}^{T_{m+1}})+|T_{m}-T_{m+1}|)\) we infer that \(Mat_{B_{t}^{new}}^{B_{t}^{new}}(\alpha^{s_{n}\to T_{m+1}})=\pi_{S_{T_{m+1}}}\). **Case where \(T_{m}\) is critical and forwards compatible:** Observe that \(S_{t}\) is the same for all \(t\in(T_{m},T_{m+1}]\) and that \(S_{T_{m}}\cap S_{t}=S_{T_{m}}\) for all \(t\in[T_{m},T_{[}m+1]\). Set \(B_{T_{m}}^{new}=B_{T_{m}}^{old}\) and by Proposition 4.4 or Proposition 4.6 (depending on the type of critical behaivoiur) we can set \(B_{T_{m+1}}^{new}\) to be \(Mat_{B_{t_{m}}^{new}}^{B_{T_{m+1}}^{old}}(\alpha^{T_{m}\to T_{m+1}})B_{T_{m+1}}^{old}\) as the new basis for \(\mathcal{V}_{T_{m+1}}\), and under this new basis \[Mat_{B_{t_{m}}^{new}}^{B_{t_{m+1}}^{new}}(\alpha^{T_{m}\to T_{m+1}})=\pi_{S_{T_{m}} \cap S_{T_{m+1}}}=Mat_{B_{t_{m}}^{new}}^{B_{t_{m}}^{new}}(\beta^{T_{m+1}\to T_{m}}).\] We now use the same process as in the case where \(T_{m}\) is critical but in the reverse direction. Define the sequence \(\{s_{n}\}\) inductively by \(s_{0}=T_{m+1}\) and \(s_{n}=s_{n-1}-f(s_{n})/4\). This sequence is bounded and strictly decreasing. We can show it limits to \(T_{m}\) analogously to the above case. We can then inductive define the new bases for the persistence module. For \(t\in[s_{n+1},s_{n})\) we apply Proposition 4.3 with \(\mathcal{X}=\mathcal{V}_{s_{n}}\) and \(\mathcal{Y}=\mathcal{V}_{t}\) and, slightly confusingly, \(\alpha=\beta^{s_{n}\to t}\) and \(\beta=\alpha^{t\to s_{n}}\). The calculations showing that the matrices of the various interleaving maps are all \(\pi_{S_{s}\cap S_{t}}\) is highly analogous and thus we will omit them here. #### Case where \(T_{m}\) is critical and forwards incompatible: Observe that \(S_{t}\) is the same for all \(t\in[T_{m},T_{m+1}]\). Let \((\gamma_{k},\gamma_{l})\) denote the vines that make \(T_{m}\) forwards incompatible. Set \(\lambda=Mat_{B_{T_{m+1}}^{new}}^{B_{T_{m}}^{old}}(\beta^{T_{m+1}\to T_{m}})(l,k)/ Mat_{B_{T_{m}}^{B_{T_{m}}^{old}}}^{B_{T_{m}}^{old}}(\beta^{T_{m+1}\to T_{m}})(l,l)\). Set \(B_{T_{m}}^{new}=B_{T_{m}}^{old}\). 
By Proposition 4.5 we know \(Mat_{B_{T_{m}}^{new}}^{B_{T_{m+1}}^{new}}(\alpha^{T_{m}\to T_{m+1}})e_{lk}^{-\lambda}\) is a basis transformation matrix for \(\mathcal{V}_{T_{m+1}}\) and, furthermore, that under the new basis \(B_{T_{m+1}}^{new}\) we have \(Mat_{B_{T_{m+1}}^{new}}^{B_{T_{m+1}}^{new}}(\alpha^{T_{m}\to T_{m+1}})=e_{lk}^{\lambda}\) and \(Mat_{B_{T_{m+1}}^{new}}^{B_{T_{m+1}}^{new}}(\beta^{T_{m+1}\to T_{m}})=e_{lk}^{-\lambda}\). We now use the same process as in the case where \(T_{m}\) is critical and forward compatible. We use the same sequence \(\{s_{n}\}\) inductively defined by \(s_{0}=T_{m+1}\) and \(s_{n}=s_{n-1}-f(s_{n})/4\) which again limits to \(T_{m}\). We then inductively over define the new bases for the persistence module for \(t\in[s_{n+1},s_{n})\) using Proposition 4.3. The same arguments show that \(Mat_{B_{T_{m}}^{new}}^{B_{T_{m}}^{new}}(\alpha^{s\to t})=\pi_{T_{m}}=Mat_{B_{t} ^{new}}^{B_{T_{m+1}}^{new}}(\beta^{t\to s})\) for \(s<t\) sufficiently close and both in \([T_{m},T_{m+1})\). We want to show that \(Mat_{B_{T_{m+1}}^{new}}^{B_{T_{m+1}}^{new}}(\alpha^{s\to T_{m+1}})=\pi_{S_{T_{m }}}e_{lk}^{\lambda}\) for all \(s\in[T_{m},T_{m+1}].\) We prove this inductively over \(n\) for \(s\in(s_{n-1},s_{n}]\) with the base case of \(n=0\) the singleton \(\{T_{m}\}\) true by construction. Let \(\gamma_{i}\in S_{T_{m+1}}\) and thus \(\mathbf{d}(\gamma_{i}^{t})-\mathbf{b}(\gamma_{i}^{t})>|T_{m}-T_{m+1}|\) for all \(t\in[T_{m},T_{m+1}]\). If \(i\neq k\) then as the interleaving maps commute we know \[\alpha_{\mathbf{b}(x_{i}^{\prime n})}^{s_{n}\to T_{m+1}}(x_{i}^{s_{n}})= \alpha_{\mathbf{b}(x_{i}^{\prime n})+|s-s_{n}|}^{s\to T_{m+1}}(\alpha_{ \mathbf{b}(x_{i}^{\prime n})}^{s_{n}\to s}(x_{i}^{s_{n}}))\] and thus \[\phi_{\mathbf{b}(x_{i}^{\prime n})}^{\mathbf{b}(x_{i}^{\prime n}) +|T_{m+1}-s_{n}|}(x_{i}^{T_{m+1}}) =\alpha_{\mathbf{b}(x_{i}^{\prime n})+|s-s_{n}|}^{s\to T_{m+1}}( \phi_{\mathbf{b}(x_{i}^{\prime n})}^{\mathbf{b}(x_{i}^{\prime n})+|s-s_{n}|}(x _{i}^{s}))\] \[=\sum_{j}Mat_{B_{T_{m+1}}^{new}}^{B_{T_{m+1}}^{new}}(\alpha^{s\to T _{m+1}})(j,i)\phi_{\mathbf{b}(x_{j}^{\prime T_{m+1}})}^{\mathbf{b}(x_{i}^{ \prime n})+|T_{m+1}-s_{n}|}(x_{j}^{T_{m+1}}).\] For \(i=k\), we instead get \[\phi_{\mathbf{b}(x_{k}^{\prime T_{m+1}})}^{\mathbf{b}(x_{k}^{ \prime T_{m+1}})+|T_{m+1}-s_{n}|}(x_{k}^{T_{m+1}}) +\lambda\phi_{\mathbf{b}(x_{i}^{\prime T_{m+1}})}^{\mathbf{b}(x _{i}^{\prime n})+|T_{m+1}-s_{n}|}(x_{l}^{T_{m+1}})\] \[=\alpha_{\mathbf{b}(x_{i}^{\prime n})+|s-s_{n}|}^{s\to T_{m+1}}( \phi_{\mathbf{b}(x_{i}^{\prime n})}^{\mathbf{b}(x_{i}^{\prime n})+|s-s_{n}|}(x _{i}^{s}))\] \[=\sum_{j}Mat_{B_{T_{m}}^{new}}^{B_{T_{m+1}}^{new}}(\alpha^{s\to T _{m+1}})(j,i)\phi_{\mathbf{b}(x_{j}^{\prime T_{m+1}})}^{\mathbf{b}(x_{i}^{ \prime n})+|T_{m+1}-s_{n}|}(x_{j}^{T_{m+1}})\] As no critical values in \(\mathcal{V}_{T_{m+1}}\) occur in the height range of \((\mathbf{b}(x_{i}^{T_{m+1}}),\mathbf{b}(x_{i}^{T_{m+1}})+|T_{m}-T_{m+1}|)\) we infer that \(Mat_{B_{T_{m}}^{new}}^{B_{T_{m+1}}^{new}}(\alpha^{s_{n}\to T_{m+1}})=\pi_{S_{T_{m +1}}}e_{lk}^{\lambda}\). It would be possible to develop algorithms for computing forward and backwards simplified vine and matrix representations given an input vine and matrix representation over a sufficiently dense discretisation, but this is outside the scope of this paper. 
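To indicate roughly how such a computation could be organised, the sketch below applies the forward-simplification step of Definition 5.7 to the transition matrix of a single segment. It is only a sketch under simplifying assumptions: all vines are supported on the whole segment (so the projections \(\pi_{S}\) are identities and \(\tilde{A}=A\)), the matrix entries are exact rationals, and all function and variable names are our own rather than part of any existing library.

```python
from fractions import Fraction

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def elementary(n, l, k, lam):
    # e_{lk}^{lam}: the identity matrix with lam added in entry (l, k) (0-indexed here)
    E = identity(n)
    E[l][k] += lam
    return E

def matmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def forward_simplify_segment(A, incompatible_pair=None):
    """Forward-simplify A = Mat(alpha^{T_m -> T_{m+1}}) over one segment.

    If T_m is forwards compatible, the new basis at T_{m+1} is A applied to the
    old basis and the simplified matrix is the identity (Propositions 4.3 and 4.4).
    If T_m is forwards incompatible by vines (gamma_k, gamma_l), we instead use the
    basis change A e_{lk}^{-lambda} with lambda = A(l,k)/A(l,l), and the simplified
    matrix is e_{lk}^{lambda} (Proposition 4.5).
    Returns (basis change matrix to apply to the old basis at T_{m+1},
             simplified matrix of alpha^{T_m -> T_{m+1}}).
    Entries of A are assumed to be Fractions.
    """
    n = len(A)
    if incompatible_pair is None:
        return A, identity(n)
    k, l = incompatible_pair
    lam = A[l][k] / A[l][l]
    return matmul(A, elementary(n, l, k, -lam)), elementary(n, l, k, lam)
```

Exact rational arithmetic is used so that testing whether an entry such as \(\lambda(t)\) in the next section vanishes is meaningful; with floating-point entries one would have to threshold instead.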
## 6 Vineyard and Vector representation 

If we require that the associated family of bases for a vine and matrix representation has been forwards and then backwards simplified then we know that the matrices must be in a very restricted form. Assuming that \(s<t\) are sufficiently close, we know that the matrices are diagonal for almost all pairs \(s,t\), with the entries \(0\) or \(1\) in a manner determined by the underlying vineyard. The only matrices that are an exception to this are those where \(t\) is not backwards compatible and \(s<t\). Here we can have an additional non-zero entry, but only at the \((l,k)\) entry where \(t\) is backwards incompatible by vines \((\gamma_{k},\gamma_{l})\). Let \(\lambda(t)\) denote this \((l,k)\) entry. 

Given a vineyard module \(\mathscr{V}\) with underlying vineyard \(\mathbb{V}\), we can summarise all the information in the vine and matrix representation (for forwards and then backwards simplified associated bases) by the sequence \(\overline{\lambda}:=(\lambda(t_{1}),\lambda(t_{2}),\ldots,\lambda(t_{K}))\) where \(t_{1}<t_{2}<\ldots<t_{K}\) are the times where the vineyard \(\mathbb{V}\) is backwards incompatible. We call the pair \((\mathbb{V},\overline{\lambda})\) a _vineyard and vector representation_ of the vineyard module \(\mathscr{V}\). 

By Proposition 5.2 we know that whenever the vineyard and vector representations of \(\mathscr{V}\) and \(\mathscr{W}\) agree then \(\mathscr{V}\) and \(\mathscr{W}\) must be isomorphic. However, we cannot in general expect uniqueness of this representation. There is one important case where we have a unique vineyard and vector representation, which is where the vineyard module is trivial. By trivial we mean it is isomorphic to a direct sum of vine modules. Notably this provides a necessary and sufficient condition for a vineyard module to be trivial. 

**Theorem 6.1**.: _Let \(\mathscr{V}\) be a vineyard module with simplified representation \(\overline{\lambda}:=(\lambda(t_{1}),\lambda(t_{2}),\ldots,\lambda(t_{K}))\). Then \(\mathscr{V}\) is isomorphic to the direct sum of vine modules if and only if \(\lambda(t_{k})=0\) for all \(k\)._ 

Proof.: Note that for the direct sum of vine modules \(\lambda(t_{k})=0\) for all \(k\). We can then apply Proposition 5.2 to say that if \(\lambda(t_{k})=0\) for all \(k\) then \(\mathscr{V}\) is isomorphic to a direct sum of vine modules. 

We now wish to prove the other direction and will assume that \(\mathscr{V}\) is isomorphic to a direct sum of vine modules. We first need to set up substantial notation. Let \(\{[T_{m},T_{m+1}]\}_{m=0}^{N-1}\) be a segmentation of the underlying vineyard into segments. To reduce the number of indices we will write \(\mathbf{b}(\gamma_{j}^{m})\) for \(\mathbf{b}(\gamma_{j}^{T_{m}})\) and \(\mathbf{d}(\gamma_{j}^{m})\) for \(\mathbf{d}(\gamma_{j}^{T_{m}})\). For each \(m\) where \(\mathbf{b}(\gamma_{j}^{m})\leq\mathbf{b}(\gamma_{i}^{m})<\mathbf{d}(\gamma_{j}^{m})\leq\mathbf{d}(\gamma_{i}^{m})\), let \(\tau_{m}(j,i)\) be the largest \(n\leq m\) where the intervals corresponding to \(\gamma_{i}\) and \(\gamma_{j}\) are disjoint at \(T_{n}\) (with value \(-\infty\) if not disjoint at any previous time). Let \(\{\gamma_{i}\}\) denote the vines in the underlying vineyard of \(\mathscr{V}\). Suppose that \(\rho:\mathscr{V}\to\mathscr{W}\) is an isomorphism where \(\mathscr{W}=\oplus\mathscr{I}[\gamma_{i}]\) is the direct sum of vine modules equipped with the standard basis.
By construction the forward and backwards simplification of \(\mathscr{W}\) leaves the basis elements unchanged. Denote the basis of \(\mathcal{W}_{T_{m}}\) by \(\{B_{m}^{W}\}\). Let \(\hat{B}_{m}\) denote the transformed bases of \(\mathcal{V}_{T_{m}}\) after forwards simplifying, and \(B_{m}\) be the resulting bases of \(\mathcal{V}_{T_{m}}\) after forwards and then backwards simplifying. Let \(\hat{M}_{m}=Mat_{\hat{B}_{m}^{m}}^{B_{m}^{W}}(\rho_{T_{m}})\) and \(M_{m}=Mat_{B_{m}^{m}}^{B_{m}^{W}}(\rho_{T_{m}})\) denote the corresponding basis transformation matrices. **Claim 1**.: _If \(\hat{M}_{m}(j,i)\neq 0\) then \(\mathbf{b}(\gamma_{j}^{n})\leq\mathbf{b}(\gamma_{i}^{n})<\mathbf{d}(\gamma_{ j}^{n})\leq\mathbf{d}(\gamma_{i}^{n})\) for all \(n\in(\tau_{m}(j,i),m)\)._ We will prove this claim by induction. The base case holds as \(\rho_{T_{0}}:\mathcal{V}_{T_{0}}\to\mathcal{W}_{T_{0}}\) is a morphism so \(\hat{M}_{0}(j,i)\neq 0\) implies \(\mathbf{b}(\gamma_{j}^{0})\leq\mathbf{b}(\gamma_{i}^{0})<\mathbf{d}(\gamma_{ j}^{0})\leq\mathbf{d}(\gamma_{i}^{0})\). Suppose that \(\hat{M}_{m+1}(j,i)\neq 0\). This implies that \(\mathbf{b}(\gamma_{j}^{m+1})\leq\mathbf{b}(\gamma_{i}^{m+1})<\mathbf{d}( \gamma_{j}^{m+1})\leq\mathbf{d}(\gamma_{i}^{m+1})\). If \(\gamma_{i}\) and \(\gamma_{j}\) are disjoint at \(T_{m}\) we are done (as \(m=\tau_{m+1}(j,i)\)) so suppose that \(\tau_{m+1}(j,i)<m\). Note by definition this implies \(\tau_{m+1}(j,i)=\tau_{m}(j,i)\). We now have to consider the different local cases. Set \(\epsilon=T_{m+1}-T_{m}\). If \(T_{m}\) is forward compatible with \(\hat{M}_{m+1}(j,i)\neq 0\) and \(m<\tau_{m+1}(j,i)\) we know \(\mathbf{d}(\gamma_{j}^{m+1})>\mathbf{b}(\gamma_{i}^{m})+\epsilon\) by considering the cases in Table 1 and the restriction on \(\epsilon\) in our definition of segment. Since \[\hat{M}_{m+1}(p,i)\phi_{\mathbf{b}(v_{p}^{m+1})}^{\mathbf{b}(w_{p}^{m+1})+ \epsilon}(v_{p}^{m+1})=\rho_{m+1}(\alpha_{X}^{T_{m}\to T_{m+1}}(w_{i}^{m}))= \alpha_{V}^{T_{m}\to T_{m+1}}(\rho_{m}(w_{i}^{m}))=\sum_{p}\hat{M}_{m}(p,i)\phi_{ \mathbf{b}(v_{p}^{m+1})}^{\mathbf{b}(w_{i}^{m})+\epsilon}(v_{p}^{m+1})\] we conclude that \(\hat{M}_{m}(j,i)=\hat{M}_{m+1}(j,i)\neq 0\). Our inductive assumption then implies \(\mathbf{b}(\gamma_{j}^{n})\leq\mathbf{b}(\gamma_{i}^{n})<\mathbf{d}(\gamma_{j }^{n})\leq\mathbf{d}(\gamma_{i}^{n})\) for all \(n\in(\tau_{m+1}(j,i),m)\). Now suppose that \(T_{m}\) is forwards incompatible with respect to vines \((\gamma_{k},\gamma_{l})\) and let \(\lambda\) be such that \(Mat_{\hat{B}_{m}^{V}}^{\mathbf{b}_{m+1}^{V}}(\alpha^{V})=e_{lk}^{\lambda}\pi_{S}\) where \(S\) is the set of vines whose support contains \(T_{m}\). Note that by construction we have \(Mat_{\tilde{B}_{m}^{W+1}}^{\tilde{B}_{m+1}^{W}}(\alpha^{W})=\pi_{S}\). Since \(\rho\) commutes with the interleaving maps, and no deaths occur within \(2\epsilon\) of any births, we know that \[\hat{M}_{m+1}=e_{lk}^{-\lambda}\hat{M}_{m}\] as matrices. For \(i\neq k\) we have \(\hat{M}_{m+1}(j,i)=\hat{M}_{m+1}(j,i)\) and so \(\hat{M}_{m+1}(j,i)\neq 0\) implies \(\hat{M}_{m+1}(j,i)\neq 0\). Thus we can apply the inductive hypothesis to say.\(\mathbf{b}(\gamma_{j}^{n})\leq\mathbf{b}(\gamma_{i}^{n})<\mathbf{d}(\gamma_{j}^{n}) \leq\mathbf{d}(\gamma_{i}^{n})\) for all \(n\in(\tau_{m+1}(j,i),m)\). For \(i=k\) and \(j=l\) we know \(\hat{M}_{m+1}(l,k)=0\) (as \(\rho_{m+1}\) is a morphism) so there is nothing to prove here. We know \(\hat{M}_{m+1}(l,k)=\hat{M}_{m}(l,k)-\lambda\hat{M}_{m}(l,l)\). 
As \(\rho_{m+1}\) is a morphism we know that \(\hat{M}_{m+1}(l,k)=0\) and thus \(\lambda=\hat{M}_{m}(l,k)/M_{m}(l,l)\). Finally consider \(i=k\) and \(j\neq l\) and suppose that \(\hat{M}_{m+1}(j,k)\neq 0\). If \(\hat{M}_{m}(j,k)\neq 0\) then the inductive hypothesis can be used, so suppose further that \(\hat{M}_{m}(j,k)=0\). We have \[\hat{M}_{m+1}(j,k)=\hat{M}_{m}(j,k)-\hat{M}_{m}(j,l)\hat{M}_{m}(l,k)/\hat{M}_{ m}(l,l)\] which, with are current suppositions, implies that both \(\hat{M}_{m}(j,l)\neq 0\) and \(\hat{M}_{m}(l,k)\neq 0\). By the inductive hypothesis with \(\hat{M}_{m}(j,l)\neq 0\) and \(\hat{M}_{m}(l,k)\neq 0\) we know that \(\mathbf{b}(\gamma_{j}^{n})\leq\mathbf{b}(\gamma_{l}^{n})<\mathbf{d}(\gamma_{j}^ {n})\leq\mathbf{d}(\gamma_{l}^{n})\) for all \(n\in(\tau_{m}(j,l),m)\) and \(\mathbf{b}(\gamma_{i}^{n})\leq\mathbf{b}(\gamma_{i}^{n})<\mathbf{d}(\gamma_{l} ^{n})\leq\mathbf{d}(\gamma_{k}^{n})\) for all \(n\in(\tau_{m}(l,k),m)\). We can show that \(\tau_{m}(j,k)>\tau_{m}(j,l)\) and \(\tau_{m}(j,k)>\tau_{m}(l,k)\) by sandwiching of intervals. If \(n>\tau_{m}(j,k)\) then \(\mathbf{b}(\gamma_{j}^{n})\leq\mathbf{b}(\gamma_{k}^{n})<\mathbf{d}(\gamma_{l} ^{n})\leq\mathbf{d}(\gamma_{k}^{n})\) and \(\mathbf{b}(\gamma_{j}^{n})\leq\mathbf{b}(\gamma_{l}^{n})<\mathbf{d}(\gamma_{j }^{n})\leq\mathbf{d}(\gamma_{l}^{n})\). Combined these imply \(\mathbf{b}(\gamma_{j}^{n})\leq\mathbf{b}(\gamma_{k}^{n})<\mathbf{d}(\gamma_{j }^{n})\leq\mathbf{d}(\gamma_{k}^{n})\). Having covered all the cases we have finished proving by induction that if \(\hat{M}_{m}(j,i)\neq 0\) then \(\mathbf{b}(\gamma_{j}^{n})\leq\mathbf{b}(\gamma_{i}^{n})<\mathbf{d}(\gamma_{j }^{n})\leq\mathbf{d}(\gamma_{i}^{n})\) for all \(n\in(\tau_{m}(j,i),m)\). **Claim 2**.: _If \(Mat_{B_{m}^{W}}^{B_{W}^{W}}(\rho_{T_{m}})(j,i)\neq 0\) then \(\mathbf{b}(\gamma_{j}^{T_{n}})\leq\mathbf{b}(\gamma_{i}^{T_{n}})<\mathbf{d}( \gamma_{j}^{T_{n}})\leq\mathbf{d}(\gamma_{i}^{T_{n}})\) for all \(n\in(\tau_{m}(j,i),m)\)._ This claim can also be proved by induction. The base case holds as \(M_{N}=\hat{M}_{N}\) by definition of backwards simplification, and using Claim 1. Assume the inductive hypothesis for \(m\) and we wish to show it also holds for \(m-1\). Let \(\epsilon=T_{m}-T_{m-1}\). Suppose that \(T_{m}\) is backwards compatible. By the construction of the backwards simplified basis we have \[\sum_{j}M_{m}\psi_{\mathbf{b}(w_{j}^{m})+\epsilon}^{\mathbf{b}(v_{j}^{m})+ \epsilon}(w_{j}^{m-1})=\beta_{W}^{T_{m}\to T_{m-1}}(\rho_{T_{m}}(v_{i}^{m}))= \rho_{T_{m-1}}(\beta_{V}^{T_{m}\to T_{m-1}}(v_{i}^{m}))=\sum_{j}M_{m-1}\psi_{ \mathbf{b}(w_{j}^{m-1})}^{\mathbf{b}(v_{i}^{m})+\epsilon}(w_{j}^{m-1})\] For \((j,i)\) with \(\mathbf{b}(\gamma_{j}^{m-1})\leq\mathbf{b}(\gamma_{i}^{m-1})<\mathbf{d}( \gamma_{j}^{m-1})\leq\mathbf{d}(\gamma_{i}^{m-1})\) and \(\mathbf{d}(\gamma_{j}^{m-1})>\mathbf{b}(\gamma_{i}^{m-1})+\epsilon\) we also have \(\tau_{m-1}(j,i)=\tau_{m}(j,i)\). By comparing coefficients we infer \(M_{m-1}(j,i)=M_{m}(j,i)\). We then can use the inductive assumption to say if \(Mat_{B_{m-1}^{W}}^{B_{m-1}^{W}}(\rho_{T_{m-1}})(j,i)\neq 0\) then \(\mathbf{b}(\gamma_{j}^{T_{n}})\leq\mathbf{b}(\gamma_{i}^{T_{n}})<\mathbf{d}( \gamma_{j}^{T_{n}})\leq\mathbf{d}(\gamma_{i}^{T_{n}})\) for all \(n\in(\tau_{m-1}(j,i),m-1)\). 
If there is a pair \((l,k)\) with \(\mathbf{b}(\gamma_{l}^{m-1})\leq\mathbf{b}(\gamma_{k}^{m-1})<\mathbf{d}(\gamma_{l}^{m-1})\leq\mathbf{d}(\gamma_{k}^{m-1})\) but \(\mathbf{d}(\gamma_{l}^{m-1})\leq\mathbf{b}(\gamma_{k}^{m-1})+\epsilon\) then we must be in the case where \(\mathbf{d}(\gamma_{l}^{m})=\mathbf{b}(\gamma_{k}^{m})\) (see case 10 in Table 1 with \(\mathcal{X}=\mathcal{V}_{T_{m}}\) and \(\mathcal{Y}=\mathcal{V}_{T_{m-1}}\)). Here we will need to use the construction of the backwards simplified basis. For the sake of clarity we will assume for the moment that all the vines have \(T_{m}\) and \(T_{m-1}\) in their support, so we have bases rather than extended bases and the transformation matrices between these bases are invertible. Extending the argument to extended bases is left as an exercise. Denote by \(\beta_{V}:\mathcal{V}_{T_{m}}\rightarrow\mathcal{V}_{T_{m-1}}\) the morphism within \(\mathscr{V}\). Let \(A=Mat_{B_{m}}^{\hat{B}_{m}}(\mathbb{I})\) be the matrix corresponding to the change of basis. Since \(T_{m-1}\) is forwards compatible and \(T_{m}\) is backwards compatible we know \(\beta_{V}(v_{i}^{m})=v_{i}^{m-1}\) and \(\beta_{V}(\hat{v}_{i}^{m})=\hat{v}_{i}^{m-1}\). We thus have \[\beta_{V}(v_{i}^{m})=\beta_{V}\left(\sum_{j}A(j,i)\psi_{\mathbf{b}(\gamma_{j}^{m})}^{\mathbf{b}(\gamma_{i}^{m})}(\hat{v}_{j}^{m})\right)=\sum_{j}A(j,i)\psi_{\mathbf{b}(\gamma_{j}^{m-1})}^{\mathbf{b}(\gamma_{i}^{m})+\epsilon}(\hat{v}_{j}^{m-1})\] and since \(A(l,k)=0\) this implies \(Mat_{B_{m}}^{\hat{B}_{m-1}}(\beta_{V})=A\). When backwards simplifying, the basis \(B^{V}_{m-1}\) is constructed by computing the matrix \(Mat_{B_{m}}^{\hat{B}_{m-1}}(\beta_{V})\) and then using this as a basis transformation matrix and applying it to \(\hat{B}_{m-1}\). In short, \(B_{m-1}=A(\hat{B}_{m-1})\). As all the basis transformation maps commute (when the domain and codomain bases match appropriately) we can show that \(M_{m-1}=\hat{M}_{m-1}A\) and \(M_{m}=\hat{M}_{m}A\) as matrices. Combining all these equations together we have \[M_{m}-M_{m-1}=(\hat{M}_{m}-\hat{M}_{m-1})A.\] We also know that for \((j,i)\neq(l,k)\) both \(M_{m}(j,i)=M_{m-1}(j,i)\) and \(\hat{M}_{m}(j,i)=\hat{M}_{m-1}(j,i)\). If \(M_{m-1}(l,k)\neq 0\) then, since \(M_{m}(l,k)=0\), we have \(M_{m}-M_{m-1}\neq 0\). This implies that \((\hat{M}_{m}-\hat{M}_{m-1})A\neq 0\), and since \(A\) is invertible this implies \(\hat{M}_{m}-\hat{M}_{m-1}\neq 0\). Since the only entry where \(\hat{M}_{m}\) and \(\hat{M}_{m-1}\) can differ is \((l,k)\), this implies that \(\hat{M}_{m-1}(l,k)\neq\hat{M}_{m}(l,k)=0\). We thus can use Claim 1 to say that \(\mathbf{b}(\gamma_{l}^{T_{n}})\leq\mathbf{b}(\gamma_{k}^{T_{n}})<\mathbf{d}(\gamma_{l}^{T_{n}})\leq\mathbf{d}(\gamma_{k}^{T_{n}})\) for all \(n\in(\tau_{m}(l,k),m-1)\). Now suppose that \(T_{m}\) is backwards incompatible with respect to \((\gamma_{k},\gamma_{l})\). This means \(T_{m}=t_{n}\) for some \(n\), that is, it is a critical time for our vector representation. By our inductive assumption this implies \(Mat^{B^{V}_{m-1}}_{B^{V}_{m}}(\beta_{T_{m}})(l,k)=0\). Let \(\lambda(=\lambda(t_{n}))\) be such that \(Mat^{\hat{B}^{V}_{m-1}}_{\hat{B}^{V}_{m}}(\alpha^{V})=e^{\lambda}_{lk}\pi_{S}\), where \(S\) is the set of vines whose support contains \(T_{m}\). Note that by construction we have \(Mat^{B^{W}_{m-1}}_{B^{W}_{m}}(\alpha^{W})=\pi_{S}\). 
Since \(\rho\) commutes with the interleaving maps, and no deaths occur within \(2\epsilon\) of any births, we know that \[\hat{M}_{m-1}=e^{-\lambda}_{lk}\hat{M}_{m}\] as matrices. We know \(\hat{M}_{m-1}(l,k)=0\) (as \(\rho_{m-1}\) is a morphism) and \(\hat{M}_{m-1}(l,k)=\hat{M}_{m}(l,k)-\lambda\hat{M}_{m}(l,l)\). As \(\hat{M}_{m}(l,k)=0\) and \(\hat{M}_{m}(l,l)\neq 0\) we conclude that \(\lambda=0\). This now implies \(\hat{M}_{m-1}=\hat{M}_{m}\) as matrices and we can apply the inductive assumption to finish this case. Notably, in the process of proving Claim 2 we proved that the vector in our vineyard and vector representation is the zero vector. ## 7 An Indecomposable Vineyard Module with Two Vines In this section we present the simplest example of a vineyard module with two vines which is not decomposable into two vine modules. For the sake of clarity we restrict to \(\mathbb{Z}_{2}\) as the field for homology calculations, but this example will hold over general fields. The underlying space is a closed disk lying in the plane, which we split into four sets: \(A=\{\|z\|<1\}\), \(B=\{\|z\|=1\}\), \(C=\{1<\|z\|<2\}\) and \(D=\{\|z\|=2\}\). We have \(K=A\cup B\cup C\cup D\) and \(f_{t}:K\to\mathbb{R}\) continuous with respect to \(t\in[0,10]\), and for each \(t\) we have that \(f_{t}\) is constant on each of \(A\), \(B\), \(C\), \(D\). \[f_{t}(A)=21-t,\quad f_{t}(B)=14-t,\quad f_{t}(C)=15+t,\quad f_{t}(D)=t.\] Note that all sublevel sets are closed as \(f_{t}(B)\leq f_{t}(A)\) and \(f_{t}(B),f_{t}(D)\leq f_{t}(C)\) for all \(t\). We have two times where critical heights coincide, namely \(t=3\) (with \(f_{3}(A)=f_{3}(C)\)) and \(t=7\) (with \(f_{7}(B)=f_{7}(D)\)). The continuously changing sublevel set filtration defines a vineyard module where the persistence module \(\mathcal{V}_{t}\) is the one for the persistent homology in dimension 1 of the filtration by \(f_{t}\), and the interleaving maps are defined by the natural inclusion maps \(f_{t}^{-1}(-\infty,h]\subset f_{s}^{-1}(-\infty,h+|s-t|]\) for \(h\in\mathbb{R}\), and \(s,t\in[0,10]\). We can depict the underlying vineyard via its barcode representations at periodic locations in Figure 1. Over the interval \([0,3)\) there is only one choice of basis for \(\mathcal{V}_{t}\), namely \(x_{1}^{t}=[B]\) (with \(\mathbf{b}([B])=f_{t}(B)\) and \(\mathbf{d}([B])=f_{t}(A)\)) and \(x_{2}^{t}=[D+B]\) (with \(\mathbf{b}([D+B])=f_{t}(D)\) and \(\mathbf{d}([D+B])=f_{t}(C)\)). Over the interval \((7,10]\) there is only one choice of basis for \(\mathcal{V}_{t}\), namely \(x_{1}^{t}=[B]\) (with \(\mathbf{b}([B])=f_{t}(B)\) and \(\mathbf{d}([B])=f_{t}(A)\)) and \(x_{2}^{t}=[D]\) (with \(\mathbf{b}([D])=f_{t}(D)\) and \(\mathbf{d}([D])=f_{t}(C)\)). In the middle section, for \(t\in[3,7]\), there are two different possible choices of basis: \(\{[B],[D+B]\}\) or \(\{[B],[D]\}\). When we forwards simplify we get the basis over this middle range corresponding to \(\{[B],[D+B]\}\), and when we then backwards simplify we get instead the basis corresponding to \(\{[B],[D]\}\). After forwards and backwards simplifying we have, for small \(\delta\), \(\beta^{3\to 3-\delta}(x_{2}^{3})=x_{2}^{3-\delta}+x_{1}^{3-\delta}\) and \(\beta^{3\to 3-\delta}(x_{1}^{3})=x_{1}^{3-\delta}\). In matrix form: \[Mat_{B_{3}}^{B_{3-\delta}}(\beta)=\begin{pmatrix}1&1\\ 0&1\end{pmatrix}\] implying the vector in the vineyard and vector representation is \((1)\), and the vineyard module is not isomorphic to the direct sum of vine modules. 
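The arithmetic behind this example is small enough to check mechanically. The following Python sketch is ours, not part of the original text; it only re-uses the function values and the basis-change matrix given above, and tabulates the level values, the two critical times, and the off-diagonal entry that witnesses indecomposability.

```python
# Level values of the four sets, copied from the example above.
def f(t):
    return {"A": 21 - t, "B": 14 - t, "C": 15 + t, "D": t}

# The two bars of the degree-1 persistent homology at time t: one class is
# born at f_t(B) and dies at f_t(A), the other is born at f_t(D) and dies at
# f_t(C) (the representative cycle changes with t, the endpoints do not).
def bars(t):
    v = f(t)
    return [(v["B"], v["A"]), (v["D"], v["C"])]

# Critical times where level values coincide:
#   21 - t = 15 + t  =>  t = 3   (f_3(A) = f_3(C) = 18)
#   14 - t = t       =>  t = 7   (f_7(B) = f_7(D) = 7)
for t in (0, 3, 5, 7, 10):
    print(f"t = {t:2d}  levels = {f(t)}  bars = {bars(t)}")

# Basis change of the forwards-and-backwards simplified bases just below
# t = 3, over Z_2: beta(x_2) = x_2 + x_1 and beta(x_1) = x_1.
M = ((1, 1),
     (0, 1))
print("off-diagonal entry (the 'vector' of the representation):", M[0][1])
```

Running it reproduces the bar endpoints, the two critical times \(t=3\) and \(t=7\), and the nonzero off-diagonal entry used in the argument above.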
## 8 Future work Given the framework of forwards and backwards simplification we have a tractable description of vineyard modules that, at the very least, can determine whether a given vineyard module is trivial. The next natural question is whether we can use the forwards and backwards simplified representation to explore the decomposition of a vineyard module into a direct sum of indecomposable vineyard modules. As a partway step, can it determine the partition of the vines within the vineyard into those within the same indecomposable summand? Other future directions include: * Allowing for countably many vines, or allowing for higher multiplicity of intervals within a persistence module. * How could we extend this approach to continuous persistence module valued functions over \(S^{1}\)? Here we can still define forward simplification locally but will have the potential of holonomy. What about more general persistence bundles ([7])? * If we consider special cases of vineyard modules, do we have nice decompositions, or do these decompositions have nice geometric interpretations? For example, what happens when our input is a point cloud, and we construct the vineyard where the time parameter corresponds to a bandwidth and the persistence modules are built from height filtrations of kernel density estimates? Do these representations relate to topological simplification (such as in [12])? Figure 1: The vineyard for our example of an indecomposable vineyard module with two vines. The order in which the different subsets appear partitions the interval into three sections shown by dashed lines. The sloping lines are the function values of the different sets, and the pictures of circles and annuli show the sublevel sets in each of the sections (black for in the sublevel set and grey for in the domain but not in the sublevel set). This vineyard has two vines - one depicted by intervals in blue and the other in red. Calculations show that the vineyard module (whose interleaving maps are those induced by inclusion) is indecomposable. * Can we enumerate or construct all the isomorphism classes of vineyard modules of a given vineyard? * Can we find conditions for when a vineyard and vector representation is realisable? * Can we describe the space of simplified representations of vineyard modules that are isomorphic?
2310.02689
Nonlinear terahertz polarizability of electrons solvated in a polar liquid
The nonlinear polaronic response of electrons solvated in liquid 2-propanol is studied by two-dimensional terahertz spectroscopy. Solvated electrons with a concentration of c$_e \approx$ 800 $\mu$M are generated by femtosecond photoionization of alcohol molecules. Electron relaxation to a localized ground state impulsively excites coherent polaron oscillations with a frequency of 3.9 THz. Off-resonant perturbation of the THz coherence by a pulse centered at 1.5 THz modifies the polaron oscillation phase. This nonlinear change of electron polarizability is reproduced by theoretical calculations.
Matthias Runge, Klaus Reimann, Michael Woerner, Thomas Elsaesser
2023-10-04T10:01:44Z
http://arxiv.org/abs/2310.02689v1
# Nonlinear Terahertz Polarizability of Electrons Solvated in a Polar Liquid ###### Abstract The nonlinear polaronic response of electrons solvated in liquid 2-propanol is studied by two-dimensional terahertz spectroscopy. Solvated electrons with a concentration of \(c_{e}\approx 800\)\(\mu\)M are generated by femtosecond photoionization of alcohol molecules. Electron relaxation to a localized ground state impulsively excites coherent polaron oscillations with a frequency of 3.9 THz. Off-resonant perturbation of the THz coherence by a pulse centered at 1.5 THz modifies the polaron oscillation phase. This nonlinear change of electron polarizability is reproduced by theoretical calculations. The interaction of an electron with a polar or ionic environment results in the formation of a composite quasi-particle, the polaron [1]. In crystalline solids, the polaron picture has been applied for describing both the quantum ground state of localized charges and the electron transport in continuum states [2; 3; 4]. In the latter, the polaron represents an electron dressed by a phonon cloud, which can undergo a joint center-of-mass motion and/or display internal excitations, e.g., longitudinal elongations of the phonon cloud [2; 3; 4]. The linear and nonlinear response of polarons to external electric fields has mainly been elucidated in studies of charge transport in solids by femtosecond all-optical experiments [5; 6; 7; 8; 9]. Much less is known on polaron excitations in systems without a periodic long-range order such as polar liquids [10; 11]. In this context, solvated electrons represent a prototypical quantum system. In equilibrium, they populate the ground state in a self-consistent potential well, which has been described by polaron models in analogy to a localized charge in a crystal [12; 13; 14], as a quantum state at a site of enhanced solvent density [15], or as the ground state in a local void or cavity in the liquid [16]. The widely accepted cavity model accounts for a broad range of properties of electrons solvated in hydrogen-bonding liquids such as water and alcohols [16; 17; 18; 19; 20; 21]. In water and methanol, OH groups of the four to six solvent molecules in the first solvation shell point towards the electron, thus minimizing the electrostatic energy and confining the electron wavefunction to a radius of gyration of 2.5 A (water) and 2.2 A (methanol). The long-range Coulomb interaction of the confined electron with the solvent results in a coupling of electronic excitations and low-frequency molecular motions in the liquid, in analogy to electron-phonon coupling in a crystal. First evidence for such polaronic excitations in water and alcohols originates from very recent studies of their ultrafast THz response. In a novel approach, solvated electrons have been generated by tunneling ionization in the fluctuating electric field of liquid water and separation of electron and parent ion in a strong THz field [22]. The presence of solvated electrons results in pronounced changes of the real and imaginary part of the THz dielectric function of water and alcohols via the electron polarizability [23; 24; 22]. Upon femtosecond electron relaxation from delocalized continuum states into the localized ground state, collective polaron oscillations are excited impulsively, giving evidence of a transient many-body response on a length scale set by the Debye screening length of the electron's electric field [23; 25]. 
Polaron oscillations are connected with a radial charge density modulation and, thus, are purely longitudinal. Depending on the charge density distribution within the polaron, the Debye screening length is modified. The resulting periodic modification in polaron size creates a transverse polarizability through which the polaron can be accessed with transverse optical fields [24]. While time-resolved THz spectroscopy has so far focused on the linear polaron response, there are first indications of a nonlinear interaction of THz electric fields with polarons in alcohols [23]. However, a systematic experimental or theoretical study of polaronic nonlinearities does not exist. Here, multidimensional THz spectroscopy holds a particular potential for revealing nonlinearities in the electric polarizability of the liquid. In this Letter, we present new insight in the nonlinear THz response of polarons in 2-propanol (isopropanol, IPA). Fully phase-resolved two-dimensional THz (2D-THz) experiments on solvated electrons generated by multiphoton excitation reveal a strong nonlinear change of THz polarizability. Coherent polaron oscillations, which are launched upon electron localization, display pronounced changes of oscillation phase under the action of a non-resonant THz pulse. This nonlinear THz response is reproduced by calculations introducing a nonlinear THz polarizability of the solvated electrons. The experiments are based on two- and three-pulse sequences [Figs. 1(a) and (b)], consisting of a femtosecond near-infrared (NIR) pulse for electron generation and one or two phase-locked THz pulses for mapping the nonlinear response of solvated electrons. Solvated electrons are generated in a liquid jet of IPA by multiphoton ionization with an 800-nm pulse, providing electron concentrations of up to \(c_{e}\approx 800\)\(\mu\)M. Relaxation of the photo-generated electrons to their localized ground state initiates, in an impulsive way, coherent underdamped polaron oscillations in the liquid [23; 25]. Details of the experimental setup, choice of the solvent, and electron generation are given in the Supplementary Material (SM) [26]. In the experiments with the two-pulse sequence, the THz pulse maps the polaron response of the sample arising after electron generation by the NIR pump pulse, in particular the impulsively excited coherent polaron oscillations. Such data serve as a benchmark for the results of the three-pulse exper iments, in which the polaron oscillations are perturbed by interaction with the first THz pulse after waiting times of \(T=1\) or \(75\) ps. The impact of this interaction is probed by the second THz pulse at a time delay \(\tau\) relative to the first THz pulse. The two THz pulses have peak electric fields of \(50\) kV/cm and leave the electron concentration \(c_{e}\) unchanged [22]. The transmitted THz pulses are detected by electrooptic sampling, i.e., their electric field is measured as a function of real time \(t\). The nonlinear signal generated in the two-pulse sequence is given by \(E_{\text{NL}}(t,\tau)=E_{\text{Pr}}^{c_{e}}(t,\tau)-E_{\text{Pr}}^{0}(t)\). Here \(E_{\text{Pr}}^{c_{e}}(t,\tau)\) is the THz probe field transmitted after electron generation, and \(E_{\text{Pr}}^{0}(t)\) is the transmitted field of the THz probe pulse without selected electrons. 
For the three-pulse sequence, the nonlinear signal is \(E_{\text{NL}}^{\text{Pert}}(t,\tau)=E_{\text{both}}^{c_{e}}(t,\tau)-E_{\text{ Pen}}^{c_{e}}(t,\tau)-E_{\text{Pr}}^{0}(t)\), where \(E_{\text{both}}^{c_{e}}(t,\tau)\) represents the electric field of the two THz pulses transmitted through the sample after electron generation, and \(E_{\text{Pert}}^{c_{e}}(t,\tau)\) the transmitted field of the perturbing first THz pulse after electron generation. In all measurements, a weak THz signal directly generated by the \(800\)-nm pulse with an amplitude of \(0.3\) kV/cm was subtracted from the transmitted THz fields. More details of the experimental setup and \(2\)D-THz spectroscopy are given in the SM and in Ref. [27]. In the following, we present two sets of data recorded for waiting times \(T=1\) ps and \(T=75\) ps. In Figs. 1(c) and (d), the electric fields \(E_{\text{Pr}}^{c_{e}}(t,\tau)\) (single THz pulse) and \(E_{\text{both}}^{c_{e}}(t,\tau)\) (two THz pulses) are plotted as a function of real time (abscissa) and delay time (ordinate). From the two-pulse data in panel (c), the unperturbed THz response \(E_{\text{NL}}(t,\tau)\) of the solvated electrons for \(T=1\) ps is extracted [Fig. 2(a)]. The signal exhibits an absorption increase with an amplitude of up to \(10\) kV/cm, reflecting the change of the macroscopic dielectric function induced by electron generation. Figure 2(b) shows the probe spectrum \(|E_{\text{Pr}}^{0}(\nu_{t})|\) together with the real and imaginary parts of the two-pulse pump-probe signal \(\Delta\alpha(\nu,\tau_{\text{ave}})=-\text{ln}[E_{\text{Pr}}^{c_{e}}(\nu,\tau_ {\text{ave}})/E_{\text{Pr}}^{0}(\nu,\tau_{\text{ave}})]\), averaged over delay times from \(\tau_{\text{avg}}=-0.75\) to \(2\) ps. One observes an absorption increase with a maximum at \(\nu_{t}=0.8\) THz and pronounced changes of the THz refractive index, indicating the spectral region most sensitive to changes in the macroscopic polarizability. In Fig. 2(c), the spectrum \(|E_{\text{NL}}(\nu_{t},\tau)|\) derived by Fourier transforming \(E_{\text{NL}}(t,\tau)\) along \(t\) is plotted as a function of detection frequency \(\nu_{t}\) and \(\tau\). For noise reduction, the data were Fourier-filtered by a Gaussian filter of \(10\) THz width in excitation frequency \(\nu_{\tau}\) and of \(2\) THz width in detection frequency \(\nu_{t}\) (FWHM) at a spectral position of \((\nu_{t},\nu_{\tau})=(1.5,0)\) THz. The low-frequency spectral wing of \(|E_{\text{NL}}(\nu_{t},\tau)|\) shows a distinct oscillatory behavior, as is evident from the shape of the contour lines. Results of the three-pulse experiment with a waiting time \(T=1\) ps [Fig. 1(d)] are presented in Fig. 2(d), showing the spectrum \(|E_{\text{NL}}^{\text{Pert}}(\nu_{t},\tau)|\). The line shape is similar to the two Figure 1: Experimental concept and \(2\)D-THz data at a waiting time of \(T=1\) ps. (a), (b) Two- and three-pulse sequences consisting of a femtosecond NIR pulse, and one or two (phase-locked) THz pulses as a function of real time \(t\). The time intervals \(T\) and \(\tau\) are the waiting and delay time. (c) Contour plot of \(E_{\text{Pr}}^{c_{e}}(t,\tau)\) (two-pulse experiment) with the NIR pump pulse (orange line) and THz probe pulse \(E_{\text{Pr}}^{0}(t)\) (vertical trace). (d) Contour plot of \(E_{\text{both}}^{c_{e}}(t,\tau)\) (three-pulse experiment) with the additional perturbing THz pulse \(E_{\text{Pert}}^{c_{e}}(t,\tau)\) (diagonal trace). Figure 2: Nonlinear \(2\)D-THz data at a waiting time \(T=1\) ps. 
(a) Contour plot of the nonlinear \(2\)D-THz signal \(E_{\text{NL}}(t,\tau)\). (b) Probe pulse spectrum (black line) and real and imaginary parts (blue and red lines) of the spectrally resolved pump-probe signal \(\Delta\alpha(\nu_{t},\tau_{\text{avg}})\) averaged over delay times from \(\tau_{\text{avg}}=-0.75\) to \(2\) ps. (c) Contour plot of \(|E_{\text{NL}}(\nu,\tau)|\) obtained from a Fourier transform of \(E_{\text{NL}}(t,\tau)\). (d) Contour plot of \(|E_{\text{NL}}^{\text{Pert}}(\nu_{t},\tau)|\), the Fourier transform of \(E_{\text{NL}}^{\text{Pert}}(t,\tau)\) (not shown). pulse spectrum in Fig. 2(c). However, there are changes in the oscillatory response around \(\tau=0\) that will be analyzed below. In Figs. 3(a) and (b), \(E_{\rm NL}(t,\tau)\) and \(E_{\rm NL}^{\rm Pert}(t,\tau)\) are shown for \(T=75\) ps, while Figs. 3(c) and (d) display the corresponding spectra \(|E_{\rm NL}(\nu_{\rm r},\tau)|\) and \(|E_{\rm NL}^{\rm Pert}(\nu_{\rm r},\tau)|\). The qualitative behavior is similar to the data for \(T=1\) ps (Fig. 2), a result of the underdamped character of the polaron oscillations, which persist for delay times \(\tau\) well beyond 100 ps. To characterize the oscillatory nonlinear response and to identify the impact of the perturbing THz pulse on the polaron oscillations, we analyze the low-frequency wings of the spectra for \(T=1\) ps [Figs. 2(c) and (d)] and \(T=75\) ps [Figs. 3(c) and (d)] with a method detailed in the SM. In brief, cuts of the pump-probe signal along delay time \(\tau\) in a spectral window from \(\nu_{t}=0.5\) to 0.9 THz are fitted by a sine function with a \(\tau\)-dependent amplitude and phase, plus a background with a \(\tau\)-dependent amplitude. The oscillatory signals derived by subtracting the background are plotted in Fig. 4(a) and (c) for \(T=1\) ps and 75 ps, respectively. Here, red lines represent the field \(-E_{\rm osc}(\tau)\) from the two-pulse experiments, while blue lines give the field \(E_{\rm osc}^{\rm Pert}(\tau)\) from the three-pulse experiments. As a reference, the electric field of the perturbing THz pulse is shown (green line). The oscillation frequency of \(-E_{\rm osc}(\tau)\) traces is 3.9 THz, which represents the polaron frequency for \(c_{e}\approx 800\)\(\mu\)M. For \(\tau<-0.5\) ps in panel (a), \(-E_{\rm osc}(\tau)\) and \(E_{\rm osc}^{\rm Pert}(\tau)\) display a constant relative phase shift of \(\pi\), i.e., \(+E_{\rm osc}(\tau)\) and \(E_{\rm osc}^{\rm Pert}(\tau)\) are in phase. From \(\tau=-0.5\) to 1 ps in panel (a), \(E_{\rm osc}(\tau)\) does not change, whereas the phase of \(E_{\rm osc}^{\rm Pert}(\tau)\) is strongly modified. This phase change reflects a momentary change of polaron frequency induced by the perturbing nonresonant THz pulse, a novel type of nonlinear response. Panel (c) shows a very similar behavior at \(T=75\) ps. In the SM, we provide a more detailed discussion of the properties of the momentary perturbed phase. The oscillatory pump-probe signals are due to coherent polaron oscillations, induced impulsively during the subpicosecond electron localization process [23; 25]. Each electron couples to electronic and nuclear degrees of freedom of the polar environment, thus forming a collective polaron excitation. In space, the relevant coupling range is roughly set by the Debye screening length of \(L_{D}=1.8\) nm, which includes some 190 IPA molecules and is much smaller than the average distance between electrons of 13 nm. 
These values were estimated from the real part of the dielectric function \(\mathrm{Re}[\varepsilon_{\rm IPA}(1\ {\rm THz})]=2.25\)[26] and \(c_{e}=800\)\(\mu\)M. The polaron oscillations are longitudinal coherent motions of space charge within a sphere of radius \(L_{D}\), which primarily arise from the Figure 3: 2D-THz data recorded at \(T=75\) ps. (a), (b) Contour plots of the 2D-THz signals \(E_{\rm NL}(t,\tau)\) and \(E_{\rm NL}^{\rm Pert}(t,\tau)\) for the two- and three-pulse configuration. The green line in panel (b) indicates the temporal position of the perturbing THz pulse. (c), (d) Contour plots of \(|E_{\rm NL}(\nu_{\rm r},\tau)|\) and \(|E_{\rm NL}^{\rm Pert}(\nu_{\rm r},\tau)|\) as functions of detection frequency \(\nu_{\rm r}\) and delay time \(\tau\). Figure 4: (a), (c) Oscillatory signal components \(E_{\rm osc}^{\rm Pert}(\tau)\) (blue lines) and \(-E_{\rm osc}(\tau)\) (red lines) with and without external THz perturbation for \(T=1\) ps and \(T=75\) ps. The thick blue lines mark the time range of THz perturbation. (b), (d) Transients calculated from the theoretical model for \(T=1\) ps and \(T=75\) ps. In all panels, the green lines give the perturbing THz field. superposition of orientational motions of IPA dipoles. The longitudinal electric field is fully screened by a surface charge on the shell of the sphere [2]. The polaron (oscillation) frequency is determined by the zero crossing of the real part of the longitudinal dielectric function inside the sphere [28]. The longitudinal oscillations of space charge modulate the radius of the sphere, in this way affecting the macroscopic _transversal_ polarizability of the liquid. In the SM, we compare experimental polaron frequencies for a wide range of electron concentration with calculations based on a Clausius-Mossotti local field picture [29; 22]. The nonresonant perturbing THz pulse induces a nonlinear polarization acting on the polaron. This nonlinear response causes the observed phase changes in the polaron oscillations (cf. Fig. 4). The modulations prevail in the time window of interaction with the perturbing pulse. Even after the interaction, however, the initially fixed phase relation between polaron oscillations with and without perturbing THz field is somewhat softened. This effect may arise from a correlation of polarizations in the system. The transverse polarization correlation function determines the dephasing of transversal excitations and accounts for correlations in electric currents. The phase deviations after the perturbing THz pulse suggest correlations persisting on a time scale longer than the 1-ps duration of the perturbing pulse [30]. To account for the nonlinear polaron response, we present a theoretical model based on a Clausius-Mossotti approach [29]. The total electric polarization \[P_{\rm tot}=P_{\rm bound}^{\rm el}+P_{\rm Debye}^{\rm nuc}+P_{\rm free}^{\rm el} \tag{1}\] includes contributions from all spatially disjunct dipoles, i.e., \(P_{\rm bound}^{\rm el}\) from dipoles generated by bound electrons, \(P_{\rm Debye}^{\rm nuc}\) for molecular dipoles related to nuclear motions, and \(P_{\rm free}^{\rm el}\) for electronic dipoles caused by free electron motions. Such dipoles are mutually coupled via the local electric field, consisting of the macroscopic electric field \(E(t)\) and \(P_{\rm tot}/(3\epsilon_{0})\). 
This approach leads to the equations \[3\frac{\epsilon_{\rm IPA}(\omega)-1}{\epsilon_{\rm IPA}(\omega)+2}=\chi_{\rm hf}+\frac{\chi_{\rm lf}-\chi_{\rm hf}}{1+i\omega\tau_{\rm R}} \tag{2}\] \[P_{\rm bound}^{\rm el}=\epsilon_{0}\chi_{\rm hf}\left(E(t)+\frac{P_{\rm tot}}{3\epsilon_{0}}\right) \tag{3}\] \[\frac{dP_{\rm Debye}^{\rm nuc}}{dt}=-\frac{1}{\tau_{\rm R}}P_{\rm Debye}^{\rm nuc}+\epsilon_{0}\frac{\chi_{\rm lf}-\chi_{\rm hf}}{\tau_{\rm R}}\left(E(t)+\frac{P_{\rm tot}}{3\epsilon_{0}}\right) \tag{4}\] \[\frac{d^{2}P_{\rm free}^{\rm el}}{dt^{2}}=-\gamma\frac{dP_{\rm free}^{\rm el}}{dt}-\left(N_{\rm el}+N_{\rm osc}\right)\frac{e_{0}^{2}}{m_{e}}E_{\rm cr}\sinh\left(\frac{1}{E_{\rm cr}}\left[E(t)+\frac{P_{\rm tot}}{3\epsilon_{0}}\right]\right) \tag{5}\] \[\frac{dN_{\rm el}}{dt}=A\cdot I_{\rm NIR} \tag{6}\] \[\frac{d^{2}N_{\rm osc}}{dt^{2}}=-B(N_{\rm el})\cdot N_{\rm osc}+A\cdot C\cdot I_{\rm NIR} \tag{7}\] \[E_{\rm em}=-\frac{d}{2\epsilon_{0}c}\cdot\frac{dP_{\rm tot}}{dt} \tag{8}\] The high and low frequency susceptibilities \(\chi_{\rm hf}\) and \(\chi_{\rm lf}\), and the Debye relaxation time \(\tau_{\rm R}\) of neat IPA are derived from a fit of \(\epsilon_{\rm IPA}(\omega)\) to linear THz spectra [26]. The electric field is given by \(E(t)=E_{\rm pert}(t,\tau)+E_{\rm pr}^{0}(t)\) of the two experimental THz pulses [cf. Fig. 1(b)]. The electron density \(N_{\rm el}=c_{e}N_{\rm A}\) (\(N_{\rm A}\): Avogadro constant) and the amplitude \(N_{\rm osc}\) of longitudinal electron density oscillations are given in (6) and (7), where \(I_{\rm NIR}\) is the intensity of the electron generating NIR pulse. We treat the influence of longitudinal polaron oscillations (7) on the macroscopic transverse polarization as density oscillations (5), rather than considering an oscillating polaron size modulating the polarizability. \(\gamma\) in (5) is the damping constant of the transverse polarization. The parameters \(A\), \(C\), and the function \(B(N_{\rm el})\) are chosen to reproduce the experimental electron density, polaron oscillation amplitude and polaron frequency. The emitted electric field (8) is determined by \(P_{\rm tot}\) and the sample thickness \(d\) (\(\epsilon_{0}\): vacuum permittivity; \(c\): speed of light). Without generation of solvated electrons (\(N_{\rm el}=N_{\rm osc}=0\)), the THz response defined by (3) and (4) is strictly linear. The only nonlinearity considered here is the nonlinear polarizability of the solvated electrons described by the sinh term in (5) with a strength characterized by a critical electric field \(E_{\rm cr}\). Slight drifts of the experimental polaron frequency (cf. red time traces in Fig. 4), which originate from fluctuations in the density of photogenerated electrons, are taken into account in the calculations [26]. Results of the calculations are presented in Figs. 4(b) and (d) for \(T=1\) ps and \(T=75\) ps. In analogy to the experimental result [Figs. 4(a) and (c)], oscillatory electric field changes on the red wing of the THz emission spectrum \(E_{\rm em}(\nu_{\rm t},\tau)\) are plotted as a function of delay time \(\tau\). The experimental characteristics are best reproduced with \(E_{\rm cr}=100\) kV/cm, for which the oscillations show distinct phase and amplitude modulations in the time range of the external perturbation. In addition to changes of the momentary phase, the calculations suggest increased amplitudes during interaction with the perturbing THz pulse. 
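To make the structure of Eqs. (2)-(8) easier to experiment with, here is a minimal numerical sketch of ours (not from the paper). It integrates a reduced version of the model with a fixed electron density after the pump, replaces the prefactor \((N_{\rm el}+N_{\rm osc})e_{0}^{2}/m_{e}\) by a single scaled coupling constant, and uses placeholder parameter values and pulse shapes chosen for illustration only; the susceptibilities \(\chi_{\rm hf}\approx 0.9\) and \(\chi_{\rm lf}\approx 2.6\) are merely rough values consistent with the dielectric constants of IPA quoted above.

```python
import numpy as np

# --- placeholder parameters in scaled units (illustrative only) ---
chi_hf, chi_lf = 0.9, 2.6        # Clausius-Mossotti susceptibilities, cf. Eq. (2)
tau_R   = 30.0                   # Debye relaxation time (ps)
gamma   = 0.5                    # damping of the free-electron polarization (1/ps)
omega_p = 2 * np.pi * 3.9        # polaron angular frequency (rad/ps), plays the role of B(N_el)
E_cr    = 1.0                    # critical field of the sinh nonlinearity
g_free  = 0.05                   # stands in for (N_el + N_osc) e0^2 / m_e
eps0    = 1.0

def E_thz(t):
    """Perturbing single-cycle THz pulse centred at t = 0 (scaled units)."""
    return 0.3 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-(t / 0.5) ** 2)

def P_tot(P_debye, P_free, E):
    # Eq. (3) is algebraic: P_bound = eps0 chi_hf (E + P_tot / (3 eps0)).
    # Solving P_tot = P_bound + P_debye + P_free for P_tot gives:
    return (eps0 * chi_hf * E + P_debye + P_free) / (1.0 - chi_hf / 3.0)

def rhs(t, y):
    # state y = [P_debye, P_free, dP_free/dt, N_osc, dN_osc/dt]
    P_debye, P_free, V_free, N_osc, V_osc = y
    E = E_thz(t)
    E_loc = E + P_tot(P_debye, P_free, E) / (3.0 * eps0)                 # local field
    dP_debye = (-P_debye + eps0 * (chi_lf - chi_hf) * E_loc) / tau_R     # Eq. (4)
    dV_free = (-gamma * V_free
               - g_free * (1.0 + N_osc) * E_cr * np.sinh(E_loc / E_cr))  # Eq. (5), scaled
    dV_osc = -omega_p ** 2 * N_osc                                       # Eq. (7), no pump term
    return np.array([dP_debye, V_free, dV_free, V_osc, dV_osc])

# Fixed-step RK4; the polaron oscillation starts with a small amplitude,
# mimicking its impulsive excitation by the (not modelled) NIR pump.
dt, t = 0.002, -5.0
y = np.array([0.0, 0.0, 0.0, 0.1, 0.0])
emitted = []
while t < 10.0:
    k1 = rhs(t, y); k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2); k4 = rhs(t + dt, y + dt * k3)
    y_new = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    # Eq. (8): the emitted field is proportional to -dP_tot/dt.
    emitted.append((t, -(P_tot(y_new[0], y_new[1], E_thz(t + dt))
                         - P_tot(y[0], y[1], E_thz(t))) / dt))
    y, t = y_new, t + dt
print("samples of -dP_tot/dt:", emitted[::1500][:5])
```

Varying the assumed value of E_cr in such a sketch changes how strongly the oscillation phase is distorted while the perturbing pulse is present, which is the qualitative behaviour discussed above.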
With our method applied for data analysis [26], we have no direct access to amplitude variations. In conclusion, solvated electrons in a polar liquid show a polaronic nonlinear response induced by a nonresonant THz pulse. This novel effect is mapped via distinct phase modulations of coherent underdamped polaron oscillations at a high polaron frequency of 3.9 THz. Our findings underline the relevance of many-body excitations in polar molecular ensembles and represent a pathway of modifying their dielectric properties by external THz fields. This research has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Program (grant agreement 833365).
2307.10642
RetouchingFFHQ: A Large-scale Dataset for Fine-grained Face Retouching Detection
The widespread use of face retouching filters on short-video platforms has raised concerns about the authenticity of digital appearances and the impact of deceptive advertising. To address these issues, there is a pressing need to develop advanced face retouching techniques. However, the lack of large-scale and fine-grained face retouching datasets has been a major obstacle to progress in this field. In this paper, we introduce RetouchingFFHQ, a large-scale and fine-grained face retouching dataset that contains over half a million conditionally-retouched images. RetouchingFFHQ stands out from previous datasets due to its large scale, high quality, fine-grainedness, and customization. By including four typical types of face retouching operations and different retouching levels, we extend the binary face retouching detection into a fine-grained, multi-retouching type, and multi-retouching level estimation problem. Additionally, we propose a Multi-granularity Attention Module (MAM) as a plugin for CNN backbones for enhanced cross-scale representation learning. Extensive experiments using different baselines as well as our proposed method on RetouchingFFHQ show decent performance on face retouching detection. With the proposed new dataset, we believe there is great potential for future work to tackle the challenging problem of real-world fine-grained face retouching detection.
Qichao Ying, Jiaxin Liu, Sheng Li, Haisheng Xu, Zhenxing Qian, Xinpeng Zhang
2023-07-20T07:12:56Z
http://arxiv.org/abs/2307.10642v1
# RetouchingFFHQ: A Large-scale Dataset for Fine-grained Face Retouching Detection ###### Abstract The widespread use of face retouching filters on short-video platforms has raised concerns about the authenticity of digital appearances and the impact of deceptive advertising. To address these issues, there is a pressing need to develop advanced face retouching techniques. However, the lack of large-scale and fine-grained face retouching datasets has been a major obstacle to progress in this field. In this paper, we introduce RetouchingFFHQ, a large-scale and fine-grained face retouching dataset that contains over half a million conditionally-retouched images. RetouchingFFHQ stands out from previous datasets due to its large scale, high quality, fine-grainedness, and customization. By including four typical types of face retouching operations and different retouching levels, we extend the binary face retouching detection into a fine-grained, multi-retouching type, and multi-retouching level estimation problem. Additionally, we propose a Multi-granularity Attention Module (MAM) as a plugin for CNN backbones for enhanced cross-scale representation learning. Extensive experiments using different baselines as well as our proposed method on RetouchingFFHQ show decent performance on face retouching detection.
2303.13864
The generalized $4$-connectivity of bubble-sort graphs
For $S\subseteq V(G)$ with $|S|\ge 2$, let $\kappa_G (S)$ denote the maximum number of internally disjoint trees connecting $S$ in $G$. For $2\le k\le n$, the generalized $k$-connectivity $\kappa_k(G)$ of an $n$-vertex connected graph $G$ is defined to be $\kappa_k(G)=\min \{\kappa_G(S): S\in V(G) \mbox{ and } |S|=k\}$. The generalized $k$-connectivity can serve for measuring the fault tolerance of an interconnection network. The bubble-sort graph $B_n$ for $n\ge 2$ is a Cayley graph over the symmetric group of permutations on $[n]$ generated by transpositions from the set $\{[1,2],[2,3],\dots, [n-1,n]\}$. In this paper, we show that for the bubble-sort graphs $B_n$ with $n\ge 3$, $\kappa_4(B_n)=n-2$.
Leyou Xu, Bo Zhou
2023-03-24T08:59:44Z
http://arxiv.org/abs/2303.13864v1
# The generalized 4-connectivity of bubble-sort graphs ###### Abstract For \(S\subseteq V(G)\) with \(|S|\geq 2\), let \(\kappa_{G}(S)\) denote the maximum number of internally disjoint trees connecting \(S\) in \(G\). For \(2\leq k\leq n\), the generalized \(k\)-connectivity \(\kappa_{k}(G)\) of an \(n\)-vertex connected graph \(G\) is defined to be \(\kappa_{k}(G)=\min\{\kappa_{G}(S):S\in V(G)\text{ and }|S|=k\}\). The generalized \(k\)-connectivity can serve for measuring the fault tolerance of an interconnection network. The bubble-sort graph \(B_{n}\) for \(n\geq 2\) is a Cayley graph over the symmetric group of permutations on \([n]\) generated by transpositions from the set \(\{[1,2],[2,3],\ldots,[n-1,n]\}\). In this paper, we show that for the bubble-sort graphs \(B_{n}\) with \(n\geq 3\), \(\kappa_{4}(B_{n})=n-2\). **Keywords:** generalized 4-connectivity, internally disjoint trees, bubble-sort graphs, Cayley graphs ## 1 Introduction An interconnection network is usually modelled by its topological graph, a connected graph \(G\) with vertex set \(V(G)\) and edge set \(E(G)\), where vertices represent processors and edges represent communication links between processors. For an interconnection network, one mainly concerns about the reliability and fault tolerance, which usually can be measured by the traditional connectivity of its topological graph. The connectivity \(\kappa(G)\) of a graph \(G\) is defined to be the minimum cardinality of a subset \(S\in V(G)\) such that \(G-S\) is disconnected or trivial. A graph \(G\) is said to be \(k\)-connected if \(\kappa(G)\geq k\). For each 2-subset \(\{x,y\}\) of vertices of \(G\), let \(\kappa_{G}(x,y)\) denote the maximum number of internally vertex disjoint \((x,y)\)-paths in \(G\). A well-known theorem of Whitney [22] says that \(\kappa(G)=\min\{\kappa_{G}(x,y):\{x,y\}\subseteq V(G)\}\). For a set \(S\) of vertices in a connected graph \(G\) and trees \(T_{1},\ldots,T_{\ell}\) in \(G\), we say \(T_{1},\ldots,T_{\ell}\) are \(\ell\) internally edge disjoint trees connecting \(S\) in \(G\) if these trees are pairwise edge disjoint and \(V(T_{i})\cap V(T_{j})=S\) for every pair \(i,j\) of distinct integers with \(1\leq i,j\leq\ell\). Chartrand et al. [3] and Hager [8] proposed the concept of the generalized \(k\)-connectivity of an \(n\)-vertex graph \(G\) for \(k=2,\ldots,n\), see also [9, 4]. For any set \(S\) of vertices of \(G\) with \(|S|\geq 2\), the generalized connectivity of \(S\), written as \(\kappa_{G}(S)\), is the maximum number of internally disjoint trees connecting \(S\) in \(G\). For \(2\leq k\leq|V(G)|\), the generalized \(k\)-connectivity (or \(k\)-tree connectivity) of \(G\), \(\kappa_{k}(G)\), is the minimum value for \(\kappa_{G}(S)\) over all subsets \(S\) of vertices with \(|S|=k\). Note that \(\kappa_{2}(G)\) is the connectivity of \(G\), and \(\kappa_{n}(G)\) is the maximum number of edge disjoint spanning trees contained in \(G\)[19, 21] (or the spanning tree packing number of \(G\)[20]). The generalized \(k\)-connectivity has been used to measure the capability of a network to connect any \(k\) vertices. Cayley graphs have been used extensively to design interconnection networks. The Cayley graph \(\operatorname{Cay}(X,S)\), where \(X\) is a group with identity \(e\), \(e\not\in S\subseteq X\) and \(S\) is closed under inversion, is the graph with vertex set \(X\), such that \(g\) and \(h\) for \(g,h\in X\) are adjacent if and only if \(h=gs\) for some \(s\in S\). 
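To make the definition concrete, here is a small Python sketch of ours (purely illustrative, not part of the paper) that builds \(\operatorname{Cay}(\operatorname{Sym}(n),\mathcal{T})\) for the adjacent-transposition set \(\mathcal{T}=\{[1,2],[2,3],\ldots,[n-1,n]\}\), i.e. the bubble-sort graph \(B_{n}\) discussed below, and checks that it has \(n!\) vertices, is \((n-1)\)-regular and is connected.

```python
from itertools import permutations

def bubble_sort_graph(n):
    """Adjacency lists of B_n = Cay(Sym(n), {[i, i+1] : 1 <= i <= n-1}).

    Vertices are permutations (p_1, ..., p_n); right-multiplying by the
    transposition [i, i+1] swaps the entries at positions i and i+1, so two
    permutations are adjacent iff they differ by one adjacent swap.
    """
    adj = {}
    for p in permutations(range(1, n + 1)):
        nbrs = []
        for i in range(n - 1):
            q = list(p)
            q[i], q[i + 1] = q[i + 1], q[i]   # apply [i+1, i+2] in 1-indexed notation
            nbrs.append(tuple(q))
        adj[p] = nbrs
    return adj

def is_connected(adj):
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for q in adj[stack.pop()]:
            if q not in seen:
                seen.add(q)
                stack.append(q)
    return len(seen) == len(adj)

for n in (2, 3, 4):
    B = bubble_sort_graph(n)
    degrees = {len(nbrs) for nbrs in B.values()}
    # expected: n! vertices, every degree equal to n-1, connected (B_3 is a 6-cycle)
    print(n, len(B), degrees, is_connected(B))
```

These checks mirror the basic facts about \(B_{n}\) recalled in the next paragraph.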
Denote \(\operatorname{Sym}(n)\) the symmetric group (i.e., the group of all permutations) on \([n]=\{1,\ldots,n\}\). For convenience, we use \((p_{1},\ldots,p_{n})\) to denote the permutation \(\sigma\) such that \(\sigma(i)=p_{i}\) for \(i\in[n]\), and \([i,j]\) with \(1\leq i<j\leq n\) to denote the permutation \((1,\ldots,i-1,j,i+1,\ldots,j-1,i,j+1,\ldots,n)\), which is called a transposition. The composition \(\sigma\pi\) of permutations \(\sigma\) and \(\pi\) is the function that maps any element \(i\in[n]\) to \(\sigma(\pi(i))\). Thus \[(p_{1},\ldots,p_{i},\ldots,p_{j},\ldots,p_{n})[i,j]=(p_{1},\ldots,p_{j}, \ldots,p_{i},\ldots,p_{n}),\] which swaps the objects at positions \(i\) and \(j\). Let \(\mathcal{T}\) be a set of transpositions from \([n]\). The (transposition generating) graph of \(\mathcal{T}\), denoted by \(G_{\mathcal{T}}\), is the graph with vertex set \([n]\) such that, for \(i,j\in[n]\), vertices \(i\) and \(j\) are adjacent if and only if \([i,j]\in\mathcal{T}\). It is known that the Cayley graph \(\operatorname{Cay}(\operatorname{Sym}(n),\mathcal{T})\) is connected if and only if \(G_{\mathcal{T}}\) is connected. If \(G_{\mathcal{T}}\) is the star, then \(\operatorname{Cay}(\operatorname{Sym}(n),\mathcal{T})\) is called a star graph, denoted by \(S_{n}\). If \(G_{\mathcal{T}}\) is the path, then \(\operatorname{Cay}(\operatorname{Sym}(n),\mathcal{T})\) is called a bubble-sort graph, denoted by \(B_{n}\). Observe that \(B_{2}\) is the 2-vertex complete graph and \(B_{3}\) is the 6-vertex cycle. Generally, \(B_{n}\) is an \(n!\)-vertex bipartite, vertex transitive and regular graph of degree \(n-1\). The generalized connectivity has been studied extensively, see the recent book [17]. There has been lots of results on the generalized 3-connectivity for various classes of graphs, see, e.g., [1, 7, 14, 15, 16, 24, 26]. For example, Li et al. [15] showed that \(\kappa_{3}(S_{n})=\kappa_{3}(B_{n})=n-2\) for \(n\geq 3\). The generalized 4-connectivity has also received attention, see [10, 25, 27, 18]. Li et al. [10] showed that \(\kappa_{4}(S_{n})=n-2\) for \(n\geq 3\). More closely related results may be found, see, e.g., [11, 5, 12]. In this paper, we will determine the generalized 4-connectivity of the bubble-sort graph \(B_{n}\). We show the following result. **Theorem 1.1**.: _For \(n\geq 3\), \(\kappa_{4}(B_{n})=n-2\)._ ## 2 Preliminaries For \(v\in V(G)\), denote by \(N_{G}(v)\) the set of neighbors of \(v\) in \(G\), \(\delta_{G}(v)=|N_{G}(v)|\) and \(N_{G}[v]=N_{G}(v)\cup\{v\}\). For a subset \(S\subseteq V(G)\), denote by \(G[S]\) the subgraph of \(G\) induced by \(S\). For \(x,y\in V(G)\), a path joining \(x\) and \(y\) in \(G\) is called an \((x,y)\)-path. For \(X,Y\subset V(G)\), an \((X,Y)\)-path is a path joining \(x\) and \(y\) in \(G\) for some \(x\in X\) and some \(y\in Y\), and any other vertex of the path (if any exists) are not in \(X\cup Y\). We write \((x,Y)\)-path instead of \((\{x\},Y)\)-path. **Lemma 2.1**.: _[_2_]_ _Let \(G\) be a \(k\)-connected graph, and let \(X,Y\subset V(G)\) with \(|X|,|Y|\geq k\). Then there are \(k\) pairwise vertex disjoint \((X,Y)\)-paths in \(G\)._ **Lemma 2.2**.: _[_2_]_ _Let \(G\) be a \(k\)-connected graph, and let \(x\in V(G)\) and \(Y\subset V(G)\setminus\{x\}\) with \(|Y|\geq k\). Then there are \(k\) internally vertex disjoint \((x,Y)\)-paths such that \(x\) is the only common terminal vertex._ The following lemma tells us an upper bound on \(\kappa_{k}(G)\) for a graph \(G\). 
**Lemma 2.3**.: _[_13_]_ _Let \(G\) be a connected graph with minimum degree \(\delta\). Then \(\kappa_{k}(G)\leq\delta\) for \(3\leq k\leq|V(G)|\). Furthermore, if there exist two adjacent vertices of degree \(\delta\) in \(G\), then \(\kappa_{k}(G)\leq\delta-1\)._ **Lemma 2.4**.: _[_6_]_ \(\kappa(B_{n})=n-1\) for \(n\geq 2\)._ **Lemma 2.5**.: _[_15_]_ \(\kappa_{3}(B_{n})=n-2\) for \(n\geq 3\)._ As we consider the bubble-sort graph \(B_{n}\), we may suppose without loss of generality that \(\mathcal{T}=\{[i,i+1]:i\in[n-1]\}\). Then \(E(G_{\mathcal{T}})=\{i(i+1):i\in[n-1]\}\). For \(i\in[n]\), let \(\mathrm{Sym}_{i}(n)\) denote the set of all permutations of \([n]\setminus\{i\}\). For \(\sigma=(p_{1},\ldots,p_{n-1})\in\mathrm{Sym}_{i}(n)\), we have \(\sigma(j)=p_{j}\) for \(j<i\) and \(\sigma(j)=p_{j-1}\) for \(j>i\). Let \[V_{i}=\{(p_{1},\ldots,p_{n-1},i):(p_{1},\ldots,p_{n-1})\in\mathrm{Sym}_{i}(n)\}\] and \(B_{n-1}^{i}=B_{n}[V_{i}]\) for \(i\in[n]\). Then \(V(B_{n})\) can be partitioned into \(V_{1},\ldots,V_{n}\) and \(B_{n-1}^{i}\cong B_{n-1}\) for \(i\in[n]\). We call \(B_{n-1}^{1},\ldots,B_{n-1}^{n}\) the main parts of \(B_{n}\). If \(u=(p_{1},\ldots,p_{n-1},k)\in V_{k}\), then \(u\) is in the main part \(B_{n-1}^{k}\). Let \(u_{i}=u[i,i+1]\) for \(i\in[n-1]\). Then \(N_{B_{n}}(u)=\{u_{i}:i\in[n-1]\}\) with \(u_{1},\ldots,u_{n-2}\in V_{k}\) and \(u_{n-1}\in V_{p_{n-1}}\). Note that \(u_{n-1}\) is the unique neighbor of \(u\) outside \(B_{n-1}^{k}\), which we call the out-neighbor of \(u\), written as \(u^{\prime}\) throughout this paper. The other \(n-2\) neighbors of \(u\) are called the in-neighbors of \(u\). The out-neighbor of \(u_{i}\) is \(u_{i}^{\prime}=u_{i}[n-1,n]\) for \(i\in[n-1]\). Then \(u_{i}^{\prime}\in V_{p_{n-1}}\) for \(i\in[n-3]\) and \(u_{n-2}^{\prime}\in V_{p_{n-2}}\). Note that \(u_{n-1}^{\prime}=u\). It can be verified that any two distinct vertices have different out-neighbors and \(|((\cup_{u\in V_{i}}N_{B_{n}}(u))\setminus V_{i})\cap V_{j}|=(n-2)!\) for \(i,j\in[n]\) with \(i\neq j\), see [6]. For \(\{i,j\}\subset[n]\) with \(n\geq 3\), it is shown in [15] that \[\kappa(B_{n}[V_{i}\cup V_{j}])=n-2.\] By the proof in [15], there are \(n-2\) internally vertex disjoint paths between any two vertices in \(B_{n}[V_{i}\cup V_{j}]\). So we have the following result. **Lemma 2.6**.: _Let \(B_{n-1}^{1},\ldots,B_{n-1}^{n}\) be the main parts of \(B_{n}\), where \(n\geq 3\). For any \(\emptyset\neq I\subset[n]\),_ \[\kappa(B_{n}[\cup_{i\in I}V_{i}])=n-2.\] Suppose that \(T_{1},\ldots,T_{s}\) are \(s\geq 2\) trees such that \(|V(T_{i})\cap V(T_{j})|\leq 1\) for any \(i,j\) with \(1\leq i<j\leq s\). If the graph with vertex set \(\cup_{i=1}^{s}V(T_{i})\) and edge set \(\cup_{i=1}^{s}E(T_{i})\) is connected and contains no cycle, then it is a tree, which we denote by \(T_{1}+\cdots+T_{s}\). Some of the \(T_{i}\) may be paths. Fix \(i\in[n]\). For \(j\in[n]\setminus\{i\}\), let \[V_{j}^{i}=\{(p_{1},\ldots,p_{n-2},j,i):(p_{1},\ldots,p_{n-2})\in\operatorname {Sym}_{i,j}(n)\},\] where \(\operatorname{Sym}_{i,j}(n)\) denotes the set of permutations of \([n]\setminus\{i,j\}\). Denote the induced subgraph \(B_{n}[V_{j}^{i}]\) by \(B_{n-2}^{(i,j)}\).
Then it suffices to show that there are \(n-2\) internally edge disjoint trees connecting \(S\) in \(B_{n}\). We prove this statement by induction on \(n\). If \(n=3\), then \(B_{3}\) is connected, so there is a tree containing the vertices of \(S\), and the statement is true. Suppose that \(n\geq 4\) and the statement is true for \(B_{n-1}\). Recall that \(B_{n-1}^{1},\ldots,B_{n-1}^{n}\) are the main parts of \(B_{n}\). We consider the following five cases separately in subsections 3.1-3.5: * **Case 1.** The four vertices of \(S\) lie in a main part of \(B_{n}\); * **Case 2.** Two vertices of \(S\) lie in a main part and the other two vertices of \(S\) lie in another main part of \(B_{n}\); * **Case 3.** The four vertices of \(S\) lie in three different main parts of \(B_{n}\); * **Case 4.** The four vertices of \(S\) lie in four different main parts of \(B_{n}\); * **Case 5.** Three vertices of \(S\) lie in a main part and the remaining one lies in another main part of \(B_{n}\). ### Case 1 Assume that \(x,y,z,w\) are in \(B_{n-1}^{1}\). Note that \(B_{n-1}^{1}\cong B_{n-1}\). By the induction hypothesis, there are \(n-3\) internally edge disjoint trees \(T_{1},\ldots,T_{n-3}\) connecting \(S\) in the main part \(B_{n-1}^{1}\) of \(B_{n}\). By Lemma 2.6, \(B_{n}[V(B_{n})\setminus V_{1}]\) is connected, so there is a spanning tree \(T\) in \(B_{n}[V(B_{n})\setminus V_{1}]\). Note that \(x^{\prime},y^{\prime},z^{\prime},w^{\prime}\) are four distinct vertices in \(B_{n}[V(B_{n})\setminus V_{1}]\). So \(T_{n-2}=T+xx^{\prime}+yy^{\prime}+zz^{\prime}+ww^{\prime}\) is a tree containing vertices in \(S\) and \(V(T_{n-2})\cap V_{1}=S\). It thus follows that \(T_{1},\ldots,T_{n-2}\) are \(n-2\) internally edge disjoint trees connecting \(S\) in \(B_{n}\).
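The structural facts about the main parts that are used in Case 1 (and in Lemma 2.6) can be checked mechanically for small \(n\). The following sketch is ours and reuses `bubble_sort_graph` and `is_connected` from the earlier snippet; for \(n=4\) it verifies that each \(V_{i}\) induces an \((n-2)\)-regular graph on \((n-1)!\) vertices, that distinct vertices have distinct out-neighbors, that exactly \((n-2)!\) edges join \(V_{i}\) to \(V_{j}\) for \(i\neq j\), and that \(B_{n}[V(B_{n})\setminus V_{1}]\) is connected.

```python
from math import factorial

n = 4
adj = bubble_sort_graph(n)            # helpers from the previous sketch
parts = {i: {v for v in adj if v[-1] == i} for i in range(1, n + 1)}

# each main part has (n-1)! vertices and is (n-2)-regular inside,
# i.e. it induces a copy of B_{n-1}
for Vi in parts.values():
    assert len(Vi) == factorial(n - 1)
    assert all(sum(w in Vi for w in adj[v]) == n - 2 for v in Vi)

# the out-neighbor u' = u[n-1,n] swaps the last two positions;
# distinct vertices have distinct out-neighbors
out = {v: v[:n - 2] + (v[n - 1], v[n - 2]) for v in adj}
assert all(out[v] in adj[v] for v in adj)
assert len(set(out.values())) == len(adj)

# exactly (n-2)! edges leave V_i towards V_j for i != j
for i in parts:
    for j in parts:
        if i != j:
            cross = sum(1 for v in parts[i] for w in adj[v] if w in parts[j])
            assert cross == factorial(n - 2)

# B_n minus one main part stays connected (the spanning tree T of Case 1)
rest = {v: {w for w in adj[v] if w not in parts[1]}
        for v in adj if v not in parts[1]}
print(is_connected(rest))             # expected: True
```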
If \(s=n-2\), let \[T_{i}=L_{i}+x_{i}x^{\prime}_{i}+R_{i}+z^{\prime}_{i}z_{i}+Q_{i}\mbox{ for }i\in[n-3]\] and \[T_{n-2}=L_{n-2}+xx^{\prime}+R_{n-2}+z^{\prime}z+Q_{n-2}.\] Otherwise, let \[T_{i}=L_{i}+x_{i}x^{\prime}_{i}+R_{i}+z^{\prime}_{i}z_{i}+Q_{i}\mbox{ for }i\in[n-3] \setminus\{s\},\] \[T_{s}=L_{s}+x_{s}x^{\prime}_{s}+R_{s}+z^{\prime}z+Q_{n-2}\] and \[T_{n-2}=L_{n-2}+xx^{\prime}+R_{n-2}+z^{\prime}_{s}z_{s}+Q_{s}.\] Then it is easy to see that \(T_{1},\ldots,T_{n-2}\) are \(n-2\) internally edge disjoint trees connecting \(S\). **Case 2.2.**\(x^{\prime}\) and \(y^{\prime}\) are both in \(B^{2}_{n-1}\) and one of \(z^{\prime}\) and \(w^{\prime}\) is not in \(B^{1}_{n-1}\), or \(z^{\prime}\) and \(w^{\prime}\) are both in \(B^{1}_{n-1}\) and one of \(x^{\prime}\) and \(y^{\prime}\) is not in \(B^{2}_{n-1}\). Assume that \(x^{\prime}\) and \(y^{\prime}\) are both in \(B^{2}_{n-1}\) and one of \(z^{\prime}\) and \(w^{\prime}\), say \(z^{\prime}\), is not in \(B^{1}_{n-1}\). Suppose that \(n=4\). Then \(x\) and \(y\) are adjacent. If \(w^{\prime}\in V_{1}\), then \(w^{\prime}=x\) or \(w^{\prime}=y\), say \(w^{\prime}=y\). Let \(x_{1}=x[2,3]\) and \(y_{1}=y[2,3]\). Then \(x^{\prime}_{1},y^{\prime}_{1},z^{\prime}\in V_{3}\cup V_{4}\). As \(B_{4}[V_{3}\cup V_{4}]\) is connected, there is a tree \(T_{1}\) containing \(x^{\prime}_{1},y^{\prime}_{1},z^{\prime}\). Let \[T^{*}_{1}=xx_{1}+x_{1}x^{\prime}_{1}+yy_{1}+y_{1}y^{\prime}_{1}+T_{1}+z^{ \prime}z+Q_{1}\] and \[T_{2}=xy+yw+Q_{2}.\] Then \(T_{1}^{*}\) and \(T_{2}\) are two internally edge disjoint trees connecting \(S\). Otherwise, \(w^{\prime}\notin V_{1}\). Since \(x^{\prime},y^{\prime}\in V_{2}\), there is a tree \(F_{1}\) in \(B_{3}^{2}\) containing \(x^{\prime},y^{\prime},z,w\). Similarly, there is a tree \(F_{2}\) in \(B_{4}[V_{1}\cup V_{3}\cup V_{4}]\) containing \(\{x,y,z^{\prime},w^{\prime}\}\). Then \(F_{1}^{*}=F_{1}+x^{\prime}x+y^{\prime}y\) and \(F_{2}^{*}=F_{2}+w^{\prime}w+z^{\prime}z\) are two internally edge disjoint trees connecting \(S\). Suppose that \(n\geq 5\). For \(j=2,\ldots,n\), let \[V_{j}^{1}=\{(p_{1},\ldots,p_{n-2},j,1):(p_{1},\ldots,p_{n-2})\in\mbox{Sym}_{1, j}(n)\},\] where \(\mbox{Sym}_{1,j}(n)\) denotes the set of permutations of \([n]\backslash\{1,j\}\). Denote the induced subgraph \(B_{n}[V_{j}^{1}]\) by \(B_{n-2}^{(j)}\). Since \(B_{n-2}^{(1,j)}\cong B_{n-2}\) and \(B_{n-1}^{1}\cong B_{n-1}\), we view \(B_{n-2}^{(1,2)},\ldots,B_{n-2}^{(1,n)}\) as the main parts of \(B_{n-1}^{1}\). Then \(x\) and \(y\) are in \(B_{n-2}^{(1,2)}\). By Lemma 2.4, \(\kappa(B_{n-2}^{(1,2)})=n-3\), so there exist \(n-3\) internally disjoint \((x,y)\)-paths \(L_{1},L_{2},\ldots,L_{n-3}\) in \(B_{n-2}^{(1,2)}\). Note that there are \(n-3\) vertices adjacent to \(x\) in \(B_{n-2}^{(1,2)}\). Then each \(L_{i}\) contains exactly one vertex in \(N_{B_{n-2}^{(1,2)}}(x)\), which we denote by \(x_{i}\), where \(i\in[n-3]\). Assume that \(z_{n-2}\) is the vertex whose out-neighbor is not in the same main part as \(z^{\prime}\). Let \(x=(p_{1},\ldots,p_{n-2},2,1)\) and \(y=(r_{1},\ldots,r_{n-2},2,1)\). Let \(x_{n-2}=x[n-2,n-1]\), \(x_{n-2,1}=x_{n-2}[n-3,n-2]\), \(x_{n-2,2}=x_{n-2,1}[n-4,n-3]\), \(x_{n-2,3}=x_{n-2,2}[n-3,n-2]\) and \(\widehat{x}_{n-2}=x_{n-2,3}[n-2,n-1]\). 
That is, \[x_{n-2}=(p_{1},\ldots,p_{n-4},p_{n-3},2,p_{n-2},1),\] \[x_{n-2,1}=(p_{1},\ldots,p_{n-4},2,p_{n-3},p_{n-2},1),\] \[x_{n-2,2}=(p_{1},\ldots,2,p_{n-4},p_{n-3},p_{n-2},1),\] \[x_{n-2,3}=(p_{1},\ldots,2,p_{n-3},p_{n-4},p_{n-2},1)\] and \[\widehat{x}_{n-2}=(p_{1},\ldots,2,p_{n-3},p_{n-2},p_{n-4},1).\] Let \[P_{x}=xx_{n-2}x_{n-2,1}x_{n-2,2}x_{n-2,3}\widehat{x}_{n-2}.\] There are three possibilities: (i) If \(\{r_{n-3},r_{n-2}\}=\{p_{n-3},p_{n-2}\}\), then set \(y_{n-2}=y[n-2,n-1]\), \(y_{n-2,1}=y_{n-2}[n-3,n-2]\), \(y_{n-2,2}=y_{n-2,1}[n-4,n-3]\), \(y_{n-2,3}=y_{n-2,2}[n-3,n-2]\), \(\widehat{y}_{n-2}=y_{n-2,3}[n-2,n-1]\) and \(P_{y}=yy_{n-2}y_{n-2,1}y_{n-2,2}y_{n-2,3}\widehat{y}_{n-2}\). (ii) If \(r_{n-2}\in\{p_{n-3},p_{n-2}\}\) and \(r_{n-3}\notin\{p_{n-3},p_{n-2}\}\), then set \(y_{n-2}=y[n-2,n-1]\), \(y_{n-2,1}=y_{n-2}[n-3,n-2]\), \(\widehat{y}_{n-2}=y_{n-2,1}[n-2,n-1]\) and \(P_{y}=yy_{n-2}y_{n-2,1}\widehat{y}_{n-2}\). (iii) Otherwise, set \(\widehat{y}_{n-2}=y[n-2,n-1]\) and \(P_{y}=y\widehat{y}_{n-2}\). As \(x\neq y\), we have \(V(P_{x})\cap V(P_{y})=\emptyset\). Corresponding to (i)-(iii), we have by Lemma 2.6 that \(B_{n-1}^{1}[V_{p_{n-4}}^{1}\cup V_{r_{n-4}}^{1}]\), \(B_{n-1}^{1}[V_{p_{n-4}}^{1}\cup V_{r_{n-3}}^{1}]\) or \(B_{n-1}^{1}[V_{p_{n-4}}^{1}\cup V_{r_{n-2}}^{1}]\), respectively, is connected, so there is a \((\widehat{x}_{n-2},\widehat{y}_{n-2})\)-path \(P_{xy}\) in the corresponding subgraph. Let \[L_{n-2}=P_{x}+P_{xy}+P_{y}.\] Since \(V(L_{n-2})\cap V_{2}^{1}=\{x,y\}\), we have \(n-2\) internally disjoint \((x,y)\)-paths in \(B_{n-1}^{1}\). **Case 2.2.1**.: \(x,y\) are not adjacent. Let \(\widehat{x}_{i}=x_{i}[n-2,n-1]\) for \(i\in[n-3]\). Then \(|\{\widehat{x}_{i}^{\prime}:i\in[n-3]\}\cap V_{p_{n-2}}|=n-4\) and \(|\{\widehat{x}_{i}^{\prime}:i\in[n-3]\}\cap V_{p_{n-3}}|=1\). Since \(x_{i}\in N_{B_{n}}(x)\) for \(1\leq i\leq n-3\), we have \(\widehat{x}_{i}\neq\widehat{x}_{j}\) if \(i\neq j\). Note that \(xy\notin E(B_{n})\). By comparing the position of '2' in the permutations corresponding to the vertices on \(P_{x}\) and to \(\widehat{x}_{i}\) for \(i\in[n-3]\), we have \(V(P_{x})\cap\{\widehat{x}_{i}:i\in[n-3]\}=\emptyset\). Similarly, \(V(P_{y})\cap\{\widehat{x}_{i}:i\in[n-3]\}=\emptyset\). Let \(X=\{\widehat{x}_{i}^{\prime}:i\in[n-2]\}\) and \(Z=\{z_{i}^{\prime}:i\in[n-3]\}\cup\{z^{\prime}\}\). Note that \(X\subseteq\cup_{i=3}^{n}V_{i}\) and \(Z\subseteq\cup_{i=3}^{n}V_{i}\). By Lemmas 2.1 and 2.6, there are \(n-2\) disjoint \((X,Z)\)-paths \(R_{1},\ldots,R_{n-2}\) in \(B_{n}[\cup_{i=3}^{n}V_{i}]\). Assume that \(z^{\prime}\in V(R_{n-2})\), \(z_{i}^{\prime}\in V(R_{i})\) for \(i\in[n-3]\), \(\widehat{x}_{n-2}^{\prime}\in V(R_{s})\) for some \(s\in[n-2]\), \(\widehat{x}_{i}^{\prime}\in V(R_{i})\) for \(i\in[n-3]\setminus\{s\}\) and \(\widehat{x}_{s}^{\prime}\in V(R_{n-2})\).
If \(s=n-2\), let \[T_{i}=L_{i}+x_{i}\widehat{x}_{i}+\widehat{x}_{i}\widehat{x}_{i}^{\prime}+R_{i }+z_{i}^{\prime}z_{i}+Q_{i}\text{ for }i\in[n-3]\] and \[T_{n-2}=L_{n-2}+\widehat{x}_{n-2}\widehat{x}_{n-2}^{\prime}+R_{n-2}+z^{\prime }z+Q_{n-2}.\] Otherwise, let \[T_{i}=L_{i}+x_{i}\widehat{x}_{i}+\widehat{x}_{i}\widehat{x}_{i}^{\prime}+R_{i }+z_{i}^{\prime}z_{i}+Q_{i}\text{ for }i\in[n-3]\setminus\{s\},\] \[T_{s}=L_{n-2}+\widehat{x}_{n-2}\widehat{x}_{n-2}^{\prime}+R_{s}+z_{s}^{\prime }z_{s}+Q_{s}\] and \[T_{n-2}=L_{s}+x_{s}\widehat{x}_{s}+\widehat{x}_{s}\widehat{x}_{s}^{\prime}+R_{ n-2}+z^{\prime}z+Q_{n-2}.\] Then \(T_{1},\ldots,T_{n-2}\) are \(n-2\) internally edge disjoint trees connecting \(S\) in \(B_{n}\). **Case 2.2.2**.: \(x,y\) are adjacent. Assume that \(L_{1}=xy\). Let \(\widehat{x}_{i}=x_{i}[n-2,n-1]\) for \(i=2,\ldots,n-3\). By similar argument as in Case 2.2.1, we have \(V(P_{x})\cap\{\widehat{x}_{i}:i=2,\ldots,n-3\}=\emptyset\) and \(V(P_{y})\cap\{\widehat{x}_{i}:i=2,\ldots,n-3\}=\emptyset\). Suppose that \(N_{B_{n-1}^{2}}[x^{\prime}]\cap(\cup_{i=1}^{n-2}V(Q_{i}))=\emptyset\). Let \(\widehat{x}_{1}=x^{\prime}[n-2,n-1]\). Let \(X\) and \(Z\) be defined the same as that in Case 2.2.1. Then there are \(n-2\) internally vertex disjoint \((X,Z)\)-paths \(R_{i}\) in \(B_{n}[\cup_{i=3}^{n}V_{i}]\) for \(i\in[n-2]\). If \(s\neq 1\), let \(T_{i}\) be defined as in Case 2.2.1 for \(i=2,\ldots,n-2\), and let \[T_{1}=xy+xx^{\prime}+x^{\prime}\widehat{x}_{1}+\widehat{x}_{1}\widehat{x}_{1}^{ \prime}+R_{1}+z_{1}^{\prime}z_{1}+Q_{1}.\] Otherwise, let \(T_{i}\) be defined as in Case 2.2.1 for \(i=2,\ldots,n-3\), \[T_{1}=L_{n-2}+\widehat{x}_{n-2}^{\prime}\widehat{x}_{n-2}^{\prime}+R_{1}+z_{1} ^{\prime}z_{1}+Q_{1}\] and \[T_{n-2}=xy+xx^{\prime}+x^{\prime}\widehat{x}_{1}+\widehat{x}_{1}\widehat{x}_{1 }^{\prime}+R_{n-2}+z^{\prime}z+Q_{n-2}.\] In either case, there are \(n-2\) internally edge disjoint trees connecting \(S\). Otherwise, assume that \(\widehat{x}_{1}\in N_{B_{n-1}^{2}}[x^{\prime}]\cap V(Q_{\ell})\) for some \(\ell\in[n-2]\). So \[T_{1}=\begin{cases}Q_{\ell}+x^{\prime}\widehat{x}_{1}+xx^{\prime}+xy&\text{ if } \widehat{x}_{1}\neq x^{\prime}\\ Q_{\ell}+xx^{\prime}+xy&\text{ otherwise}\end{cases}\] is a tree containing vertices in \(S\). Let \(X=\{\widehat{x}_{i}^{\prime}:i=2,\ldots,n-2\}\), \(Z=\{z_{i}^{\prime}:i\in[n-3]\}\) if \(\ell=n-2\) and \(Z=\{z_{i}^{\prime}:i\in[n-3]\setminus\{\ell\}\}\cup\{z^{\prime}\}\) otherwise. By Lemmas 2.1 and 2.6, there are \(n-3\) internally vertex disjoint \((X,Z)\)-paths \(R_{1},\ldots,R_{n-3}\) in \(B_{n}[\cup_{i=3}^{n}V_{i}]\). Assume that \(\widehat{x}_{i+1}^{\prime},z_{i}^{\prime}\in V(R_{i})\) for \(i\in[n-3]\) if \(\ell=n-2\). Let \[T_{i}=L_{i}+x_{i}\widehat{x}_{i}+\widehat{x}_{i}\widehat{x}_{i}^{\prime}+R_{i -1}+z_{i-1}^{\prime}z_{i-1}+Q_{i-1}\mbox{ for }i=2,\ldots,n-3,\] and \[T_{n-2}=L_{n-2}+\widehat{x}_{n-2}\widehat{x}_{n-2}^{\prime}+R_{n-3}+z_{n-3}^{ \prime}z_{n-3}+Q_{n-3}.\] Otherwise, we may suppose without loss of generality that \(\ell=1\). Assume that \(z^{\prime}\in V(R_{n-3})\), \(z_{i}^{\prime}\in V(R_{i-1})\) for \(i=2,\ldots,n-3\), \(\widehat{x}_{s}^{\prime}\in V(R_{n-3})\), \(\widehat{x}_{n-2}^{\prime}\in V(R_{s-1})\) and \(\widehat{x}_{i}^{\prime}\in V(R_{i-1})\) for \(i\in[n-3]\setminus\{1,s\}\). 
If \(s=n-2\), let \[T_{i}=L_{i}+x_{i}\widehat{x}_{i}+\widehat{x}_{i}\widehat{x}_{i}^{\prime}+R_{i -1}+z_{i}^{\prime}z_{i}+Q_{i}\mbox{ for }i=2,\ldots,n-3,\] and \[T_{n-2}=L_{n-2}+\widehat{x}_{n-2}\widehat{x}_{n-2}^{\prime}+R_{n-3}+z^{\prime }z+Q_{n-2}.\] Otherwise, let \[T_{i}=L_{i}+x_{i}\widehat{x}_{i}+\widehat{x}_{i}\widehat{x}_{i}^{\prime}+R_{i -1}+z_{i}^{\prime}z_{i}+Q_{i}\mbox{ for }i\in[n-3]\setminus\{1,s\},\] \[T_{s}=L_{n-2}+\widehat{x}_{n-2}\widehat{x}_{n-2}^{\prime}+R_{s-1}+z_{s}^{\prime }z_{s}+Q_{s}\] and \[T_{n-2}=L_{s}+x_{s}\widehat{x}_{s}+\widehat{x}_{s}\widehat{x}_{s}^{\prime}+R_{ n-3}+z^{\prime}z+Q_{n-2}.\] Then \(T_{1},\ldots,T_{n-2}\) are \(n-2\) internally edge disjoint trees connecting \(S\) in \(B_{n}\). **Case 2.3.** Both \(x^{\prime},y^{\prime}\) are in \(B_{n-1}^{2}\) and \(z^{\prime},w^{\prime}\) are in \(B_{n-1}^{1}\). If \(n=4\), then \(B_{4}[S]\) is a cycle of length four with edges \(xy,zw,xz,yw\). Let \(x_{1}=x[2,3]\), \(y_{1}=y[2,3]\) and \(z_{1}=z[2,3]\). Then \(x_{1}^{\prime},y_{1}^{\prime},z_{1}^{\prime}\in V_{3}\cup V_{4}\). So there is a tree \(T_{1}^{\prime}\) connecting \(x_{1}^{\prime},y_{1}^{\prime},z_{1}^{\prime}\). Then \[T_{1}=xx_{1}+x_{1}x_{1}^{\prime}+yy_{1}+y_{1}y_{1}^{\prime}+T_{1}^{\prime}+z_{ 1}^{\prime}z_{1}+z_{1}z+zw\] and \[T_{2}=zx+xy+yw\] are two internally edge disjoint trees connecting \(S\). For \(n\geq 5\), by the same way as in Case 2.2, we may construct \(n-2\) internally vertex disjoint \((x,y)\)-paths in \(B_{n-1}^{1}\), and \(n-2\) internally vertex disjoint \((z,w)\)-paths in \(B_{n-1}^{2}\), and so we may obtain \(n-2\) internally edge disjoint trees connecting \(S\). ### Case 3 Assume that \(x,y\in V_{1}\), \(z\in V_{2}\) and \(w\in V_{3}\). Let \[x=(p_{1},\ldots,p_{n-1},1)\mbox{ and }y=(r_{1},\ldots,r_{n-1},1).\] Then \(x^{\prime}\in V_{p_{n-1}}\) and \(y^{\prime}\in V_{r_{n-1}}\). By considering whether the out-neighbors of \(x\) and \(y\) are in the same main part of \(B_{n}\), we discuss the following two cases. **Case 3.1.**\(x^{\prime}\) and \(y^{\prime}\) are in the different main parts, i.e., \(p_{n-1}\neq r_{n-1}\). Since \(\kappa(B_{n-1}^{1})=n-2\), there are \(n-2\) internally vertex disjoint \((x,y)\)-paths \(L_{1},\ldots,L_{n-2}\) in \(B_{n-1}^{1}\). Let \(\widehat{x}=x[n-2,n-1]\) and \(\widehat{y}=y[n-2,n-1]\). Note that each \(L_{i}\) contains exactly one vertex in \(N_{B_{n-1}^{1}}(x)\) and exactly one vertex in \(N_{B_{n-1}^{1}}(y)\) for \(i\in[n-2]\). Assume that \(\widehat{x}\in V(L_{n-2})\) and \(\widehat{y}\in V(L_{s})\) for some \(s\in[n-2]\). Assume that \(V(L_{i})\cap N_{B_{n}}(x)=\{x_{i}\}\) for \(i\in[n-3]\) and \(V(L_{i})\cap N_{B_{n}}(y)=\{y_{i}\}\) for \(i\in[n-2]\setminus\{s\}\). Let \(X=\{x^{\prime}_{i}:i\in[n-3]\}\cup\{x^{\prime}\}\) and \(Y=\{y^{\prime}_{i}:i\in[n-2]\setminus\{s\}\}\cup\{y^{\prime}\}\). Assume that \(p_{n-1}\neq 3\) and \(r_{n-1}\neq 2\), otherwise, we change the role of \(x\) and \(y\) in the following proof. By Lemmas 2.2 and 2.6, there are \(n-2\) internally vertex disjoint \((z,X)\)-paths \(Q_{1},\ldots,Q_{n-2}\) in \(B_{n}[V_{2}\cup V_{p_{n-1}}]\) and \(n-2\) internally vertex disjoint \((w,Y)\)-paths \(R_{1},\ldots,R_{n-2}\) in \(B_{n}[V_{3}\cup V_{r_{n-1}}]\). Assume that \(x^{\prime}\in V(Q_{n-2})\), \(x^{\prime}_{i}\in V(Q_{i})\) for \(i\in[n-3]\), and \(y^{\prime}\in V(R_{s})\) and \(y^{\prime}_{i}\in V(R_{i})\) for \(i\in[n-2]\setminus\{s\}\). 
If \(s=n-2\), let \[T_{i}=Q_{i}+x^{\prime}_{i}x_{i}+L_{i}+y_{i}y^{\prime}_{i}+R_{i}\mbox{ for }i\in[n-3],\] and \[T_{n-2}=Q_{n-2}+x^{\prime}x+L_{n-2}+yy^{\prime}+R_{n-2}.\] Otherwise, let \[T_{i}=Q_{i}+x^{\prime}_{i}x_{i}+L_{i}+y_{i}y^{\prime}_{i}+R_{i}\mbox{ for }i\in[n-3] \setminus\{s\},\] \[T_{s}=Q_{s}+x^{\prime}_{s}x_{s}+L_{s}+y^{\prime}y+R_{s},\] and \[T_{n-2}=Q_{n-2}+xx^{\prime}+L_{n-2}+y_{n-2}y^{\prime}_{n-2}+R_{n-2}.\] Then \(T_{1},\ldots,T_{n-2}\) are \(n-2\) internally edge disjoint trees connecting \(S\). **Case 3.2.**\(x^{\prime}\) and \(y^{\prime}\) are in the same main part, i.e., \(p_{n-1}=r_{n-1}\). Assume that \(p_{n-1}\neq 3\). By a similar argument to that in Case 2.2, we obtain \(n-2\) internally vertex disjoint \((x,y)\)-paths \(L_{1},\ldots,L_{n-2}\). Let \(x_{i},\widehat{x}_{i}\) for \(i\in[n-2]\) and \(X\) be defined the same way as in Case 2.2. Suppose that \(V(L_{i})\cap N_{B_{n}}(y)=\{y_{i}\}\) for \(i\in[n-3]\). Let \(Y=\{y^{\prime}_{i}:i\in[n-3]\}\cup\{y^{\prime}\}\). By Lemmas 2.2 and 2.6, there are \(n-2\) internally disjoint \((w,X)\)-paths \(Q_{1},\ldots,Q_{n-2}\) in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{2}\cup V_{p_{n-1}})]\) and there are \(n-2\) internally disjoint \((z,Y)\)-paths \(R_{1},\ldots,R_{n-2}\) in \(B_{n}[V_{2}\cup V_{p_{n-1}}]\). Assume that \(\widehat{x}^{\prime}_{i}\in V(Q_{i})\) for \(i\in[n-2]\), \(y^{\prime}_{i}\in V(R_{i})\) for \(i\in[n-3]\) and \(y^{\prime}\in V(R_{n-2})\). Let \[T_{i}=Q_{i}+\widehat{x}^{\prime}_{i}\widehat{x}_{i}+x_{i}\widehat{x}_{i}+L_{i} +y_{i}y^{\prime}_{i}+R_{i}\mbox{ for }i\in[n-3]\] and \[T_{n-2}=Q_{n-2}+\widehat{x}^{\prime}_{n-2}\widehat{x}_{n-2}+L_{n-2}+yy^{\prime }+R_{n-2}.\] Then there are \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) connecting \(S\). ### Case 4 Assume that \(x\in V_{1}\), \(y\in V_{2}\), \(z\in V_{3}\) and \(w\in V_{4}\). Suppose first that there are at least two vertices in \(S\) whose out-neighbors lie in \(\cup_{i=5}^{n}V_{i}\), say \(x^{\prime},y^{\prime}\in\cup_{i=5}^{n}V_{i}\). By Lemma 2.6, there are \(n-2\) internally vertex disjoint \((x,z)\)-paths \(L_{1},\ldots,L_{n-2}\) in \(B_{n}[V_{1}\cup V_{3}]\) and \(n-2\) internally vertex disjoint \((y,w)\)-paths \(Q_{1},\ldots,Q_{n-2}\) in \(B_{n}[V_{2}\cup V_{4}]\). Then by a similar argument to that in Case 2.1, we can obtain \(n-2\) internally edge disjoint trees connecting \(S\). Suppose next that there is at most one vertex in \(S\) whose out-neighbor lies in \(\cup_{i=5}^{n}V_{i}\), that is, there are three vertices in \(S\), say \(x,y,z\), with \(x^{\prime},y^{\prime},z^{\prime}\in\cup_{i=1}^{4}V_{i}\). Note that \(x^{\prime}\not\in V_{1}\). Assume that \(x^{\prime}\in V_{2}\) (if \(x^{\prime}\in V_{3}\) or \(x^{\prime}\in V_{4}\), the argument is similar by viewing \(z\) or \(w\) as \(y\)). We consider the following two cases. **Case 4.1.**\(y^{\prime}\in V_{1}\). Recall that \(z^{\prime}\in V_{1}\cup V_{2}\cup V_{4}\). Suppose first that \(z^{\prime}\in V_{4}\). By Lemma 2.6, there are \(n-2\) internally vertex disjoint \((x,z)\)-paths \(L_{1},\ldots,L_{n-2}\) in \(B_{n}[V_{1}\cup V_{3}]\). Let \(\widehat{x}=x[n-2,n-1]\) and \(\widehat{z}=z[n-2,n-1]\). Note that each \(L_{i}\) contains exactly one vertex in \(N_{B_{n-1}^{1}}(x)\). Assume that \(\widehat{x}\in V(L_{n-2})\) and \(V(L_{i})\cap N_{B_{n}}(x)=\{x_{i}\}\) for \(i\in[n-3]\). Similarly, we may assume that \(\widehat{z}\in V(L_{s})\) for some \(s\in[n-2]\) and \(V(L_{i})\cap N_{B_{n}}(z)=\{z_{i}\}\) for \(i\in[n-2]\setminus\{s\}\).
Let \(X=\{x_{i}^{\prime}:i\in[n-3]\}\cup\{x^{\prime}\}\) and \(Z=\{z_{i}^{\prime}:i\in[n-2]\setminus\{s\}\}\cup\{z^{\prime}\}\). Then \(X\subseteq V_{2}\) with \(|X|=n-2\) and \(Z\subseteq V_{4}\) with \(|Z|=n-2\). By Lemmas 2.2 and 2.4, there are \(n-2\) internally vertex disjoint \((y,X)\)-paths \(Q_{1},\ldots,Q_{n-2}\) in \(B_{n-1}^{2}\) and there are \(n-2\) internally vertex disjoint \((w,Z)\)-paths \(R_{1},\ldots,R_{n-2}\) in \(B_{n-1}^{4}\). Assume that \(x_{i}^{\prime}\in V(Q_{i})\) for \(i\in[n-3]\), \(x^{\prime}\in V(Q_{n-2})\) and \(z^{\prime}\in V(R_{s})\), \(z_{i}^{\prime}\in V(R_{i})\) for \(i\in[n-2]\setminus\{s\}\). If \(s=n-2\), let \[T_{i}=L_{i}+x_{i}x_{i}^{\prime}+Q_{i}+z_{i}z_{i}^{\prime}+R_{i}\text{ for }i\in[n-3]\] and \[T_{n-2}=L_{n-2}+xx^{\prime}+Q_{n-2}+zz^{\prime}+R_{n-2}.\] Otherwise, let \[T_{i}=L_{i}+x_{i}x_{i}^{\prime}+Q_{i}+z_{i}z_{i}^{\prime}+R_{i}\text{ for }i\in[n-3] \setminus\{s\},\] \[T_{s}=L_{s}+x_{s}x_{s}^{\prime}+Q_{s}+zz^{\prime}+R_{s}\] and \[T_{n-2}=L_{n-2}+xx^{\prime}+Q_{n-2}+z_{n-2}z_{n-2}^{\prime}+R_{n-2}.\] Then \(T_{1},\ldots,T_{n-2}\) are \(n-2\) internally edge disjoint trees connecting \(S\). Next suppose \(z^{\prime}\in V_{1}\cup V_{2}\), say \(z^{\prime}\in V_{1}\). There are \(n-2\) internally vertex disjoint \((z,w)\)-paths \(L_{1},\ldots,L_{n-2}\) by Lemma 2.6. Let \(\widehat{z}=z[n-2,n-1]\). Note that each \(L_{i}\) contains exactly one vertex in \(N_{B_{n-1}^{3}}(z)\). Assume that \(\widehat{z}\in V(L_{n-2})\) and \(V(L_{i})\cap N_{B_{n}}(z)=\{z_{i}\}\) for \(i\in[n-3]\). Let \(Z=\{z_{i}^{\prime}:i\in[n-3]\}\cup\{z^{\prime}\}\). Then \(Z\subseteq V_{1}\) with \(|Z|=n-2\). By Lemma 2.4, there are \(n-2\) internally vertex disjoint \((x,Z)\)-paths \(Q_{1},\ldots,Q_{n-2}\). Assume that \(z_{i}^{\prime}\in V(Q_{i})\) for \(i\in[n-3]\) and \(z^{\prime}\in V(Q_{n-2})\). Let \(\widehat{x}=x[n-2,n-1]\). Note that each \(Q_{i}\) contains exactly one vertex in \(N_{B_{n-1}^{i}}(x)\). Assume that \(\widehat{x}\in V(Q_{s})\) for some \(s\in[n-2]\) and \(V(Q_{i})\cap N_{B^{1}_{n-1}}(x)=\{y_{i}\}\) for \(i\in[n-2]\setminus\{s\}\). Let \(X=\{x^{\prime}_{i}:i\in[n-2]\setminus\{s\}\}\cup\{x^{\prime}\}\). Then \(X\subseteq V(B^{2}_{n-1})\) with \(|X|=n-2\). There are \(n-2\) internally vertex disjoint \((y,X)\)-paths \(R_{1},\ldots,R_{n-2}\) by Lemma 2.4. Assume that \(x^{\prime}_{i}\in V(R_{i})\) for \(i\in[n-2]\setminus\{s\}\) and \(x^{\prime}\in V(R_{s})\). If \(s=n-2\), let \[T_{i}=L_{i}+z_{i}z^{\prime}_{i}+Q_{i}+x_{i}x^{\prime}_{i}+R_{i}\mbox{ for }i\in[n-3]\] and \[T_{n-2}=L_{n-2}+zz^{\prime}+Q_{n-2}+xx^{\prime}+R_{n-2}.\] Otherwise, let \[T_{i}=L_{i}+z_{i}z^{\prime}_{i}+Q_{i}+x_{i}x^{\prime}_{i}+R_{i}\mbox{ for }i\in[n-3] \setminus\{s\},\] \[T_{s}=L_{s}+z_{s}z^{\prime}_{s}+Q_{s}+xx^{\prime}+R_{s}\] and \[T_{n-2}=L_{n-2}+zz^{\prime}+Q_{n-2}+x_{n-2}x^{\prime}_{n-2}+R_{n-2}.\] Then there are \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) connecting \(S\). **Case 4.2.**\(y^{\prime}\notin V_{1}\). Note that \(y^{\prime}\in V_{3}\cup V_{4}\). Assume that \(y^{\prime}\in V_{3}\). By Lemma 2.6, there are \(n-2\) internally vertex disjoint \((x,w)\)-paths \(L_{1},\ldots,L_{n-2}\) in \(B_{n}[V_{1}\cup V_{4}]\). Let \(\widehat{x}=x[n-2,n-1]\). Note that each \(L_{i}\) contains exactly one vertex in \(N_{B_{n-1}}(x)\). Assume that \(\widehat{x}\in V(L_{n-2})\) and \(V(L_{i})\cap N_{B^{1}_{n-1}}(x)=\{x_{i}\}\) for \(i\in[n-3]\). Let \(X=\{x^{\prime}_{i}:i\in[n-3]\}\cup\{x^{\prime}\}\). Then \(X\subseteq V_{2}\) with \(|X|=n-2\). 
By Lemmas 2.2 and 2.4, there are \(n-2\) internally vertex disjoint \((y,X)\)-paths \(Q_{1},\ldots,Q_{n-2}\) in \(B^{2}_{n-1}\). Assume that \(x^{\prime}_{i}\in V(Q_{i})\) for \(i\in[n-3]\) and \(x^{\prime}\in V(Q_{n-2})\). Let \(\widehat{y}=y[n-2,n-1]\). Note that each \(Q_{i}\) contains exactly one vertex in \(N_{B_{n-1}}(y)\). Assume that \(\widehat{y}\in V(Q_{s})\) for some \(s\in[n-2]\) and \(V(Q_{i})\cap N_{B_{n-1}}(y)=\{y_{i}\}\) for \(i\in[n-2]\setminus\{s\}\). Let \(Y=\{y^{\prime}_{i}:i\in[n-2]\setminus\{s\}\}\cup\{y^{\prime}\}\). Then \(Y\subseteq V_{3}\) with \(|Y|=n-2\). Since \(\kappa(B^{3}_{n-1})=n-2\), there are \(n-2\) internally vertex disjoint \((z,Y)\)-paths \(R_{1},\ldots,R_{n-2}\) in \(B^{3}_{n-1}\). Assume that \(y^{\prime}_{i}\in V(R_{i})\) for \(i\in[n-2]\setminus\{s\}\) and \(y^{\prime}\in V(R_{s})\). If \(s=n-2\), let \[T_{i}=L_{i}+x_{i}x^{\prime}_{i}+Q_{i}+y_{i}y^{\prime}_{i}+R_{i}\mbox{ for }i\in[n-3],\] and \[T_{n-2}=L_{n-2}+xx^{\prime}+Q_{n-2}+yy^{\prime}+R_{n-2}.\] Otherwise, let \[T_{i}=L_{i}+x_{i}x^{\prime}_{i}+Q_{i}+y_{i}y^{\prime}_{i}+R_{i}\mbox{ for }i\in[n-3] \setminus\{s\},\] \[T_{s}=L_{s}+x_{s}x^{\prime}_{s}+Q_{s}+yy^{\prime}+R_{s}\] and \[T_{n-2}=L_{n-2}+xx^{\prime}+Q_{n-2}+y_{n-2}y^{\prime}_{n-2}+R_{n-2}.\] Then there are \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) connecting \(S\). ### Case 5 Assume that \(x,y,z\in V_{1}\) and \(w\in V_{2}\). Suppose first that \(n=4\). Note that \(B_{3}\) is a cycle of length \(6\). Let \(P_{xy}\), \(P_{xz}\), and \(P_{yz}\) be the \((x,y)\)-path, \((x,z)\)-path and \((y,z)\)-path in \(B_{3}^{1}\) with \(z\notin V(P_{xy})\), \(y\notin V(P_{xz})\) and \(x\notin V(P_{yz})\), respectively. Suppose that \(w^{\prime}\in V_{1}\). If \(w^{\prime}\notin\{x,y,z\}\), then there is a spanning tree \(T_{1}\) in \(B_{3}^{1}\) and a spanning tree \(T_{2}\) in \(B_{4}[V(B_{4})\setminus V_{1}]\), so \(T_{1}^{*}=T_{1}+w^{\prime}w\) and \(T_{2}^{*}=T_{2}+x^{\prime}x+y^{\prime}y+z^{\prime}z\) are two internally edge disjoint trees connecting \(S\). If \(w^{\prime}\in\{x,y,z\}\), say \(w^{\prime}=x\), then there is a spanning tree \(T\) in \(B_{4}[V(B_{4})\setminus V_{1}]\), so \(T_{1}=wx+P_{xy}+P_{yz}\) and \(T_{2}=P_{xz}+zz^{\prime}+T+y^{\prime}y\) are two internally edge disjoint trees connecting \(S\). Next suppose that \(w^{\prime}\notin V_{1}\). Note that one of \(x^{\prime},y^{\prime},z^{\prime}\), say \(x^{\prime}\), lies outside \(B_{3}^{2}\). Then \(x^{\prime}\in V_{3}\cup V_{4}\). Assume that \(x^{\prime}\in V_{3}\). Let \(x_{1}=x[1,2]\) and assume that \(x_{1}\in V(P_{xy})\). If \(z^{\prime}\in V_{2}\), then we choose a vertex \(w_{1}\) in \(B_{3}^{2}\) different from \(w,z^{\prime}\) such that \(w_{1}^{\prime}\in V_{3}\). By Lemmas 2.2 and 2.4, there are two \((w,\{w_{1},z^{\prime}\})\)-paths \(L_{1}\) and \(L_{2}\). Assume that \(w_{1}\in V(L_{1})\) and \(z^{\prime}\in V(L_{2})\). Since \(w,w_{1}^{\prime}\in V_{3}\cup V_{4}\), there are two \((\{w^{\prime},w_{1}^{\prime}\},\{x^{\prime},x_{1}^{\prime}\})\)-paths \(Q_{1}\) and \(Q_{2}\) in \(B_{4}[V_{1}\cup V_{2}]\) by Lemmas 2.1 and 2.6. Assume that \(w_{1}^{\prime}\in V(Q_{1})\). If \(x^{\prime}\in V(Q_{1})\), let \(T_{1}=P_{yz}+P_{xz}+xx^{\prime}+Q_{1}+w_{1}^{\prime}w_{1}+L_{1}\) and \(T_{2}=P_{xy}+x_{1}x_{1}^{\prime}+Q_{2}+w^{\prime}w+L_{2}+z^{\prime}z\). If \(x^{\prime}\in V(Q_{2})\), let \(T_{1}=P_{yz}+P_{xz}+xx^{\prime}+Q_{2}+w^{\prime}w\) and \(T_{2}=P_{xy}+x_{1}x_{1}^{\prime}+Q_{1}+w_{1}^{\prime}w_{1}+L_{1}+L_{2}+z^{ \prime}z\). 
Then \(T_{1}\) and \(T_{2}\) are two internally edge disjoint trees connecting \(S\). If \(z^{\prime}\in V_{3}\cup V_{4}\), say \(z^{\prime}\in V_{4}\). If \(w^{\prime}\in V_{4}\), let \(w_{1}\) and \(w_{2}\) be two vertices in \(B_{3}^{2}\) with \(w_{1}^{\prime},w_{2}^{\prime}\in V_{3}\), and there are two internally vertex disjoint \((w,w_{i})\)-path \(L_{i}\) for \(i=1,2\) in \(B_{3}^{2}\) by Lemma 2.4. Similarly, there are two internally vertex disjoint \((\{x^{\prime},x_{1}^{\prime}\},\{w_{1}^{\prime},w_{2}^{\prime}\})\)-paths \(Q_{1}\) and \(Q_{2}\) in \(B_{3}^{3}\) and one \((w^{\prime},z^{\prime})\)-path \(K\) in \(B_{3}^{4}\). Then \(T_{1}=P_{yz}+P_{xz}+xx^{\prime}+Q_{1}+w_{1}^{\prime}w_{1}+L_{1}\) and \(T_{2}=P_{xy}+x_{1}x_{1}^{\prime}+Q_{2}+w_{2}^{\prime}w_{2}+L_{2}+ww^{\prime}+ K+z^{\prime}z\) are two internally edge disjoint trees connecting \(S\). Otherwise, \(w^{\prime}\in V_{3}\). Let \(w_{1}\) and \(w_{2}\) be two vertices in \(B_{3}^{2}\) with \(w_{1}^{\prime}\in V_{3}\) and \(w_{2}^{\prime}\in V_{4}\). By similar argument above, we may obtain two internally edge disjoint trees connecting \(S\). Now suppose that \(n\geq 5\). Let \[x=(p_{1},\ldots,p_{n-1},1),y=(q_{1},\ldots,q_{n-1},1),z=(r_{1},\ldots,r_{n-1},1).\] Then \(x\in V_{p_{n-1}}^{1}\), \(y\in V_{q_{n-1}}^{1}\) and \(z\in V_{r_{n-1}}^{1}\). **Case 5.1.**\(x^{\prime},y^{\prime}\) and \(z^{\prime}\) lie in three different main parts. Let \(x_{i}=x[i,i+1]\) for \(i\in[n-2]\). Then \(x_{1},\ldots,x_{n-3}\in V_{p_{n-1}}^{1}\) and \(x_{n-2}\in V_{p_{n-2}}^{1}\). Since \(q_{n-1}\neq r_{n-1}\), we may assume that \(x_{n-2}\notin V_{q_{n-1}}^{1}\). By Lemma 2.6, there are \(n-3\) internally vertex disjoint \((x,y)\)-paths \(L_{1},\ldots,L_{n-3}\) in \(B_{n-1}^{1}\). Assume that \(x_{i}\in V(L_{i})\) for \(i\in[n-3]\). Let \(\widehat{x}_{i}=x_{i}[n-2,n-1]\) for \(i\in[n-4]\) and let \(Z=\{\widehat{x}_{i}:i\in[n-4]\}\cup\{x_{n-2}\}\). We have \(Z\subseteq V_{1}\setminus(V_{p_{n-1}}^{1}\cup V_{q_{n-1}}^{1})\). As \(\kappa(B_{n-1}^{1}[V_{1}\setminus(V_{p_{n-1}}^{1}\cup V_{q_{n-1}}^{1})])=n-3\), there are \(n-3\) internally vertex disjoint \((z,Z)\)-paths \(Q_{1},\ldots,Q_{n-3}\). Assume that \(\widehat{x}_{i}\in V(Q_{i})\) for \(i\in[n-4]\) and \(x_{n-2}\in V(Q_{n-3})\). Let \(F=\{x_{i}^{\prime}:i\in[n-3]\}\cup\{x^{\prime},y^{\prime},z^{\prime}\}\) and \(F_{1}=F\cap V_{2}\). **Case 5.1.1.**\(F_{1}=\emptyset\). There are three possibilities: (i) \(w^{\prime}\notin V_{1}\cup V_{p_{n-1}}\), (ii) \(w^{\prime}\in V_{p_{n-1}}\) and (iii) \(w^{\prime}\in V_{1}\). For (i), choose \(n-2\) vertices \(w_{1},\ldots,w_{n-2}\in V_{2}\) with out-neighbors in \(V_{p_{n-1}}\). Then there are \(n-2\) internally vertex disjoint \((w,w_{i})\)-paths \(H_{i}\) for \(i\in[n-2]\) in \(B_{n-1}^{2}\). Let \(X=\{x_{i}^{\prime}:i\in[n-3]\}\cup\{x^{\prime}\}\) and \(W=\{w_{i}^{\prime}:i\in[n-2]\}\). Then \(X,W\subseteq V_{p_{n-1}}\) with \(|X|=|W|=n-2\). By Lemma 2.1, there are \(n-2\) internally vertex disjoint \((X,W)\)-paths \(R_{1},\ldots,R_{n-2}\) in \(B_{n-1}^{p_{n-1}}\). Assume that \(x_{i}^{\prime},w_{i}^{\prime}\in V(R_{i})\) for \(i\in[n-3]\) and \(x^{\prime},w_{n-2}\in V(R_{n-2})\). Since \(y^{\prime},z^{\prime},w^{\prime}\notin V_{1}\cup V_{p_{n-1}}\) and \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{p_{n-1}})]\) is connected, there is a tree \(T\) containing \(y^{\prime},z^{\prime},w^{\prime}\). 
Let \[T_{i}=H_{i}+w_{i}w_{i}^{\prime}+R_{i}+x_{i}^{\prime}x_{i}+L_{i}+x_{i}\widehat{ x}_{i}+Q_{i}\text{ for }i\in[n-4],\] \[T_{n-3}=H_{n-3}+w_{n-3}w_{n-3}^{\prime}+R_{n-3}+x_{n-3}^{\prime}x_{n-3}+L_{n-3 }+xx_{n-2}+Q_{n-3},\] and \[T_{n-2}=xx^{\prime}+R_{n-2}+w_{n-2}^{\prime}w_{n-2}+H_{n-2}+ww^{\prime}+T+y^{ \prime}y+z^{\prime}z\] Then there are \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) connecting \(S\). For (ii), let \(w_{1},\ldots,w_{n-3}\) be \(n-3\) vertices in \(B_{n-1}^{2}\) with out-neighbors in \(B_{n-1}^{p_{n-1}}\) and \(w_{n-2}\in V_{2}\) be one vertex with out-neighbor in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{p_{n-1}})]\). By Lemmas 2.2 and 2.4, there are \(n-2\) internally vertex disjoint \((w,w_{i})\)-paths \(H_{i}\) for \(i\in[n-2]\) in \(B_{n-1}^{2}\). Let \(X=\{x_{i}^{\prime}:i\in[n-3]\}\cup\{x^{\prime}\}\) and \(W=\{w_{i}^{\prime}:i\in[n-3]\}\cup\{w^{\prime}\}\). Then \(X,W\subseteq V_{p_{n-1}}\) with \(|X|=|W|=n-2\). By Lemma 2.1, there are \(n-2\) internally vertex disjoint \((X,W)\)-paths \(R_{1},\ldots,R_{n-2}\) in \(B_{n-1}^{p_{n-1}}\). Assume that \(x_{i}^{\prime}\in V(R_{i})\) for \(i\in[n-3]\), \(x^{\prime}\in V(R_{n-2})\), \(w^{\prime}\in V(R_{s})\) for some \(s\in[n-2]\), \(w_{i}^{\prime}\in V(R_{i})\) for \(i\in[n-3]\setminus\{s\}\) and \(w_{s}^{\prime}\in V(Q_{n-2})\). Since \(y^{\prime},z^{\prime},w_{n-2}^{\prime}\notin V_{1}\cup V_{p_{n-1}}\), there is a tree \(T\) with \(y^{\prime},z^{\prime},w_{n-2}^{\prime}\in V(T)\) in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{p_{n-1}})]\). If \(s=n-2\), let \[T_{i}=H_{i}+w_{i}w_{i}^{\prime}+R_{i}+x_{i}^{\prime}x_{i}+L_{i}+x_{i}\widehat{ x}_{i}+Q_{i}\text{ for }i\in[n-4],\] \[T_{n-3}=H_{n-3}+w_{n-3}w_{n-3}^{\prime}+R_{n-3}+x_{n-3}^{\prime}x_{n-3}+L_{n-3 }+xx_{n-2}+Q_{n-3},\] and \[T_{n-2}=xx^{\prime}+R_{n-2}+w^{\prime}w+H_{n-2}+w_{n-2}w_{n-2}^{\prime}+T+y^{ \prime}y+z^{\prime}z.\] If \(s=n-3\), let \[T_{i}=H_{i}+w_{i}w_{i}^{\prime}+R_{i}+x_{i}^{\prime}x_{i}+L_{i}+x_{i}\widehat{ x}_{i}+Q_{i}\text{ for }i\in[n-4],\] \[T_{n-3}=ww^{\prime}+R_{n-3}+x_{n-3}^{\prime}x_{n-3}+L_{n-3}+xx_{n-2}+Q_{n-3}\] and \[T_{n-2}=xx^{\prime}+R_{n-2}+w_{n-3}^{\prime}w_{n-3}+H_{n-3}+H_{n-2}+w_{n-2}w_{n -2}^{\prime}+T+y^{\prime}y+z^{\prime}z.\] Otherwise, let \[T_{i}=H_{i}+w_{i}w_{i}^{\prime}+R_{i}+x_{i}^{\prime}x_{i}+L_{i}+x_{i}\widehat{ x}_{i}+Q_{i}\text{ for }i\in[n-4]\setminus\{s\},\] \[T_{s}=ww^{\prime}+R_{s}+x_{s}^{\prime}x_{s}+L_{s}+x_{s}\widehat{x}_{s}+Q_{s},\] \[T_{n-3}=H_{n-3}+w_{n-3}w_{n-3}^{\prime}+R_{n-3}+x_{n-3}^{\prime}x_{n-3}+L_{n-3 }+xx_{n-2}+Q_{n-3}\] \[T_{n-2}=xx^{\prime}+R_{n-2}+w^{\prime}_{s}w_{s}+H_{s}+H_{n-2}+w_{n-2}w^{\prime}_{n- 2}+T+y^{\prime}y+z^{\prime}z.\] Then \(T_{1},\dots,T_{n-2}\) are \(n-2\) internally edge disjoint trees connecting \(S\). Now we consider (iii). Suppose that \(N_{B^{1}_{n-1}}[w^{\prime}]\cap\cup_{i=1}^{n-3}(V(L_{i})\cup V(Q_{i}))=\emptyset\). Let \(\widehat{w}=w^{\prime}[n-2,n-1]\). If \(\widehat{w}^{\prime}\notin V_{p_{n-1}}\) (\(\widehat{w}^{\prime}\in V_{p_{n-1}}\), respectively), then we use \(\widehat{w}^{\prime}\) for \(w^{\prime}\) in the above argument in (i) ((ii), respectively). So we obtain \(n-2\) internally edge disjoint trees connecting \(S\). Otherwise, assume that \(\widetilde{w}\in N_{B^{1}_{n-1}}[w^{\prime}]\cap\cup_{i=1}^{n-3}(V(L_{i})\cup V (Q_{i}))\). Since \(F_{1}=\emptyset\), \(\widetilde{w}\in V(Q_{s})\) for some \(s\in[n-3]\). 
Let \(w_{1},\dots,w_{n-3}\) be \(n-3\) vertices in \(B^{2}_{n-1}\) with out-neighbors in \(B^{p_{n-1}}_{n-1}\) and \(w_{n-2}\) be a vertex in \(B^{2}_{n-1}\) with out-neighbor in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{p_{n-1}})]\). By Lemma 2.6, there are \(n-2\) internally vertex disjoint \((w,w_{i})\)-paths \(H_{i}\) for \(i\in[n-2]\). Let \(X=\{x^{\prime}_{i}:i\in[n-3]\setminus\{s\}\}\cup\{x^{\prime}\}\) and \(W=\{w^{\prime}_{i}:i\in[n-3]\}\). Then \(X,W\subseteq V_{p_{n-1}}\) with \(|X|=|W|=n-3\). By Lemmas 2.1 and 2.4, there are \(n-3\) internally vertex disjoint \((X,W)\)-paths \(R_{1},\dots,R_{n-3}\) in \(B^{p_{n-1}}_{n-1}\). Assume that \(x^{\prime}_{i},w^{\prime}_{i}\in V(R_{i})\) for \(i\in[n-3]\setminus\{s\}\) and \(x^{\prime},w^{\prime}_{s}\in V(R_{s})\). Since \(w^{\prime}_{n-2},y^{\prime},z^{\prime}\notin V_{1}\cup V_{p_{n-1}}\), there is a spanning tree \(T\) in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{p_{n-1}})]\) with \(w^{\prime}_{n-2},y^{\prime},z^{\prime}\in V(T)\) by Lemma 2.6. If \(s=n-3\), let \[T_{i}=H_{i}+w_{i}w^{\prime}_{i}+R_{i}+x^{\prime}_{i}x_{i}+L_{i}+x_{i}\widehat{ x}_{i}+Q_{i}\text{ for }i\in[n-4],\] \[T_{n-3}=\begin{cases}L_{n-3}+xx_{n-2}+Q_{n-3}+w^{\prime}w&\text{ if }\widetilde{w}=w^{\prime}\\ L_{n-3}+xx_{n-2}+Q_{n-3}+\widetilde{w}w^{\prime}+w^{\prime}w&\text{ otherwise}\end{cases}\] and \[T_{n-2}=xx^{\prime}+R_{n-3}+w^{\prime}_{n-3}w_{n-3}+H_{n-3}+H_{n-2}+w_{n-2}w^ {\prime}_{n-2}+T+y^{\prime}y+z^{\prime}z.\] Otherwise, let \[T_{i}=H_{i}+w_{i}w^{\prime}_{i}+R_{i}+x^{\prime}_{i}x_{i}+L_{i}+x_{i}\widehat {x}_{i}+Q_{i}\text{ for }i\in[n-4]\setminus\{s\},\] \[T_{s}=\begin{cases}L_{s}+x_{s}\widehat{x}_{s}+Q_{s}+w^{\prime}w,&\text{ if } \widetilde{w}=w^{\prime},\\ L_{s}+x_{s}\widehat{x}_{s}+Q_{s}+\widetilde{w}w^{\prime}+w^{\prime}w&\text{ otherwise},\end{cases}\] \[T_{n-3}=H_{n-3}+w_{n-3}w^{\prime}_{n-3}+R_{n-3}+x^{\prime}_{n-3}x_{n-3}+L_{n-3 }+xx_{n-2}+Q_{n-3},\] and \[T_{n-2}=xx^{\prime}+R_{n-3}+w^{\prime}_{s}w_{s}+H_{s}+H_{n-2}+w_{n-2}w^{\prime }_{n-2}+T+y^{\prime}y+z^{\prime}z.\] Then \(T_{1},\dots,T_{n-2}\) are \(n-2\) internally edge disjoint trees connecting \(S\). **Case 5.1.2.**\(F_{1}=\{x^{\prime}_{i}:i\in[n-3]\}\cup\{x^{\prime}\}\). Suppose that \(w^{\prime}\notin V_{1}\). By Lemmas 2.2 and 2.4, there are \(n-2\) internally vertex disjoint \((w,F_{1})\)-paths \(H_{1},\dots,H_{n-2}\) in \(B^{2}_{n-1}\). Assume that \(x^{\prime}_{i}\in V(H_{i})\) for \(i\in[n-3]\) and \(x^{\prime}\in V(H_{n-2})\). Since \(y^{\prime},z^{\prime},w^{\prime}\in V(B_{n})\setminus(V_{1}\cup V_{2})\), there is a spanning tree \(T\) in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{2})]\) with \(y^{\prime},z^{\prime},w^{\prime}\in V(T)\). Let \[T_{i}=H_{i}+x^{\prime}_{i}x_{i}+L_{i}+x_{i}\widehat{x}_{i}+Q_{i}\text{ for }i\in[n-4],\] \[T_{n-3}=H_{n-3}+x^{\prime}_{n-3}x_{n-3}+L_{n-3}+xx_{n-2}+Q_{n-3},\] and \[T_{n-2}=T+yy^{\prime}+zz^{\prime}+w^{\prime}w+H_{n-2}+x^{\prime}x.\] Then there are \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) connecting \(S\). Suppose that \(w^{\prime}\in V_{1}\). If \(N_{B^{1}_{n-1}}[w^{\prime}]\cap\cup_{i=1}^{n-3}(V(L_{i})\cup V(Q_{i}))=\emptyset\), then we may consider \(\widehat{w}^{\prime}\) as \(w^{\prime}\) in the argument above with \(\widehat{w}=w^{\prime}[n-2,n-1]\), and hence obtain \(n-2\) internally edge disjoint trees connecting \(S\). Otherwise, some vertex in \(N_{B^{1}_{n-1}}[w^{\prime}]\) lies on some path \(L_{i}\) or \(Q_{i}\), so the argument is similar to that in Case 5.1.1. 
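Case 5.1 repeatedly invokes connectivity facts of the form \(\kappa(B^{1}_{n-1}[V_{1}\setminus(V^{1}_{p_{n-1}}\cup V^{1}_{q_{n-1}})])=n-3\), which follow from Lemma 2.6 applied inside a main part. For \(n=5\) the lower bound can be confirmed by brute force; the sketch below (ours, reusing the helpers from the earlier snippets) checks that deleting any two sub-parts \(V^{1}_{a},V^{1}_{b}\) from the main part \(B^{1}_{4}\) of \(B_{5}\) leaves a graph that is connected and has no cut vertex, hence of connectivity at least \(n-3=2\).

```python
from itertools import combinations

n = 5
adj = bubble_sort_graph(n)                         # from the first sketch
V1 = {v for v in adj if v[-1] == 1}                # the main part B_4^1
sub = {j: {v for v in V1 if v[-2] == j} for j in range(2, n + 1)}

def connectivity_at_least_two(vertices):
    """Connected and without a cut vertex, i.e. vertex connectivity >= 2."""
    g = {v: {w for w in adj[v] if w in vertices} for v in vertices}
    if not is_connected(g):
        return False
    for v in vertices:
        h = {u: nbrs - {v} for u, nbrs in g.items() if u != v}
        if not is_connected(h):
            return False
    return True

for a, b in combinations(sub, 2):
    rest = V1 - sub[a] - sub[b]
    print(a, b, connectivity_at_least_two(rest))   # expected: True for all
```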
**Case 5.1.3.**\(F_{1}=\{y^{\prime}\}\) or \(F_{1}=\{z^{\prime}\}\), say \(F_{1}=\{y^{\prime}\}\). Let \(y_{i}=y[i,i+1]\) for \(i\in[n-2]\). Note \(p_{n-1}\neq r_{n-1}\). Assume that \(y_{n-2}\notin V^{1}_{p_{n-1}}\). By Lemma 2.6, there are \(n-3\) internally vertex disjoint \((x,y)\)-paths \(L_{1},\ldots,L_{n-3}\) in \(B^{1}_{n-1}[V^{1}_{p_{n-1}}\cup V^{1}_{q_{n-1}}]\). Assume that \(y_{i}\in V(L_{i})\) for \(i\in[n-3]\). Let \(\widehat{y}_{i}=y_{i}[n-2,n-1]\) for \(i\in[n-4]\) and let \(Z=\{\widehat{y}_{i}:i\in[n-4]\}\cup\{y_{n-2}\}\). Then \(Z\subseteq V_{1}\setminus(V^{1}_{p_{n-1}}\cup V^{1}_{q_{n-1}})\). Since \(\kappa(B^{1}_{n-1}[V_{1}\setminus(V^{1}_{p_{n-1}}\cup V^{1}_{q_{n-1}})])=n-3\), there are \(n-3\) internally vertex disjoint \((z,Z)\)-paths \(Q_{1},\ldots,Q_{n-3}\). Assume that \(\widehat{y}_{i}\in V(Q_{i})\) for \(i\in[n-4]\) and \(y_{n-2}\in V(Q_{n-3})\). Let \(F_{2}=(\{y^{\prime}_{i}:i\in[n-3]\}\cup\{x^{\prime},y^{\prime},z^{\prime}\}) \cap V_{2}\). Recall that \(y^{\prime}\in V_{2}\), then \(F_{2}=\{y^{\prime}_{i}:i\in[n-3]\}\cup\{y^{\prime}\}\). Now by considering whether \(w^{\prime}\) is in \(V_{1}\) and similar argument as in Case 5.1.2, there are \(n-2\) internally edge disjoint trees connecting \(S\). **Case 5.2.**\(x^{\prime},y^{\prime}\) and \(z^{\prime}\) lie in two different main parts. Assume that \(x^{\prime}\) lies in different main part from \(y^{\prime}\) and \(z^{\prime}\). For \(j\in[n]\setminus\{2\}\), Since \(\kappa(B^{(1,j)}_{n-2})=n-3\), there are \(n-3\) internally vertex disjoint \((y,z)\)-paths \(L_{1},\ldots,L_{n-3}\) in \(B^{(1,q_{n-1})}_{n-2}\). Assume that \(y_{i}\in V(L_{i})\) and let \(\widehat{y}_{i}=y_{i}[n-2,n-1]\) for \(i\in[n-3]\). Then \(\widehat{y}_{i}\in(V^{1}_{q_{n-2}}\cup V^{1}_{q_{n-3}})\subseteq V_{1}\). Since \(x\in V^{1}_{p_{n-1}}\subseteq V_{1}\), there are \(n-3\) internally vertex disjoint \((x,\widehat{y}_{i})\)-paths \(Q_{i}\) in \(B^{1}_{n-1}[V_{p_{n-1}}\cup V_{q_{n-2}}\cup V^{1}_{q_{n-3}}]\) for \(i\in[n-3]\) by Lemma 2.6. Let \(x_{i}=x[i,i+1]\) for \(i\in[n-3]\). Assume that \(x_{i}\in V(Q_{i})\) with \(i\in[n-3]\). Let \(F=\{x^{\prime}_{i}:i\in[n-3]\}\cup\{x^{\prime},y^{\prime},z^{\prime}\}\) and \(F_{1}=F\cap V_{2}\). There are three possibilities: (i) \(F_{1}=\emptyset\), (ii) \(F_{1}=\{x^{\prime}_{i}:i\in[n-3]\}\cup\{x^{\prime}\}\) and (iii) \(F_{1}=\{y^{\prime},z^{\prime}\}\). The argument for (i) and (ii) is similar as in Case 5.1.1 and Case 5.1.2, respectively. So we only consider (iii). Suppose that \(w\in V^{2}_{\ell}\) with \(\ell\neq 2\). **Case 5.2.1.**\(w^{\prime}\notin V_{1}\). We choose \(n-3\) vertices \(w_{1},\ldots,w_{n-3}\in V^{2}_{\ell}\), then there are \(n-3\) internally vertex disjoint \((w,w_{i})\)-paths \(H_{i}\) for \(i\in[n-3]\) in \(B^{(2,\ell)}_{n-2}\). Let \(\widehat{w}=w[n-2,n-1]\). As \(\widehat{w},y^{\prime},z^{\prime}\in V_{2}\setminus V^{2}_{\ell}\), there is a tree \(T^{*}_{n-2}\) containing \(\widehat{w},y^{\prime},z^{\prime}\) in \(B^{(2,\ell)}_{n-2}\) by Lemma 2.6. Let \(Y=\{\widehat{y}^{\prime}_{i}:i\in[n-3]\}\cup\{x^{\prime}\}\) and \(W=\{w^{\prime}_{i}:i\in[n-3]\}\cup\{w^{\prime}\}\). Then \(Y,W\subseteq V(B_{n})\setminus(V_{1}\cup V_{2})\), and so there are \(n-2\) internally vertex disjoint \((Y,W)\)-paths \(R_{1},\ldots,R_{n-2}\) by Lemma 2.6. 
Assume that \(\widehat{y}^{\prime}_{i}\in V(R_{i})\) for \(i\in[n-3]\), \(x^{\prime}\in V(R_{n-2})\), \(w^{\prime}\in V(R_{s})\) for some \(s\in[n-2]\), \(w^{\prime}_{i}\in V(R_{i})\) for \(i\in[n-3]\setminus\{s\}\) and \(w^{\prime}_{s}\in V(R_{n-2})\). If \(s=n-2\), let \[T_{i}=L_{i}+y_{i}\widehat{y}_{i}+Q_{i}+\widehat{y}_{i}\widehat{y}^{\prime}_{i}+ R_{i}+w^{\prime}_{i}w_{i}+H_{i}\text{ for }i\in[n-3]\] and \[T_{n-2}=xx^{\prime}+R_{n-2}+w^{\prime}w+w\widehat{w}+T^{*}_{n-2}.\] Otherwise, let \[T_{i}=L_{i}+y_{i}\widehat{y}_{i}+Q_{i}+\widehat{y}_{i}\widehat{y}_{i}^{\prime}+R _{i}+w_{i}^{\prime}w_{i}+H_{i}\text{ for }i\in[n-3]\setminus\{s\},\] \[T_{s}=L_{s}+y_{s}\widehat{y}_{s}+Q_{s}+\widehat{y}_{s}\widehat{y}_{s}^{\prime}+ R_{s}+ww^{\prime}\] and \[T_{n-2}=xx^{\prime}+R_{n-2}+w_{s}^{\prime}w_{s}+H_{s}+w\widehat{w}+T_{n-2}^{*}.\] Hence, we obtain \(n-2\) internally edge disjoint trees connecting \(S\). **Case 5.2.2.**\(w^{\prime}\in V_{1}\). If \(N_{B_{n-1}^{1}}[w^{\prime}]\cap\cup_{i=1}^{n-3}(V(L_{i})\cup V(Q_{i}))=\emptyset\), the result follows by considering \(\widehat{w}^{\prime}\) for \(w^{\prime}\) in the above proof with \(\widehat{w}=w^{\prime}[n-2,n-1]\). Suppose that \(N_{B_{n-1}^{1}}[w^{\prime}]\cap\cup_{i=1}^{n-3}(V(L_{i})\cup V(Q_{i}))\neq\emptyset\). Let \(y_{i}=y[i,i+1]\), \(w_{i}=w[i,i+1]\) and assume that \(y_{i}\in V(L_{i})\), \(\widehat{y}_{i}\in V(Q_{i})\) for \(i\in[n-3]\). Let \(\widehat{x}=x[n-2,n-1]\), \(\widehat{z}=z^{\prime}[n-2,n-1]\), \(\widehat{w}=w[n-2,n-1]\) and \(\widehat{w}_{i}=w_{i}[n-2,n-1]\) for \(i\in[n-3]\). Suppose that \(y\) or \(z\), say \(y\), is adjacent to \(w\). Then \(y_{i}^{\prime}=w_{i}\) for \(i\in[n-3]\). If \(z\) is not adjacent to \(y\), then, since \(\kappa(B_{n-1}^{2})=n-2\), there is a \((z^{\prime},\widehat{w})\)-path \(R\) in \(B_{n-1}^{2}[V_{2}\setminus\{w_{i}:i\in[n-3]\}]\). Noting that \(\widehat{w}^{\prime},x^{\prime}\in V(B_{n})\setminus(V_{1}\cup V_{2})\), there is an \((x^{\prime},\widehat{w}^{\prime})\)-path \(K\). Let \[T_{i}=ww_{i}+w_{i}^{\prime}y_{i}+L_{i}+y_{i}\widehat{y}_{i}+Q_{i}\text{ for } \in[n-3]\] and \[T_{n-2}=xx^{\prime}+K+\widehat{w}^{\prime}\widehat{w}+R+z^{\prime}z+\widehat{ w}w+wy.\] Then we obtain \(n-2\) internally edge disjoint trees \(T_{1},\dots,T_{n-2}\) connecting \(S\). Suppose that \(z\) is adjacent to \(y\), say \(z=y_{\xi}\) for some \(\xi\in[n-2]\). Then \(z^{\prime}=w_{\xi}\). Let \[T_{i}=ww_{i}+w_{i}y_{i}+L_{i}+y_{i}\widehat{y}_{i}+Q_{i}\text{ for }i\in[n-3]\setminus\{\xi\}.\] and \(\widehat{y}=y[n-2,n-1]\). We consider \(\widehat{y}\neq x\) and \(\widehat{y}=x\) separately. Suppose that \(\widehat{y}\neq x\). Note that \(x^{\prime},\widehat{y}^{\prime},\widehat{w}^{\prime}_{\xi},\widehat{w}^{\prime} \notin V_{1}\cup V_{2}\), there are two \((\{x^{\prime},\widehat{y}^{\prime}\},\{\widehat{w}^{\prime}_{\xi},\widehat{w} ^{\prime}\})\)-paths \(R_{1}\) and \(R_{2}\) by Lemma 2.6. 
If \(x^{\prime}\) and \(\widehat{w}^{\prime}_{\xi}\) are in the same path, say \(R_{1}\), let \[T_{\xi}=xx^{\prime}+R_{1}+\widehat{w}^{\prime}_{\xi}\widehat{w}_{\xi}+\widehat {w}_{\xi}w_{\xi}+w_{\xi}w+wy,\] and \[T_{n-2}=Q_{\xi}+\widehat{y}y+yz+\widehat{y}\widehat{y}^{\prime}+R_{2}+\widehat {w}^{\prime}\widehat{w}+\widehat{w}w.\] Otherwise, assume that \(x^{\prime}\) is in \(R_{1}\), let \[T_{\xi}=xx^{\prime}+R_{1}+\widehat{w}^{\prime}\widehat{w}+\widehat{w}w+wy+yz\] and \[T_{n-2}=Q_{\xi}+\widehat{y}y+\widehat{y}\widehat{y}^{\prime}+R_{2}+\widehat{ w}^{\prime}_{\xi}\widehat{w}_{\xi}+\widehat{w}_{\xi}w_{\xi}+w_{\xi}z.\] So we obtain \(n-2\) internally edge disjoint trees \(T_{1},\dots,T_{n-2}\) connecting \(S\). Now suppose that \(\widehat{y}=x\). Then \[x=(q_{1},\dots,q_{n-3},2,q_{n-2},1)\text{ and }x^{\prime}=(q_{1},\dots,q_{n-3},2,1,q _{n-2}).\] Recall that \(w=y^{\prime}\). Then \[w=(q_{1},\ldots,q_{n-3},q_{n-2},1,2),\widehat{w}=(q_{1},\ldots,q_{n-3},1,q_{n-2},2)\] and \[\widehat{w}^{\prime}=(q_{1},\ldots,q_{n-3},1,2,q_{n-2}).\] It can be seen that \(x^{\prime}\) is adjacent to \(\widehat{w}^{\prime}\). Let \[T_{\xi}=yw+wz^{\prime}+z^{\prime}z+z\widehat{y}_{\xi}+Q_{\xi}\] and \[T_{n-2}=zy+yx+xx^{\prime}+x^{\prime}\widehat{w}^{\prime}+\widehat{w}^{\prime} \widehat{w}+\widehat{w}w.\] So there are \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) connecting \(S\). Next suppose that \(y\) and \(z\) are not adjacent to \(w\). Suppose that \(y^{\prime}\) or \(z^{\prime}\), say \(y^{\prime}\), is adjacent to \(w\). Then \(y_{\xi}=w^{\prime}\) for some \(\xi\in[n-3]\). So \[T_{\xi}=wy_{\xi}+L_{\xi}+y_{\xi}\widehat{y}_{\xi}+Q_{\xi}\] is a tree containing the vertices in \(S\). Let \(W\) be the set of \(n-4\) neighbors of \(w\) such that they are not adjacent to \(y\) or \(z\) and \(Y=\{\widehat{y}^{\prime}_{i}:i\in[n-3]\setminus\{\xi\}\}\). Similarly to the above argument, we may obtain \(n-4\) internally vertex disjoint \((Y,W)\)-paths and hence \(n-4\) internally edge disjoint trees \(T_{i}\) for \(i\in[n-3]\setminus\{\xi\}\) connecting \(S\). Since \(\kappa(B^{(2,1)}_{n-2})=n-3\), there is a tree \(H\) containing \(w,y^{\prime},z^{\prime}\). Let \(v_{1}=y^{\prime}[n-2,n-1]\), \(v_{2}=v_{1}[n-3,n-2]\), \(v_{3}=v_{2}[n-4,n-3]\), \(v_{4}=v_{3}[n-3,n-2]\), \(v_{5}=v_{4}[n-2,n-1]\) and \(P_{y}=wy^{\prime}v_{1}v_{2}v_{3}v_{4}v_{5}\). Note that there is an \((x^{\prime},v^{\prime}_{5})\)-path \(L_{n-2}\) with \(V(L_{n-2})\cap V(T_{i})=\emptyset\) for \(i\in[n-3]\). Let \[T_{n-2}=H+P_{y}+v_{5}v^{\prime}_{5}+L_{n-2}+x^{\prime}x.\] So there are \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) connecting \(S\). Suppose that \(y^{\prime},z^{\prime}\notin\{w_{i}:i\in[n-3]\}\cup\{w\}\). Let \(Y=\{\widehat{y}^{\prime}_{i}:i\in[n-3]\}\) and \(W=\{\widehat{w}^{\prime}_{i}:i\in[n-4]\}\cup\{w^{\prime}_{n-2}\}\). By Lemmas 2.1 and 2.6, there are \(n-3\) internally vertex disjoint \((Y,W)\)-paths in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{2})]\). Hence we may obtain \(n-3\) internally edge disjoint trees \(T_{i}\) for \(i\in[n-3]\) by similar argument as in Case 5.2.1. Since \(\kappa(B^{(2,1)}_{n-2})=n-3\), there is a tree \(H\) in \(B^{(2,1)}_{n-2}\) containing vertices \(y^{\prime},z^{\prime},w\) with \(V(H)\cap\{w_{i}:i\in[n-4]\}=\emptyset\). By Lemma 2.6, there is an \((x^{\prime},\widehat{w}^{\prime}_{n-3})\)-path \(L_{n-2}\) such that it is disjoint with the above \(n-3\)\((Y,W)\)-paths. 
Let \[T_{n-2}=yy^{\prime}+zz^{\prime}+H+w_{n-3}\widehat{w}_{n-3}+\widehat{w}_{n-3} \widehat{w}^{\prime}_{n-3}+L_{n-2}+x^{\prime}x.\] Then there are \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) connecting \(S\). **Case 5.3.**\(x^{\prime},y^{\prime},z^{\prime}\) lie in the same main part, that is, \(p_{n-1}=q_{n-1}=r_{n-1}\). **Case 5.3.1.** There is at least one of \(x,y,z\), say \(x\), that is not adjacent to the others. Since \(x,y,z\in V^{1}_{p_{n-1}}\), there are \(n-4\) internally edge disjoint trees \(T_{1},\ldots,T_{n-4}\) connecting \(\{x,y,z\}\) in \(B^{(1,p_{n-1})}_{n-2}\) by Lemma 2.5. Note that each \(T_{i}\) contains at least one vertex in \(N_{B^{(1,p_{n-1})}_{n-2}}(x)\), say \(x_{i}\), for \(i\in[n-4]\). Assume that \(x_{1}=x[n-3,n-2]\). Let \(\widehat{x}_{i}=x_{i}[n-2,n-1]\) for \(i\in[n-4]\), \(\widehat{x}=x[n-2,n-1]\), \(\widehat{y}=y[n-2,n-1]\) and \(\widehat{z}=z[n-2,n-1]\). Note that \(\widehat{x},\widehat{y},\widehat{z}\notin V_{p_{n-1}}^{1}\) and \(\kappa(B_{n-2})=n-3\), there is a tree \(T_{n-3}\) not in \(B_{n-2}^{(1,p_{n-1})}\) containing \(\widehat{x},\widehat{y},\widehat{z}\) with \(V(T_{n-3})\cap\{\widehat{x}_{i}:i=1,\ldots,n-4\}=\emptyset\). Assume that \(x_{1}=x[n-3,n-2]\). Let \(F=\{\widehat{x}_{i}^{\prime}:i=1,\ldots,n-4\}\cup\{\widehat{x}^{\prime},x^{ \prime},y^{\prime},z^{\prime}\}\) and \(F_{1}=F\cap V_{2}\). Note that \(\widehat{x}_{1}^{\prime}\in V_{p_{n-3}}\), \(\widehat{x}_{i}^{\prime}\in V_{p_{n-2}}\) for \(i=2,\ldots,n-4\), \(\widehat{x}^{\prime}\in V_{p_{n-2}}\) and \(x^{\prime},y^{\prime},z^{\prime}\in V_{p_{n-1}}\). There are four possibilities: (i) \(F_{1}=\emptyset\), (ii) \(F_{1}=\{\widehat{x}_{i}:i=2,\ldots,n-4\}\cup\{\widehat{x}\}\), (iii) \(F_{1}=\{\widehat{x}_{1}\}\), and (iv) \(F_{1}=\{x^{\prime},y^{\prime},z^{\prime}\}\). Note that (i)-(iii) can be discussed similarly as in Case 5.1. Then we only need to consider (iv). If \(w^{\prime}\notin V_{1}\), then \(w\notin V_{1}^{2}\) and \(x^{\prime},y^{\prime},z^{\prime}\in V_{1}^{2}\), and so the result follows by similar argument as in Case 5.2. So we assume that \(w^{\prime}\in V_{1}\). Suppose first that \(x^{\prime},y^{\prime},z^{\prime}\notin N_{B_{n-1}^{2}}[w]\). Let \(w_{i}=w[i,i+1]\), \(\widehat{w}_{i}=w_{i}[n-2,n-1]\) for \(i\in[n-4]\) and \(\widehat{w}=w[n-2,n-1]\). Then \(\widehat{w}_{i}^{\prime}\notin V_{1}\) for \(i\in[n-4]\). Let \(W=\{\widehat{w}_{i}^{\prime}:i\in[n-4]\}\cup\{\widehat{w}^{\prime}\}\) and \(X=\{\widehat{x}_{i}:i\in[n-4]\}\cup\{\widehat{x}^{\prime}\}\). By Lemma 2.6, there are \(n-3\) internally vertex disjoint \((X,W)\)-paths \(L_{1},\ldots,L_{n-3}\) in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{2})]\). Assume that \(\widehat{x}_{i}^{\prime}\in V(L_{i})\) for \(i\in[n-4]\), \(\widehat{x}^{\prime}\in V(L_{n-3})\), \(\widehat{w}\in V(L_{s})\) for some \(s\in[n-3]\), \(\widehat{w}_{i}^{\prime}\in V(L_{i})\) for \(i\in[n-4]\setminus\{s\}\) and \(\widehat{w}_{s}\in L_{n-3}\). 
If \(s=n-3\), let \[T_{i}^{*}=T_{i}+x_{i}\widehat{x}_{i}+\widehat{x}_{i}\widehat{x}_{i}^{\prime}+ L_{i}+\widehat{w}_{i}^{\prime}\widehat{w}_{i}+\widehat{w}_{i}w_{i}+w_{i}w \mbox{ for }i\in[n-3]\] and \[T_{n-3}^{*}=T_{n-3}+x\widehat{x}+\widehat{x}\widehat{x}^{\prime}+L_{n-3}+ \widehat{w}^{\prime}\widehat{w}+\widehat{w}w.\] Otherwise, let \[T_{i}^{*}=T_{i}+x_{i}\widehat{x}_{i}+\widehat{x}_{i}\widehat{x}_{i}^{\prime}+ L_{i}+\widehat{w}_{i}^{\prime}\widehat{w}_{i}+\widehat{w}_{i}w_{i}+w_{i}w \mbox{ for }i\in[n-4]\setminus\{s\},\] \[T_{s}^{*}=T_{s}+x_{s}\widehat{x}_{s}+\widehat{x}_{s}\widehat{x}_{s}^{\prime}+ L_{s}+\widehat{w}^{\prime}\widehat{w}+\widehat{w}w,\] and \[T_{n-3}^{*}=T_{n-3}+x\widehat{x}+\widehat{x}\widehat{x}^{\prime}+L_{n-3}+ \widehat{w}_{s}^{\prime}\widehat{w}_{s}+\widehat{w}_{s}w_{s}+w_{s}w.\] In \(B_{n-2}^{(2,1)}\), there is a tree \(T_{n-2}\) containing \(w,x^{\prime},y^{\prime},z^{\prime}\) with \(V(T_{n-2})\cap\{w_{i}:i\in[n-4]\}=\emptyset\). Let \[T_{n-2}^{*}=xx^{\prime}+yy^{\prime}+zz^{\prime}+T_{n-2}.\] Hence, we obtain \(n-2\) internally edge disjoint trees \(T_{1}^{*},\ldots,T_{n-2}^{*}\) connecting \(S\). Suppose next that \(\{x^{\prime},y^{\prime},z^{\prime}\}\cap N_{B_{n-1}^{2}}[w]\neq\emptyset\). Suppose that \(x^{\prime}=w\), that is, \(w\) is adjacent to \(x\). Then \(y^{\prime}\) and \(z^{\prime}\) are not adjacent to \(w\). So \(\{x_{i}^{\prime}:i\in[n-4]\}\cup\{x\}\subseteq N_{B_{n-1}^{2}}(w)\). Let \(\widehat{w}=w[n-2,n-1]\). Then \(\widehat{w}^{\prime}\) is adjacent to \(\widehat{x}^{\prime}\). Let \[T_{i}^{*}=T_{i}+x_{i}x_{i}^{\prime}+x_{i}^{\prime}w\mbox{ for }i\in[n-4]\] and \[T_{n-3}^{*}=T_{n-3}+x\widehat{x}+\widehat{x}\widehat{x}^{\prime}+\widehat{x}^{ \prime}\widehat{w}^{\prime}+\widehat{w}^{\prime}\widehat{w}+\widehat{w}w.\] Since \(y^{\prime},z^{\prime}\in V_{2}\) and \(\kappa(B_{n-1}^{2})=n-2\), there is a tree \(T_{n-2}\) containing \(w,y^{\prime},z^{\prime}\) in \(B_{n-1}^{2}[V_{2}\setminus(\{x_{i}^{\prime}:i\in[n-4]\}\cup\{\widehat{w}\})]\). Let \[T_{n-2}^{*}=zz^{\prime}+yy^{\prime}+T_{n-2}+wx.\] Then \(T_{1}^{*},\ldots,T_{n-2}^{*}\) are \(n-2\) internally edge disjoint trees connecting \(S\). Suppose that \(w\) is not adjacent to \(x\). Suppose that \(y\) or \(z\), say \(y\), is adjacent to \(w\). Then \(x^{\prime}\) is not adjacent to \(w\). Choose \(n-4\) neighbors of \(w\), say \(w_{1},\ldots,w_{n-4}\) such that each of them is not adjacent to \(y\) or \(z\). By similar proof when \(x^{\prime},y^{\prime},z^{\prime}\notin N_{B_{n-1}^{2}}[w]\), we may obtain \(n-2\) internally edge disjoint trees connecting \(S\). So assume in the following that \(w\) is not adjacent to \(y\) or \(z\). Let \(w_{i}=w[i,i+1]\), \(\widehat{w}_{i}=w_{i}[n-2,n-1]\) for \(i\in[n-3]\), \(\widehat{w}=w[n-2,n-1]\) and \(\widehat{x}_{n-3}=\widehat{x}\). Suppose first that there is exactly one of \(x^{\prime},y^{\prime},z^{\prime}\), say \(x^{\prime}\), that is adjacent to \(w\). Then \(w^{\prime}=x_{s}\) and \(x^{\prime}=w_{t}\) for some \(s,t\in[n-3]\), \[T_{s}^{*}=T_{s}+xw\] is a tree containing vertices in \(S\). Let \(X=\{\widehat{x}_{i}^{\prime}:i\in[n-3]\setminus\{s\}\}\) and \(W=\{\widehat{w}_{i}:i\in[n-3]\setminus\{t\}\}\). By Lemmas 2.1 and 2.6, there are \(n-4\) internally vertex disjoint \((X,W)\)-paths \(Q_{i}\) for \(i\in[n-3]\setminus\{s\}\) in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{2})]\). 
Assume that \(\widehat{x}_{i}^{\prime},\widehat{w}_{\ell_{i}}\in V(Q_{i})\) for \(i\in[n-3]\setminus\{s\}\), where \(\ell_{i}\in[n-3]\setminus\{t\}\) and \(\ell_{i}\neq\ell_{j}\) if \(i\neq j\). Let \[T_{i}^{*}=T_{i}+x_{i}\widehat{x}_{i}+\widehat{x}_{i}\widehat{x}_{i}^{\prime}+Q _{i}+\widehat{w}_{\ell_{i}}^{\prime}\widehat{w}_{\ell_{i}}+\widehat{w}_{\ell _{i}}w_{\ell_{i}}+w_{\ell_{i}}w\] for \(i\in[n-4]\setminus\{s\}\) and \[T_{n-3}^{*}=T_{n-3}+x\widehat{x}+\widehat{x}\widehat{x}^{\prime}+L_{n-3}+ \widehat{w}_{\ell_{n-3}}^{\prime}\widehat{w}_{\ell_{n-3}}+\widehat{w}_{\ell_{ n-3}}w_{\ell_{n-3}}+w_{\ell_{n-3}}w.\] Since \(\kappa(B_{n-2}^{(2,1)})=n-3\), there is a tree \(T_{n-2}\) containing \(w,x^{\prime},y^{\prime},z^{\prime}\) with \(V(T_{n-2})\cap\{w_{i}:i\in[n-3]\setminus\{t\}\}=\emptyset\) in \(B_{n-2}^{(2,1)}\). Let \[T_{n-2}^{*}=xx^{\prime}+yy^{\prime}+zz^{\prime}+T_{n-2}.\] Hence, there are \(n-2\) internally edge disjoint trees \(T_{1}^{*},\ldots,T_{n-2}^{*}\) connecting \(S\). Suppose now that there are exactly two of \(x^{\prime},y^{\prime},z^{\prime}\), say \(x^{\prime}\) and \(y^{\prime}\), that are adjacent to \(w\). That is, \(x^{\prime}=w_{t}\) and \(y^{\prime}=w_{r}\) for some \(t,r\in[n-3]\). Assume that \(w^{\prime}=x_{s}\). Then \[T_{s}^{*}=T_{s}+xw\] is a tree containing vertices in \(S\). Let \(\widehat{w}_{n-2}=w[n-2,n-1]\). Let \(X=\{\widehat{x}_{i}^{\prime}:i\in[n-3]\setminus\{s\}\}\) and \(W=\{\widehat{w}_{i}^{\prime}:i\in[n-2]\setminus\{t,r\}\}\). Then \(X,W\subseteq V(B_{n})\setminus(V_{1}\cup V_{2})\), there are \(n-4\) internally vertex disjoint \((X,W)\)-paths \(Q_{i}\) for \(i\in[n-3]\setminus\{s\}\). By similar argument as above, we may construct \(n-2\) internally edge disjoint trees (one of which is \(T_{s}^{*}\)) connecting \(S\). Finally suppose that \(x^{\prime},y^{\prime},z^{\prime}\) are all adjacent to \(w\). Then there are some \(t,r,s\in[n-3]\) with \(t<r<s\) such that \(x^{\prime}=w_{t}\), \(y^{\prime}=w_{r}\) and \(z^{\prime}=w_{s}\). Since \(3\leq s\leq n-3\), \(n\geq 6\). Assume that \(w^{\prime}=x_{\gamma}\) for some \(\gamma\in[n-4]\). Then \(T_{\gamma}^{*}=T_{\gamma}+w^{\prime}w\) is a tree containing vertices in \(S\). It can be verified that \(\widehat{x}^{\prime}\) is adjacent to \(\widehat{w}^{\prime}\). Let \(X=\{\widehat{x}_{i}^{\prime}:i\in[n-4]\setminus\{\gamma\}\}\) and \(W=\{\widehat{w}_{i}^{\prime}:i\in[n-3]\setminus\{t,r\}\}\). By Lemma 2.6, there are \(n-5\) internally vertex disjoint \((X,W)\)-paths \(L_{i}\) for \(i\in[n-4]\setminus\{\gamma\}\) in \(B_{n}[V(B_{n})\setminus(V_{1}\cup V_{2})]\) with \(V(L_{i})\cap\{\widehat{x}^{\prime},\widehat{w}^{\prime}\}=\emptyset\) for \(i\in[n-5]\). Assume that \(\widehat{w}_{s}^{\prime},\widehat{x}_{\xi}^{\prime}\in V(L_{\xi})\) for some \(\xi\in[n-4]\setminus\{\gamma\}\) and \(\widehat{x}_{i}^{\prime},\widehat{w}_{\ell_{i}}^{\prime}\in V(L_{i})\) for \(i\in[n-4]\setminus\{\gamma,\xi\}\), where \(\ell_{i}\in[n-3]\setminus\{t,r,s\}\) and \(\ell_{i}\neq\ell_{j}\) if \(i\neq j\). Let \(y_{\xi},z_{\xi}\) be the neighbors of \(y\) and \(z\) in \(V(L_{\xi})\), respectively. Then \(y^{\prime}_{\xi},z^{\prime}_{\xi}\in V_{1}^{2}\). Let \(\widehat{y}_{\xi}=y^{\prime}_{\xi}[n-2,n-1]\). Recall that \(z^{\prime}=\widehat{w}_{s}\) and \(\kappa(B_{n-2})=n-3\). So there is a \((\widehat{y}_{\xi},\widehat{w}_{s})\)-path \(Q_{\xi}\) in \(B_{n-1}^{2}[V_{2}\setminus V_{1}^{2}]\) with \(V(Q_{\xi})\cap(\{\widehat{w}_{i}:i\in[n-3]\setminus\{t,r,s\}\}\cup\{\widehat{ w}\})=\emptyset\). 
By Lemma 2.4, there is a tree \(T_{n-2}\) containing \(w,x^{\prime},y^{\prime},z^{\prime}_{\xi}\) in \(B_{n-2}^{(2,1)}\) with \(V(T_{n-2})\cap\{w_{i}:i\in[n-3]\setminus\{t,r\}\}=\emptyset\). Let \[T_{i}^{*}=T_{i}+x_{i}\widehat{x}^{\prime}_{i}+\widehat{x}_{i}\widehat{x}^{ \prime}_{i}+L_{i}+\widehat{w}^{\prime}_{\ell_{i}}\widehat{w}_{\ell_{i}}+ \widehat{w}_{\ell_{i}}w_{\ell_{i}}+w_{\ell_{i}}w\] for \(i\in[n-4]\setminus\{\gamma,\xi\}\), \[T_{n-3}^{*}=T_{n-3}+x\widehat{x}+\widehat{x}\widehat{x}^{\prime}+\widehat{x}^{ \prime}\widehat{w}^{\prime}+\widehat{w}^{\prime}\widehat{w}+\widehat{w}w,\] \[T_{\xi}^{*}=xx_{\xi}+x_{\xi}\widehat{x}_{\xi}+\widehat{x}_{\xi}\widehat{x}^{ \prime}_{\xi}+L_{\xi}+\widehat{w}^{\prime}_{s}\widehat{w}_{s}+\widehat{w}_{s}w +\widehat{w}_{s}z\] and \[T_{n-2}^{*}=xx^{\prime}+yy^{\prime}+T_{n-2}+z^{\prime}_{\xi}z_{\xi}+z_{\xi}z.\] Hence, we obtain \(n-2\) internally edge disjoint trees \(T_{1}^{*},\ldots,T_{n-2}^{*}\) connecting \(S\). **Case 5.3.2.** There is one of \(x,y,z\), say \(x\), is adjacent to the others. Let \(x_{i}=x[i,i+1]\) for \(i\in[n-2]\). There exist \(\ell,s\in[n-3]\) such that \(y=x_{\ell}\) and \(z=x_{s}\), where \(\ell<s\). Suppose first that \(s=n-3\). For \(i,j\in[n]\setminus\{1\}\) with \(i\neq j\), let \(V_{j}^{1,i}=\{(v_{1},\ldots,v_{n-3},j,i,1):(v_{1},\ldots,v_{n-3})\in\mbox{Sym}_{ 1,i,j}(n)\}\), where \(\mbox{Sym}_{1,i,j}(n)\) is the set of permutations of \([n]\setminus\{1,i,j\}\). Let \(B_{n-3}^{(1,p_{n-1},j)}=B_{n}[V_{j}^{1,p_{n-1}}]\). Then \(B_{n-3}^{(1,p_{n-1},j)}\cong B_{n-3}\). Note that \(\kappa(B_{n-3})=n-4\). So there are \(n-4\) internally vertex disjoint \((x,y)\)-paths \(L_{1},\ldots,L_{n-4}\) in \(B_{n-3}^{(1,p_{n-1},j)}\). Assume that \(x_{i}\in V(L_{i})\) for \(i\in[n-4]\). Let \(\widehat{x}_{i}=x_{i}[n-3,n-2]\). Then \(\widehat{x}_{i}\in V_{p_{n-1}}^{1}\setminus V_{p_{n-1}}^{(1,p_{n-1})}\), and there are \(n-4\) internally vertex disjoint \((z,\widehat{x}_{i})\)-paths \(Q_{i}\) for \(i\in[n-4]\). Let \(y_{n-2}=y[n-2,n-1]\) and \(z_{n-2}=z[n-2,n-1]\). Then \(x_{n-2},y_{n-2},z_{n-2}\in V_{1}\setminus V_{p_{n-1}}^{1}\), and there is a tree \(T\) containing \(x_{n-2},y_{n-2},z_{n-2}\) in \(B_{n}[V_{1}\setminus V_{p_{n-1}}^{1}]\). Let \(F=\{\widehat{x}^{\prime}_{i}:i\in[n-4]\}\cup\{x^{\prime}_{n-2},x^{\prime},y^{ \prime}\}\) and \(F_{1}=F\cap V_{2}\). Then there are three possibilities: (i) \(F_{1}=\emptyset\), (ii) \(F_{1}=\{\widehat{x}^{\prime}_{i}:i\in[n-4]\}\cup\{x^{\prime},y^{\prime}\}\), and (iii) \(F_{1}=\{x^{\prime}_{n-2}\}\). By considering whether the out-neighbor of \(w\) lies in \(V_{1}\), and by similar discussions as in Case 5.1, we have \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) such that \(T_{1},\ldots,T_{n-3}\) connect \(S\) and \(T_{n-2}\) contains \(x,y,w\). Let \(T_{n-2}^{*}=T_{n-2}+xz+xx^{\prime}+yy^{\prime}\). Then \(T_{1},\ldots,T_{n-3}\), \(T_{n-2}^{*}\) are \(n-2\) internally edge disjoint trees connecting \(S\). Suppose that \(s<n-3\). For any \(t\) with \(s\leq t\leq n-3\), let \(\{i_{1},\ldots,i_{n-(t+2)},j\}\subset[n]\setminus\{1\}\). Let \[V_{j}^{1,i_{1},\ldots,i_{n-(t+2)}}\] \[= \{(v_{1},\ldots,v_{t},j,i_{n-(t+2)},\ldots,i_{1},1):(v_{1}, \ldots,v_{t})\in\mbox{Sym}_{1,i_{1},\ldots,i_{n-(t+2)},j}(n)\},\] where \(\mbox{Sym}_{1,i_{1},\ldots,i_{n-(t+2)},j}(n)\) is the set of permutations of \([n]\setminus\{1,i_{1},\ldots,i_{n-(t+2)},j\}\). Then \(x,y\in V_{p_{n+1}}^{1,p_{n-1},\ldots,p_{s+2}}\). 
Since \(B_{n}[V_{p_{n+1}}^{1,p_{n-1},\ldots,p_{s+2}}]\cong B_{s}\) and \(\kappa(B_{s})=s-1\), there are \(s-1\) internally vertex disjoint \((x,y)\)-paths \(L_{1},\ldots,L_{s-1}\) in \(B_{n}[V_{p_{s+1}}^{1,p_{n-1},\ldots,p_{s+2}}]\). Assume that \(x_{i}\in V(L_{i})\) for \(i\in[s-1]\). Let \(\widehat{x}_{i}=x_{i}[s,s+1]\) for \(i\in[s-1]\). Then \(z,\widehat{x}_{i}\in V_{p_{s+2}}^{1,p_{n-1},\ldots,p_{s+3}}\setminus V_{p_{s+1}} ^{1,p_{n-1},\ldots,p_{s+2}}\). By Lemma 2.6, there are \(s-1\) internally vertex disjoint \((z,\widehat{x}_{i})\)-path \(Q_{i}\) for \(i\in[s-1]\). Let \(y_{i}=y[i,i+1]\) and \(z_{i}=z[i,i+1]\) for \(i=s+2,\ldots,n-2\). Since \(B_{n}[V_{p_{i+1},\ldots,p_{i+2}}^{1,p_{n-1},\ldots,p_{i+1}}\setminus V_{p_{i}}^{1,p_{n-1},\ldots,p_{i+1}}]\) is connected, there is a tree \(T_{i}^{*}\) containing \(x_{i},y_{i},z_{i}\) for \(i=s+2,\ldots,n-2\). Let \(F=\{\widehat{x}_{i}^{\prime}:i\in[s-1]\}\cup\{x_{i}^{\prime}:i=s+2,\ldots,n-2 \}\cup\{x^{\prime},y^{\prime}\}\). Then there are three possibilities: (i) \(F_{1}=\emptyset\), (ii) \(F_{1}=\{\widehat{x}_{i}^{\prime}:i\in[s-1]\}\cup\{x_{i}^{\prime}:i=s+2,\ldots, n-3\}\cup\{x^{\prime},y^{\prime}\}\), and (iii) \(F_{1}=\{x_{n-2}^{\prime}\}\). By considering whether the out-neighbor of \(w\) lies in \(V_{1}\), and similar discussions as in Case 5.1, we may have \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-2}\) such that \(T_{1},\ldots,T_{n-3}\) connect \(S\) and one \(T_{n-2}\) contains \(x,y,w\). Let \(T_{n-2}^{*}=T+xz+xx^{\prime}+yy^{\prime}\), we obtain \(n-2\) internally edge disjoint trees \(T_{1},\ldots,T_{n-3}\), \(T_{n-2}^{*}\) connecting \(S\). ## 4 Concluding remarks From a theoretical perspective, the generalized \(k\)-connectivity \(\kappa_{k}(G)\) of a connected graph of order \(n\geq 2\) includes two fundamental concepts: the connectivity for \(k=2\) and the maximum number of edge disjoint spanning trees for \(k=n\). From a practical perspective, the generalized connectivity can measure the reliability and security of a network. The bubble-sort graph \(B_{n}\) is a particular Cayley graph that is suitable as a topology for massively parallel systems. In this article, we prove that \(\kappa_{4}(B_{n})=n-2\) for \(n\geq 3\). In other words, there are \(n-2\) internally disjoint trees connecting them in \(B_{n}\) for any four vertices of \(B_{n}\) when \(n\geq 3\). For further work, it would be interesting to study the generalized connectivity of Cayley graphs on symmetric groups generated by general trees and some other important networks [23]. **Acknowledgement.** This work was supported by National Natural Science Foundation of China (No. 12071158).
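As a small computational sanity check of the connectivity values invoked throughout the proofs above (the bubble-sort graph \(B_{m}\) has vertex connectivity \(\kappa(B_{m})=m-1\)), the following sketch builds \(B_{n}\) for small \(n\) and verifies this with networkx. The construction (permutations of \([n]\) joined by adjacent transpositions) is an illustration only, not code from the paper.

```python
# Minimal sketch: build the bubble-sort graph B_n, whose vertices are the
# permutations of {1,...,n} and whose edges join permutations differing by a
# swap of two adjacent positions, then check kappa(B_n) = n - 1.
from itertools import permutations
import networkx as nx

def bubble_sort_graph(n):
    G = nx.Graph()
    for p in permutations(range(1, n + 1)):
        for i in range(n - 1):
            q = list(p)
            q[i], q[i + 1] = q[i + 1], q[i]   # adjacent transposition (i, i+1)
            G.add_edge(p, tuple(q))
    return G

for n in (3, 4):
    G = bubble_sort_graph(n)
    print(n, G.number_of_nodes(), nx.node_connectivity(G))   # expect n!, n - 1
```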
2306.07017
Multivariate extensions of the Multilevel Best Linear Unbiased Estimator for ensemble-variational data assimilation
Multilevel estimators aim at reducing the variance of Monte Carlo statistical estimators, by combining samples generated with simulators of different costs and accuracies. In particular, the recent work of Schaden and Ullmann (2020) on the multilevel best linear unbiased estimator (MLBLUE) introduces a framework unifying several multilevel and multifidelity techniques. The MLBLUE is reintroduced here using a variance minimization approach rather than the regression approach of Schaden and Ullmann. We then discuss possible extensions of the scalar MLBLUE to a multidimensional setting, i.e. from the expectation of scalar random variables to the expectation of random vectors. Several estimators of increasing complexity are proposed: a) multilevel estimators with scalar weights, b) with element-wise weights, c) with spectral weights and d) with general matrix weights. The computational cost of each method is discussed. We finally extend the MLBLUE to the estimation of second-order moments in the multidimensional case, i.e. to the estimation of covariance matrices. The multilevel estimators proposed are e) a multilevel estimator with scalar weights and f) with element-wise weights. In large-dimension applications such as data assimilation for geosciences, the latter estimator is computationally unaffordable. As a remedy, we also propose g) a multilevel covariance matrix estimator with optimal multilevel localization, inspired by the optimal localization theory of M\'en\'etrier and Aulign\'e (2015). Some practical details on weighted MLMC estimators of covariance matrices are given in an appendix.
Mayeul Destouches, Paul Mycek, Selime Gürol
2023-06-12T10:41:16Z
http://arxiv.org/abs/2306.07017v2
# Multivariate extensions of the Multilevel Best Linear Unbiased Estimator for ensemble-variational data assimilation

###### Abstract

Multilevel estimators aim at reducing the variance of Monte Carlo statistical estimators, by combining samples generated with simulators of different costs and accuracies. In particular, the recent work of Schaden and Ullmann (2020) on the multilevel best linear unbiased estimator (MLBLUE) introduces a framework unifying several multilevel and multifidelity techniques. The MLBLUE is reintroduced here using a variance minimization approach rather than the regression approach of Schaden and Ullmann. We then discuss possible extensions of the scalar MLBLUE to a multidimensional setting, i.e. from the expectation of _scalar_ random variables to the expectation of random _vectors_. Several estimators of increasing complexity are proposed: a) multilevel estimators with scalar weights, b) with element-wise weights, c) with spectral weights and d) with general matrix weights. The computational cost of each method is discussed. We finally extend the MLBLUE to the estimation of second-order moments in the multidimensional case, i.e. to the estimation of covariance matrices. The multilevel estimators proposed are e) a multilevel estimator with scalar weights and f) with element-wise weights. In large-dimension applications such as data assimilation for geosciences, the latter estimator is computationally unaffordable. As a remedy, we also propose g) a multilevel covariance matrix estimator with optimal multilevel localization, inspired by the optimal localization theory of Menetrier and Auligne (2015). Some practical details on weighted MLMC estimators of covariance matrices are given in an appendix.
###### Contents

* 1 Introduction
* 2 The MLBLUE: reminder and notations
* 3 Retrieving the MLBLUE via variance minimization
* 4 Estimation of the expectation of a random vector
  * 4.1 Notations for the multidimensional case
  * 4.2 Scalar weights
  * 4.3 Field weights
  * 4.4 Field weights with change of basis
  * 4.5 Matrix weights - the multidimensional MLBLUE
* 5 Estimation of a scalar covariance
* 6 Estimation of a covariance matrix
  * 6.1 The problem
  * 6.2 Scalar weights
  * 6.3 Matrix field weights
  * 6.4 Optimal localization
    * 6.4.1 General case
    * 6.4.2 Imposing some structure to the localization matrix
* A Retrieving the MLBLUE for the multidimensional expectation through constrained minimization
* B Estimating the average covariance matrix of covariance estimators
* C Optimal sample allocation for an MLMC covariance matrix estimator
* D Optimal localization using random asymptotic quantities

## 1 Introduction

Multilevel techniques aim at reducing the variance of Monte Carlo statistical estimators, typically for the estimation of the expectation of a scalar random variable. These techniques combine in an astute way samples obtained through numerical simulators of varying accuracy and cost. An example of a popular multilevel technique is the Multilevel Monte Carlo (MLMC) method, popularized by Giles (2008, 2015). Recently, an interesting unifying framework was proposed by Schaden and Ullmann (2020, 2021), hereafter SU20 and SU21. Among others, the framework of Schaden and Ullmann includes multilevel Monte Carlo techniques (MLMC, Giles, 2008, 2015 for a review), multifidelity techniques (Peherstorfer et al., 2018) and approximate control variates (Gorodetsky et al., 2020). In this unified framework, some (new) estimators naturally appear as optimal, the so-called _Multilevel Best Linear Unbiased Estimators_, MLBLUEs. This framework has been complemented by Croci et al. (2023), who propose an efficient algorithm to solve the _model selection and sample allocation problem_ (MOSAP) for the MLBLUE. The present note proposes a new way to derive the MLBLUE, by building a weighted multilevel estimator and optimizing its weights to minimize the estimator's variance under a no-bias constraint. It also gives some insight into how the MLBLUE approach can be extended to the estimation of first- and second-order statistical moments of random vectors, in possibly large dimensions. This extension to second-order moments and to random vectors is motivated by possible applications in ensemble-variational data assimilation, where Monte Carlo methods are used at a key stage, to estimate the covariance matrix of forecast errors (Lorenc, 2003; Buehner, 2005 and Bannister, 2017 for a review). As a result, the present note is not as general as the original articles by SU20 and SU21, nor as mathematically grounded. The authors are biased towards MLMC-like applications, and towards the estimation of discrete covariance operators in large dimension for geoscience applications. The note is organized as follows. Section 2 presents the main results of SU20 and SU21. Section 3 presents another way to derive these results, based on direct minimization of the variance of a weighted multilevel estimator. The next sections propose extensions of the MLBLUE, some of which are unpublished in the literature to the best of our knowledge. We propose an extension to the multi-dimensional case (estimation of the expectation of a random vector) in section 4.
We propose an extension to the estimation of covariance and covariance matrices in sections 5 and 6, including an extension to optimal localization for multilevel covariance matrices in the line of Menetrier et al. (2015a,b), hereafter M15a and M15b. The MLBLUE: reminder and notations Let \(Z_{\ell}=f_{\ell}(X)\colon\Omega\to\mathbb{R}\), \(1\leq\ell\leq L\) be a set of random variables approximating \(f_{L}(X)\), where \(f_{L}\) is a costly numerical simulator. The \(\ell\) indexing the \(f_{\ell}\) models are hereafter called fidelity levels. These fidelity levels may be associated to different spatial meshes, from the coarsest to the finest (\(\ell=1\) to \(\ell=L\) for an MLMC-like structure). This is not required though, and what follows can be applied even if the fidelities come from other sources, and even if there is no clear ranking of the models according to their accuracy. ExampleThe hierarchy of simulators can be forecasting models \(f_{\ell}\colon\mathbb{R}^{n}\to\mathbb{R}\) running on meshes with finer and finer horizontal resolutions, and predicting temperature at one given location. \(X\colon\Omega\to\mathbb{R}^{n}\) can be a random vector representing uncertain initial conditions of a numerical weather forecast. We are interested in the mean temperature that is forecast by the finest model, \(\mathbb{E}\big{[}f_{L}(X)\big{]}\). Multilevel techniques rely on coupled simulations, i.e. simulations that run at different levels using the same stochastic input \(X\). The sets of coupled levels can be sorted in \(K\) coupling groups \((S^{(k)})_{k=1}^{K}\). More formally, let \((S^{(k)})_{k=1}^{K}\) be a family of subsets of \(\{1,\ldots,L\}\). We impose \[S^{(k)}\neq S^{(k^{\prime})}\text{ for }k\neq k^{\prime} \tag{1}\] \[\cup_{k=1}^{K}S^{(k)}=\{1,\ldots,L\}. \tag{2}\] We denote by \(p^{(k)}\) the cardinality of \(S^{(k)}\). For \(1\leq k\leq K\), \(R^{(k)}\colon\mathbb{R}^{L}\to\mathbb{R}^{p^{(k)}}\) is the selection operator for group \(S^{(k)}\) verifying \(\forall x\in\mathbb{R}^{L},\,R^{(k)}x=(x_{\ell})_{\ell\in S^{(k)}}\). The associated extension operator is \(P^{(k)}:=\big{(}R^{(k)}\big{)}^{\intercal}\colon\mathbb{R}^{p^{(k)}}\to \mathbb{R}^{L}\). MLMC-like exampleWe can use this formalism to describe the coupling structure of an MLMC estimator with three levels, as is done in example 2.1 of SU20. The coupling groups in this case are \(S^{(1)}=\{1\}\), \(S^{(2)}=\{1,2\}\) and \(S^{(3)}=\{2,3\}\). The associated extension operators are \[P^{(1)} =\begin{pmatrix}1&0&0\end{pmatrix} \tag{3}\] \[P^{(2)} =\begin{pmatrix}1&0&0\\ 0&1&0\end{pmatrix}\] (4) \[P^{(3)} =\begin{pmatrix}0&1&0\\ 0&0&1\end{pmatrix} \tag{5}\] The repartition of simulators among coupling groups \(S^{(k)}\) can be visualized using the tableaux used by Schaden and Ullmann, and reproduced for this example in figure 1c. We denote by \(\mu:=\left(\mathbb{E}\left[Z_{\ell}\right]\right)_{\ell=1}^{L}\) the vector of expectations at each fidelity level. We are interested in estimating \(\alpha^{\intercal}\mu\) for a given vector \(\alpha\in\mathbb{R}^{L}\setminus\{0\}\). In practice, \(\alpha=e_{L}:=(0,\cdots,0,1)^{\intercal}\), but this is not mandatory. Let \(m^{(1)},\ldots,m^{(K)}\) be the number of available simulations for each group \(k\) (see figure 1b). 
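To make the coupling-group notation concrete, here is a minimal numpy sketch (an illustration built on the three-level MLMC structure above, not code from the paper). Following the definitions above, \(R^{(k)}\) denotes the \(p^{(k)}\times L\) selection matrix (the matrices written out in Eqs. (3)-(5)) and \(P^{(k)}=R^{(k)\intercal}\) the extension operator.

```python
# Minimal sketch: coupling groups and selection/extension operators for the
# three-level MLMC structure S^(1) = {1}, S^(2) = {1, 2}, S^(3) = {2, 3}.
import numpy as np

L = 3
S = [[1], [1, 2], [2, 3]]                      # coupling groups (1-based levels)

def selection(group, L):
    """R^(k): R^L -> R^{p_k}, keeps the components whose level is in the group."""
    R = np.zeros((len(group), L))
    for row, level in enumerate(group):
        R[row, level - 1] = 1.0
    return R

R = [selection(g, L) for g in S]               # p_k x L selection matrices
P = [Rk.T for Rk in R]                         # L x p_k extension matrices

x = np.array([10.0, 20.0, 30.0])
print(R[1] @ x)                                # -> [10. 20.]     (group S^(2) = {1, 2})
print(P[2] @ np.array([1.0, 2.0]))             # -> [0. 1. 2.]    (extension back to R^L)
```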
SU20 provide the best estimator for \(\alpha^{\intercal}\mu\) among the unbiased estimators that linearly combine the simulations \(f_{\ell}(X^{(k,i)})\) for \(1\leq k\leq K\), \(\ell\in S^{(k)}\) and \(1\leq i\leq m^{(k)}\), where the \(X^{(k,i)}\) are i.i.d. random variables following the same law as \(X\). These linear estimators are of the form \[\widehat{\mu}^{\text{ML}}:=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\beta_{\ell}^{(k)}\widehat{E}^{(k)}[Z_{\ell}] \tag{6}\] where \(\widehat{E}^{(k)}[Z_{\ell}]\) is the standard Monte Carlo estimator for \(\mathbb{E}[Z_{\ell}]\), using \(m^{(k)}\) random inputs associated to group \(k\), \[\widehat{E}^{(k)}[Z_{\ell}]:=\frac{1}{m^{(k)}}\sum_{i=1}^{m^{(k)}}f_{\ell}(X^{(k,i)}). \tag{7}\] The inner sum in (6) can be written as a scalar product by introducing two more notations. Firstly, we denote by \(\beta^{(k)}:=\left(\beta_{\ell}^{(k)}\right)_{\ell\in S^{(k)}}\in\mathbb{R}^{p^{(k)}}\) the vector of weights associated to group \(k\). In the case of a three-level MLMC, we would have the (sub-optimal) weights \(\beta^{(1)}=(1)\) and \(\beta^{(2)}=\beta^{(3)}=(-1,1)^{\intercal}\) (figure 1c).

Figure 1: MLMC coupling structure. Tableaux inspired by SU20.

Secondly, we denote by \(Z^{(k)}:=(Z_{\ell})_{\ell\in S^{(k)}}\) the random vector gathering all random variables in group \(k\). Then equation (6) becomes \[\widehat{\mu}^{\text{ML}}=\sum_{k=1}^{K}\left(\beta^{(k)}\right)^{\intercal}\widehat{E}^{(k)}\big{[}Z^{(k)}\big{]} \tag{8}\] The (optimal) vector \(\beta^{(k)}\) can be expressed as (from equation 2.7 in SU21) \[\beta^{(k)}=m^{(k)}\left(C^{(k)}\right)^{-1}R^{(k)}\left(\sum_{k^{\prime}=1}^{K}m^{(k^{\prime})}P^{(k^{\prime})}\left(C^{(k^{\prime})}\right)^{-1}R^{(k^{\prime})}\right)^{-1}\alpha, \tag{9}\] where \(C^{(k)}:=\text{Cov}(Z^{(k)},Z^{(k)})\) and \(\text{Cov}(A,B):=\mathbb{E}[\left(A-\mathbb{E}[A]\right)\left(B-\mathbb{E}[B]\right)^{\intercal}]\) denotes the covariance matrix of random vectors \(A\) and \(B\). These \(C^{(k)}\) matrices are unknown in practice and must be estimated, which results in sub-optimal weights. Note that the estimation of the \(C^{(k)}\) should be done independently of the estimation of \(\alpha^{\intercal}\mu\), otherwise a bias is introduced. The importance of this bias is likely to depend on the particular application and setting considered.

**Model selection and sample allocation problem** This approach provides the MLBLUE for a given coupling structure and a given number of samples on each coupling group. SU20 propose some ways to optimize the model selection and sample allocation by minimizing the variance of the associated MLBLUE. Their approach was later extended and made more robust by Croci et al. (2023), who transform it into a semidefinite programming problem.

## 3 Retrieving the MLBLUE via variance minimization

We believe the derivation based on constrained minimization of the variance to be more direct, and perhaps more intuitive than the regression approach proposed by SU20. Though both derivations are closely related, we believe the variance minimization approach may appear as more natural to some readers, especially from the community of multifidelity estimation methods based on control variates (see for instance Gorodetsky et al., 2020). We derive this approach here.
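As an illustration of Eqs. (8)-(9), the following numpy sketch (a toy example with made-up covariances and Monte Carlo averages, not taken from the paper) computes the optimal group weights \(\beta^{(k)}\) and assembles the corresponding multilevel estimate.

```python
# Toy illustration of Eqs. (8)-(9); all numerical values are assumptions.
import numpy as np

L, K = 3, 3
S = [[1], [1, 2], [2, 3]]                      # coupling groups
m = np.array([100, 40, 10])                    # samples per coupling group
alpha = np.array([0.0, 0.0, 1.0])              # target the highest-fidelity mean

# Assumed inter-level covariance matrix C = Cov(Z, Z); in practice the C^(k)
# must be estimated, ideally independently of the samples combined below
# (see the remark on the bias above).
C = np.array([[1.0, 0.9, 0.8],
              [0.9, 1.1, 1.0],
              [0.8, 1.0, 1.2]])

R = [np.eye(L)[[l - 1 for l in g], :] for g in S]        # selection operators R^(k)
Ck = [Rk @ C @ Rk.T for Rk in R]                         # blocks C^(k) = R^(k) C P^(k)

# phi := sum_k m^(k) P^(k) (C^(k))^-1 R^(k)  (the matrix inverted in Eq. (9))
phi = sum(m[k] * R[k].T @ np.linalg.solve(Ck[k], R[k]) for k in range(K))
beta = [m[k] * np.linalg.solve(Ck[k], R[k] @ np.linalg.solve(phi, alpha))
        for k in range(K)]                               # Eq. (9)

# Eq. (8): combine per-group Monte Carlo averages E^(k)[Z^(k)] (toy values here)
Ehat = [np.array([0.52]), np.array([0.55, 0.61]), np.array([0.60, 0.66])]
mu_ml = sum(beta[k] @ Ehat[k] for k in range(K))
print(mu_ml)
```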
Given the fidelity levels \(1,\ldots,L\) and the coupling structure \((S^{(k)},m^{(k)})_{k=1}^{K}\), we look for an unbiased estimator of \(\alpha^{\intercal}\mu\) that linearly combines the samples and that has the lowest possible variance (the BLUE). We assume that the multilevel estimator is of the form of equations (6) and (8), and we look for the \(\beta_{\ell}^{(k)}\) coefficients that minimize the variance under a no-bias constraint. Unbiasedness constraintThe optimal \(\beta\) weights are subject to the unbiasedness constraint \[\mathbb{E}[\widehat{\mu}(\beta)]=\alpha^{\intercal}\mu \Longleftrightarrow \mathbb{E}\Biggl{[}\sum_{k=1}^{K}\left(\beta^{(k)}\right)^{ \intercal}\widehat{E}^{(k)}\bigl{[}Z^{(k)}\bigr{]}\Biggr{]}=\alpha^{\intercal}\mu \tag{10}\] \[\Longleftrightarrow \sum_{k=1}^{K}\left(\beta^{(k)}\right)^{\intercal}R^{(k)}\mu= \alpha^{\intercal}\mu\] (11) \[\Longleftrightarrow \sum_{k=1}^{K}P^{(k)}\beta^{(k)}=\alpha, \tag{12}\] assuming that we have no prior information on \(\mu\), so that the unbiasedness should be met for all values of \(\mu\in\mathbb{R}^{L}\). \[\mathbb{E}[\widehat{\mu}(\beta)]=\alpha^{\intercal}\mu \Longleftrightarrow \left(P^{(1)}\ \ \cdots\ \ P^{(K)}\right)\beta=\alpha \tag{13}\] \[\Longleftrightarrow g(\beta)=0\] (14) \[\text{with }g(\beta):=\left(P^{(1)}\ \ \cdots\ \ P^{(K)}\right)\beta-\alpha \tag{15}\] and where we denote by \(\beta:=\left(\beta^{(k)}\right)_{k=1}^{K}\in\mathbb{R}^{p}\) the vector made of all the \(\beta^{(k)}\), with \(p:=\sum_{k=1}^{K}p^{(k)}\). Expression of the varianceThe variance of the linear estimator \(\widehat{\mu}\) is the sum of the variances for each coupling group, since simulations are independent from one coupling group to another. We denote by \(\mathbb{V}(X)\) the variance of any square-integrable random variable \(X\). \[\mathbb{V}(\widehat{\mu}(\beta)) =\sum_{k=1}^{K}\mathbb{V}\Big{(}\big{(}\beta^{(k)}\big{)}^{\intercal} \,\widehat{E}^{(k)}\big{[}Z^{(k)}\big{]}\Big{)} \tag{16}\] \[=\sum_{k=1}^{K}\big{(}\beta^{(k)}\big{)}^{\intercal}\,\mathrm{Cov }\Big{(}\widehat{E}^{(k)}\big{[}Z^{(k)}\big{]},\widehat{E}^{(k)}\big{[}Z^{(k)} \big{]}\Big{)}\,\beta^{(k)}\] (17) \[=\sum_{k=1}^{K}\frac{1}{m^{(k)}}\,\big{(}\beta^{(k)}\big{)}^{ \intercal}\,\mathrm{Cov}\big{(}Z^{(k)},Z^{(k)}\big{)}\,\beta^{(k)}\] (18) \[=\sum_{k=1}^{K}\frac{1}{m^{(k)}}\,\big{(}\beta^{(k)}\big{)}^{ \intercal}\,C^{(k)}\beta^{(k)}\] (19) \[=\beta^{\intercal}\Sigma\beta\] (20) \[\text{with}\,\,\,\Sigma:=\mathrm{Diag}_{k=1}^{K}\bigg{(}\frac{1} {m^{(k)}}C^{(k)}\bigg{)}\,. \tag{21}\] We used the independence of the \(f_{\ell}\big{(}X^{(k,i)}\big{)}\) for different \(i\) to go from (17) to (18), and used the Diag operator to denote a block-diagonal matrix. Equation (19) is equivalent to equation (2.8) in SU21. Convexity of the variance\(\Sigma\) is a block-diagonal matrix with covariance matrices on the diagonal. As a result, it is positive semi-definite and \(\mathbb{V}(\widehat{\mu}(\beta))\) is a convex (quadratic) function of \(\beta\). Note that as will be discussed hereafter, the covariance matrices \(C^{(k)}\) are actually positive definite, and the variance is a strictly convex function of \(\beta\). Constrained minimization problemThe best unbiased estimator is then given by the minimizer of the variance under the unbiasedness constraint. \[\beta^{\star}=\operatorname*{arg\,min}_{\beta\text{ s.t. }g(\beta)=0}\frac{1}{2}\, \mathbb{V}\big{(}\widehat{\mu}(\beta)\big{)}. 
\tag{22}\] Unconstrained minimization problemThe assumptions on the coupling groups \(S^{(k)}\) ensure that the \(L\) constraints are linearly independent. A vector \(\lambda\in\mathbb{R}^{L}\) of Lagrange multipliers can be used to solve the minimization problem. From the convexity of the quadratic problem, the solutions of (22) are the solutions of \[\beta^{\star},\lambda^{\star} =\arg\min_{\beta,\lambda}\mathcal{L}(\beta,\lambda), \tag{23}\] \[\text{with }\mathcal{L}(\beta,\lambda) =\frac{1}{2}\,\mathbb{V}(\widehat{\mu}(\beta))-\lambda^{\intercal}g (\beta). \tag{24}\] In particular, the gradient of the Lagrangian, \[\nabla_{\beta}\mathcal{L} =\beta^{\intercal}\operatorname{Diag}_{k=1}^{K}\!\left(\frac{1}{ m^{(k)}}C^{(k)}\right)-\lambda^{\intercal}\left(P^{(1)}\ \ \cdots\ \ P^{(K)}\right), \tag{25}\] \[\nabla_{\lambda}\mathcal{L} =\beta^{\intercal}\left(P^{(1)}\ \ \cdots\ \ P^{(K)}\right)^{ \intercal}-\alpha^{\intercal}, \tag{26}\] should vanish. The associated linear system is \[\begin{pmatrix}\frac{1}{m^{(1)}}C^{(1)}&&&-R^{(1)}\\ &\ddots&&\vdots\\ &&\frac{1}{m^{(k)}}C^{(K)}&-R^{(K)}\\ \hline P^{(1)}\ \ \cdots\ \ P^{(K)}&0_{L}\end{pmatrix}\begin{pmatrix}\beta^{ \star}\\ \beta^{\star}\\ \lambda^{\star}\end{pmatrix}=\begin{pmatrix}0\\ \vdots\\ 0\\ \alpha\end{pmatrix} \tag{27}\] To simplify the notations, we drop the stars and write the system as \[\Sigma\beta-P^{\intercal}\lambda =0, \tag{28}\] \[P\beta =\alpha. \tag{29}\] This system can be solved by substitution: \[\text{From (\ref{eq:1}):} \beta =\Sigma^{-1}P^{\intercal}\lambda \tag{30}\] \[\text{Inserting (\ref{eq:2}) in (\ref{eq:2}):} P\Sigma^{-1}P^{\intercal}\lambda =\alpha\] (31) \[\iff\lambda =\left(P\Sigma^{-1}P^{\intercal}\right)^{-1}\alpha\] (32) \[\text{Inserting (\ref{eq:2}) in (\ref{eq:2}):} \beta =\Sigma^{-1}P^{\intercal}\left(P\Sigma^{-1}P^{\intercal}\right)^ {-1}\alpha. \tag{33}\] Invertibility of the matricesWe assumed the invertibility of \(\Sigma\) and \(P\Sigma^{-1}P^{\intercal}\). The invertibility of \(\Sigma\) follows from the invertibility of each covariance matrix \(C^{(k)}\). Suppose \(\Sigma\) is singular. Then, there exists a \(k\) such that \(C^{(k)}\) is singular. Then there exists a vector \(\gamma\in\mathbb{R}^{p^{(k)}}\setminus\{0\}\) such that \(\gamma^{\intercal}C^{(k)}\gamma=\mathbb{V}(\gamma^{\intercal}Z^{(k)})=0\), _i.e._\(\sum_{\ell\in S^{(k)}}\gamma_{\ell}Z_{\ell}\) is actually deterministic. One random variable \(Z_{\ell}\) can thus be expressed as an affine function of the others. It brings no new information to the problem, and can be removed from the estimator. The invertibility of \(P\Sigma^{-1}P^{\intercal}\) follows from the invertibility of \(\Sigma^{-1}\) and from \(P\) being a full-rank linear map from \(\mathbb{R}^{p}\) to the lower-dimensional space \(\mathbb{R}^{L}\). MLBLUE weightsThe optimal choice of \(\beta\) is thus given by equation (33): \[\beta=\begin{pmatrix}\beta^{(1)}\\ \vdots\\ \beta^{(K)}\end{pmatrix}=\begin{pmatrix}m_{1}\left(C^{(1)}\right)^{-1}&&\\ &\ddots&\\ &&m^{(k)}\left(C^{(K)}\right)^{-1}\end{pmatrix}\begin{pmatrix}R^{(1)}\\ \vdots\\ R^{K}\end{pmatrix}\phi^{-1}\alpha, \tag{34}\] where \[\phi:=P\Sigma^{-1}P^{\intercal}=\sum_{k=1}^{K}m^{(k)}P^{(k)}\left(C^{(k)} \right)^{-1}R^{(k)}. \tag{35}\] For a given group \(k\), we retrieve equation (2.7) of SU21, namely \[\beta^{(k)}=m^{(k)}\left(C^{(k)}\right)^{-1}R^{(k)}\left(\sum_{k^{\prime}=1}^{ K}m^{(k^{\prime})}P^{(k^{\prime})}\left(C^{(k^{\prime})}\right)^{-1}R^{(k^{ \prime})}\right)^{-1}\alpha. 
\tag{36}\] Is it the MLBLUE?The estimators of the form (8) only describe a specific subset of all possible linear estimators. The more general class of linear estimators would be \[\mu^{\text{ML}}=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\sum_{i=1}^{m^{(k)}}\beta_ {\ell}^{(k,i)}f_{\ell}\big{(}X^{(k,i)}\big{)}\,. \tag{37}\] Estimators of the form (37) can be related to Eq. (8) by replacing \(\beta_{\ell}^{(k,i)}\) with \(\beta_{\ell}^{(k)}/m^{(k)}\). In other words, Eq. (8) assumes that at the optimum, the \(\beta_{\ell}^{(k,i)}\) weights should not depend on the sample index \(i\). This independence is a very intuitive result, that can be derived from the interchangeability of the samples \(i\) and the strict convexity of the variance of (37) as a function of the weights. Sample allocationInserting (36) into (19) and simplifying gives the expression of the minimum variance reachable with a given sample allocation \(m\) (equivalent to equation 2.12 in SU20): \[\mathbb{V}(\mu^{\text{ML}}(m))=\alpha^{\intercal}\phi(m)^{-1}\alpha. \tag{38}\] The optimal choice for the sample allocation \(m=(m^{(1)},\ldots,m^{(K)})\) is given by minimizing this variance under a computational cost constraint, which can be done numerically. Model selection and sample allocation problemAlternatively, the model selection and sample allocation problem (MOSAP) can be solved through a semidefinite programming problem, as shown by Croci et al. (2023), in the typical case where \(\alpha=e_{L}\): \[\min_{m\geq 0,\,t}\,t\quad\text{s.t.}\quad\begin{cases}\begin{pmatrix}\phi(m)&e_{L }\\ e_{L}^{\intercal}&t\end{pmatrix}\text{ is positive semi-definite,}\\ m^{\intercal}c\leq b,\\ m^{\intercal}h\geq 1.\end{cases} \tag{39}\] The second constraint imposes a computational budget \(b\), where \(c=(c^{(1)},\ldots,c^{(K)})^{\intercal}\) describes the computational cost of generating a coupled sample in each coupling group. The third constraint, where \(h\) denotes the vector of \(\{0;1\}^{K}\) such that \(h^{(k)}=1\) if and only if \(L\in S^{(k)}\), enforces that the high-fidelity model be sampled at least once. Note that this extends easily to the case of any \(\alpha\) by replacing \(e_{L}\) with \(\alpha\) and by adding inequality constraints to ensure that all model with non-zero coefficients in \(\alpha\) are sampled at least once. Also note that this approach to the MOSAP can handle sample sizes of zero, which means the problem can be directly optimized on the set of all possible coupling groups. This would not be directly possible with the sample allocation strategy proposed previously. A similar version for a target accuracy with no constraint on the computational budget is also proposed in Croci et al. (2023). Estimation of the expectation of a random vector This section extends the MLBLUE methodology to the estimation of the expectation of a random vector. Section (4.1) introduces new notations to deal with vector quantities. We then propose various multilevel estimators of increasing complexity (sections 4.2 - 4.4), before introducing the general multidimensional MLBLUE in section 4.5. ### Notations for the multidimensional case We consider the case where \(\mathbf{Z}_{\ell}:\,\Omega\to\mathbb{R}^{n}\) are random vectors. All vectors or matrices related to multidimensional quantities are written in bold. 
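As a brief aside on the sample-allocation discussion above, the criterion can be made concrete with a small brute-force sketch (toy costs and covariances, all assumed for illustration; this is not the semidefinite-programming approach of Croci et al., 2023): the variance \(\alpha^{\intercal}\phi(m)^{-1}\alpha\) of Eq. (38) is evaluated on a grid of allocations \(m\) satisfying the budget constraint \(m^{\intercal}c\leq b\), and the best allocation found is kept.

```python
# Brute-force illustration of the sample allocation problem (toy setting):
# minimize V(m) = alpha^T phi(m)^-1 alpha subject to m^T c <= b.
import numpy as np
from itertools import product

L, K = 3, 3
S = [[1], [1, 2], [2, 3]]                      # MLMC-like coupling groups
alpha = np.array([0.0, 0.0, 1.0])
C = np.array([[1.0, 0.9, 0.8],                 # assumed inter-level covariance
              [0.9, 1.1, 1.0],
              [0.8, 1.0, 1.2]])
R = [np.eye(L)[[l - 1 for l in g], :] for g in S]
Ck = [Rk @ C @ Rk.T for Rk in R]
c = np.array([1.0, 3.0, 10.0])                 # cost of one coupled sample per group
b = 200.0                                      # computational budget

def variance(m):
    phi = sum(m[k] * R[k].T @ np.linalg.solve(Ck[k], R[k]) for k in range(K))
    return alpha @ np.linalg.solve(phi, alpha)

# Every candidate samples each group at least 5 times, so the group containing
# the high-fidelity level is always sampled, as required.
best = min((variance(np.array(m, float)), m)
           for m in product(range(5, 201, 5), repeat=K)
           if np.dot(m, c) <= b)
print(best)   # (smallest variance found, corresponding allocation m)
```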
We are interested in the expectation \(\boldsymbol{\mu}:=\mathbb{E}\big{[}\mathbf{Z}\big{]}\), where \(\mathbf{Z}:=(\mathbf{Z}_{1}\ldots\mathbf{Z}_{L})^{\intercal}\) is the random matrix with values in \(\mathbb{R}^{L\times n}\), obtained by stacking the random vectors from all fidelity levels so that the first dimension of \(\mathbf{Z}\) indexes the fidelity levels. We want to estimate a linear combination of the expectations on different levels, \(\boldsymbol{\mu}_{\alpha}:=\sum_{\ell=1}^{L}\alpha_{\ell}\,\mathbb{E}\big{[} \mathbf{Z}_{\ell}\big{]}=\boldsymbol{\mu}^{\intercal}\alpha\), where \(\alpha\) is a non-zero vector of \(\mathbb{R}^{L}\). In practice, we are often interested in the estimation for one given fidelity level, typically the highest, in which case \(\alpha=e_{L}\). The selection and extension operators from the scalar case naturally extend to the vector case: \[\mathbf{Z}^{(k)}:=R^{(k)}\mathbf{Z}\in\mathbb{R}^{p^{(k)}\times n},\quad \forall\,1\leq k\leq K. \tag{40}\] Vector equivalent of the varianceUnder a no-bias constraint, the variance of a random variable is its (scalar) mean squared error (MSE). In the multidimensional case, minimizing the MSE of an estimator \(\widehat{\boldsymbol{\mu}}\) using the 2-norm is equivalent to minimizing the sum of scalar MSEs for all vector elements. \[\mathbb{E}\big{[}\|\widehat{\boldsymbol{\mu}}-\boldsymbol{\mu} \|_{2}^{2}\big{]} =\sum_{i=1}^{n}\mathbb{E}\big{[}\big{(}\widehat{\mu}_{i}-\mu_{i} \big{)}^{2}\big{]} \tag{41}\] \[=\sum_{i=1}^{n}\mathbb{V}(\widehat{\mu}_{i})\] (42) \[=\operatorname{Tr}\operatorname{Cov}\big{(}\widehat{\boldsymbol{ \mu}},\widehat{\boldsymbol{\mu}}\big{)} \tag{43}\] where \(\operatorname{Tr}\) is the trace operator, and where we used the unbiasedness of the estimator \(\widehat{\boldsymbol{\mu}}\). It can be seen from here that the natural generalization of the variance for a random vector is the trace of the covariance matrix, _i.e._ the sum of the variances of each vector element. Each of the following sections introduces a class of estimators \(\widehat{\boldsymbol{\mu}}(\boldsymbol{\beta})\) where \(\boldsymbol{\beta}\) is a set of weights. Similarly to the scalar case, the optimal value \(\boldsymbol{\beta}^{\star}\) of these weights is found by minimization of the trace of the covariance matrix of the estimator, under a no-bias constraint: \[\boldsymbol{\beta}^{\star}=\operatorname*{arg\,min}_{\boldsymbol{\beta}\text{ s.t. }\mathbb{E}[\widehat{\boldsymbol{\mu}}(\boldsymbol{\beta})]=0}\operatorname{ Tr\,Cov}\bigl{(}\widehat{\boldsymbol{\mu}}(\boldsymbol{\beta}),\widehat{ \boldsymbol{\mu}}(\boldsymbol{\beta})\bigr{)}. \tag{44}\] Hereafter, the term _variance_ is sometimes used to refer to the trace of the covariance matrix. ### Scalar weights The simplest possibility is to use scalar weights \(\beta_{\ell}^{(k)}\), common to all random vector elements. In this case, \(\boldsymbol{\mu}_{\alpha}\) is estimated through the linear combination of Monte Carlo estimators \[\widehat{\boldsymbol{\mu}}_{\alpha}^{\text{sw}} =\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\beta_{\ell}^{(k)}\widehat{ E}^{(k)}\bigl{[}\mathbf{Z}_{\ell}\bigr{]} \tag{45}\] \[=\sum_{k=1}^{K}\widehat{E}^{(k)}\bigl{[}\mathbf{Z}^{(k)}\bigr{]} ^{\intercal}\beta^{(k)}. \tag{46}\] The BLUE has no reason to lie in this class of estimators, which is just a subset of the linear estimators we will introduce in sections 4.3 to 4.5. 
Finding the optimal scalar weights \(\beta_{\ell}^{(k)}\) is still of interest though, as these scalar-weigthed estimators are both simple and numerically affordable. #### Unbiasedness constraint \[\mathbb{E}\big{[}\widehat{\mathbf{\mu}}_{\alpha}^{\rm sw}\big{]} =\mathbf{\mu}_{\alpha} \tag{47}\] \[\Longleftrightarrow \mathbb{E}\Bigg{[}\sum_{k=1}^{K}\widehat{E}^{(k)}\big{[}\mathbf{ Z}^{(k)}\big{]}^{\intercal}\beta^{(k)}\Bigg{]} =\mathbb{E}[\mathbf{Z}^{\intercal}]\alpha\] (48) \[\Longleftrightarrow \sum_{k=1}^{K}\mathbb{E}\big{[}\mathbf{Z}^{(k)}\big{]}^{\intercal} \beta^{(k)} =\mathbb{E}[\mathbf{Z}^{\intercal}]\alpha\] (49) \[\Longleftrightarrow \sum_{k=1}^{K}\mathbb{E}\big{[}R^{(k)}\mathbf{Z}\big{]}^{\intercal }\beta^{(k)} =\mathbb{E}[\mathbf{Z}]^{\intercal}\alpha\] (50) \[\Longleftrightarrow \mathbb{E}\big{[}\mathbf{Z}\big{]}^{\intercal}\sum_{k=1}^{K}P^{( k)}\beta^{(k)} =\mathbb{E}[\mathbf{Z}]^{\intercal}\alpha\] (51) \[\Longleftrightarrow \sum_{k=1}^{K}P^{(k)}\beta^{(k)} =\alpha\] (52) \[\Longleftrightarrow g(\beta) =0 \tag{53}\] where \(g\) was defined in Eq. (15). This is exactly the unbiasedness constraint (14) from the scalar expectation case. Variance of the estimatorHerafter, the covariance operator is occasionally extended to random matrices using the definition \[\mathrm{Cov}\big{(}\mathbf{A},\mathbf{B}\big{)}:=\mathbb{E}\big{[}\big{(} \mathbf{A}-\mathbb{E}[\mathbf{A}]\big{)}\big{(}\mathbf{B}-\mathbb{E}[\mathbf{ B}]\big{)}^{\intercal}\big{]} \tag{54}\] where \(\mathbf{A}\) and \(\mathbf{B}\) are any random matrices with same number of columns. This definition coincides with the usual one in the case of column vectors. \[\operatorname{Tr}\operatorname{Cov}\left(\widehat{\mathbf{\mu}}_{\alpha}^{ \operatorname{sw}},\widehat{\mathbf{\mu}}_{\alpha}^{\operatorname{sw}}\right) =\sum_{k=1}^{K}\frac{1}{m^{(k)}}\operatorname{Tr}\operatorname{ Cov}\bigl{(}\mathbf{Z}^{(k)\intercal}\beta^{(k)},\,\mathbf{Z}^{(k)\intercal}\beta^{(k)} \bigr{)} \tag{55}\] \[=\sum_{k=1}^{K}\frac{1}{m^{(k)}}\operatorname{Tr}\operatorname{ Cov}\bigl{(}\beta^{(k)\intercal}\mathbf{Z}^{(k)},\,\beta^{(k)\intercal}\mathbf{Z}^{(k)} \bigr{)}\] (56) \[=\sum_{k=1}^{K}\frac{1}{m^{(k)}}\operatorname{Tr}\bigl{\{}\beta^{ (k)\intercal}\operatorname{Cov}\bigl{(}\mathbf{Z}^{(k)},\mathbf{Z}^{(k)} \bigr{)}\beta^{(k)}\bigr{\}}\] (57) \[=\sum_{k=1}^{K}\frac{1}{m^{(k)}}\beta^{(k)\intercal}\operatorname {Cov}\bigl{(}\mathbf{Z}^{(k)},\mathbf{Z}^{(k)}\bigr{)}\beta^{(k)}\] (58) \[=\sum_{k=1}^{K}\frac{1}{m^{(k)}}\beta^{(k)\intercal}\overline{C ^{(k)}}\beta^{(k)} \tag{59}\] with \(\overline{C^{(k)}}:=\operatorname{Cov}\bigl{(}\mathbf{Z}^{(k)},\mathbf{Z}^{( k)}\bigr{)}\). Note that the variance of the estimator has the same expression as in the scalar expectation case (cf. Eq. 19), just replacing \(C^{(k)}\) with \(\overline{C^{(k)}}\). Interpretation as averaged covariance matricesThe relation \(C^{(k)}=R^{(k)}CP^{(k)}\) that defines the \(C^{(k)}\) as submatrices of the inter-level covariance matrix is still valid: \[\overline{C^{(k)}} =R^{(k)}\overline{C}P^{(k)} \tag{60}\] \[\text{with }\overline{C} :=\operatorname{Cov}\bigl{(}\mathbf{Z},\mathbf{Z}\bigr{)} \tag{61}\] The matrix \(\overline{C}\) is just the average of the inter-level covariance matrices that would be estimated for each element of the random vectors. \[\overline{C} =\sum_{i=1}^{n}C^{(k,i)} \tag{62}\] \[\text{where }\ C^{(k,i)} :=\operatorname{Cov}((\mathbf{Z}_{:,i}),(\mathbf{Z}_{:,i})) \tag{63}\] and where \(\mathbf{Z}_{:,i}\) is the \(i\)-th column of \(\mathbf{Z}\). 
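The averaged covariance \(\overline{C}\) of Eqs. (60)-(63) can be accumulated cheaply from an ensemble of coupled vector samples, as in the following sketch (synthetic data, assumed only for illustration). Note that the matrix written \(C^{(k,i)}\) in Eq. (63) does not actually depend on \(k\): it is the \(L\times L\) covariance of the \(i\)-th column \(\mathbf{Z}_{:,i}\), and its group-\(k\) restriction \(R^{(k)}\overline{C}P^{(k)}\) of Eq. (60) is what enters the weights. Whether \(\overline{C}\) is stored as the sum or the average over \(i\) is immaterial for the weights, since the scaling cancels between \(\big{(}\overline{C^{(k)}}\big{)}^{-1}\) and \(\phi^{-1}\) in the weight formula.

```python
# Sketch: averaged inter-level covariance from an ensemble of coupled samples.
# Zs has shape (Nens, L, n): Nens coupled realizations of the L x n random matrix Z.
import numpy as np

rng = np.random.default_rng(0)
Nens, L, n = 200, 3, 50
base = rng.standard_normal((Nens, n))
# Synthetic coupled levels: the same field plus level-dependent noise (assumption)
Zs = np.stack([base + 0.1 * (L - l) * rng.standard_normal((Nens, n))
               for l in range(1, L + 1)], axis=1)

Zc = Zs - Zs.mean(axis=0)                            # center over the ensemble
# Sum over the n columns of the per-column L x L sample covariances, Eq. (62)
C_bar = np.einsum('sli,smi->lm', Zc, Zc) / (Nens - 1)

S = [[1], [1, 2], [2, 3]]
R = [np.eye(L)[[l - 1 for l in g], :] for g in S]
C_bar_k = [Rk @ C_bar @ Rk.T for Rk in R]            # group blocks, Eq. (60)
print(C_bar.shape, [Ck.shape for Ck in C_bar_k])
```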
Optimal weights and MOSAPFrom previous paragraphs, it is clear that all results of the scalar expectation case now apply, just replacing the inter-level covariances with averaged inter-level covariances. For instance, Eq.(36) from the scalar case becomes \[\beta^{(k)} =m^{(k)}\left(\overline{C^{(k)}}\right)^{-1}R^{(k)}\phi^{-1}\alpha, \tag{64}\] \[\text{with }\phi =\sum_{k=1}^{K}m^{(k)}P^{(k)}\left(\overline{C^{(k)}}\right)^{-1}R^ {(k)} \tag{65}\] and the MOSAP can be solved by updating \(\phi(m)\) in Eq. (39). Application in large dimensionsThis approach is tractable for large dimension systems. The most expensive steps have a computational cost that is linear in \(n\): * For each of the \(n\) grid points (or vector elements more generally), estimate an \(L\times L\) covariance matrix. Then, take the average of these \(n\) matrices. This averaging step should help reducing the sampling noise in the estimation of the covariance matrices. * As in the scalar expectation case, inverting a few matrices of size at most \(L\times L\). ### Field weights A more refined approximation consists in allowing for different \(\beta\) weights depending on the element consider. This has been introduced by Croci et al. (2023) as a "multi-output" MLBLUE. We don't propose anything new in this section compared to their work. When the random vector to be estimated can be considered as a discretized random field, the weights \(\beta\) of the multi-output MLBLUE are varying in space. Here, we call this estimator the \(\beta\)-field multilevel estimator: \[\widehat{\mathbf{\mu}}_{\alpha}^{\text{fw}}=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}} \text{Diag}\Big{(}\mathbf{\beta}_{\ell}^{(k)}\Big{)}\widehat{E}^{(k)}\big{[} \mathbf{Z}_{\ell}\big{]} \tag{66}\] where the \(\mathbf{\beta}_{\ell}^{(k)}\) are now vectors of \(\mathbb{R}^{n}\) and \(\text{Diag}\Big{(}\mathbf{\beta}_{\ell}^{(k)}\Big{)}\) is the diagonal matrix with diagonal \(\mathbf{\beta}_{\ell}^{(k)}\). This class of multilevel estimators includes the previous one (scalar weights). This implies that the optimal \(\beta\)-field estimator is at least as good as the estimator with scalar weights. Nonetheless, it is still a specific class of estimators, that has no reason to include the BLUE. The minimization problem in this case consists in minimizing a sum of independent problems similar to (22). As a consequence, for a given \(i\in\{1,\ldots,n\}\), the optimal weights \(\beta_{\ell,i}^{(k)}\) are the MLBLUE weights for the scalar random variables \((Z_{\ell,i})_{\ell=1}^{L}\). Mean value of space-dependent weightsNote that the mean values of the \(\beta_{\ell}^{(k)}\) vectors differ from the optimal scalar weights (64), due to the non-linearity introduced by the inverse in (64). Variance of the estimatorThe variance of the \(\beta\)-field ML estimator is given by \[\mathbb{V}\left(\widehat{\boldsymbol{\mu}}_{\alpha}^{\text{fw}}( m)\right)=\sum_{i=1}^{n}\alpha^{\intercal}\phi_{i}(m)^{-1}\alpha \tag{67}\] \[\text{where }\phi_{i}(m):=\sum_{k=1}^{K}m^{(k)}P^{(k)}\left(C^{(k,i )}\right)^{-1}R^{(k)}. \tag{68}\] MosapCroci et al. (2023) proposes a solution that minimizes the maximum variance (their equation 21). 
They also propose variants to minimize, for instance, the total variance: \[\min_{m\geq 0,\,\mathbf{t}\in\mathbb{R}^{n}}\|\mathbf{t}\|_{1}\quad \text{s.t.}\ \left\{\begin{array}{cc}\begin{pmatrix}\phi_{i}(m)&\alpha\\ \alpha^{\intercal}&t_{i}\end{pmatrix}\text{ is positive semi-definite },\forall i=1, \ldots,n\\ &m^{\intercal}c\leq b\\ &&m^{\intercal}h\geq 1\end{array}\right. \tag{69}\] Application in large dimensionsFinding the optimal field weights does not require much more computations than the scalar weight approach. Both require the estimation of the \(C^{(k,i)}\) matrices. The \(\beta\)-field approach requires to store \(L(L+1)/2\) vectors of size \(n\) to store the local covariance matrices. More importantly, it also requires \(pn\) inversions of \(K\) matrices of size less than \(\max_{k}p^{(k)}\) and of one matrix of size \(L\). ### Field weights with change of basis RationaleAnother still larger (but still suboptimal) class of ML estimator can be defined by applying the estimator in a possibly different space. This approach is motivated by the intuition that a better variance reduction could be obtained with the field-weight estimator if the data could be linearly transformed into a space where the elements are distributed according to the strength of their interlevel coupling. For instance, if the low-fidelity models originate from coarse grid simulations, a scale decomposition could provide such a transform, with loose coupling on fine scales and strong coupling on large scales. In this case, there would be a set of \(\beta_{\ell}^{(k)}\) weights for each wave number instead of each vector element. This class of estimator is especially interesting for two reasons: 1. It is the largest class of estimators that are still computationally tractable in high dimension, i.e. with a computational cost in \(\mathcal{O}(n\log(n))\) with respect to the vector size \(n\). 2. It can be interpreted as an optimal post-smoothing, similar to what is done in multigrid methods. The derivations here may be a bit cumbersome though, so the impatient reader is invited to go directly to section 4.5 on the general multivariate MLBLUE. DefinitionLet \(\mathbf{W}\in\mathbb{R}^{n\times n}\) be an orthonormal matrix, so that \(\mathbf{W}^{\intercal}\mathbf{W}=\mathbf{W}\mathbf{W}^{\intercal}=\mathbf{I}_ {n}\). We introduce the class of \(W\)-field multilevel estimators: \[\widehat{\boldsymbol{\mu}}_{\alpha}^{\mathrm{W}} :=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\mathbf{W}\operatorname{ Diag}\Bigl{(}\boldsymbol{\beta}_{\ell}^{(k)}\Bigr{)}\mathbf{W}^{\intercal} \widehat{E}^{(k)}\bigl{[}\mathbf{Z}_{\ell}\bigr{]} \tag{70}\] \[=\mathbf{W}\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\operatorname{ Diag}\bigl{(}\boldsymbol{\beta}_{\ell}^{(k)}\bigr{)}\widehat{E}^{(k)}\left[ \mathbf{W}^{\intercal}\mathbf{Z}_{\ell}\right]. \tag{71}\] **Relation to the \(\beta\)-field estimators** _Intuitively, the optimal \(W\)-field estimator should be obtained by applying the optimal \(\beta\)-field estimator to the transformed samples \(\mathbf{W}^{\intercal}\mathbf{Z}_{\ell}\), and transforming the result back to the physical space. This intuitive result is now properly derived._ To simplify the notations, let us denote by \(\widehat{\boldsymbol{\mu}}^{\mathrm{fw}}(\mathbf{Z},\boldsymbol{\beta})\) a \(\beta\)-field estimator based on samples from \(\mathbf{Z}\) and on (possibly non-optimal) field weights \(\boldsymbol{\beta}\). 
\(\widehat{\boldsymbol{\mu}}^{\mathrm{fw}}(\mathbf{Z},\boldsymbol{\beta})\) is a possibly biased and possibly sub-optimal estimator for \(\boldsymbol{\mu}_{\alpha}\). Similarly, we denote by \(\widehat{\boldsymbol{\mu}}^{\mathrm{W}}(\mathbf{Z},\boldsymbol{\beta}, \mathbf{W})\) the \(W\)-field estimator based on samples \(\mathbf{Z}\) and using field weights \(\boldsymbol{\beta}\). Then equation (71) can be rewritten as \[\widehat{\boldsymbol{\mu}}^{\mathrm{W}}(\mathbf{Z},\boldsymbol{\beta}, \mathbf{W}):=\mathbf{W}\widehat{\boldsymbol{\mu}}^{\mathrm{fw}}(\mathbf{Z} \mathbf{W},\boldsymbol{\beta}) \tag{72}\] UnbiasednessWe first show that unbiased \(W\)-field estimators are necessarily associated to unbiased \(\beta\)-field estimator. Let \((P1)\) be the proposition "\(\widehat{\boldsymbol{\mu}}^{\text{W}}(\mathbf{Z},\boldsymbol{\beta},\mathbf{W})\) is an unbiased estimator of \(\mathbb{E}[\mathbf{Z}]^{\intercal}\alpha\)". \[(P1) \iff\quad\mathbb{E}\left[\widehat{\boldsymbol{\mu}}^{\text{W}}( \mathbf{Z},\boldsymbol{\beta},\mathbf{W})\right]=\mathbb{E}[\mathbf{Z}]^{ \intercal}\alpha \tag{73}\] \[\iff\quad\mathbb{E}\left[\mathbf{W}\widehat{\boldsymbol{\mu}}^{ \text{fw}}(\mathbf{Z}\mathbf{W},\boldsymbol{\beta})\right]=\mathbb{E}[ \mathbf{Z}]^{\intercal}\alpha\] (74) \[\iff\quad\mathbf{W}\,\mathbb{E}\left[\widehat{\boldsymbol{\mu}}^{ \text{fw}}(\mathbf{Z}\mathbf{W},\boldsymbol{\beta})\right]=\mathbb{E}[ \mathbf{Z}]^{\intercal}\alpha\] (75) \[\iff\quad\quad\mathbb{E}\left[\widehat{\boldsymbol{\mu}}^{ \text{fw}}(\mathbf{Z}\mathbf{W},\boldsymbol{\beta})\right]=\mathbf{W}^{ \intercal}\,\mathbb{E}[\mathbf{Z}]^{\intercal}\alpha\quad\text{using }\mathbf{W}^{ \intercal}\mathbf{W}=\mathbf{W}\mathbf{W}^{\intercal}=\mathbf{I}_{n}\] (76) \[\iff\quad\quad\mathbb{E}\left[\widehat{\boldsymbol{\mu}}^{ \text{fw}}(\mathbf{Z}\mathbf{W},\boldsymbol{\beta})\right]=\mathbb{E}[ \mathbf{Z}\mathbf{W}]^{\intercal}\alpha\] (77) \[\iff\quad\quad\quad\quad(P2) \tag{78}\] with \((P2)\): "\(\widehat{\boldsymbol{\mu}}^{\text{fw}}(\mathbf{Z}\mathbf{W},\boldsymbol{\beta})\) is an unbiased estimator of \(\mathbb{E}\big{[}\mathbf{Z}\mathbf{W}\big{]}^{\intercal}\alpha\)". Minimal varianceWe then show that the mean square errors of two associated estimators are equal. \[\text{MSE}\big{(}\widehat{\boldsymbol{\mu}}^{\text{W}}(\mathbf{Z},\boldsymbol{\beta},\mathbf{W}),\mathbb{E}[\mathbf{Z}]^{\intercal}\alpha\big{)} =\mathbb{E}\Big{[}\big{\|}\widehat{\boldsymbol{\mu}}^{\text{W}}( \mathbf{Z},\boldsymbol{\beta},\mathbf{W})-\mathbb{E}[\mathbf{Z}]^{\intercal} \alpha\big{\|}^{2}\Big{]} \tag{79}\] \[=\mathbb{E}\Big{[}\big{\|}\mathbf{W}\widehat{\boldsymbol{\mu}}^{ \text{fw}}(\mathbf{Z}\mathbf{W},\boldsymbol{\beta})-\mathbb{E}[\mathbf{Z}]^{ \intercal}\alpha\big{\|}^{2}\Big{]}\] (80) \[=\mathbb{E}\Big{[}\big{\|}\mathbf{W}^{\intercal}\mathbf{W} \widehat{\boldsymbol{\mu}}^{\text{fw}}(\mathbf{Z}\mathbf{W},\boldsymbol{\beta} )-\mathbf{W}^{\intercal}\,\mathbb{E}[\mathbf{Z}]^{\intercal}\alpha\big{\|}^{2} \Big{]}\] (81) \[=\mathbb{E}\Big{[}\big{\|}\widehat{\boldsymbol{\mu}}^{\text{fw}} (\mathbf{Z}\mathbf{W},\boldsymbol{\beta})-\mathbb{E}[\mathbf{Z}\mathbf{W}]^{ \intercal}\alpha\big{\|}^{2}\Big{]}\] (82) \[=\text{MSE}\big{(}\widehat{\boldsymbol{\mu}}^{\text{fw}}(\mathbf{ Z}\mathbf{W},\boldsymbol{\beta}),\mathbb{E}[\mathbf{Z}\mathbf{W}]^{\intercal}\alpha\big{)} \tag{83}\] where we used \(\mathbf{W}\mathbf{W}^{\intercal}=\mathbf{I}_{n}\) to obtain (81) and \(\mathbf{W}^{\intercal}\mathbf{W}=\mathbf{I}_{n}\) to obtain (82). 
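In practice, relation (72) says that one simply applies the \(\beta\)-field machinery to the transformed samples and rotates the result back with \(\mathbf{W}\). A minimal numpy sketch follows (synthetic coupled ensembles and an arbitrary orthonormal \(\mathbf{W}\), both assumptions made only for illustration; in a real application the covariances should be estimated independently of the samples being combined, as noted in section 2).

```python
# Sketch of the W-field estimator, Eqs. (70)-(72): one scalar MLBLUE per
# transformed component of W^T Z_l, then map the result back with W.
import numpy as np

rng = np.random.default_rng(1)
L, K, n = 3, 3, 32
S = [[1], [1, 2], [2, 3]]
m = [60, 25, 8]
alpha = np.array([0.0, 0.0, 1.0])
W, _ = np.linalg.qr(rng.standard_normal((n, n)))     # some orthonormal basis
R = [np.eye(L)[[l - 1 for l in g], :] for g in S]

def draw(mk, levels):
    """Synthetic coupled ensemble for one group: shape (mk, p_k, n)."""
    base = rng.standard_normal((mk, n))
    return np.stack([base + 0.1 * (L - l) * rng.standard_normal((mk, n))
                     for l in levels], axis=1)

ens = [draw(m[k], S[k]) for k in range(K)]

mu_hat_w = np.zeros(n)
for j in range(n):                                   # one scalar MLBLUE per basis vector
    cols = [e @ W[:, j] for e in ens]                # transformed samples, shape (m_k, p_k)
    # Group covariances estimated here from the same samples for brevity; this
    # introduces the bias discussed after Eq. (9).
    Ck = [np.atleast_2d(np.cov(c, rowvar=False)) for c in cols]
    phi = sum(m[k] * R[k].T @ np.linalg.solve(Ck[k], R[k]) for k in range(K))
    w = np.linalg.solve(phi, alpha)
    mu_hat_w[j] = sum(m[k] * np.linalg.solve(Ck[k], R[k] @ w) @ cols[k].mean(axis=0)
                      for k in range(K))
mu_hat = W @ mu_hat_w                                # back to physical space, Eq. (72)
print(mu_hat.shape)
```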
Since the unbiased estimators have equal mean square errors, the best unbiased \(W\)-field estimator is associated to the best unbiased \(\beta\)-field estimator through (72). All results valid for the \(\beta\)-field estimator (optimal values, sample allocation) can thus be applied here, by replacing random vectors \(\mathbf{Z}_{\ell}\) by \(\mathbf{W}^{\intercal}\mathbf{Z}_{\ell}\). Numerical cost of applying the estimatorTo limit the number of transforms, the \(W\)-field ML estimator can be applied as \[\widehat{\boldsymbol{\mu}}^{\text{W}}_{\alpha}=\mathbf{W}\sum_{k=1}^{K}\sum_{ \ell\in S^{(k)}}\text{Diag}\left(\boldsymbol{\beta}^{(k)}_{\ell}\right) \mathbf{W}^{\intercal}\widehat{E}^{(k)}\big{[}\mathbf{Z}_{\ell}\big{]} \tag{84}\] which counts \(p\) forward transforms and one inverse transform (in the MLMC case, \(2L\) transforms. See page 7 for the definition of \(p\)). Numerical cost of estimating the optimal weightsThe estimation of the optimal weights requires the computation of element-wise covariance matrices \(C^{(k,i)}\). The simplest approach consists in estimating the \(C^{(k,i)}\) from a set of \(N\times L\) transformed and coupled realizations, with \(N\) a large sampling size. The transformation of this training ensemble into the \(W\)-space requires \(N\times L\) forward transforms. Choice of WThis estimator depends on the choice of \(\mathbf{W}\). With \(\mathbf{W}=\mathbf{I}_{n}\), we retrieve the \(\beta\)-field estimator. With a Fourier basis, we get the optimal spectral filters. Note that if \(\mathbf{W}\) is allowed to vary, the \(W\)-field estimators encompass all estimators with real valued symmetric matrices that are simultaneously diagonalizable. This is less general than the BLUE introduced in section 4.5, where the matrix weights are generally not symmetric. ### Matrix weights - the multidimensional MLBLUE The general MLBLUE for a random vector estimation has matrix weights, as mentioned in Croci et al. (2023). We follow the same approach as SU20 for the derivation of the MLBLUE in this context. A variance minimization approach is also feasible and would yield the same results (see appendix A). NotationsSome notations need to be updated for this section: * The random vectors \(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{L}\) on each levels can now have different sizes \(n_{\ell}\) possibly all different from \(n\). For instance, if the low-fidelity samples originate from simulations on coarse grids, there is no requirement to interpolate them back to the grid of the fine level \(L\). To shorten notations, we define: \[N :=\sum_{\ell=1}^{L}n_{\ell}\] (85) \[n^{(k)} :=\sum_{\ell\in S(k)}n_{\ell}\] (86) * \(\mathbf{Z}\) is now the concatenation of the \(L\) random vectors \(\mathbf{Z}_{\ell}\in\mathbb{R}^{n_{\ell}}\). \[\mathbf{Z} :=\left(\mathbf{Z}_{1}^{\intercal}\cdots\mathbf{Z}_{L}^{\intercal} \right)^{\intercal}\in\mathbb{R}^{N}\] (87) \[\boldsymbol{\mu} :=\mathbb{E}\big{[}\mathbf{Z}\big{]}\] (88) * The selection and extension operators are extended accordingly, to select only the part \(\mathbf{Z}^{(k)}\) of \(\mathbf{Z}\) which is relevant to some coupling group \(k\). 
\[\mathbf{Z}^{(k)} :=\left(\cdots\mathbf{Z}_{\ell}^{\intercal}\cdots\right)_{\ell\in S ^{(k)}}^{\intercal}\in\mathbb{R}^{n^{(k)}}\] (89) \[\mathbf{R}^{(k)}\] of size \(n^{(k)}\times N\) such that \[\mathbf{R}^{(k)}\mathbf{Z}=\mathbf{Z}^{(k)}\] (90) \[\mathbf{P}^{(k)} :=\mathbf{R}^{(k)\intercal}\] (91) * The \(\alpha\) coefficients are extended to a matrix \(\boldsymbol{\alpha}\) representing a linear map from \(\mathbf{R}^{N}\) to \(\mathbf{R}^{n}\). In the most common case, we are interested in the high-fidelity level and \(n=n_{L}\). In this case, \(\boldsymbol{\alpha}\) is the selection operator for this level. \[\boldsymbol{\alpha} :=\left(\mathbf{0}_{n\times n_{1}}\ldots\mathbf{0}_{n\times n_{L- 1}}\mathbf{I}_{n}\right)\] (93) The quantity of interest here is \(\boldsymbol{\mu}_{\alpha}:=\boldsymbol{\alpha}\boldsymbol{\mu}\in\mathbb{R}^ {n}\). Normal equationsAll samples are concatenated in a big column vector of "observations" \(\underline{\mathbf{Z}}:=\left(\left(\mathbf{Z}^{(k),i}\right)_{i=1}^{m^{(k)}} \right)_{k=1}^{K}\), where \(\mathbf{Z}^{(k),i}\in\mathbb{R}^{n^{(k)}}\) is the \(i\)-th (random) sample on coupling group \(k\). The size of \(\underline{\mathbf{Z}}\) is \(\sum_{k=1}^{K}m^{(k)}n^{(k)}\). The "observation operator" relating \(\boldsymbol{\mu}\) and \(\underline{\mathbf{Z}}\) is the column block vector \(\mathbf{H}:=\left(\left(\mathbf{R}^{(k)}\right)_{i=1}^{m^{(k)}}\right)_{k=1}^ {K}\). Then \[\underline{\mathbf{Z}} =\mathbf{H}\boldsymbol{\mu}+\boldsymbol{\epsilon} \tag{95}\] \[\text{with}\quad\boldsymbol{\epsilon} :=\underline{\mathbf{Z}}-\mathbf{H}\boldsymbol{\mu} \tag{96}\] We have the following properties about the noise \(\boldsymbol{\epsilon}\). \[\mathbb{E}[\boldsymbol{\epsilon}] =0 \tag{97}\] \[\text{Cov}(\boldsymbol{\epsilon},\boldsymbol{\epsilon}) =\text{Cov}\big{(}\underline{\mathbf{Z}},\,\underline{\mathbf{Z} }\big{)}\] (98) \[=\text{Diag}_{k=1}^{K}\Big{(}\text{Diag}_{i=1}^{m^{(k)}}\big{(} \mathbf{C}^{(k)}\big{)}\Big{)}\] (99) \[\text{with}\quad\mathbf{C}^{(k)} :=\text{Cov}\big{(}\mathbf{Z}^{(k)},\mathbf{Z}^{(k)}\big{)} \tag{100}\] Invertibility of the covariance matricesThereafter, we assume that the matrices \(\mathbf{C}^{(k)}\) are non-singular. This may not be the case in some cases, and some care should be taken in defining the low-fidelity samples. For instance, the assumption is not valid if some low fidelity samples on level \(\ell\) are coarse grid simulations linearly interpolated to a finer grid. In this situation, the extrapolated elements in \(\mathbf{Z}_{\ell}\) can be expressed as a linear combination of the other elements it was extrapolated from, so that \(\mathbf{C}^{(k)}\) has a non zero kernel and is singular. To meet the non-singularity assumption in this case, the low fidelity simulations on the coarse grid should be used as is, without any interpolation. It is the role of the MLBLUE to compute the optimal linear transform that best maps samples from this low-fidelity level to the finest grid. The matrix weights here act not only as optimal multilevel weights, but also as interpolators and smoothers. Associated generalized least-squares problem \[\min_{\boldsymbol{\mu}\in\mathbf{R}^{N}}\|\underline{\mathbf{Z}}-\mathbf{H} \boldsymbol{\mu}\|_{\text{Cov}(\boldsymbol{\epsilon},\boldsymbol{\epsilon})^{- 1}}^{2}\] (101) which could be decomposed as a sum over the coupling group \(k\), as a consequence of the block-diagonal structure of \(\text{Cov}(\boldsymbol{\epsilon},\boldsymbol{\epsilon})^{-1}\). 
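To fix ideas, here is a toy numpy sketch (assumed sizes and synthetic covariances, for illustration only) of how the quantities of (95)-(101) can be assembled and the generalized least-squares problem solved through its normal equations; the closed-form expression of the solution is given next.

```python
# Toy assembly of the generalized least-squares system (95)-(101); all sizes
# and covariances below are assumptions.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)
n_l = [2, 3, 4]                        # level-dependent state sizes n_1, ..., n_L
N = sum(n_l)
S = [[1], [1, 2], [2, 3]]              # coupling groups
m = [30, 12, 5]                        # samples per group

offsets = np.cumsum([0] + n_l)
def R_group(group):
    """Selection operator R^(k): keeps the state blocks of the levels in the group."""
    rows = sum(n_l[l - 1] for l in group)
    R = np.zeros((rows, N))
    r = 0
    for l in group:
        R[r:r + n_l[l - 1], offsets[l - 1]:offsets[l]] = np.eye(n_l[l - 1])
        r += n_l[l - 1]
    return R
R = [R_group(g) for g in S]

A = rng.standard_normal((N, N))
C = A @ A.T + N * np.eye(N)            # synthetic positive-definite Cov(Z, Z)
Ck = [Rk @ C @ Rk.T for Rk in R]       # blocks C^(k)

# "Observation operator" H and block-diagonal noise covariance Cov(eps, eps)
H = np.vstack([np.vstack([R[k]] * m[k]) for k in range(3)])
Cov_eps = block_diag(*[block_diag(*([Ck[k]] * m[k])) for k in range(3)])

z = rng.standard_normal(H.shape[0])    # stands in for the stacked coupled samples
lhs = H.T @ np.linalg.solve(Cov_eps, H)
rhs = H.T @ np.linalg.solve(Cov_eps, z)
mu_blue = np.linalg.solve(lhs, rhs)    # solution of the GLS problem (101)
print(mu_blue.shape)                   # -> (9,)
```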
BLUE for the expectation of a random vectorThe solution is given by \[\boldsymbol{\widehat{\mu}}^{\text{mat},\star} =\left(\mathbf{H}^{\intercal}\,\text{Cov}(\boldsymbol{\epsilon}, \boldsymbol{\epsilon})^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^{\intercal}\, \text{Cov}(\boldsymbol{\epsilon},\boldsymbol{\epsilon})^{-1}\underline{ \mathbf{Z}} \tag{102}\] \[=\boldsymbol{\phi}^{-1}\mathbf{y}\] (103) \[\text{with}\quad\boldsymbol{\phi} :=\sum_{k=1}^{K}m^{(k)}\mathbf{P}^{(k)}\left(\mathbf{C}^{(k)} \right)^{-1}\mathbf{R}^{(k)}\] (104) \[\text{and}\quad\mathbf{y} :=\sum_{k=1}^{K}m^{(k)}\mathbf{P}^{(k)}\left(\mathbf{C}^{(k)} \right)^{-1}\widehat{E}^{(k)}\Big{[}\mathbf{Z}^{(k)}\Big{]} \tag{105}\] Partial estimationThe BLUE for \(\boldsymbol{\mu}_{\alpha}\) is actually \(\boldsymbol{\alpha}\boldsymbol{\widehat{\mu}}^{\text{mat},\star}\). Matrix weightsThe previous equations can be expanded to evidence the multilevel structure of the estimator. \[\boldsymbol{\alpha}\boldsymbol{\widehat{\mu}}^{\text{mat},\star} =\sum_{k=1}^{K}\boldsymbol{\beta}^{(k)}\widehat{E}^{(k)}\left[ \mathbf{Z}^{(k)}\right] \tag{106}\] \[=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\boldsymbol{\beta}_{\ell}^{ (k)}\widehat{E}^{(k)}\left[\mathbf{Z}_{\ell}\right]\] (107) \[\boldsymbol{\beta}^{(k)} =m^{(k)}\boldsymbol{\alpha}\boldsymbol{\phi}^{-1}\mathbf{P}^{(k) }\left(\mathbf{C}^{(k)}\right)^{-1}\,\in\mathbb{R}^{n\times n^{(k)}}\] (108) \[=\left(\cdots\,\,\boldsymbol{\beta}_{\ell}^{(k)}\cdots\right)_{ \ell\in S^{(k)}} \tag{109}\] Variance and sample allocationThe approach of Croci et al. (2023) to solve the MOSAP via semidefinite programming is not directly applicable to this case. Some work on extending it would be of interest. For a given selection of groups of levels, the variance of the ML matrix-weighted estimator with optimal matrix weights is given by \[\operatorname{Tr}\operatorname{Cov}\left(\widehat{\boldsymbol{\mu}}^{\operatorname {mat},\star},\widehat{\boldsymbol{\mu}}^{\operatorname{mat},\star}\right)= \operatorname{Tr}\left(\boldsymbol{\alpha}\boldsymbol{\phi}^{-1}(m) \boldsymbol{\alpha}^{\intercal}\right) \tag{110}\] which can be used to find the optimal sample allocation \(m\) for this specific choice of levels and coupling structure. With suboptimal matrix weights \(\boldsymbol{\beta}^{(k)}\), the variance is given by \[\operatorname{Tr}\operatorname{Cov}\left(\widehat{\boldsymbol{\mu}}^{ \operatorname{mat}},\widehat{\boldsymbol{\mu}}^{\operatorname{mat}}\right)= \sum_{k=1}^{K}m^{(k)}\operatorname{Tr}\left(\boldsymbol{\beta}^{(k)}\mathbf{ C}^{(k)}\left(\boldsymbol{\beta}^{(k)}\right)^{\intercal}\right) \tag{111}\] RemarkIn the case of a weighted MLMC, the no-bias condition implies that last matrix weight \(\boldsymbol{\beta}_{L,K}\) is \(\mathbf{I}_{n}\). Computational costThis approach is untractable as is for large-dimension systems. Indeed, estimating the matrix weights requires: * Estimating the covariance matrix \(\mathbf{C}:=\operatorname{Cov}(\mathbf{Z},\mathbf{Z})\) of size \(N\) by \(N\), which is a \(\mathcal{O}\!\left(n^{2}\right)\) task. The covariance matrices \(\mathbf{C}^{(k)}\) can then be extracted as \(\mathbf{C}^{(k)}=\mathbf{R}^{(k)}\mathbf{C}\mathbf{P}^{(k)}\). Some of these \(\mathbf{C}^{(k)}\) are invertible only if \(\mathbf{C}\) is estimated with at least \(\max_{k}n^{(k)}+1\) samples, which may be very expensive. A possible workaround to the singularity of \(\mathbf{C}\) could be using some regularization technique such as covariance localization or ridging. 
* Inverting the \(\boldsymbol{\phi}\) matrix, of size \(N\), and each \(\mathbf{C}^{(k)}\) matrix of size \(n^{(k)}\), or solving linear systems of associated sizes if the weights are directly applied to some MC estimate. Making it tractable in large dimensionsThere are various ways forward to simplify this multilevel estimator to make it tractable in large dimension. If the simulations are attached to a physical space with some notion of distance, the interpretation of the matrix weights as interpolators and smoothers suggests that the \(\boldsymbol{\beta}^{(k)}_{\ell}\) matrices could be imposed as sparse. This means that the estimation of an element on the fine grid should only involve low-fidelity elements that are within some maximum distance. The matrix weights could be further simplified by imposing some invariance by translation, or some periodicity based on the underlying grids. Alternatively, the simplification could be done the other way round. The inter-level and inter-element covariance matrices \(\mathbf{C}^{(k)}\) could be computed only on pairs of points within a given distance. Beyond this distance, the covariances would be assumed to be zero. This could be an interesting avenue for future research. Relation with other multilevel estimatorsThe multilevel estimators introduced in sections 4.2 to 4.4 can be considered as special cases of the general matrix-weight estimator. * The estimator with scalar weights restricts itself to \(\boldsymbol{\beta}_{\ell}^{(k)}\) matrices of the form \(\beta_{\ell}^{(k)}\mathbf{I}_{n}\). * The \(\beta\)-field estimator restricts itself to \(\boldsymbol{\beta}_{\ell}^{(k)}\) matrices of the form \(\text{Diag}\Big{(}\boldsymbol{\beta}_{\ell}^{(k)}\Big{)}\) (where \(\boldsymbol{\beta}_{\ell}^{(k)}\) is now a vector). * The \(W\)-field estimator restricts itself to \(\boldsymbol{\beta}_{\ell}^{(k)}\) matrices that are all diagonalizable in the basis \(\mathbf{W}\). * The tractable approaches mentioned in the previous paragraph would restrict to sparse matrix weights, or kernel-based matrices.

## Estimation of a scalar covariance

Let \(X=(X_{\ell})_{\ell=1}^{L}\) and \(Y=(Y_{\ell})_{\ell=1}^{L}\) be random vectors gathering the scalar random variables \(X_{1},\ldots,X_{L}\) and \(Y_{1},\ldots,Y_{L}\). We group the covariances of \(X_{\ell}\) and \(Y_{\ell}\) for each \(\ell\) in the vector \(c:=(\operatorname{Cov}(X_{\ell},Y_{\ell}))_{\ell=1}^{L}\). We are interested in a linear combination of the elements of \(c\), of the kind \(\alpha^{\intercal}c\), with \(\alpha\in\mathbb{R}^{L}\setminus\{0\}\). In practice, \(\alpha=(0\ldots 0\;1)\in\mathbb{R}^{L}\). Given some coupling structure \(\left(S^{(k)},m^{(k)}\right)_{k=1}^{K}\), we are looking for the best unbiased estimator of \(\alpha^{\intercal}c\) of the form \[\widehat{C}_{\alpha}^{\text{ML}}=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\beta_{\ell}^{(k)}\widehat{C}^{(k)}(X_{\ell},Y_{\ell}) \tag{112}\] where \(\widehat{C}^{(k)}(X_{\ell},Y_{\ell})\) is the sample covariance estimator of \(X_{\ell}\) and \(Y_{\ell}\) based on \(m^{(k)}\) coupled samples. \[\widehat{C}^{(k)}(X_{\ell},Y_{\ell}):=\frac{m^{(k)}}{m^{(k)}-1}\widehat{E}^{(k)}\left(\left(X_{\ell}-\widehat{E}^{(k)}(X_{\ell})\right)\left(Y_{\ell}-\widehat{E}^{(k)}(Y_{\ell})\right)\right) \tag{113}\] This is an unbiased estimator for \(\operatorname{Cov}(X_{\ell},Y_{\ell})\). A linear estimator?The multilevel MC estimators of the form (112) depend quadratically on the samples, and as such are not truly linear estimators.
A multilevel best _linear_ unbiased estimator for the covariance does not exist in general. However, estimators (112) are still linear in the MC estimators they combine, which is why we use the name MLBLUE for covariance estimators as well. The regression approach could be used here to derive the MLBLUE, but the MC estimators on each coupling group should be used instead of the samples. Relation to the expectation caseThis problem is very similar to the problem faced in section 3 for the estimation of the expectation. The two important points needed to extend the results are the unbiasedness of the MC estimators involved, and an expression of their covariances. * \(\operatorname{Diag}\left(\widehat{C}^{(k)}\left(X^{(k)},Y^{(k)}\right)\right)\) is an unbiased estimator for \(R^{(k)}c\), similar to the expectation case where \(\widehat{E}^{(k)}\left[Z^{(k)}\right]\) is an unbiased estimator of \(R^{(k)}\mu\). * However, the covariance of MC estimators for the covariances does not simplify as well as the MC estimators for the mean. Let \(\mathbb{C}^{(k)}\), of size \(p^{(k)}\) by \(p^{(k)}\), denote the covariance of the MC estimator on group \(k\) with \(m^{(k)}\) samples. It differs from the covariance \(C^{(k)}\) of the random vectors \(Z^{(k)}\). For the estimation of the mean, we have \[\mathbb{C}^{(k)} =\operatorname{Cov}\left(\widehat{E}^{(k)}\left[Z^{(k)}\right],\widehat{E}^{(k)}\left[Z^{(k)}\right]\right) \tag{114}\] \[=1/m^{(k)}C^{(k)} \tag{115}\] For the estimation of the covariance, we have \[\mathbb{C}^{(k)}:=\operatorname{Cov}\left(\operatorname{Diag}\left(\widehat{C}^{(k)}\left(X^{(k)},Y^{(k)}\right)\right),\operatorname{Diag}\left(\widehat{C}^{(k)}\left(X^{(k)},Y^{(k)}\right)\right)\right) \tag{116}\] which is expanded later in this section. MLBLUE results expressed as a function of the \(\mathbb{C}^{(k)}\)The solution to the constrained minimization problem is given by \[\beta^{(k)} =\left(\mathbb{C}^{(k)}\right)^{-1}R^{(k)}\phi^{-1}\alpha \tag{117}\] \[\text{with }\phi:=\sum_{k=1}^{K}P^{(k)}\big{(}\mathbb{C}^{(k)}\big{)}^{-1}R^{(k)}. \tag{118}\] The variance of the MLBLUE is \(\alpha^{\intercal}\phi^{-1}\alpha\). Note that this paragraph is valid not only for the estimation of the expectation and of the covariance, but also for the estimation of any scalar statistic which admits unbiased Monte Carlo estimators. Estimating the \(\mathbb{C}^{(k)}\)Covariances \(\mathbb{C}^{(k)}\) can be expressed as a function of the centered moments (up to fourth order) of \(R^{(k)}X\) and \(R^{(k)}Y\), isolating the dependence on the sample size \(m^{(k)}\). Let's drop the \(k\) for the sake of clarity, and focus on the covariance matrix \(\mathbb{C}\) associated to \(X\) and \(Y\).
It can be shown (equation 9 of M15a) that for \(\ell,\ell^{\prime}\in\{1,\ldots,L\}\), element \(\ell,\ell^{\prime}\) of \(\mathbb{C}\) is \[\mathbb{C}_{\ell,\ell^{\prime}}=\frac{\mathbb{M}^{4}\left[X_{\ell},X_{\ell^{\prime}},Y_{\ell},Y_{\ell^{\prime}}\right]}{m^{(k)}}-\frac{\operatorname{Cov}\left(X_{\ell},Y_{\ell}\right)\operatorname{Cov}\left(X_{\ell^{\prime}},Y_{\ell^{\prime}}\right)}{m^{(k)}}\\ +\frac{\operatorname{Cov}\left(X_{\ell},Y_{\ell^{\prime}}\right)\operatorname{Cov}\left(Y_{\ell},X_{\ell^{\prime}}\right)+\operatorname{Cov}\left(X_{\ell},X_{\ell^{\prime}}\right)\operatorname{Cov}\left(Y_{\ell},Y_{\ell^{\prime}}\right)}{m^{(k)}(m^{(k)}-1)} \tag{119}\] with \(\mathbb{M}^{4}\left[X_{1},X_{2},X_{3},X_{4}\right]:=\mathbb{E}\left[\Pi_{i=1}^{4}\left(X_{i}-\mathbb{E}\left[X_{i}\right]\right)\right]\). In the case of variance estimation, when \(X=Y\), equation (119) simplifies to \[\mathbb{C}_{\ell,\ell^{\prime}}=\frac{\mathbb{M}^{4}\left[X_{\ell},X_{\ell^{\prime}}\right]}{m^{(k)}}-\frac{\mathbb{V}\left(X_{\ell}\right)\mathbb{V}\left(X_{\ell^{\prime}}\right)}{m^{(k)}}+\frac{2\operatorname{Cov}\left(X_{\ell},X_{\ell^{\prime}}\right)^{2}}{m^{(k)}(m^{(k)}-1)} \tag{120}\] with \(\mathbb{M}^{4}\left[X_{1},X_{2}\right]:=\mathbb{M}^{4}\left[X_{1},X_{1},X_{2},X_{2}\right]\) (the definition of \(\mathbb{M}^{4}\) depends on the number of arguments it is given). MosapThe semidefinite programming approach to the MOSAP does not extend naturally here, as the matrix \(\phi(m)\) no longer depends linearly on the sample sizes \(m\). Note this will be the case for all multilevel estimators of the covariance introduced in this note, including multilevel estimators of covariance matrices. This could be circumvented by minimizing an upper bound of the variance that linearly depends on the inverses of the sample sizes, as done by Mycek and De Lozzo (2019). Another solution that does not introduce approximations is given by falling back to the constrained minimization of the non-linear variance \(\alpha^{\intercal}\phi(m)^{-1}\alpha\), or by solving the non-linear semidefinite problem of Croci et al. (2023). Computational cost * In practice, all the involved matrices are of size at most \(L\) (_i.e._ 2 or 3). * Ensemble estimates of fourth-order moments are noisier than ensemble covariances. As a consequence, the ensemble sizes used to estimate \(\mathbb{C}\) should be larger here than in section 3 to reach a similar robustness. * \(\mathbb{C}\) is a symmetric matrix, so only \(L(L+1)/2\) elements really need to be estimated. This number can be further reduced by noting that covariances between levels that are not coupled (\(\ell,\ell^{\prime}\) so that \(\forall k,\,\ell\in S^{(k)}\Rightarrow\ell^{\prime}\notin S^{(k)}\)) need not be computed.

## Estimation of a covariance matrix

### The problem How do the results of the previous section extend to the multidimensional case? The general MLBLUE for the estimation of a covariance matrix involves fourth-order tensors linearly combining the entries of several covariance matrix estimators into one matrix. We don't derive it here, as it is computationally prohibitively expensive in high-dimensional applications. Instead, we focus on multilevel estimators of the covariance matrix that are both simpler and computationally affordable. The \(\mathbf{X}_{\ell}\) are now random vectors from \(\Omega\) to \(\mathbb{R}^{n}\). Note that they are required to have the same dimension, which would not be the case with the general MLBLUE.
They are concatenated in the random vector \(\mathbf{X}:=(\mathbf{X}_{1}^{\intercal},\ldots,\mathbf{X}_{L}^{\intercal})^{\intercal}\). We are interested in the covariance matrix \(\operatorname{Cov}(\mathbf{X},\mathbf{X})\) of size \(nL\). Partial estimationMore specifically, we are often interested in estimating only some blocks of \(\operatorname{Cov}(\mathbf{X},\mathbf{X})\), for instance the last block \(\operatorname{Cov}(\mathbf{X}_{L},\mathbf{X}_{L})\). To extend the formalism to this kind of situation, we focus on estimating \(\mathbf{C}^{\alpha}:=\sum_{\ell=1}^{L}\alpha_{\ell}\operatorname{Cov}(\mathbf{ X}_{\ell},\mathbf{X}_{\ell})\) for some non-zero vector \(\alpha\in\mathbb{R}^{L}\). Generalized variance for a matrix estimatorWe choose the Frobenius norm to measure mean square errors for matrix estimators, as in M15a. The choice of the Frobenius norm is associated to the following bias-variance decomposition: \[\operatorname{MSE}\left(\widehat{\mathbf{B}},\mathbf{B}\right)=\sum_{i=1}^{n }\sum_{j=1}^{n}\mathbb{V}(\widehat{B}_{ij})+\left\|\mathbb{E}\left[\widehat{ \mathbf{B}}-\mathbf{B}\right]\right\|_{\mathrm{F}}^{2} \tag{121}\] Hereafter, we overload the variance operator and define \[\mathbb{V}(\widehat{\mathbf{B}}):=\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{V}( \widehat{B}_{ij}) \tag{122}\] for any matrix-valued random variable \(\widehat{\mathbf{B}}\). ### Scalar weights We first consider the case of scalar weights, one for each MC estimator, common to all matrix elements in the estimator. #### Class of estimators \[\widehat{\mathbf{C}}^{\mathrm{ML}}=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\beta_{ \ell}^{(k)}\widehat{C}^{(k)}\left(\mathbf{X}_{\ell},\mathbf{X}_{\ell}\right) \tag{123}\] where \(\beta_{\ell}^{(k)}\) is a scalar. Optimal scalar weightsThe optimal weights can be retrieved very similarly to what is done for multidimensional expectation in section 4.2. The same relation (53) guarantees the unbiasedness of the estimator. It can be used as a constraint to minimize the variance of the estimator, given by a relationship similar to equation (59). \[\mathbb{V}\left(\widehat{\mathbf{C}}^{\mathrm{ML}}\right)=\sum_{k=1}^{K}\left( \beta^{(k)}\right)^{\intercal}\left(\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{C}^{(k,ij)}\right)\beta^{(k)} \tag{124}\] where \(\mathbb{C}^{(k,ij)}\) is the covariance matrix of size \(p^{(k)}\times p^{(k)}\) for Monte Carlo covariance estimators \(\widehat{C}^{(k)}(X_{\ell,i},X_{\ell,j})\), \(\ell\in S^{(k)}\). The optimal weights are then given by equation (117), replacing \(\mathbb{C}^{(k)}\) with \(\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{C}^{(k,ij)}\), or equivalently with \(1/n^{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{C}^{(k,ij)}\). Estimating the average \(\mathbb{C}^{(k,ij)}\)The expression of the average inter-level covariance matrix of MC estimators directly follows from the expression in the scalar case (section 5). 
\[\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{C}^{(k,ij)}_{\ell,\ell^{\prime}}=\sum_{i=1}^{n}\sum_{j=1}^{n}\left(\frac{\mathbb{M}^{4}\left[X_{\ell,i},X_{\ell^{\prime},i},X_{\ell,j},X_{\ell^{\prime},j}\right]}{m^{(k)}}\right.\\ -\frac{\operatorname{Cov}\left(X_{\ell,i},X_{\ell,j}\right)\operatorname{Cov}\left(X_{\ell^{\prime},i},X_{\ell^{\prime},j}\right)}{m^{(k)}}\\ \left.+\frac{\operatorname{Cov}\left(X_{\ell,i},X_{\ell^{\prime},j}\right)\operatorname{Cov}\left(X_{\ell,j},X_{\ell^{\prime},i}\right)+\operatorname{Cov}\left(X_{\ell,i},X_{\ell^{\prime},i}\right)\operatorname{Cov}\left(X_{\ell,j},X_{\ell^{\prime},j}\right)}{m^{(k)}(m^{(k)}-1)}\right) \tag{125}\] MosapAs explained in section 5, the semidefinite programming approach to the MOSAP does not extend naturally here. Computational costEstimating \(\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{C}^{(k,ij)}\) directly is not tractable in high dimension, as the number of operations is quadratic in the grid size \(n\). Fortunately, it is possible to reduce the cost of this estimation from \(\mathcal{O}\left(n_{e}n^{2}\right)\) to \(\mathcal{O}\left(n_{e}^{2}n\right)\), where \(n_{e}\) is the size of the ensemble used to estimate centred statistics in the pre-processing step (see appendix B). Sample allocation for MLMCThe expression of the variance of a general estimator given here can be used to estimate the optimal sample allocation for the MLMC estimator of a covariance matrix. This is detailed in appendix C. ### Matrix field weights Class of estimators \[\widehat{\mathbf{C}}^{\mathrm{ML}}=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\mathbf{\beta}_{\ell}^{(k)}\circ\widehat{C}^{(k)}\left(\mathbf{X}_{\ell},\mathbf{X}_{\ell}\right) \tag{126}\] where \(\mathbf{\beta}_{\ell}^{(k)}\) is an \(n\times n\) matrix and \(\circ\) denotes the Schur (element-wise) product. Unbiasedness, Variance, Optimal weightsFrom the definition of the generalized variance derived from the Frobenius norm, the problem can be decomposed as independent estimations of \(n^{2}\) scalar covariances. See section 5 for more details. Numerical costThis is unaffordable as such, since it requires the estimation of \(n^{2}\) scalar coefficients. Relation to covariance localizationThe use of a Schur product applied to a covariance matrix in equation (126) is reminiscent of covariance localization in data assimilation (e.g. Lorenc, 2003). This regularization technique consists in element-wise multiplication of an ensemble covariance matrix with a parametrized distance-dependent correlation matrix \(\mathbf{L}\): \[\mathbf{L}\circ\widehat{C}^{(k)}\left(\mathbf{X}_{\ell},\mathbf{X}_{\ell}\right) \tag{127}\] Localization differs from what has been considered so far, as it yields a _biased_ estimator of the covariance matrix wherever \(\mathbf{L}\) is not 1. Optimizing the variance without the bias constraint makes no sense, but optimizing the MSE does, since it includes a bias and a variance contribution. This is one of the ideas presented in M15b, which we extend to the multilevel case in the next section. ### Optimal localization We propose here an extension of the optimal localization theory of M15a and M15b to multilevel covariance estimators. We first derive the results along the lines of the previous sections. An alternative derivation, based on the approach of M15a and M15b (see also Menetrier, 2020), is proposed in appendix D. Though both approaches yield the same practical results, the associated assumptions and interpretations are slightly different.
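Before turning to the optimal multilevel case, a minimal single-level sketch of the Schur-product localization (127) may be useful; it assumes a simple Gaussian shape for the correlation matrix \(\mathbf{L}\) (any parametrized distance-dependent correlation could be substituted), and all names are illustrative.

```python
import numpy as np

def localized_covariance(ensemble, coords, length_scale):
    """Schur-product localization of a raw ensemble covariance, as in (127).

    ensemble     : array of shape (n_e, n), one member per row
    coords       : array of shape (n,), physical coordinates of the n grid points
    length_scale : localization length of the (assumed) Gaussian correlation matrix L
    """
    C_hat = np.cov(ensemble, rowvar=False)              # raw (unbiased) ensemble covariance
    dist = np.abs(coords[:, None] - coords[None, :])    # pairwise distances between grid points
    L = np.exp(-0.5 * (dist / length_scale) ** 2)       # distance-dependent correlation matrix
    return L * C_hat                                     # element-wise (Schur) product: biased but less noisy
```

The multilevel question addressed next is how to choose such element-wise weights optimally when several fidelity levels are combined.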
Contrary to previous sections, we no longer minimize the variance under a no-bias constraint. The localization makes the covariance estimator biased, so we instead minimize the mean squared error of the localized multilevel estimator. #### 6.4.1 General case To simplify the notation, we write \(\mathbf{B}_{\ell}:=\text{Cov}(\mathbf{X}_{\ell},\mathbf{X}_{\ell})\) and \(\widetilde{\mathbf{B}}_{\ell}^{(k)}:=\widehat{C}^{(k)}\left(\mathbf{X}_{\ell},\mathbf{X}_{\ell}\right)\). From the unbiasedness of the Monte Carlo covariance estimator, we have \(\mathbb{E}\left[\widetilde{\mathbf{B}}_{\ell}^{(k)}\right]=\mathbf{B}_{\ell}\). Class of estimators\[\widehat{\mathbf{B}}^{\text{ML}}=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\mathbf{L}_{\ell}^{(k)}\circ\widetilde{\mathbf{B}}_{\ell}^{(k)}\] (128) where \(\mathbf{L}_{\ell}^{(k)}\) is an \(n\times n\) matrix, without any imposed structure at this stage. Minimizing the MSEWe want to minimize the mean square error of the localized covariance estimator: \[\text{MSE}\left(\widehat{\mathbf{B}}^{\text{ML}},\mathbf{B}_{L}\right)=\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{E}\left[\left(\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}L_{\ell,ij}^{(k)}\widetilde{B}_{\ell,ij}^{(k)}-B_{L,ij}\right)^{2}\right] \tag{129}\] The function to minimize is the sum of \(n^{2}\) independent cost functions. We now focus on one independent sub-problem \(i,j\). To ease the notation, we drop the \(ij\) indices and denote \(L_{\ell}^{(k)}:=L_{\ell,ij}^{(k)}\), \(\widetilde{B}_{\ell}^{(k)}:=\widetilde{B}_{\ell,ij}^{(k)}\) and \(B_{\ell}:=B_{\ell,ij}\). The problem reads as \[\min_{L_{\ell}^{(k)},\,1\leq k\leq K,\,\ell\in S^{(k)}}\mathbb{E}\left[\left(\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}L_{\ell}^{(k)}\widetilde{B}_{\ell}^{(k)}-B_{L}\right)^{2}\right] \tag{130}\] To further simplify the notation, we define stacked vectors containing the information from all levels in all coupling groups: \[\underline{L} :=\left(\left(L_{\ell}^{(k)}\right)_{\ell\in S^{(k)}}\right)_{k=1}^{K}\in\mathbb{R}^{p} \tag{131}\] \[\underline{\widetilde{B}} :=\left(\left(\widetilde{B}_{\ell}^{(k)}\right)_{\ell\in S^{(k)}}\right)_{k=1}^{K}\in\mathbb{R}^{p}\] (132) \[\underline{B} :=\left(\left(B_{\ell}\right)_{\ell\in S^{(k)}}\right)_{k=1}^{K}\in\mathbb{R}^{p} \tag{133}\] with \(p=\sum_{k=1}^{K}p^{(k)}\). Equation (130) becomes \[\min_{\underline{L}}\,\mathbb{E}\left[\left(\underline{L}^{\intercal}\underline{\widetilde{B}}-B_{L}\right)^{2}\right] =\min_{\underline{L}}\,\mathbb{E}\left[\left(\underline{L}^{\intercal}\underline{\widetilde{B}}\right)\left(\underline{\widetilde{B}}^{\intercal}\underline{L}\right)-2B_{L}\left(\underline{\widetilde{B}}^{\intercal}\underline{L}\right)+B_{L}^{2}\right]\] \[=\min_{\underline{L}}\,\underline{L}^{\intercal}\,\mathbb{E}\left[\underline{\widetilde{B}}\underline{\widetilde{B}}^{\intercal}\right]\underline{L}-2B_{L}\underline{B}^{\intercal}\underline{L} \tag{134}\] Setting the gradient with respect to \(\underline{L}\) to zero yields the optimality criterion \[\mathbb{E}\left[\underline{\widetilde{B}}\underline{\widetilde{B}}^{\intercal}\right]\underline{L}=B_{L}\underline{B} \tag{135}\] Invertibility of \(\mathbb{E}\left[\underline{\widetilde{B}}\underline{\widetilde{B}}^{\intercal}\right]\)The uniqueness of the optimal localization is not guaranteed, in particular if the matrix \(\mathbb{E}\left[\underline{\widetilde{B}}\underline{\widetilde{B}}^{\intercal}\right]\) on the left-hand side is not invertible.
This should not be a problem though, as cases of non-invertibility correspond to flat directions of the MSE, that is, to variations of \(\underline{L}\) which do not impact the MSE. In practice, the localization for such pairs \(i,j\) can be decided from the neighbouring pairs of points. For instance, parametric correlation matrices can be fit to these raw optimal localizations, to ensure the positive semidefiniteness of the localization matrices. Sample or asymptotic quantitiesThe left-hand-side matrix could be expressed as a function of the sample size and asymptotic quantities. We could then use a large independent ensemble to estimate these asymptotic quantities. In practice, we don't have such large ensembles. A possible workaround is to express everything in terms of expectations of sample quantities, and to exploit the structure of the localization matrix. #### 6.4.2 Imposing some structure on the localization matrix In practice, the localization matrix has some predefined structure, both for ease of estimation and ease of use. The state space \(\mathbb{R}^{n}\) is attached to some physical space, three-dimensional for instance. Assuming for instance horizontal homogeneity and isotropy, we can define an equivalence relation between pairs \((i,j)\), associating pairs that should have the same localization. The set of pairs of points is thus partitioned into equivalence classes. \[L^{(k)}_{\ell,ij}=L^{(k)}_{\ell,\mathcal{C}}\text{ where }\mathcal{C}=\left[(i,j)\right]\text{ is the equivalence class of }(i,j) \tag{136}\] In this context, equation (129) reads as \[\text{MSE}\left(\widehat{\mathbf{B}}^{\text{ML}},\mathbf{B}_{L}\right)=\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{E}\left[\left(\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}L_{\ell,[(i,j)]}^{(k)}\widetilde{B}_{\ell,ij}^{(k)}-B_{L,ij}\right)^{2}\right] \tag{137}\] where the localization \(L_{\ell,ij}^{(k)}\) has been replaced by \(L_{\ell,[(i,j)]}^{(k)}\). This minimization problem is associated to independent problems for each class \(\mathcal{C}=[(i,j)]\). \[\min_{L_{\ell,\,\mathcal{C}}^{(k)},\,1\leq k\leq K,\,\ell\in S^{(k)}}\;\sum_{(i,j)\in\mathcal{C}}\mathbb{E}\left[\left(\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}L_{\ell,\mathcal{C}}^{(k)}\widetilde{B}_{\ell,ij}^{(k)}-B_{L,ij}\right)^{2}\right] \tag{138}\] As previously done, we define stacked vectors of \(\mathbb{R}^{p}\): \(\underline{L}\), \(\underline{\widetilde{B}}_{ij}\) and \(\underline{B}_{ij}\) for \((i,j)\in\mathcal{C}\).
Expanding the MSE gives \[\min_{\underline{L}}\;\sum_{(i,j)\in\mathcal{C}}\mathbb{E}\left[\left(\underline{L}^{\intercal}\underline{\widetilde{B}}_{ij}-B_{L,ij}\right)^{2}\right]\\ =\min_{\underline{L}}\;\underline{L}^{\intercal}\left(\sum_{(i,j)\in\mathcal{C}}\mathbb{E}\left[\underline{\widetilde{B}}_{ij}\underline{\widetilde{B}}_{ij}^{\intercal}\right]\right)\underline{L}-2\left(\sum_{(i,j)\in\mathcal{C}}B_{L,ij}\underline{B}_{ij}^{\intercal}\right)\underline{L} \tag{139}\] which is associated to the optimality criterion \[\left(\frac{1}{|\mathcal{C}|}\sum_{(i,j)\in\mathcal{C}}\mathbb{E}\left[\underline{\widetilde{B}}_{ij}\underline{\widetilde{B}}_{ij}^{\intercal}\right]\right)\underline{L}=\left(\frac{1}{|\mathcal{C}|}\sum_{(i,j)\in\mathcal{C}}B_{L,ij}\underline{B}_{ij}\right) \tag{140}\] **Relating asymptotic quantities to expectations of sampled moments** Using results from Menetrier (2020, equation 4.2), we can relate asymptotic quantities to expectations of sample quantities: \[B_{L,ij}B_{\ell,ij}=P_{1}(m)\,\mathbb{E}\left[\widehat{C}_{m}\left(X_{\ell,i},X_{\ell,j}\right)\widehat{C}_{m}\left(X_{L,i},X_{L,j}\right)\right]\\ +P_{2}(m)\left(\mathbb{E}\left[\widehat{C}_{m}\left(X_{\ell,i},X_{L,i}\right)\widehat{C}_{m}\left(X_{\ell,j},X_{L,j}\right)\right]\right.\\ +\left.\mathbb{E}\left[\widehat{C}_{m}\left(X_{\ell,i},X_{L,j}\right)\widehat{C}_{m}\left(X_{\ell,j},X_{L,i}\right)\right]\right)\\ +P_{3}(m)\,\mathbb{E}\left[\widehat{M}_{m}^{4}\left(X_{\ell,i},X_{\ell,j},X_{L,i},X_{L,j}\right)\right] \tag{141}\] where \(m\geq 4\) is the sample size, \(\widehat{C}_{m}\) is the unbiased Monte Carlo estimator of covariance for \(m\) samples, \(\widehat{M}_{m}^{4}\) is the Monte Carlo estimator for fourth-order centered moments, and \(P_{1}(m)\), \(P_{2}(m)\) and \(P_{3}(m)\) are rational functions of \(m\). \[\widehat{C}_{m}(X,Y) =\frac{1}{m-1}\sum_{i=1}^{m}\widetilde{X}^{(i)}\widetilde{Y}^{(i)} \tag{142}\] \[\widehat{M}_{m}^{4}(X,Y,Z,T) =\frac{1}{m}\sum_{i=1}^{m}\widetilde{X}^{(i)}\widetilde{Y}^{(i)}\widetilde{Z}^{(i)}\widetilde{T}^{(i)}\] (143) \[\text{with}\quad\widetilde{X}^{(i)} =X^{(i)}-\widehat{E}_{m}(X)\quad\text{etc.}\] (144) \[P_{1}(m) =\frac{(m-1)(m^{2}-3m+1)}{m(m-2)(m-3)}\] (145) \[P_{2}(m) =\frac{m-1}{m(m-2)(m-3)}\] (146) \[P_{3}(m) =-\frac{m}{(m-2)(m-3)} \tag{147}\] In practice, this relation requires estimating covariances between the fine level \(L\) and any level \(\ell\). This can be done using a coupling group involving all fidelity levels. Ideally, the estimation should be performed independently of the covariance estimation. Ergodicity assumptionUsing expression (141) for the right-hand side of (140), we still have to evaluate expectations of sample quantities. Estimating these expectations by single-sample Monte Carlo estimators is a possible solution. The resulting high sampling noise is actually cancelled out by the averaging over the whole equivalence class \(\mathcal{C}\). Assuming that the noise is averaged out implicitly supposes that space averages are equivalent to sampling a common process (ergodicity assumption). The approach of Benjamin Menetrier presented in appendix D gives more insight on what this process may be. Numerical costIn practice, the operator \(\sum_{(i,j)\in\mathcal{C}}\mathbb{E}\left[\cdot\right]\) could also be replaced by an average over a random subset of \(\mathcal{C}\) as is done in M15a. DrawbacksIn a single-level setting, Menetrier and Auligne (2015) showed how the localization and hybridization weights could be jointly optimized.
The approach was very appealing, but later trials showed it was less robust than a two-step optimization, where localization would be first optimized before optimizing the hybridization weights (Menetrier, personal communication). A similar behavior may occur here, where having too many degrees of freedom in the optimization problem may result in non-robust solutions.

## Retrieving the MLBLUE for the multidimensional expectation through constrained minimization of the variance

We retrieve here the results of section 4.5 with the variance minimization approach used in section 3. The notations of section 4.5 are used here. In particular, all random quantities are column vectors. We define the class of matrix-weighted ML estimators for \(\mathbf{\alpha}\mathbf{\mu}=\mathbf{\mu}_{\alpha}\) of the form \[\widehat{\mathbf{\mu}}^{\text{mat}}=\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\mathbf{\beta}_{\ell}^{(k)}\widehat{E}^{(k)}\big{[}\mathbf{Z}_{\ell}\big{]} \tag{148}\] where the \(\mathbf{\beta}_{\ell}^{(k)}\) are matrices of \(\mathbb{R}^{n\times n_{\ell}}\). The inner sum can be made implicit by joining these matrices into \(\mathbf{\beta}^{(k)}:=\big{(}\cdots\,\mathbf{\beta}_{\ell}^{(k)}\cdots\big{)}_{\ell\in S^{(k)}}\). \[\widehat{\mathbf{\mu}}^{\text{mat}}=\sum_{k=1}^{K}\mathbf{\beta}^{(k)}\widehat{E}^{(k)}\big{[}\mathbf{Z}^{(k)}\big{]} \tag{149}\] Variance of the estimatorThe variance of the matrix-weighted estimator is given by \[\operatorname{Tr}\operatorname{Cov}\!\left(\widehat{\mathbf{\mu}}^{\text{mat}},\widehat{\mathbf{\mu}}^{\text{mat}}\right) =\sum_{k=1}^{K}\frac{1}{m^{(k)}}\operatorname{Tr}\!\left(\mathbf{\beta}^{(k)}\mathbf{C}^{(k)}\left(\mathbf{\beta}^{(k)}\right)^{\intercal}\right) \tag{150}\] \[=\operatorname{Tr}\!\left(\mathbf{\beta}\operatorname{Diag}_{k=1}^{K}\!\left(\frac{1}{m^{(k)}}\mathbf{C}^{(k)}\right)\!\mathbf{\beta}^{\intercal}\right)\] (151) \[\text{with}\quad\mathbf{C}^{(k)} :=\operatorname{Cov}\!\left(\mathbf{Z}^{(k)},\mathbf{Z}^{(k)}\right)\] (152) \[\text{and}\quad\mathbf{\beta} :=\big{(}\cdots\,\mathbf{\beta}^{(k)}\cdots\big{)}_{1\leq k\leq K}\in\mathbb{R}^{n\times\sum_{k}n^{(k)}} \tag{153}\] Unbiasedness constraintThe no-bias condition is given by \[\sum_{k=1}^{K}\mathbf{\beta}^{(k)}\mathbf{R}^{(k)}-\mathbf{\alpha} =0 \tag{154}\] \[\Leftrightarrow\qquad\mathbf{\beta}\mathbf{P}^{\intercal}-\mathbf{\alpha} =0\] (155) \[\text{with}\quad\mathbf{P} :=\big{(}\mathbf{P}^{(1)}\ldots\mathbf{P}^{(K)}\big{)}\in\mathbb{R}^{N\times\sum_{k}n^{(k)}} \tag{156}\] Unconstrained minimization problemWe introduce Lagrange multipliers \(\mathbf{\Lambda}\). \[\mathbf{\beta},\mathbf{\Lambda}=\operatorname*{argmin}_{\mathbf{\beta}\in\mathbb{R}^{n\times\sum_{k}n^{(k)}},\mathbf{\Lambda}\in\mathbb{R}^{n\times N}}\frac{1}{2}\operatorname*{Tr}\biggl{(}\mathbf{\beta}\operatorname*{Diag}_{k=1}^{K}\biggl{(}\frac{1}{m^{(k)}}\mathbf{C}^{(k)}\biggr{)}\mathbf{\beta}^{\intercal}\biggr{)}\\ -\sum_{i=1}^{n}\sum_{j=1}^{N}\Lambda_{ij}\left(\mathbf{\beta}\mathbf{P}^{\intercal}-\mathbf{\alpha}\right)_{ij} \tag{157}\] Associated linear systemSetting the gradient with respect to \(\mathbf{\beta}\) and \(\mathbf{\Lambda}\) to zero yields the following linear system. \[\mathbf{\beta}\mathbf{\Sigma}-\mathbf{\Lambda}\mathbf{P} =\mathbf{0} \tag{158}\] \[\mathbf{\beta}\mathbf{P}^{\intercal} =\mathbf{\alpha} \tag{159}\] with \(\mathbf{\Sigma}:=\operatorname*{Diag}_{k=1}^{K}\bigl{(}\frac{1}{m^{(k)}}\mathbf{C}^{(k)}\bigr{)}\).
Optimal matrix weightsThe unique solution is given by \[\mathbf{\beta} =\mathbf{\alpha}\mathbf{\phi}^{-1}\mathbf{P}\operatorname*{Diag}_{k=1}^{K}\biggl{(}m^{(k)}\left(\mathbf{C}^{(k)}\right)^{-1}\biggr{)} \tag{160}\] \[\text{with}\quad\mathbf{\phi} :=\mathbf{P}\operatorname*{Diag}_{k=1}^{K}\biggl{(}m^{(k)}\bigl{(}\mathbf{C}^{(k)}\bigr{)}^{-1}\biggr{)}\mathbf{P}^{\intercal}\] (161) \[=\sum_{k=1}^{K}m^{(k)}\mathbf{P}^{(k)}\left(\mathbf{C}^{(k)}\right)^{-1}\mathbf{R}^{(k)} \tag{162}\] From which we can retrieve expressions (108) and (109) for \(\mathbf{\beta}^{(k)}\) and \(\mathbf{\beta}^{(k)}_{\ell}\).

## Estimating the average covariance matrix of covariance estimators

The elements of the average covariance matrix \(\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{C}^{(k,ij)}\) of covariance estimators, given by equation (125), are needed in two (not unrelated) situations: 1. To numerically optimize the weights and generalized sample allocation of the MLBLUE of a covariance matrix with scalar weights (see 6.2); 2. To numerically optimize the generalized sample allocation of an MLMC estimator of a covariance matrix (see appendix C). The goal of this appendix is to provide computationally tractable ways to estimate the \(L\times L\) matrix of these elements. Each element of the matrix of interest can be decomposed into four terms. For \(1\leq k\leq K\) and \(\ell,\ell^{\prime}\in S^{(k)}\): \[\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{C}^{(k,ij)}_{\ell,\ell^{\prime}}=\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\mathbb{M}^{4}\left[X_{\ell,i},X_{\ell^{\prime},i},X_{\ell,j},X_{\ell^{\prime},j}\right]}{m^{(k)}}\\ -\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\operatorname{Cov}\left(X_{\ell,i},X_{\ell,j}\right)\operatorname{Cov}\left(X_{\ell^{\prime},i},X_{\ell^{\prime},j}\right)}{m^{(k)}}\\ +\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\operatorname{Cov}\left(X_{\ell,i},X_{\ell^{\prime},j}\right)\operatorname{Cov}\left(X_{\ell,j},X_{\ell^{\prime},i}\right)}{m^{(k)}(m^{(k)}-1)}\\ +\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\operatorname{Cov}\left(X_{\ell,i},X_{\ell^{\prime},i}\right)\operatorname{Cov}\left(X_{\ell,j},X_{\ell^{\prime},j}\right)}{m^{(k)}(m^{(k)}-1)} \tag{163}\] As is, the double sums over the state dimension \(n\) prevent any explicit computation. Fortunately, naive Monte Carlo estimates of the centred moments can be reordered to make sure the cost of estimation stays linear in \(n\). Each one of the four terms can be estimated from (biased) Monte Carlo estimates based on coupled independent samples indexed by \(s\), \(1\leq s\leq n_{e}\). We denote as \(\widetilde{X}^{s}_{\ell,i}:=X^{s}_{\ell,i}-\frac{1}{n_{e}}\sum_{s^{\prime}=1}^{n_{e}}X^{s^{\prime}}_{\ell,i}\) the centred perturbations associated to a sample \(X^{s}_{\ell,i}\).
**First term: fourth-order moment** \[\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{M}^{4}\left[X_{\ell,i},X_{\ell^{ \prime},i},X_{\ell,j},X_{\ell^{\prime},j}\right] \approx\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{n_{e}}\sum_{s=1}^{n_ {e}}\widetilde{X}_{\ell,i}^{s}\widetilde{X}_{\ell^{\prime},i}^{s}\widetilde{X}_ {\ell,j}^{s}\widetilde{X}_{\ell^{\prime},j}^{s} \tag{164}\] \[=\frac{1}{n_{e}}\sum_{s=1}^{n_{e}}\left(\sum_{i=1}^{n}\widetilde{ X}_{\ell,i}^{s}\widetilde{X}_{\ell^{\prime},i}^{s}\right)^{2} \tag{165}\] **Second term: product of intra-level covariances** \[\sum_{i=1}^{n}\sum_{j=1}^{n}\operatorname{Cov}\left(X_{\ell,i},X _{\ell,j}\right)\operatorname{Cov}\left(X_{\ell^{\prime},i},X_{\ell^{\prime}, j}\right)\approx\\ \sum_{i=1}^{n}\sum_{j=1}^{n}\left(\frac{1}{n_{e}-1}\sum_{s=1}^{n _{e}}\widetilde{X}_{\ell,i}^{s}\widetilde{X}_{\ell,j}^{s}\right)\left(\frac{1 }{n_{e}-1}\sum_{s^{\prime}=1}^{n_{e}}\widetilde{X}_{\ell^{\prime},i}^{s^{ \prime}}\widetilde{X}_{\ell^{\prime},j}^{s^{\prime}}\right)\\ =\frac{1}{(n_{e}-1)^{2}}\sum_{s=1}^{n_{e}}\sum_{s^{\prime}=1}^{n_ {e}}\left(\sum_{i=1}^{n}\widetilde{X}_{\ell,i}^{s}\widetilde{X}_{\ell^{\prime},i}^{s^{\prime}}\right)^{2} \tag{166}\] **Third term: product of inter-level inter-point covariances** \[\sum_{i=1}^{n}\sum_{j=1}^{n}\operatorname{Cov}\left(X_{\ell,i},X _{\ell^{\prime},j}\right)\operatorname{Cov}\left(X_{\ell,j},X_{\ell^{\prime}, i}\right)\approx\\ \sum_{i=1}^{n}\sum_{j=1}^{n}\left(\frac{1}{n_{e}-1}\sum_{s=1}^{n_ {e}}\widetilde{X}_{\ell,i}^{s}\widetilde{X}_{\ell^{\prime},j}^{s}\right) \left(\frac{1}{n_{e}-1}\sum_{s^{\prime}=1}^{n_{e}}\widetilde{X}_{\ell,j}^{s^{ \prime}}\widetilde{X}_{\ell^{\prime},i}^{s^{\prime}}\right)\] \[=\frac{1}{(n_{e}-1)^{2}}\sum_{s=1}^{n_{e}}\sum_{s^{\prime}=1}^{n_ {e}}\left(\sum_{i=1}^{n}\widetilde{X}_{\ell,i}^{s}\widetilde{X}_{\ell^{\prime},i}^{s^{\prime}}\right)\left(\sum_{i=1}^{n}\widetilde{X}_{\ell^{\prime},i}^{s} \widetilde{X}_{\ell,i}^{s^{\prime}}\right)\] \[=\frac{1}{(n_{e}-1)^{2}}\sum_{s=1}^{n_{e}}\left(\sum_{i=1}^{n} \widetilde{X}_{\ell,i}^{s}\widetilde{X}_{\ell^{\prime},i}^{s}\right)^{2}\] \[+\frac{2}{(n_{e}-1)^{2}}\sum_{s=1}^{n_{e}}\sum_{s^{\prime}>s}^{n_ {e}}\left(\sum_{i=1}^{n}\widetilde{X}_{\ell,i}^{s}\widetilde{X}_{\ell^{\prime},i}^{s^{\prime}}\right)\left(\sum_{i=1}^{n}\widetilde{X}_{\ell^{\prime},i}^{s} \widetilde{X}_{\ell,i}^{s^{\prime}}\right) \tag{167}\] Fourth term: product of inter-level covariances \[\sum_{i=1}^{n}\sum_{j=1}^{n}\operatorname{Cov}\left(X_{\ell,i},X_{ \ell^{\prime},i}\right)\operatorname{Cov}\left(X_{\ell,j},X_{\ell^{\prime},j} \right)\approx\\ \sum_{i=1}^{n}\sum_{j=1}^{n}\left(\frac{1}{n_{e}-1}\sum_{s=1}^{n_ {e}}\widetilde{X}_{\ell,i}^{s}\widetilde{X}_{\ell^{\prime},i}^{s}\right)\left( \frac{1}{n_{e}-1}\sum_{s^{\prime}=1}^{n_{e}}\widetilde{X}_{\ell,j}^{s^{\prime} }\widetilde{X}_{\ell^{\prime},j}^{s^{\prime}}\right)\\ =\left(\frac{1}{n_{e}-1}\sum_{s=1}^{n_{e}}\sum_{i=1}^{n}\widetilde {X}_{\ell,i}^{s}\widetilde{X}_{\ell^{\prime},i}^{s}\right)^{2} \tag{168}\] Computing in practiceIn practice, these quantities can be expressed as functions of the \(n_{e}^{2}L^{2}\) space averages \(\gamma(\ell,s,\ell^{\prime},s^{\prime})=\sum_{i=1}^{n}\widetilde{X}_{\ell,i} ^{s}\widetilde{X}_{\ell^{\prime},i}^{s^{\prime}}\). 
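A minimal numpy sketch of this reordering (the array layout is illustrative): the whole \(\gamma\) array is obtained with a single contraction over the state dimension, at a cost linear in \(n\).

```python
import numpy as np

def space_averages(X):
    """gamma(l, s, l', s') = sum_i Xtilde[l, s, i] * Xtilde[l', s', i].

    X : array of shape (L, n_e, n), the n_e samples coupled across the L levels.
    Returns an array of shape (L, n_e, L, n_e); the cost is O(L^2 n_e^2 n), linear in n.
    """
    Xt = X - X.mean(axis=1, keepdims=True)        # centred perturbations per level
    return np.einsum('lsi,mti->lsmt', Xt, Xt)     # contraction over the state index i
```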
Grouping the four terms together, we retrieve \[\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{C}_{\ell,\ell^{\prime}}^{(k,ij)}=\frac{1}{m^{(k)}}\left(\frac{\sum_{s=1}^{n_{e}}\gamma(\ell,s,\ell^{\prime},s)^{2}}{n_{e}}-\frac{\sum_{s=1}^{n_{e}}\sum_{s^{\prime}=1}^{n_{e}}\gamma(\ell,s,\ell^{\prime},s^{\prime})^{2}}{(n_{e}-1)^{2}}\right)\\ +\frac{1}{m^{(k)}(m^{(k)}-1)}\left(\frac{\sum_{s=1}^{n_{e}}\sum_{s^{\prime}=1}^{n_{e}}\gamma(\ell,s,\ell^{\prime},s^{\prime})\gamma(\ell^{\prime},s,\ell,s^{\prime})}{(n_{e}-1)^{2}}+\frac{\left(\sum_{s=1}^{n_{e}}\gamma(\ell,s,\ell^{\prime},s)\right)^{2}}{(n_{e}-1)^{2}}\right) \tag{169}\] Note the symmetry \(\gamma(\ell,s,\ell^{\prime},s^{\prime})=\gamma(\ell^{\prime},s^{\prime},\ell,s)\), which reduces the number of space averages to compute to \(n_{e}(n_{e}+1)L^{2}/2\). A possible algorithm would be: 1. Generate \(n_{e}\) simulations coupled across all \(L\) levels: \(\left((X_{\ell}^{s})_{\ell=1}^{L}\right)_{s=1}^{n_{e}}\). 2. Loop over all ensemble members and all fidelity levels to estimate the \(L\) ensemble means \(\mu_{\ell}\in\mathbb{R}^{n}\). These will be used to estimate fourth-order moments in the next step. 3. Double loop over ensemble members and double loop over fidelity levels to estimate the point-wise \(\gamma\). Make use of the symmetry property. Space-average. 4. Compute the elements of the averaged covariance matrix (169). RemarkThis is valid in the limit of very large \(n_{e}\). For finite sizes, these estimates are biased, as mentioned earlier. Unbiased estimators for these quantities do exist, but are more complex (see for instance Gerlovina and Hubbard, 2019).

## Optimal sample allocation for an MLMC covariance matrix estimator

The MLMC estimator for a covariance matrix is a specific (sub-optimal) case of (123), with weights \(\beta^{(1)}=\left(1\right)\) and \(\beta^{(k)}=\left(1\ \ -1\right)^{\intercal}\) for \(2\leq k\leq K\). The generalized variance of this multilevel estimator is given by (124), which can be estimated in practice following appendix B. Minimizing the variance as a function of the sample sizes solves the problem of optimal sample allocation. Note that the dependence on the sample sizes \(\left(m^{(k)}\right)_{k=1}^{K}\) is hidden in the \(\mathbb{C}^{(k,ij)}\). Note also that the minimization should be done numerically, since the variance involves complex terms such as the inverse of \(m^{(k)}\left(m^{(k)}-1\right)\). This is quite direct in python, for instance using scipy.optimize.minimize with constraints. Link with Mycek and De Lozzo (2019)A simpler but possibly suboptimal sample allocation can be found using the approach proposed by Mycek and De Lozzo (2019). They do not minimize the exact variance of the estimator, but an upper bound of this variance (their equation 2.31). Their bound is still written as a sum over the coupling groups, but each term is proportional to \(1/m^{(k)}\). As a consequence, an analytical solution exists for the optimal sample allocation, which makes its determination much faster and simpler1. Footnote 1: In the situations we tested, the bounds proposed by Mycek and De Lozzo (2019) were quite loose (about a factor 2 above the actual variances). This had no impact on the sample allocation though, as their relative evolution among levels was very similar to the true variances, which made them a useful proxy for the actual variance contributions. We extend the bound to the multivariate case, simply by summing over all \(i,j\) elements of the covariance matrix to be estimated.
With our notation, the contributions to the variance include the sample size (unlike the notation of Mycek and De Lozzo, 2019). \[\mathcal{V}_{k} =\left(\beta^{(k)}\right)^{\intercal}\left(\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{C}^{(k,ij)}\right)\beta^{(k)} \tag{170}\] \[\mathcal{V}_{k} \leq\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{1}{2}\frac{1}{m^{(k)}-1}\left[\sqrt{\mathbb{M}^{4}\left[\Delta_{\ell,i}\right]\mathbb{M}^{4}\left[\Sigma_{\ell,j}\right]}+\sqrt{\mathbb{M}^{4}\left[\Delta_{\ell,j}\right]\mathbb{M}^{4}\left[\Sigma_{\ell,i}\right]}\right]\] (171) \[=\frac{1}{m^{(k)}-1}\left(\sum_{i=1}^{n}\sqrt{\mathbb{M}^{4}\left[\Delta_{\ell,i}\right]}\right)\left(\sum_{i=1}^{n}\sqrt{\mathbb{M}^{4}\left[\Sigma_{\ell,i}\right]}\right) \tag{172}\] where \(\Delta_{\ell,i}=X_{\ell,i}-X_{\ell-1,i}\) and \(\Sigma_{\ell,i}=X_{\ell,i}+X_{\ell-1,i}\) (assuming undefined variables are zero). A possible algorithm would be: 1. Generate \(n_{e}\) simulations coupled across all \(L\) levels. 2. Loop over ensemble members and fidelity levels to estimate the \(L\) ensemble means \(\mu_{\ell}\in\mathbb{R}^{n}\). These will be used to estimate fourth-order moments in the next step. 3. Loop over ensemble members and over fidelity levels to estimate the pointwise fourth-order moments \(\mathbb{M}^{4}\left[\Delta_{\ell,i}\right]\) and \(\mathbb{M}^{4}\left[\Sigma_{\ell,i}\right]\). 4. Space-average and multiply to get the variance contribution terms.

## Optimal localization using random asymptotic quantities

This section details how the results of section 6.4 can be derived (more rigorously?) with the formalism of M15a and M15b. The main difference consists in considering asymptotic quantities as random, consistently with linear filtering theory. We assume the existence of two independent random processes \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\). The first process generates asymptotic quantities denoted as \(\mathcal{S}\) (for instance first-, second- and fourth-order moments). The second one generates members consistent with these quantities (typically using an Ensemble of Data Assimilations). The mean squared error is to be minimized over both processes, though we only have access to one realization of \(\mathcal{R}_{1}\). The expectation in the MSE is thus the expectation over both processes. We could either index the expectation operators by the process they refer to, or use the formalism of conditional expectations: \[\mathrm{MSE}\left(\widehat{\mathbf{B}}^{\mathrm{ML}},\mathbf{B}_{L}\right) =\mathbb{E}\left[\left\|\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\mathbf{L}_{\ell}^{(k)}\circ\widetilde{\mathbf{B}}_{\ell}^{(k)}-\mathbf{B}_{L}\right\|_{\mathrm{F}}^{2}\right] \tag{173}\] \[=\mathbb{E}_{1}\left[\mathbb{E}_{2}\left[\left\|\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\mathbf{L}_{\ell}^{(k)}\circ\widetilde{\mathbf{B}}_{\ell}^{(k)}-\mathbf{B}_{L}\right\|_{\mathrm{F}}^{2}\right]\right]\] (174) \[=\mathbb{E}\left[\left.\mathbb{E}\left[\left\|\sum_{k=1}^{K}\sum_{\ell\in S^{(k)}}\mathbf{L}_{\ell}^{(k)}\circ\widetilde{\mathbf{B}}_{\ell}^{(k)}-\mathbf{B}_{L}\right\|_{\mathrm{F}}^{2}\right]\right|\mathcal{S}\right] \tag{175}\] The rest of the derivation follows section 6.4.1. There is just one additional argument needed to simplify the cross terms \(\mathbb{E}\left[B_{L}\underline{\widetilde{B}}\right]\) when expanding the MSE: Expectations of products of sampled and asymptotic quantitiesWe have assumed the independence of the sampling error and the asymptotic statistics generated by \(\mathcal{R}_{1}\).
\[\mathbb{E}\left[\left(\widetilde{B}_{\ell}-B_{\ell}\right)B_{L}\right] =\mathbb{E}\left[\widetilde{B}_{\ell}-B_{\ell}\right]\mathbb{E}\left[B_{L}\right] \tag{176}\] \[=0\] (177) \[i.e.\quad\mathbb{E}\left[\widetilde{B}_{\ell}B_{L}\right] =\mathbb{E}\left[B_{\ell}B_{L}\right] \tag{178}\] So \(\mathbb{E}\left[B_{L}\underline{\widetilde{B}}\right]=\mathbb{E}\left[B_{L}\underline{B}\right]\). Interpretation of the ergodicity hypothesisThe ergodicity assumption arises within the same context of equivalence classes. A sum over a random subset of an equivalence class is meant to approximate the expectations over both random processes \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\).
2305.11325
Slowly decaying zero mode in a weakly non-integrable boundary impurity model
The transverse field Ising model (TFIM) on the half-infinite chain possesses an edge zero mode. This work considers an impurity model -- TFIM perturbed by a boundary integrability breaking interaction. For sufficiently large transverse field, but in the ordered phase of the TFIM, the zero mode is observed to decay. The decay is qualitatively different from zero modes where the integrability breaking interactions are non-zero all along the chain. It is shown that for the impurity model, the zero mode decays by relaxing to a non-local quasi-conserved operator, the latter being exactly conserved when the opposite edge of the chain has no non-commuting perturbations so as to ensure perfect degeneracy of the spectrum. In the thermodynamic limit, the quasi-conserved operator vanishes, and a regime is identified where the decay of the zero mode obeys Fermi's Golden Rule. A toy model for the decay is constructed in Krylov space and it is highlighted how Fermi's Golden Rule may be recovered from this toy model.
Hsiu-Chung Yeh, Gabriel Cardoso, Leonid Korneev, Dries Sels, Alexander G. Abanov, Aditi Mitra
2023-05-18T22:18:13Z
http://arxiv.org/abs/2305.11325v2
# Slowly decaying zero mode in a weakly non-integrable boundary impurity model ###### Abstract The transverse field Ising model (TFIM) on the half-infinite chain possesses an edge zero mode. This work considers an impurity model -- TFIM perturbed by a boundary integrability breaking interaction. For sufficiently large transverse field, but in the ordered phase of the TFIM, the zero mode is observed to decay. The decay is qualitatively different from zero modes where the integrability breaking interactions are non-zero all along the chain. It is shown that for the impurity model, the zero mode decays by relaxing to a non-local quasi-conserved operator, the latter being exactly conserved when the opposite edge of the chain has no non-commuting perturbations so as to ensure perfect degeneracy of the spectrum. In the thermodynamic limit, the quasi-conserved operator vanishes, and a regime is identified where the decay of the zero mode obeys Fermi's Golden Rule. A toy model for the decay is constructed in Krylov space and it is highlighted how Fermi's Golden Rule may be recovered from this toy model. ## I Introduction The transverse field Ising model (TFIM) with open boundary conditions hosts Majorana zero modes [1]. These zero modes are also known as strong zero modes where the edge mode is associated with an operator that commutes with the Hamiltonian in the thermodynamic limit and anti-commutes with a discrete \(Z_{2}\) symmetry [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. Thus the existence of a strong zero mode implies a doubly degenerate spectrum, with an observation of the zero mode not tied to the ground state sector. How perturbing away from the TFIM affects the strong zero mode is an essential question, as it concerns practical applications where the experimental set-up is only approximately a TFIM. Understanding this is also important from a conceptual point of view as it addresses how thermalization times are affected by quasi-conserved quantities. After the Jordan-Wigner transformation, the TFIM maps to a 1D model of spinless fermions with nearest-neighbor hopping, with the \(Z_{2}\) symmetry corresponding to fermion parity. A standard way to perturb away from this model is to include four fermion interactions [10; 11; 12; 13; 14; 15; 16; 17; 18]. Here we study numerically the effect of a weaker perturbation, where the four fermion interactions exist only at the boundary. This work aims to explore whether such a weak integrability breaking term can destroy the zero mode, and if so, what is the signature of the decay of the zero mode in the dynamics. It is not easy to establish numerically whether weak perturbations can cause the zero mode to decay. This is because, even for the TFIM, where a zero mode can be analytically constructed, a finite system size \(L\) causes the zero mode to decay. This decay comes about because of tunneling processes that hybridize the zero modes at the two ends of the chain, leading to a lifetime which is exponential in the system size. In the presence of perturbations, typically, one needs to park oneself at some parameter regime where the decay becomes \(L\)-independent, and only then one can safely claim that the weak perturbation destroys the zero mode. 
In this paper, we study the system using a combination of three different methods, (i) exact diagonalization (up to system sizes of \(L=14\)), (ii) Trotterized time-evolution of Haar random states (up to \(L=22\) and \(t=10^{4}/J_{x}\), \(J_{x}\) being the strength of the Ising interaction in the TFIM), which approximates the real dynamics up to exponential times and with exponential precision in space, (iii) Krylov space dynamics, which allows us to construct an approximate model for the zero mode decay in the thermodynamic limit. The paper is organized as follows. In Section II we outline the model and explain how the zero mode is detected numerically. In Section III, we construct a quasi-conserved quantity, which becomes exactly conserved when non-commuting couplings at one end of the chain are switched off. We highlight the role that the quasi-conserved quantity plays in the decay of the zero mode. In Section IV, we park ourselves in a region of parameter space where the decay is entirely due to processes that are second order in the integrability breaking term, deriving the Fermi Golden Rule (FGR) decay rate, and comparing it with numerics. We present our conclusions in Section V. In Appendix A we outline how the zero mode can be studied using Krylov space methods. We derive an effective model for the zero mode decay and highlight how FGR is recovered in Krylov sub-space. In Appendix B, we present examples of the quasi-conserved quantities in short spin chains. In Appendix C, we outline the numerical method of Haar random average and judicious Trotter decomposition, while in Appendix D we provide details of the derivation of the FGR decay rate.

## II Model

We study the TFIM of length \(L\) with open boundary conditions and perturbed by a boundary impurity. The latter is modeled as an integrability-breaking exchange interaction acting only on the first two sites of the chain. Thus the Hamiltonian is \[H=J_{x}\sum_{i=1}^{L-1}\sigma_{i}^{x}\sigma_{i+1}^{x}+g\sum_{i=1}^{L}\sigma_{i}^{z}+J_{z}\sigma_{1}^{z}\sigma_{2}^{z}, \tag{1}\] where \(\sigma_{i}^{x,y,z}\) are Pauli matrices on site \(i\), \(g\) is the strength of the transverse field and \(J_{x,z}\) are the strengths of the Ising interactions in the \(x,z\)-directions, with \(J_{z}\) being non-zero only on the first link. We will set \(J_{x}=1\) in the paper. For \(J_{z}=0\), the Hamiltonian is the TFIM, \(H_{0}=H|_{J_{z}=0}\), which is in the topological phase with an edge zero mode for \(|g|<1\). The zero mode operator \(\psi_{0}\) anti-commutes with the \(Z_{2}\) symmetry, \(\mathcal{D}=\sigma_{1}^{z}\ldots\sigma_{L}^{z}\) of the system: \(\{\psi_{0},\mathcal{D}\}=0\). In the thermodynamic limit of a semi-infinite chain, the zero mode commutes with the TFIM, \([\psi_{0},H_{0}]=0\). Because of this property, the edge zero mode has an infinite lifetime in the thermodynamic limit. On adding integrability-breaking perturbations, the commutation relation between the zero mode and the Hamiltonian no longer holds. However, one can still observe a long-lived quasi-stable edge zero mode for weak integrability breaking. A useful quantity to probe this object is the infinite temperature autocorrelation of \(\sigma_{1}^{x}\) \[A_{\infty}(t)=\frac{1}{2^{L}}\text{Tr}[\sigma_{1}^{x}(t)\sigma_{1}^{x}], \tag{2}\] where \(t\) is the time measured in units of \(1/J_{x}\).
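For small chains, (2) can be evaluated directly by full diagonalization of (1): writing \(\sigma_{1}^{x}\) in the energy eigenbasis gives \(A_{\infty}(t)=2^{-L}\sum_{n,m}|\langle n|\sigma_{1}^{x}|m\rangle|^{2}e^{i(E_{n}-E_{m})t}\). A minimal numpy sketch (illustrative only, not the code used for the figures in this work) is:

```python
import numpy as np
from functools import reduce

def site_op(op, site, L):
    """Operator acting with the 2x2 matrix `op` on `site` of an L-site chain."""
    return reduce(np.kron, [op if j == site else np.eye(2) for j in range(L)])

def autocorrelation(L, g, Jz, times, Jx=1.0):
    """A_inf(t) = Tr[sigma_1^x(t) sigma_1^x] / 2^L for the Hamiltonian (1)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    H = sum(Jx * site_op(sx, i, L) @ site_op(sx, i + 1, L) for i in range(L - 1))
    H += g * sum(site_op(sz, i, L) for i in range(L))
    H += Jz * site_op(sz, 0, L) @ site_op(sz, 1, L)        # boundary impurity term
    E, V = np.linalg.eigh(H)
    s1x = V.T @ site_op(sx, 0, L) @ V                       # sigma_1^x in the energy eigenbasis
    w, dE = np.abs(s1x) ** 2, np.subtract.outer(E, E)       # |<n|sigma_1^x|m>|^2 and E_n - E_m
    return np.array([(w * np.exp(1j * dE * t)).sum().real / 2 ** L for t in times])
```

For instance, `autocorrelation(8, 0.3, 0.2, np.logspace(0, 4, 40))` gives the corresponding curve for a chain smaller than those shown in Fig. 1.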
This is a good measure of the zero mode lifetime in the presence of interactions since the zero mode is localized on the edge with \(\mathcal{O}(1)\) overlap with \(\sigma_{1}^{x}\), \(\text{Tr}[\psi_{0}\sigma_{1}^{x}]/2^{L}\sim\mathcal{O}(1)\)[1, 3]. In the language of Majorana fermions, \(\sigma_{1}^{x}\) is the Majorana fermion on the first site, and the edge mode is a superposition of Majoranas, with the largest weight being on the Majoranas on the first few sites at the boundary. Fig. 1 shows examples of the autocorrelation function for \(g=0.3\) and two different models. The top panel shows autocorrelation functions for the boundary impurity (1). For comparison, the bottom panel presents the autocorrelation functions of the chain with a \(J_{z}\) perturbation on all sites, \(J_{z}\sum_{i=1}^{L-1}\sigma_{i}^{z}\sigma_{i+1}^{z}\). The bottom panel shows that after an initial transient, the autocorrelation decays into a long-lived zero mode which lasts for a long time, as shown by the constant value of the autocorrelation function. The overlap of the plots for \(L=12,14\) in the bottom panel suggests that the eventual decay of the autocorrelation to zero is due to interactions rather than finite system size. In contrast, the boundary impurity model (top panel) shows a much longer lifetime due to the weaker nature of the integrability breaking perturbation (\(J_{z}\) non-zero only on the first link). In particular, the autocorrelation does not show saturation of lifetime when the system size increases up to \(L=14\), in contrast to the bottom panel. The top panel seemingly suggests an exact zero mode instead of a quasi-stable zero mode for the impurity model. However, this appears to be a finite system size effect: for the given parameters, we simply do not have access to large enough \(L\) to be in a regime where the decay is dominated by interactions. This is supported by the fact that as one increases the transverse field, the autocorrelation shows a tendency to saturate with increasing system size. But, interestingly, even in this regime of eventual \(L\)-independent decay, there is still a qualitative difference in the decay mechanism of the zero mode for the impurity model and that for the model where \(J_{z}\neq 0\) on all links. In Fig. 2 we would like to identify three steps in the autocorrelation decay process for a finite-size impurity model: (i) after an initial transient (see \(t<10^{1}\)) the autocorrelation decays into the local zero mode of the original non-perturbed Hamiltonian \(H_{0}\) (see \(t<10^{3}\) for \(L=14\)), (ii) the system decays from this local zero mode to another quasi-conserved operator, (iii) it finally reaches zero due to the interaction (\(t>10^{4}\) for \(L=14\)). The plateau value at the end of step (ii) decreases with system size, indicating the existence of some non-local quasi-conserved operator. This effect was not observed for the model with perturbations on all sites, a fact which will be highlighted later.

Figure 1: Infinite temperature autocorrelation function for boundary impurity (top panel) and perturbation on all sites (bottom panel) with \(g=0.3\) and \(J_{z}=0.2\). For the boundary impurity (top panel), the edge zero mode survives for long times and does not show saturation of lifetime with system sizes up to \(L=14\). The model with perturbations on all sites (bottom panel) also shows a long-lived edge zero mode, but with a shorter lifetime that has saturated at \(L=12\).
In the thermodynamic limit, the plateau value goes to zero, and step (iii) disappears eventually. We expect that in the thermodynamic limit, the decay rate is dominated by step (ii), and we will show that, in certain regimes, this decay can be captured by perturbation theory in \(J_{z}\). In the following section, we will demonstrate the existence of the non-local quasi-conserved operator and highlight its role in the decay of the zero mode.

## III Quasi-conserved operator

To construct the quasi-conserved operator, we follow the argument by Fendley [3] on the commutation relation between the zero mode and the Hamiltonian. The zero mode of an integrable model such as the TFIM [3], \(XY\) chain [11] or XYZ chain [3], does not commute with the integrable Hamiltonian at any finite system size, but only in the thermodynamic limit. However, for finite system size, the zero mode commutes with almost the whole Hamiltonian except for the interaction terms on the last site [3; 11]. For example, for the TFIM \[H_{0}=\sum_{i=1}^{L-1}\sigma_{i}^{x}\sigma_{i+1}^{x}+g\sum_{i=1}^{L}\sigma_{i}^{z}, \tag{3}\] the corresponding zero mode localized on the first site is given by the following superposition of Majoranas up to an overall normalization \[\psi_{0}\propto\sum_{l=1}^{L}g^{l-1}a_{2l-1}, \tag{4}\] where the Majoranas are defined as follows \[a_{2l-1}=\prod_{j=1}^{l-1}\sigma_{j}^{z}\sigma_{l}^{x};\qquad\quad a_{2l}=\prod_{j=1}^{l-1}\sigma_{j}^{z}\sigma_{l}^{y}. \tag{5}\] The commutation between the zero mode and the Hamiltonian is non-zero due to the transverse field on the last site, \([\psi_{0},H_{0}]=[\psi_{0},g\sigma_{L}^{z}]\neq 0\). However, since the zero mode is localized on the first site, this commutation is exponentially small in \(L\) and becomes zero in the thermodynamic limit. Based on the above argument, one can numerically construct a conserved operator \(O_{c}\) in the following way. Let us take the TFIM as an example. Consider first the TFIM with the last-site transverse field turned off, \(\tilde{H}_{0}=H_{0}-g\sigma_{L}^{z}\). The Hamiltonian \(\tilde{H}_{0}\) has a two-fold degenerate energy spectrum for any finite system size. In particular, \(\tilde{H}_{0}\) splits into two sectors labeled by the polarization of the spin on the last site in the \(x\)-direction: \(|\rightarrow\rangle_{x}\) or \(|\leftarrow\rangle_{x}\), and since the only interaction acting on the last site is \(\sigma_{L-1}^{x}\sigma_{L}^{x}\), it cannot flip the spin in the eigenbasis of \(\sigma^{x}\). Given such a double degeneracy of the spectrum, one may construct a conserved operator which is odd under \(Z_{2}\), and has non-zero matrix elements between opposite parity eigenstates of \(\tilde{H}_{0}\). In addition, one may choose this operator to overlap with \(\sigma_{1}^{x}\). Such a conserved operator, \(O_{c}\), is the long-time limit of the operator \(\sigma_{1}^{x}(t)\) (up to an overall normalization) \[O_{c}\propto\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}dt\ \sigma_{1}^{x}(t). \tag{6}\] Numerically, the conserved operator \(O_{c}\) can be constructed by eliminating the terms oscillating in \(\sigma_{1}^{x}(t)\). In the eigenbasis of \(\tilde{H}_{0}\), \(\sigma_{1}^{x}(t)\) is represented by \[\langle\tilde{n}|\sigma_{1}^{x}(t)|\tilde{m}\rangle=\langle\tilde{n}|\sigma_{1}^{x}|\tilde{m}\rangle e^{i(\tilde{E}_{n}-\tilde{E}_{m})t}, \tag{7}\] where the matrix elements are not oscillating as long as \(\tilde{E}_{n}=\tilde{E}_{m}\).
In numerics, one is only required to construct matrix elements with \(\tilde{E}_{n}=\tilde{E}_{m}\), \[\langle\tilde{n}|O_{c}|\tilde{m}\rangle=\left\{\begin{array}{cc}\langle \tilde{n}|\sigma_{1}^{x}|\tilde{m}\rangle&\text{if }\tilde{E}_{n}=\tilde{E}_{m};\\ 0&\text{else}.\end{array}\right. \tag{8}\] Finally, \(O_{c}\) is normalized with a norm equal to one, \(\text{Tr}[O_{c}^{\dagger}O_{c}]/2^{L}=1\). In the example of the TFIM, the conserved operator \(O_{c}\) happens to be the same as the zero mode \(\psi_{0}\) (4). Still, they may be different in generic models possessing zero modes. For constructing the zero mode in generic models, it is proposed to apply commutant algebra, and this may be achieved both analytically and numerically, see [19; 20]. Figure 2: Infinite temperature autocorrelation function for the boundary impurity with \(g=0.5\) and \(J_{z}=0.4\). Unlike Fig. 1 which was for a smaller \(g,J_{z}\), the autocorrelation shows a plateau region before decaying to zero (see, e.g., the range of \(t\) between \(10^{3}\) and \(10^{4}\) for the \(L=14\) plot). The height of the plateau decreases as the system size increases. The physical reason behind the conserved quantity \(O_{c}\) is that for the TFIM, there are two Majorana modes, one on the left end, and the other on the right end of the chain. The lifetime comes from the two modes coupling via tunneling processes, with the tunneling amplitude \(\propto g^{L}\). However, when \(g\) is made zero on the last site, the two zero modes, one related to \(\sigma_{1}^{x}\), and the other related to \(\sigma_{L}^{x}\), no longer hybridize. Thus the zero mode on the left end does not decay and is exactly conserved. In general, the method of constructing the conserved operator \(O_{c}\) described above can be applied to any spin system as long as the two-fold degeneracy of the energy spectrum can be achieved by turning off interactions on the last site. Moreover, one should choose an appropriate seed operator to numerically generate the conserved operator; e.g., we choose \(\sigma_{1}^{x}\) as the seed operator since \(J_{x}=1\) is the largest coupling, and \(\sigma_{1}^{x}\) is the first Majorana in the convention (5). However, once the interaction on the last site is restored, the conserved operator may become quasi-conserved, or even immediately die out, if the commutation with the last site interactions does not approach zero in the thermodynamic limit. In Fig. 3, we demonstrate the autocorrelation with \(g=0.5\) and \(J_{z}=0.4\) for both the boundary impurity model and the perturbation being non-zero on all sites cases. Also plotted are the autocorrelations with no interactions on the last site except \(\sigma_{L-1}^{x}\sigma_{L}^{x}\). For comparison, the numerically constructed zero mode from (8) are also plotted as red dashed lines. The agreement between the plateaus (solid black lines) and this numerically constructed zero mode (dashed red lines) is excellent. In the case of perturbation on all sites (bottom panel in Fig. 3), the autocorrelation already saturates at small system sizes \(L=10\). When the interaction is turned off on the last site, the autocorrelation approaches a non-zero constant value (solid black lines) which corresponds to the conserved operator (6). The decrease of the plateau height with increasing system size indicates that the conserved operator becomes less localized on the first Majorana. 
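The projection in Eq. (8) is straightforward to implement once \(\tilde{H}_{0}\) has been diagonalized. The sketch below is a minimal illustration of that construction (again not the production code; the degeneracy tolerance and the parameter values are arbitrary choices of ours), with the boundary impurity included so that it corresponds to the operator plotted as the red dashed lines in Fig. 3.

```python
# Sketch of the construction in Eq. (8): keep only the matrix elements of sigma_1^x
# between degenerate eigenstates of the chain with the last-site transverse field removed.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
id2 = np.eye(2, dtype=complex)
op = lambda sites, L: reduce(np.kron, [sites.get(i, id2) for i in range(L)])

def conserved_operator(L, g, Jz=0.0, tol=1e-9):
    # \tilde H: no transverse field on the last site, optionally with the boundary impurity,
    # so the spectrum is exactly two-fold degenerate for any finite L.
    H = sum(op({i: sx, i + 1: sx}, L) for i in range(L - 1))
    H = H + g * sum(op({i: sz}, L) for i in range(L - 1)) + Jz * op({0: sz, 1: sz}, L)
    E, V = np.linalg.eigh(H)
    X = V.conj().T @ op({0: sx}, L) @ V
    X[np.abs(np.subtract.outer(E, E)) > tol] = 0.0       # Eq. (8): drop the oscillating elements
    Oc = V @ X @ V.conj().T
    return Oc / np.sqrt((np.trace(Oc.conj().T @ Oc) / 2 ** L).real)   # Tr[Oc^dag Oc]/2^L = 1

Oc = conserved_operator(L=8, g=0.5, Jz=0.4)
print((np.trace(Oc @ op({0: sx}, 8)) / 2 ** 8).real)     # overlap with sigma_1^x, cf. the plateau
```

For \(J_{z}=0\) this reproduces the TFIM zero mode \(\psi_{0}\) of Eq. (4); for \(J_{z}>0\) the overlap printed at the end decreases with \(L\), consistent with the shrinking plateaus discussed above.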
Note that this plateau formation happens at times longer than the decay time for the model where \(J_{z}\) is non-zero along the chain. As the interactions on the last site are restored, the conserved operator immediately dies out due to interactions. In contrast, for the impurity model where the perturbation is present only at the boundary (top panel in Fig. 3), it is clearly seen that the late-time plateau comes from the conserved operator. Once the last-site interaction is restored, the operator becomes quasi-conserved so that the autocorrelation persists for a long time before it eventually decays. The quasi-conserved operator is non-local, as shown by the decrease of the plateau value as \(L\) increases, going to zero in the thermodynamic limit. The different behaviors of the autocorrelation in the two models (perturbation on all sites versus the boundary impurity) are due to their different bulk properties: the former is non-integrable, while the latter is free in the bulk. One can also capture the difference between the two autocorrelation functions by mapping the dynamics of \(\sigma_{1}^{x}\) to single-particle dynamics in Krylov space. The Krylov space Hamiltonian is a tri-diagonal Hamiltonian, where the off-diagonal elements have some universal features [21] that clearly distinguish between the two models. Moreover, decay rates can be derived by further coarse-graining the Krylov Hamiltonian and mapping it to a Dirac model with a spatially inhomogeneous mass (in Krylov space), see the discussion in Appendix A. In the next section, we will focus on the decay rate of the autocorrelation function for the boundary impurity model.

Figure 3: Infinite temperature autocorrelation function for the boundary impurity model (top panel) and for perturbation on all sites (bottom panel) with \(g=0.5\) and \(J_{z}=0.4\). The autocorrelations with interactions switched off on the last site (i.e., \(g=0\) on the last site in the top panel and \(g=0,J_{z}=0\) on the last site in the bottom panel) are plotted as well (solid black lines) and show the existence of a conserved operator at late times. This is consistent with the numerically constructed zero mode in (8), highlighted with red dashed lines at late times. When the interactions on the last site are restored, the conserved operator immediately disappears for the case of all-site perturbation (bottom panel) but becomes quasi-stable for the boundary impurity model (top panel). The plateaus in the top panel are the remnant of the conserved operator.

## IV Fermi's Golden Rule Decay Rate

This section considers sufficiently large transverse fields where one can obtain system-size independent results. Moreover, this choice places us in a regime where Fermi's Golden Rule (FGR) approximation for the decay rate is valid. We will derive and compare the FGR decay rate with numerics. Let us start by presenting a numerical method that allows us to compute the autocorrelation for system sizes beyond \(L=14\). Due to the limitations of computational resources, ED can only be applied up to \(L=14\). Therefore, numerically approximate methods for computing the autocorrelation are required for accessing larger system sizes. Here we outline one such approximation. First, one approximates the trace by the average of a Haar random state \(\phi\): \(\mathrm{Tr}[\cdots]/2^{L}\approx\langle\phi|\cdots|\phi\rangle\).
The average of the Haar random state consists of two parts: diagonal and off-diagonal matrix elements in the eigenbasis representation (see Appendix C). The diagonal part corresponds to the trace that one wants to compute. The sum of the off-diagonal parts is essentially a summation of random numbers, which is typically \(\sim 1/\sqrt{2^{L}}\) and negligible as long as the system size is large, and the sum of diagonal parts is an \(\mathcal{O}(1)\) number. Therefore, one can calculate autocorrelations up to \(\mathcal{O}(1/\sqrt{2^{L}})\) precision without performing ED. Second, the unitary evolution is approximated by Trotter decomposition with finite time step \(dt\): \(U(dt)\approx\exp(-iH_{\mathrm{xx}}dt)\exp(-iH_{z}dt)\exp(-iH_{\mathrm{zz}}dt)\), where \(H_{\mathrm{xx}},H_{z}\) and \(H_{\mathrm{zz}}\) correspond to the three parts of the Hamiltonian (1). Physically, we have replaced the continuous time evolution with a discrete-time (Floquet) one. One recovers continuous-time dynamics in the high-frequency limit, \(dt\ll 1\). Setting \(dt=0.2\), the heating time of such a Floquet system is estimated to be \(\sim e^{2\pi/dt}\sim 10^{13}\). Here, we choose \(g=0.6\) so that the autocorrelation almost decays by \(t=10^{4}\) for \(J_{z}\) between \(0.15-0.5\) while this time scale is still much smaller than the heating time. Combining these two approximations, the autocorrelation can be massaged into the average of a Haar random state at different times. Computationally, one only requires to perform the time evolution of a state. This costs significantly less resources than ED so that one can probe larger system sizes. However, it is inefficient to calculate long-time behavior since the computation time is proportional to the number of time steps fixed by the Trotter decomposition; see details of numerical methods and discussion in Appendix C. This is the main reason why a larger transverse field strength \(g=0.6\) is chosen, and we can push the system size up to \(L=22\). We now explain why FGR is valid in the regime of \(g=0.6\). Notice that the transverse-field strength \(g\) controls the bandwidth of the bulk quasi-particle spectrum, \(\epsilon_{k}\in[1-g,1+g]\). The boundary impurity can be written as a four Majorana interaction, \(J_{z}\sigma_{1}^{z}\sigma_{2}^{z}=-J_{z}a_{1}a_{2}a_{3}a_{4}\). The limiting case for the resonance condition in second-order perturbation, and therefore for FGR to hold, requires an energy-conserving process where an edge zero mode and one quasi-particle at the top of the band are annihilated and two quasi-particles at the bottom of the band are created, \(2(1-g)=1+g\). From this argument, the second-order perturbation cannot match the resonance condition for \(g<1/3\). Thus \(g=0.6\) clearly places us in a regime where second-order perturbation theory is valid. In Appendix D, we show that the FGR decay rate is \[\Gamma=\frac{1}{2^{L}}\int_{0}^{\infty}dt\ \mathrm{Tr}[\dot{\psi_{0}}(t)\dot{ \psi_{0}}(0)], \tag{9}\] where \(\psi_{0}\) is the zero mode of the TFIM (4), we define \(\dot{\psi_{0}}=i[J_{z}\sigma_{1}^{z}\sigma_{2}^{z},\psi_{0}]\) and \(\dot{\psi_{0}}(t)\) evolves with the unperturbed Hamiltonian \(H_{0}\). Fig. 4 shows the autocorrelation of the boundary impurity (top panel) and the boundary impurity with zero transverse field on the last site (bottom panel). The FGR decay rate is plotted in both panels, where we numerically compute its value, \(\Gamma=0.16J_{z}^{2}\) for \(g=0.6\). 
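For small systems the rate in Eq. (9) can also be evaluated directly with dense matrices, which gives a useful cross-check of the quoted value \(\Gamma=0.16J_{z}^{2}\). The sketch below is illustrative only: the modest \(L\), the integration window, and the simple trapezoidal truncation are our own choices, and the value only approaches the thermodynamic-limit result as \(L\) is increased.

```python
# Sketch of Eq. (9): Gamma = (1/2^L) * integral_0^infty dt Tr[psidot_0(t) psidot_0(0)],
# with psidot_0 = i [J_z sz_1 sz_2, psi_0] and the time evolution generated by H_0 alone.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
id2 = np.eye(2, dtype=complex)
op = lambda sites, L: reduce(np.kron, [sites.get(i, id2) for i in range(L)])

def fgr_rate(L, g, Jz, T=100.0, nt=1001):
    H0 = sum(op({i: sx, i + 1: sx}, L) for i in range(L - 1)) + g * sum(op({i: sz}, L) for i in range(L))
    psi0 = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for l in range(L):                                   # zero mode of Eq. (4)
        sites = {j: sz for j in range(l)}
        sites[l] = sx
        psi0 += g ** l * op(sites, L)
    psi0 /= np.sqrt((np.trace(psi0 @ psi0) / 2 ** L).real)
    V = Jz * op({0: sz, 1: sz}, L)
    pdot = 1j * (V @ psi0 - psi0 @ V)
    E, U = np.linalg.eigh(H0)
    P2 = np.abs(U.conj().T @ pdot @ U) ** 2              # |<n|psidot_0|m>|^2
    W = np.subtract.outer(E, E)
    ts = np.linspace(0.0, T, nt)
    B = np.array([(P2 * np.cos(W * t)).sum() for t in ts]) / 2 ** L   # Tr[psidot_0(t) psidot_0]/2^L
    dt = ts[1] - ts[0]
    return float(dt * (B.sum() - 0.5 * (B[0] + B[-1])))  # in practice, cut off before recurrences

print(fgr_rate(L=10, g=0.6, Jz=0.2))   # to be compared with 0.16 * Jz**2 = 0.0064
```

The integrand \(B(t)=\mathrm{Tr}[\dot{\psi_{0}}(t)\dot{\psi_{0}}]/2^{L}\) and the truncation of the time integral at its first minimum are discussed around Fig. 17 in Appendix D.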
This decay rate captures the decay of the autocorrelation function after the initial transient. At late times, the decay of the autocorrelation slows down due to the presence of the Figure 4: The infinite temperature autocorrelation function for the boundary impurity model (top panel) at \(g=0.6\) and \(J_{z}=0.2\) and the corresponding results with zero transverse-field \(g\) on the last site (bottom panel). For large transverse fields, the plateau is blurred and becomes a slowly decaying tail in the top panel. The FGR result is computed numerically and found to be \(\Gamma=0.16J_{z}^{2}\). It matches the decay of the edge zero mode (solid black line) after the initial transient but fails when the quasi-conserved operator comes in at late times. Note that the effect of the latter becomes smaller with increasing system size. quasi-conserved operator. Since we now study a larger transverse-field \(g=0.6\), the presence of plateau is not as clear as in Fig. 2. Nevertheless, the bottom panel of Fig. 4 shows that as \(L\) increases, the conserved operator becomes more and more delocalized, and FGR depicts the full decay in the thermodynamic limit. We explore the autocorrelation function with \(J_{z}\) between \(0.15-0.5\) with system size \(L=18,20,22\). The results are summarized in Fig. 5, with a numerical fitting to an exponential decay also shown. The fitted decay rate is compared with FGR results in Fig. 6. At small \(J_{z}\), the decay rate follows the prediction of FGR. The increase of error bars comes from the enhancement of oscillations in the autocorrelation and the late time slowing down of the decay as \(J_{z}\) decreases. This is a finite system size effect. In particular, as \(J_{z}\) decreases, the quasi-conserved operator becomes more localized and more similar to the zero mode \(\psi_{0}\). Therefore, one has to increase the system size further to separate them. In appendix A, we connect the decay of the edge zero mode to a tunneling process of a 1D particle from the edge to the bulk in Krylov space. We show how FGR can be recovered in Krylov space. ## V Conclusions The TFIM is one of the most basic models that hosts an edge zero mode. Understanding the stability of that zero mode to perturbations is important. This paper has studied the effect of a weak boundary integrability-breaking perturbation. We have compared this perturbation to the conventional one, where integrability-breaking perturbations are uniformly included all along the chain. We showed a qualitatively different behavior in the dynamics of the zero modes for these two cases. In particular, for the impurity model, the zero mode decays much more slowly than for the case where the perturbations are non-zero all along the chain. The slow decay arises because the zero mode has an overlap with a quasi-conserved quantity. We explicitly identified this quasi-conserved quantity by a trick that involves local modifications of couplings at the end of the spin chain to enforce the exact degeneracy of the spectrum for any Figure 5: The infinite temperature autocorrelation function for \(L=18\) (top), \(20\) (middle) and \(22\) (bottom) with \(g=0.6\) and different strengths of the integrability breaking term \(J_{z}\). As \(J_{z}\) decreases, the lifetime increases. 
Each data set is fitted (solid black line) with an exponential function \(C\exp(-t\Gamma_{\rm fit})\) where the decay rate is determined from the average \(\Gamma_{\rm fit}=(\Gamma_{90\%}+\Gamma_{50\%})/2\), where \(\Gamma_{\rm z\%}\) is the inverse time at which the autocorrelation is \(0.0xC\). The decay rate is in units of \(J_{x}=1\). Figure 6: \(\Gamma_{\rm Fit}\) vs. \(J_{z}\) on a log-log scale for \(L=18,20,22\). The decay rate from FRG gives \(\Gamma_{\rm Fit}=0.16J_{z}^{2}\) for small \(J_{z}\) (dashed red line). The system size effect becomes larger for small \(J_{z}\) as the autocorrelation is strongly influenced by the quasi-conserved operator in Fig. 5. The decay rate is in units of \(J_{x}=1\). finite system size. We showed that in the thermodynamic limit, the overlap between the zero mode with the quasi-conserved quantity becomes smaller, approaching zero as \(L\to\infty\). In addition, we showed that for large enough transverse fields and in the thermodynamic limit, the zero mode decay could be captured by FGR. While we have a quantitative understanding of the zero mode decay for \(g\geq 1/3\), an important open question is the fate of the zero mode for small \(g\). We do not expect FGR to hold below \(g<1/3\). How the decay rate changes as \(g\) becomes smaller is left for future studies. The analytic construction of the quasi-conserved quantities presented in Appendix B might help these studies. In addition, the Krylov method, generalized to systems in the thermodynamic limit, employed here to recover FGR (see Appendix A), may also be helpful. Acknowledgments: This work was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0010821 (HY and AM), and by the National Science Foundation under Grant NSF DMR-2116767 (LK and AGA). HY acknowledges the support of the NYU IT High-Performance Computing resources, services, and staff expertise. AGA acknowledges the support of Rosi and Max Varon Visiting Professorship. ## Appendix A Operator growth in Krylov space Besides the direct study of the autocorrelation function to determine the decay rate of a given operator \(O_{1}\), there is another approach for extracting decay rates. This involves studying how the operator evolves and spreads in operator space. First, the Heisenberg time evolution of the operator \(O_{1}\) under the Hamiltonian \(H\) is \[O_{1}(t)=e^{iHt}O_{1}e^{-iHt}=\sum_{n=0}^{\infty}\frac{(it)^{n}}{n!}\mathcal{ L}^{n}O_{1}, \tag{10}\] where we define \(\mathcal{L}O=[H,O]\) for any operator \(O\). In operator space, we treat the operator \(O_{1}\) as a vector \(|O_{1}\rangle\), and \(\mathcal{L}\) is called the superoperator since it is an operator which acts on operators. In this new notation, the time evolution of \(O_{1}\) becomes \[|O_{1}(t)\rangle=e^{i\mathcal{L}t}|O_{1}\rangle, \tag{11}\] where \(\mathcal{L}\) plays the role of a "Hamiltonian" as it is the generator of time evolution for the operators. The operator space is spanned by the set of operators generated by \(\mathcal{L}\) acting on \(|O_{1}\rangle\): \(\{|O_{1}\rangle,\mathcal{L}|O_{1}\rangle,\mathcal{L}^{2}|O_{1}\rangle,\ldots\}\), and is called the Krylov space. The inner product between two operators \(A\) and \(B\) is defined as \[(A|B)=\frac{1}{2^{L}}\mathrm{Tr}[A^{\dagger}B]. \tag{12}\] To construct an orthonormal basis, we apply the Lanczos algorithm. 
Starting from a normalized operator \(|O_{1}\rangle\), one can generate a new basis element \(|O_{2}\rangle\) via \(\mathcal{L}|O_{1}\rangle=b_{1}|O_{2}\rangle\) with \(b_{1}=\sqrt{|\mathcal{L}|O_{1}\rangle|^{2}}\), the norm of \(\mathcal{L}|O_{1}\rangle\). The remaining basis elements are computed from the iterative relation for \(n\geq 2\) \[\mathcal{L}|O_{n}\rangle=b_{n}|O_{n+1}\rangle+b_{n-1}|O_{n-1}\rangle, \tag{13}\] where \(b_{n}=\sqrt{|\mathcal{L}|O_{n}\rangle-b_{n-1}|O_{n-1}\rangle|^{2}}\). Finally, one can represent \(\mathcal{L}\) as a tri-diagonal matrix in this basis \[\mathcal{L}=\begin{pmatrix}0&b_{1}&&\\ b_{1}&0&b_{2}&\\ &b_{2}&0&\ddots\\ &&\ddots&\ddots\end{pmatrix}. \tag{14}\] In the following, we refer to this tridiagonal matrix as the Krylov Hamiltonian. There are two kinds of representations in the numerical computation to generate off-diagonal elements \(\{b_{n}\}\): matrix representation or Pauli strings. In the matrix representation, \(|O_{1}\rangle\) is a \(2^{L}\times 2^{L}\) matrix. It is usually sparse if \(|O_{1}\rangle\) is some local operator, e.g., \(\sigma_{1}^{x}\). After some iterations, one begins to generate non-sparse matrices \(|O_{n}\rangle\), and the computation is limited by the memory to store such matrices. The non-sparsity of the basis \(|O_{n}\rangle\) is a property both for integrable and non-integrable models unless, for the former, a suitable Majorana basis is available to perform the expansion. The idea behind the Pauli strings representation is to overcome the non-sparsity of the matrix representation, and below we summarize the discussion in [21]. For spin systems, a Pauli string is a series of tensor products of Pauli matrices on each site as follows \[i^{\delta}(-1)^{\epsilon}(\sigma_{1}^{x})^{v_{1}}(\sigma_{1}^{z})^{w_{1}} \otimes\ldots\otimes(\sigma_{L}^{x})^{v_{L}}(\sigma_{L}^{z})^{w_{L}}, \tag{15}\] where \(\delta,\epsilon,v_{n},w_{n}\in\{0,1\}\). Thus one only requires to store \(2L+2\) numbers and each of them is either \(0\) or \(1\), for a given Pauli string. Since \(\sigma^{x}\sigma^{z}=-i\sigma^{y}\) and the identity corresponds to setting \(v=w=0\), the Pauli string representation indeed exhausts all possible combinations of local spin operators. For two given Pauli strings \(\sigma\) and \(\sigma^{\prime}\), labeled by \(\{\delta,\epsilon,\vec{v},\vec{w}\}\) and \(\{\delta^{\prime},\epsilon^{\prime},\vec{v^{\prime}},\vec{w^{\prime}}\}\), the new Pauli string generated from the commutation \(\sigma^{\prime\prime}=[\sigma,\sigma^{\prime}]\) obeys the algebra rules \[\delta^{\prime\prime} =\delta+\delta^{\prime}\ \mathrm{mod}\ 2, \tag{16}\] \[\epsilon^{\prime\prime} =\epsilon+\epsilon^{\prime}+\delta\delta^{\prime}+\vec{w}\cdot \vec{v^{\prime}}\ \mathrm{mod}\ 2,\] (17) \[\vec{v^{\prime\prime}} =\vec{v}+\vec{v^{\prime}}\ \mathrm{mod}\ 2,\] (18) \[\vec{w^{\prime\prime}} =\vec{w}+\vec{w^{\prime}}\ \mathrm{mod}\ 2. \tag{19}\] For an operator represented by Pauli strings, there are overall \((2L+2)\times N\) numbers, where \(N\) is the number of Pauli strings since it requires another \(N\)-dimensional vector to store the coefficients of each Pauli string. As long as \(N\) is much smaller than \(2^{L}\times 2^{L}\), the Pauli string representation is efficient in the memory cost. However, it is time-consuming to add or subtract operators which consist of many Pauli strings because one has to scan through all the Pauli strings of each operator to determine if the two operators contain the same Pauli string. 
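As an illustration of the iteration just described, the short sketch below generates the couplings \(b_{n}\) in the dense matrix representation, using the inner product \((A|B)=\mathrm{Tr}[A^{\dagger}B]/2^{L}\). It is our own minimal implementation (no re-orthogonalization, so only the first few dozen \(b_{n}\) are reliable, and it is practical only for small \(L\)); the Pauli-string bookkeeping described above is what allows the system-size-converged \(b_{n}\) of Fig. 7 to be reached.

```python
# Sketch of the operator-space Lanczos iteration for the Liouvillian L = [H, .] with the
# seed sigma_1^x, following the recursion quoted above.
import numpy as np
from functools import reduce

sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
id2 = np.eye(2, dtype=complex)
op = lambda sites, L: reduce(np.kron, [sites.get(i, id2) for i in range(L)])
hs_norm = lambda A: np.sqrt((np.trace(A.conj().T @ A) / A.shape[0]).real)

def krylov_bn(H, O1, nmax):
    O_prev, O_cur = np.zeros_like(O1), O1 / hs_norm(O1)
    bs, b_prev = [], 0.0
    for _ in range(nmax):
        A = (H @ O_cur - O_cur @ H) - b_prev * O_prev    # L|O_n) - b_{n-1}|O_{n-1})
        b = hs_norm(A)
        if b < 1e-10:            # Krylov space closed (free chain: only 2L Majoranas)
            break
        bs.append(b)
        O_prev, O_cur, b_prev = O_cur, A / b, b
    return np.array(bs)

L, g, Jz = 8, 0.3, 0.2
H_free = sum(op({i: sx, i + 1: sx}, L) for i in range(L - 1)) + g * sum(op({i: sz}, L) for i in range(L))
b_free = krylov_bn(H_free, op({0: sx}, L), 40)                               # dimerized, terminates
b_imp = krylov_bn(H_free + Jz * op({0: sz, 1: sz}, L), op({0: sx}, L), 40)   # slow, roughly sqrt(n), growth
```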
Addition and subtraction are much simpler operations in matrix representation. Therefore, one may choose either the matrix or the Pauli string representation in numerical computation depending on how fast the number of Pauli strings grows and how many off-diagonal elements \(b_{n}\) one needs to compute. In Fig. 7, we show the system size independent results of \(\{b_{n}\}\) generated by \(|O_{1}\rangle=\sigma_{1}^{2}\). Numerically, we calculate \(\{b_{n}\}\) with increasing system size until \(\{b_{n}\}\) is independent of system size. We employ the matrix representation for the model with perturbations on all sites, and we employ the Pauli strings representation for the boundary impurity and the free case. The growth of \(\{b_{n}\}\) reflects the integrability of the system. Without any perturbation, the system is free and \(\{b_{n}\}\) are perfectly dimerized, which allows for an exactly conserved zero mode localized on the first site. In particular, (10) becomes a Su-Schrieffer-Heeger (SSH) model with topologically non-trivial dimerization. However, the perfect dimerization is altered by interactions and it is argued [21] that a linear growth appears when the system is chaotic, e.g. perturbations on all sites of a chain in Fig. 7. When integrability is weakly broken like for the boundary impurity, we observe a square root behavior of \(\{b_{n}\}\). The square root behavior is also seen in the integrable interacting model of the XXX chain, see [21]. Here, we simply assume square root behavior holds for the boundary impurity case and we will build a toy model from it. To understand how \(\{b_{n}\}\) is related to the decay rate of edge zero mode, we follow and summarize the discussion in Refs. [11; 13]. First, one can write down the Schrodinger equation of the operator from (11) and (10) \[-i\partial_{t}\Psi_{n}=b_{n}\Psi_{n+1}+b_{n-1}\Psi_{n-1}, \tag{12}\] where \(\Psi_{n}\) are the coefficients of operator \(\Psi\) expanded in Krylov space, \(|\Psi\rangle=\sum_{n=1}^{\infty}\Psi_{n}|O_{n}\rangle\). Since for the non-interacting case, the \(b_{n}\) are perfectly dimerized, we proceed by decomposing \(\Psi_{n}\) and \(b_{n}\) into two parts \[\Psi_{n}=i^{n}\left[\alpha_{n}+(-1)^{n}\tilde{\alpha}_{n}\right], \tag{13}\] \[b_{n}=h_{n}+(-1)^{n}\tilde{h}_{n}, \tag{14}\] where \(h_{n}\) depicts the average growth of \(b_{n}\) and \(\tilde{h}_{n}\) senses the dimerization of \(b_{n}\). The Schrodinger equation then becomes \[-i\partial_{t}\left[\alpha_{n}+(-1)^{n}\tilde{\alpha}_{n}\right] \tag{15}\] \[= i\left[(h_{n}\alpha_{n+1}-\tilde{h}_{n}\tilde{\alpha}_{n+1}-h_{ n-1}\alpha_{n-1}-\tilde{h}_{n-1}\tilde{\alpha}_{n-1})\right.\] \[\left.+(-1)^{n}(\tilde{h}_{n}\alpha_{n+1}-h_{n}\tilde{\alpha}_{n +1}+h_{n-1}\tilde{\alpha}_{n-1}+\tilde{h}_{n-1}\alpha_{n-1})\right].\] Since \((-1)^{n}\) is rapidly oscillating, the Schrodinger equation can be solved by equating the terms with and without \((-1)^{n}\) on both sides. Now, we assume \(\alpha_{n},\tilde{\alpha}_{n},h_{n}\) and \(\tilde{h}_{n}\) are slowly varying and smooth functions of \(n\). In the continuous limit of \(n\), we consider \(h_{n\pm 1}\approx h(n)\pm\partial_{n}h(n)\) and the same expansion for \(\tilde{h}_{n\pm 1},\alpha_{n\pm 1}\) and \(\tilde{\alpha}_{n\pm 1}\). In addition, we only keep terms up to one derivative. 
One obtains \[-i\partial_{t}\begin{pmatrix}\alpha\\ \tilde{\alpha}\end{pmatrix}=i\begin{pmatrix}\partial_{n}h+2h\partial_{n}&-2 \tilde{h}+\partial_{n}\tilde{h}\\ 2\tilde{h}-\partial_{n}\tilde{h}&-\partial_{n}h-2h\partial_{n}\end{pmatrix} \begin{pmatrix}\alpha\\ \tilde{\alpha}\end{pmatrix}. \tag{16}\] The diagonal terms can be massaged into simple linear spatial derivatives. First by rescaling fields, \((\alpha\ \tilde{\alpha})^{T}=\chi/\sqrt{h}\), \(\partial_{n}h\) is canceled. Then, absorbing \(2h\) into \(n\) via the change of variables \[X=\int_{0}^{n}\frac{dn^{\prime}}{2h(n^{\prime})}. \tag{17}\] one finally arrives at \[-i\partial_{t}\chi=\left[-i\sigma_{z}\partial_{X}+\sigma_{y}m(X)\right]\chi, \tag{18}\] where \(m(X)=2\tilde{h}-(\partial_{X}\tilde{h})/2h\). Essentially, we have approximated the generalized SSH model in Krylov space as a continuous 1D Dirac equation with spatially non-uniform mass that contains information about the dimerization of \(b_{n}\). In the following, we first extract the information from numerical results and then apply the above toy model to compute how the decay rate is influenced by the boundary impurity. Fig. 8 shows the system size independent \(b_{n}\) up to \(n=100\) with \(g=0.6\) and \(J_{z}=0.2\), and determines \(h_{n}\) and Figure 7: Off diagonal matrix element \(b_{n}\) of the Krylov Hamiltonian for the seed operator \(\sigma_{1}^{z}\) and for the transverse-field Ising model with different perturbations. Without perturbations, \(b_{n}\) is dimerized. For the boundary impurity \(J_{z}\sigma_{1}^{z}\sigma_{2}^{z}\), \(b_{n}\)s follow in square root growth. With perturbation on all sites \(J_{z}\sum_{i}\sigma_{i}^{z}\sigma_{i+1}^{z}\), \(b_{n}\)s grow linearly. \(\tilde{h}_{n}\) from the \(b_{n}\) as follows \[h_{n} \approx\frac{b_{n}+b_{n+1}}{2}, \tag{117}\] \[\tilde{h}_{n} \approx(-1)^{n}\frac{b_{n}-b_{n+1}}{2}. \tag{118}\] Here we present \(h_{n}\) and \(\tilde{h}_{n}\) in \(\sqrt{n}\) scale since the \(b_{n}\) follow a square root growth. One can approximate \(h_{n}\) (middle panel) as \(h_{n}\approx a\sqrt{n}+b\) and the new spatial coordinate \(X\) from (117) is \[X=\frac{\sqrt{n}}{a}-\frac{b\ln(1+a\sqrt{n}/b)}{a^{2}}. \tag{119}\] For \(\tilde{h}_{n}\) and its 7 site moving average \(\langle\tilde{h}_{n}\rangle_{7}\) (right panel), the dimerization only survives up to \(\sqrt{n}\sim 6\), and therefore the edge zero mode has to decay eventually. The moving average \(\langle\tilde{h}_{n}\rangle_{7}\) mimics the slowly varying continuous \(h(n)\) in the toy model. In the new coordinate \(X\), \(\tilde{h}_{n}\) and \(\langle\tilde{h}_{n}\rangle_{7}\) is presented in Fig. 9. To determine the mass of the toy model and perform analytic calculations, we first approximate the mass by \(m(X)\approx 2\tilde{h}(X)\). This is because the moving average \(\langle\tilde{h}_{n}\rangle_{7}\) is rather smooth and \(h\) grows with \(X\), so that \((\partial_{X}\tilde{h})/2h\) can be dropped. Then, we fit \(\langle\tilde{h}_{n}\rangle_{7}\) with a step function profile \(M_{0}\theta(X_{0}-X)/2\) for analytic simplicity. Thus the mass is approximated to be \(m(X)\approx 2\tilde{h}(X)\approx M_{0}\theta(X_{0}-X)\). For a given mass distribution, one can solve the Green's function of the toy model (116), and from that extract the decay rate of the edge zero mode from the pole of the Green's function on the positive imaginary axis in the complex plane. 
To find the pole of the Green's function, one can instead easily solve the scattering problem because the transmission and reflection coefficients share the same poles as the Green's function. The scattering problem of the incident wave coming from \(X=\infty\) with step function mass distribution is solved by considering \(X<X_{0}\): \[\chi(X<X_{0})\] \[=e^{iEt}\left[A_{1}e^{-\kappa X}\begin{pmatrix}-iM_{0}\\ i\kappa+E\end{pmatrix}+A_{2}e^{\kappa X}\begin{pmatrix}-iM_{0}\\ -i\kappa+E\end{pmatrix}\right], \tag{120}\] and \(X>X_{0}\): \[\chi(X>X_{0})\] \[=e^{iEt}\left[e^{iE(X-X_{0})}\begin{pmatrix}0\\ 1\end{pmatrix}+Be^{-iE(X-X_{0})}\begin{pmatrix}1\\ 0\end{pmatrix}\right], \tag{121}\] where \(\kappa=\sqrt{M_{0}^{2}-E^{2}}\). The coefficients, \(A_{1},A_{2}\) and \(B\), are determined by boundary conditions at \(X=0\) and \(X_{0}\). In the original discrete Schrodinger equation (100), the boundary condition at \(n=0\) is \(\Psi_{0}=\alpha_{0}+\tilde{\alpha}_{0}=0\). In terms of the toy model, one obtains the boundary Figure 8: Off-diagonal matrix element \(b_{n}\) (left panel). \(h_{n}\) (middle panel) and \(\tilde{h}_{n}\) (right panel) are generated according to (117) and (118). As \(b_{n}\)s follow square root growth, both \(h_{n}\) and \(\tilde{h}_{n}\) are plotted on the \(\sqrt{n}\)-scale. \(h_{n}\) describes the average growth of \(b_{n}\) and is fitted with \(a\sqrt{n}+b\). \(\tilde{h}_{n}\) illustrates the dimerization of \(b_{n}\), which only survives up to \(\sqrt{n}\sim 6\). Also plotted is the moving 7-sites average \(\langle\tilde{h}_{n}\rangle_{7}\) as a smooth approximation for the \(\tilde{h}_{n}\). Figure 9: \(\tilde{h}_{n}\) vs coordinate \(X\). The 7-sites moving average \(\langle\tilde{h}_{n}\rangle_{7}\) is fitted with the step function \(M_{0}\theta(X_{0}-X)/2\), where \(M_{0}/2\) is fitted from the value of the first \(\langle\tilde{h}_{n}\rangle_{7}\) or the maximum of \(\langle\tilde{h}_{n}\rangle_{7}\). In this case, they happen to be the same, giving one fitting result. \(X_{0}\) is fitted by extrapolating the \(\langle\tilde{h}_{n}\rangle_{7}\) data to find the smallest \(X_{0}\) where \(\langle\tilde{h}_{n}\rangle_{7}\) becomes \(M_{0}/4\). condition at \(X=0\): \(\sigma_{x}\chi(0)=-\chi(0)\). At \(X=X_{0}\), the wave function is continuous: \(\chi(X\to X_{0}^{-})=\chi(X\to X_{0}^{+})\). From these two conditions, the coefficients can be solved for and they share the common factor in the denominator. The poles are the value of \(E\) at which the common denominator vanishes, \[\kappa\cosh(\kappa X_{0})-m\sinh(\kappa X_{0})+iE\sinh(\kappa X_{0})=0. \tag{101}\] For the decay rate of edge zero mode, one is looking for the solution \(E=i\Gamma\) of the above equation. In the limit \(\Gamma/M_{0}\ll 1\) and \(M_{0}X_{0}\gg 1\), the solution is the WKB approximation \[\Gamma\approx 2M_{0}e^{-2M_{0}X_{0}}. \tag{102}\] We compare the numerical results of the poles with the WKB formula in Fig. 10, and find that they are in good agreement at \(M_{0}X_{0}>2\). Although we have performed a crude approximation to establish the toy model and extract information from \(h_{n}\) and \(\tilde{h}_{n}\), the underlying physical picture is quite simple. The decay of the edge zero mode can be realized as a tunneling event. Without integrability-breaking perturbations, the system has perfect dimerization and \(X_{0}\to\infty\) so that the zero mode has an infinitely long lifetime. 
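The pole condition above is transcendental but easy to solve numerically. The sketch below (our own rearrangement of the condition into an overflow-safe form, with the root bracketed between \(0\) and \(M_{0}\)) compares the exact root with the WKB estimate.

```python
# Sketch: solve the pole condition for E = i*Gamma and compare with the WKB estimate
# Gamma ~ 2*M0*exp(-2*M0*X0).  With kappa = sqrt(M0^2 + Gamma^2), the condition
# kappa*cosh(kappa*X0) - M0*sinh(kappa*X0) - Gamma*sinh(kappa*X0) = 0 is rewritten as
# (Gamma + M0 - kappa) = (kappa + M0 + Gamma) * exp(-2*kappa*X0) to avoid overflow.
import numpy as np
from scipy.optimize import brentq

def pole_rate(M0, X0):
    def f(G):
        kappa = np.hypot(M0, G)
        return (G + M0 - kappa) - (kappa + M0 + G) * np.exp(-2.0 * kappa * X0)
    return brentq(f, 1e-14, M0)          # the decay rate lies between 0 and M0

def wkb_rate(M0, X0):
    return 2.0 * M0 * np.exp(-2.0 * M0 * X0)

for X0 in [1.0, 2.0, 3.0, 4.0]:          # M0 = 1 sets the unit; cf. Fig. 10
    print(X0, pole_rate(1.0, X0), wkb_rate(1.0, X0))
```

As in Fig. 10, the two agree once \(M_{0}X_{0}\gtrsim 2\). In the unperturbed chain the mass barrier extends to \(X_{0}\to\infty\) and the root is pushed to zero, i.e. the edge mode does not decay.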
With perturbations, the dimerization terminates at some finite \(X_{0}\) and the edge zero mode becomes unstable as it can now tunnel through the finite potential barrier \(M_{0}\). When one gradually turns off the perturbation \(J_{z}\), approaching the free limit, \(X_{0}\) strongly depends on \(J_{z}\) as \(X_{0}\to\infty\) with \(J_{z}\to 0\), but \(M_{0}\) stays around some \(\tilde{\mathcal{O}}(1)\) number. Therefore, the \(J_{z}^{z}\) dependence in the FGR region is expected to arise from the exponential factor of the WKB result, i.e., we expect \(M_{0}X_{0}\propto-\ln J_{z}\), where the \(J_{z}\)-dependence primarily comes from \(X_{0}\). Fig. 11 shows results from the two different fittings of Fig. 12. It shows the trend \(M_{0}X_{0}\propto-\ln J_{z}\), supporting the FGR argument in the main text. One advantage of Krylov space is that one can probe the small \(J_{z}\) region more easily than calculating the autocorrelation function. We take \(J_{z}\) to be as small as \(J_{z}=0.1\) in Figures 11 and 12, but this regime is rather unfeasible for the autocorrelation function which shows strong system size dependence. However, the decay rate from the toy model is sensitive to the way the fitting is done, as shown in Fig. 11. As the oscillations of \(\langle\tilde{h}_{n}\rangle_{7}\) become stronger for small \(J_{z}\), fitting \(M_{0}\) and \(X_{0}\) from a step function is rather ambiguous. A more careful analysis of the numerical data is required for small \(J_{z}\). For the region of \(g,J_{z}\) we have explored, we conclude that the decay rate obeys FGR. ## Appendix B Quasi-conserved operator for finite spin chains The analysis of the edge spin autocorrelation function in section III revealed the existence of a non-local quasi-conserved operator \(O_{c}\) responsible for the observed plateau at intermediate times. Here we give the explicit construction of this operator from the commutation algebra of a finite spin chain. We consider the two Hamiltonians \[\tilde{H}_{L}^{A} =\sum_{i=1}^{L-1}\sigma_{i}^{x}\sigma_{i+1}^{x}+g\sum_{i=1}^{L-1 }\sigma_{i}^{z}+J_{z}\sigma_{1}^{z}\sigma_{2}^{z}, \tag{103}\] \[\tilde{H}_{L}^{B} =\sum_{i=1}^{L-1}\sigma_{i}^{x}\sigma_{i+1}^{x}+g\sum_{i=1}^{L-1} \sigma_{i}^{z}+J_{z}\sum_{i=1}^{L-2}\sigma_{i}^{z}\sigma_{i+1}^{z}. \tag{104}\] Above, the first Hamiltonian (103) is the TFIM perturbed by boundary interactions, and with \(g=0\) on the last site. The second Hamiltonian (104) has interactions on all sites, except the last site where both \(g=J_{z}=0\). It is clear that both Hamiltonians commute with the \(\mathbb{Z}_{2}\) sym Figure 10: Comparison between decay rates from the pole of the Green’s function (101) and from WKB (102). Although WKB is typically valid when \(M_{0}X_{0}\gg 1\), the numerical results already show a good agreement for \(M_{0}X_{0}>2\). metry (parity) operator \(\mathcal{D}=\sigma_{1}^{z}\sigma_{2}^{z}...\sigma_{L}^{z}\) and the spectra of the two Hamiltonians are exactly doubly degenerate. One can attempt to find a conserved operator \(O_{c}\) satisfying \[[\tilde{H}_{L}^{A,B},O_{c}^{A,B}]=0,\hskip 28.452756pt\{\mathcal{D},O_{c}^{A,B} \}=0. \tag{14}\] One way to proceed is to expand \(O_{c}\) as a power series \[O_{c}=\sum_{n=0}^{\infty}J_{z}^{n}\left(\sum_{m=0}^{\infty}g^{m}O_{c}^{(n,m)} \right), \tag{15}\] starting from \(O_{c}^{(0,0)}=\sigma_{1}^{x}\), similar to the approach in [3; 22]. (14) can be solved order by order exactly on a finite chain. 
For fixed \(L\), one can expand \(O_{c}\) as a linear combination of all the strings of up to \(L\) spin operators which anticommute with \(\mathcal{D}\), and then solve the linear equation \([\tilde{H}_{L}^{A,B},O_{c}^{A,B}]=0\) for the coefficients. For \(L=3\), both Hamiltonians are equal, \(\tilde{H}_{3}^{A}=\tilde{H}_{3}^{B}\). There are multiple solutions of (14) for \(O_{c}=O_{c}^{A,B}\), with two of these having an overlap with \(\sigma_{1}^{x}\), \[\begin{split} O_{c,1}^{L=3}\propto&\sigma_{1}^{x}+ g\sigma_{1}^{z}\sigma_{2}^{x}+g^{2}\sigma_{1}^{z}\sigma_{2}^{z}\sigma_{3}^{x}-J_{z} \sigma_{1}^{y}\sigma_{2}^{y}\sigma_{3}^{x}\\ &+J_{z}g(\sigma_{1}^{z}+\sigma_{2}^{z})\sigma_{3}^{x}\end{split} \tag{16}\] \[\begin{split} O_{c,2}^{L=3}\propto& g^{2}\sigma_{1}^{z}-gJ_{z} \sigma_{1}^{x}\sigma_{2}^{z}+g(g^{2}-J_{z}^{2})\sigma_{1}^{z}\sigma_{2}^{x}\\ &+gJ_{z}(1+g^{2}-J_{z}^{2})\sigma_{2}^{z}\sigma_{3}^{x}+J_{z}(g^ {2}-J_{z}^{2})\sigma_{1}^{x}\sigma_{2}^{x}\sigma_{3}^{x}\\ &+gJ_{z}(g^{2}-J_{z}^{2})\sigma_{1}^{z}\sigma_{3}^{x}\\ &+g^{2}(g^{2}-J_{z}^{2})\sigma_{1}^{z}\sigma_{2}^{x}\sigma_{3}^{x},\end{split} \tag{17}\] up to normalization. Interestingly, these two operators are distinct for \(J_{z}>0\) but reduce to the zero mode of the TFIM (4) as \(J_{z}\to 0\), \[O_{c,1,2}^{L=3}\rightarrow\psi_{0}\propto\sigma_{1}^{x}+g\sigma_{1}^{z}\sigma _{2}^{x}+g^{2}\sigma_{1}^{z}\sigma_{2}^{z}\sigma_{3}^{x}. \tag{18}\] A particular consequence is that, after normalization, the overlap of \(O_{c,1,2}^{L=3}\) with \(\sigma_{1}^{x}\) is of \(\mathcal{O}(1)\) in \(J_{z}\). By expressing \(O_{c,1,2}^{L=3}\) in terms of Majoranas, the interactions lead to the three-Majorana terms (terms with \(J_{z}\)). \[O_{c,1}^{L=3}= a_{1}+ga_{3}+g^{2}a_{5}-iJ_{z}a_{2}a_{3}a_{5}-igJ_{z}a_{3}a_{4}a_{5}\] \[-igJ_{z}a_{1}a_{2}a_{5}, \tag{19}\] \[O_{c,2}^{L=3}= g^{2}a_{1}+g(g^{2}-J_{z}^{2})a_{3}+g^{2}(g^{2}-J_{z}^{2})a_{5}\] \[+igJ_{z}a_{1}a_{3}a_{4}-iJ_{z}(g^{2}-J_{z}^{2})a_{1}a_{4}a_{5}\] \[-igJ_{z}(g^{2}-J_{z}^{2})a_{3}a_{4}a_{5}\] \[-igJ_{z}(1+g^{2}-J_{z}^{2})a_{1}a_{2}a_{5}. \tag{20}\] For chain length \(L=4\), \(\tilde{H}_{4}^{A}\neq\tilde{H}_{4}^{B}\), and indeed \(O_{c}^{A}\) and \(O_{c}^{B}\) are different. For the impurity model \(\tilde{H}_{4}^{A}\), there are four solutions \(O_{c,1,\ldots,4}^{A;L=4}\) with non-zero overlap with \(\sigma_{1}^{x}\) but it vanishes as \(J_{z}=0\). However, this seems to be an artifact of the \(L=4\) case as we have checked different system sizes up to \(L=7\). For the model with interactions on all sites \(\tilde{H}_{4}^{B}\), there are five solutions \(O_{c;1,\ldots,5}^{B;L=4}\) with non-zero overlap with \(\sigma_{1}^{x}\). Of these, three of them are \(O(1)\) and the other two are \(O(J_{z})\). Explicitly, the simplest operator is \[O_{c;1}^{B;L=4}= 3a_{1}+3ga_{3}+3g^{2}a_{5}+3g(g^{2}+J_{z}^{2})a_{7}\] \[-iJ_{z}a_{1}a_{3}a_{6}-3iJ_{z}a_{2}a_{3}a_{5}-igJ_{z}a_{3}a_{6}a_ {7}\] \[-igJ_{z}a_{1}a_{4}a_{7}-3igJ_{z}a_{3}a_{4}a_{5}-4igJ_{z}a_{4}a_{5}a _{7}\] \[-4igJ_{z}a_{2}a_{3}a_{7}-4igJ_{z}a_{1}a_{2}a_{5}\] \[-5ig^{2}J_{z}a_{5}a_{6}a_{7}-6ig^{2}J_{z}a_{3}a_{4}a_{7}\] \[+i(1-5g^{2})J_{z}a_{1}a_{2}a_{7}-ga_{1}a_{3}a_{4}a_{6}a_{7}\] \[-ga_{1}a_{2}a_{3}a_{5}a_{6}-g^{2}a_{2}a_{3}a_{5}a_{6}a_{7}-g^{2}a_{ 1}a_{2}a_{4}a_{5}a_{7}\] \[+g^{2}a_{1}a_{2}a_{3}a_{4}a_{5}+g(1-g^{2}-J_{z}^{2})a_{1}a_{2}a_{5}a _{6}a_{7}\] \[+g(1-g^{2}-4J_{z}^{2})a_{1}a_{2}a_{3}a_{4}a_{7}\] \[-g(g^{2}+4J_{z}^{2})a_{3}a_{4}a_{5}a_{6}a_{7}. 
\tag{21}\] For \(J_{z}\to 0\), note that Figure 12: \(h_{n}\) (top panels) and \(\tilde{h}_{n}\) (bottom panels) for \(g=0.6\) with different boundary impurity strengths \(J_{z}\). \(h_{n}\) is fitted with \(a\sqrt{n}+b\). The 7-sites moving average \(\langle\tilde{h}_{n}\rangle_{7}\) is fitted with the step function \(M_{0}\theta(X_{0}-X)/2\), where \(M_{0}/2\) can is fitted from the value of the first \(\langle\tilde{h}_{n}\rangle_{7}\) or by its maximum value. \(X_{0}\) is fitted by extrapolating the \(\langle\tilde{h}_{n}\rangle_{7}\) data to find the smallest \(X_{0}\) where \(\langle\tilde{h}_{n}\rangle_{7}\) becomes \(M_{0}/4\). This leads to two different fitting results. (five Majorana terms), which indeed gives an \(\mathcal{O}(1)\) overlap with \(a_{1}\). For \(L=5,6,7\), one can also find conserved quantities \(O_{c}^{A,B;L}\), with overlap of \(\mathcal{O}(1)\) with \(\sigma_{1}^{x}\), i.e., \(a_{1}\), the Majorana on the first site. Since there are too many solutions of the quasi-conserved operators, we simply show their overlap with \(a_{1}\) for \(L=5\) in Fig. 13. The existence of the conservation laws \(O_{c}\) explains the plateaus observed in Fig. 3: for intermediate times, the edge spin excitation relaxes into the conserved quantities \(O_{c}\). The fact that we find multiple solutions \(O_{c;1,\ldots}^{A,B;L}\) does not mean that there are multiple "zero modes". Rather, one should project \(\sigma_{1}^{x}\) onto the space spanned by the conserved quantities \(O_{c,i}^{A,B;L}\), which gives the single commuting operator onto which the system relaxes. The other operators can then be made orthogonal to it by a Gram-Schmidt procedure. We expect the squared norm of this projected operator to give an estimate of the plateau height. As we show in Fig. 14, the projected norm decreases with the size of the chain \(L\), and approaches the values of the plateaus in Fig. 3 upon extrapolation to \(L=10\). We can understand the decrease of the norm of this operator as due to its delocalization. As we noted in equations (14-15), for a longer chain the conserved operators \(O_{c}\) involve longer Majorana strings and the number of possible strings increases exponentially. Fig. 14 shows that the number of terms involving longer strings of Majoranas indeed increases very rapidly, which leads to the \(O_{c}\) becoming less localized on the first site. Note also that, while the number of terms increases at the same rate for both the impurity model and the model with interactions in all sites, in the impurity model the weight of the longer strings is smaller, which is consistent with this model displaying higher plateaus (Fig. 3). ## Appendix C Random state approximation and Trotter decomposition Due to the limitations of ED, a different numerical method is needed in order to explore autocorrelation functions for large system sizes. The autocorrelation of \(\sigma_{1}^{x}\) (2) is explicitly written as \[A_{\infty}(t)=\frac{1}{2^{L}}\mathrm{Tr}\left[U^{\dagger}(t)\sigma_{1}^{x}U(t) \sigma_{1}^{x}\right], \tag{16}\] where \(U(t)\) is the unitary evolution operator. One can replace the last \(\sigma_{1}^{x}\) by \((\sigma_{1}^{x}+\mathbb{I})\), where \(\mathbb{I}\) is the identity Figure 14: Top panel: Squared norm of the projection of the edge spin \(\sigma_{1}^{x}=a_{1}\) onto the subspace of operators spanned by the conserved quantities \(O_{c}\). 
For both models, this projection becomes smaller for a larger chain, signaling that the resulting plateau observed in the autocorrelation function becomes smaller. Bottom panel: Number of terms as a function of string length in the conserved operator \(O_{c}^{A,B;L=7}\) which best overlaps with the edge spin. The rapid increase in the number of terms is responsible for the delocalization of the conserved quantities. matrix, since \(\text{Tr}[\sigma_{1}^{x}]=\text{Tr}[\sigma_{1}^{x}(t)]=0\) so that the autocorrelation stays the same. Moreover, with \((\sigma_{1}^{x})^{2}=\mathbb{I}\), one derives the identity \((\sigma_{1}^{x}+\mathbb{I})=(\sigma_{1}^{x}+\mathbb{I})^{2}/2\). By cyclic permutation in the trace, autocorrelation function has the following symmetric expression \[A_{\infty}(t)=\frac{1}{2^{L}}\text{Tr}\left[\frac{(\sigma_{1}^{x}+\mathbb{I})} {\sqrt{2}}U^{\dagger}(t)\sigma_{1}^{x}U(t)\frac{(\sigma_{1}^{x}+\mathbb{I})}{ \sqrt{2}}\right]. \tag{10}\] Now, we approximate the trace by average over a Haar random state \(\phi\) up to \(\mathcal{O}(1/\sqrt{2^{L}})\) corrections \[A_{\infty}(t)\approx\left\langle\phi\left|\frac{(\sigma_{1}^{x}+\mathbb{I})}{ \sqrt{2}}U^{\dagger}(t)\sigma_{1}^{x}U(t)\frac{(\sigma_{1}^{x}+\mathbb{I})}{ \sqrt{2}}\right|\phi\right\rangle. \tag{11}\] This approximation can be justified by the following argument. For a Haar random state expanded in eigenstate bases, \(|\phi\rangle=\sum_{n=1}^{2^{L}}c_{n}|n\rangle\), typically each coefficient \(c_{n}\) has size \(1/\sqrt{2^{L}}\) with a random phase. For a given matrix \(M\), the average over a Haar random state is \[\langle\phi|M|\phi\rangle=\sum_{n=1}^{2^{L}}|c_{n}|^{2}\langle n |M|n\rangle+\sum_{\begin{subarray}{c}n,m=1\\ n\neq m\end{subarray}}^{2^{L}}c_{n}^{*}c_{m}\langle n|M|m\rangle, \tag{12}\] where the first term leads to \(\text{Tr}[M]/2^{L}\) since \(|c_{n}|^{2}\sim 1/2^{L}\). The difference between Haar random state average and trace comes from the second term. To estimate the size of the second term, we take the square of it \[\sum_{\begin{subarray}{c}n,m=1\\ n\neq m\end{subarray}}^{2^{L}}\sum_{\begin{subarray}{c}k,l=1\\ k\neq l\end{subarray}}^{2^{L}}c_{n}^{*}c_{m}c_{k}^{*}c_{l}\langle n|M|m\rangle \langle k|M|l\rangle\] \[=\sum_{\begin{subarray}{c}n,m=1\\ n\neq m\end{subarray}}^{2^{L}}|c_{n}|^{2}|c_{m}|^{2}|\langle n|M|m\rangle|^{2} \sim\frac{1}{2^{L}}\cdot\frac{1}{2^{L}}\text{Tr}[M^{\dagger}M]. \tag{13}\] Due to the randomness of the coefficients, only the terms with \(n=l\) and \(m=k\) survive in the summation. In the last line, \(|c_{n}|^{2}\sim|c_{m}|^{2}\sim 1/2^{L}\) and the identity \(\sum_{n,m}|\langle n|M|m\rangle|^{2}=\text{Tr}[M^{\dagger}M]\) are used. Although the identity is only true when the summation includes \(n=m\) terms, it does not matter here since one only needs to estimate the order of this summation. In this article, we focus on \(M=\sigma_{1}^{x}(t)\sigma_{1}^{x}\) and \(\text{Tr}[M^{\dagger}M]/2^{L}=\mathcal{O}(1)\). Therefore, the Haar random state average gives a good approximation of the trace upto \(\mathcal{O}(1/\sqrt{2^{L}})\) corrections as we claim in (11). Based on (11), we define a new time-evolving state, \(|\tilde{\phi}(t)\rangle=U(t)[(\sigma_{1}^{x}+\mathbb{I})/\sqrt{2}]|\phi\rangle\), and the autocorrelation becomes \[A_{\infty}(t)\approx\langle\tilde{\phi}(t)|\sigma_{1}^{x}|\tilde{\phi}(t)\rangle. \tag{14}\] This representation of the autocorrelation function has advantages for large system sizes. 
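A minimal sketch of this estimator is given below: a single normalized complex-Gaussian (hence Haar-random) state, the symmetrized insertion of \((\sigma_{1}^{x}+\mathbb{I})/\sqrt{2}\), and the Trotterized evolution of Sec. IV. It is illustrative only (the bit-ordering convention, step count, and parameters are our own choices), but it shows why only a state of \(2^{L}\) amplitudes ever needs to be stored.

```python
# Sketch (not the authors' code) of the stochastic estimator A_inf(t) ~ <phi~(t)| sx_1 |phi~(t)>,
# with |phi~(t)> = U(t) (sx_1 + 1)/sqrt(2) |phi> and U approximated by Trotter steps of size dt.
import numpy as np

def random_state(L, rng):                     # normalized complex Gaussian = Haar-random state
    v = rng.normal(size=2 ** L) + 1j * rng.normal(size=2 ** L)
    return v / np.linalg.norm(v)

def apply_sx1(psi):
    """sigma_1^x: flip the first spin (stored as the most significant bit)."""
    return psi.reshape(2, -1)[::-1, :].reshape(-1)

def apply_xx_layer(psi, L, dt):
    """exp(-i dt sum_j sx_j sx_{j+1}); the bond terms commute, so apply them one by one."""
    c, s = np.cos(dt), -1j * np.sin(dt)
    for j in range(L - 1):
        t = psi.reshape(2 ** j, 2, 2, 2 ** (L - j - 2))
        psi = (c * t + s * t[:, ::-1, ::-1, :]).reshape(-1)
    return psi

def autocorr_stochastic(L, g, Jz, dt, nsteps, seed=0):
    rng = np.random.default_rng(seed)
    # sigma^z eigenvalues of every basis state (site 1 = most significant bit)
    s = 1 - 2 * ((np.arange(2 ** L)[:, None] >> np.arange(L - 1, -1, -1)[None, :]) & 1)
    diag = np.exp(-1j * dt * (g * s.sum(axis=1) + Jz * s[:, 0] * s[:, 1]))  # exp(-iH_z dt) exp(-iH_zz dt)
    phi = random_state(L, rng)
    psi = (apply_sx1(phi) + phi) / np.sqrt(2)
    out = []
    for _ in range(nsteps):
        out.append(np.vdot(psi, apply_sx1(psi)).real)
        psi = apply_xx_layer(diag * psi, L, dt)          # one Trotter step U(dt)
    return np.array(out)

A = autocorr_stochastic(L=14, g=0.6, Jz=0.2, dt=0.2, nsteps=1000)   # times t = dt * (0, 1, ..., 999)
```

The accuracy is set by the \(\mathcal{O}(1/\sqrt{2^{L}})\) off-diagonal noise of the random-state average and by the Trotter step \(dt\).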
It costs much less memory resources to evolve a state with \(2^{L}\) components than performing ED on a \(2^{L}\times 2^{L}\) matrix. However, the computation time depends linearly on \(t\) as the number of time steps to evolve \(|\tilde{\phi}\rangle\) to \(|\tilde{\phi}(t)\rangle\) is proportional to \(t\). For a unitary evolution in time step \(dt\), we apply Trotter-decomposition, \[U(dt)\approx e^{-iH_{\text{xx}}dt}e^{-iH_{x}dt}e^{-iH_{\text{xx}}dt}, \tag{15}\] where \(H_{\text{xx}},H_{z}\) and \(H_{\text{zz}}\) correspond to the three terms in the Hamiltonian (1). The many-body state is represented in the \(\sigma^{z}\) basis so that \(e^{-iH_{x}dt}e^{-iH_{z}dt}\) is diagonal. The nearest neighbor interaction terms in \(H_{\text{xx}}\) commute with each other so that \(e^{-iH_{\text{xx}}dt}\) is a series product of nearest neighbor unitary evolution. The unitary evolution in one time step is explicitly expressed as \[U(dt)\] \[\approx\left[\prod_{j=1}^{L-1}\biggl{\{}\cos{(dt)}-i\sin{(dt)} \sigma_{j}^{x}\sigma_{j+1}^{x}\biggr{\}}\right]e^{-iH_{z}dt}e^{-iH_{\text{zz}}dt}, \tag{16}\] where \(e^{-iH_{z}dt}e^{-iH_{\text{xx}}dt}\) is a diagonal matrix with \(2^{L}\) non-zero elements and \(\sigma_{j}^{x}\sigma_{j+1}^{x}\) permutes different many-body states and is a sparse matrix with \(2^{L}\) non-zero elements. Thus these objects are efficient in memory resources, but the overall computation time increases with system size. Fig. 15 shows the comparison between ED and the approximate method just described, with time steps \(dt=0.1,0.2\) (top panel). \(dt=0.1\) is consistent with ED results. However, to reduce the computation time, we take \(dt=0.2\) in the main text such that the key features of the autocorrelation function are still captured. We sacrifice some precision in order to explore larger system sizes. The energy fluctuation in the bottom panel of Fig. 15 confirms that the system is not heating because of the high-frequency drive (small time step). The fluctuations become smaller for smaller time steps as one expects energy conservation to be recovered in the continuous-time limit. Fig. 16 shows the energy fluctuations for different system sizes. The fluctuations are suppressed for larger system sizes. ## Appendix D Fermi's Golden Rule We present the full derivation of the FGR decay rate of the infinite temperature autocorrelation of the zero modes. The full Hamiltonian consists of two parts: the perturbing interaction \(V\) and the unperturbed Hamiltonian \(H_{0}\). The unitary evolution up to time \(t\) is \[U(t)=e^{-i(H_{0}+V)t}. \tag{17}\] The time evolution of the zero modes up to time \(t\), \(\psi_{0}(t)=U(t)^{\dagger}\psi_{0}U(t)\), can be expressed as \[\psi_{0}(t)=e^{i(\mathcal{L}_{0}+\mathcal{L}_{V})t}\psi_{0}, \tag{18}\] where the notations are as follows: \(\mathcal{L}_{0}\psi_{0}=[H_{0},\psi_{0}]\) and \(\mathcal{L}_{V}\psi_{0}=[V,\psi_{0}]\). The infinite temperature autocorrelation is given by \[A_{\infty}(t)=\frac{1}{2^{L}}\text{Tr}[\psi_{0}(t)\psi_{0}]. \tag{10}\] We will only expand up to second order in \(V\) and denote \(A_{\infty,n}\) as the autocorrelation function to \(n\)-th order in \(V\). 
The time order expansion of \(\psi_{0}(t)\) up to second order in \(\mathcal{L}_{V}\) is \[\psi_{0}(t)\] \[\approx e^{i\mathcal{L}_{0}t}\psi_{0}+\int_{0}^{t}dt^{\prime}e^{i \mathcal{L}_{0}(t-t^{\prime})}(i\mathcal{L}_{V})e^{i\mathcal{L}_{0}t^{\prime} }\psi_{0}\] \[+\int_{0}^{t}dt^{\prime\prime}\int_{t^{\prime\prime}}^{t}dt^{ \prime}e^{i\mathcal{L}_{0}(t-t^{\prime})}(i\mathcal{L}_{V})e^{i\mathcal{L}_{0 }(t^{\prime}-t^{\prime\prime})}(i\mathcal{L}_{V})e^{i\mathcal{L}_{0}t^{\prime \prime}}\psi_{0}. \tag{11}\] At the zeroth order, one does not pick up any terms containing \(\mathcal{L}_{V}\), so that \[A_{\infty,0}(t)=\frac{1}{2^{L}}\text{Tr}\left[\left\{e^{i\mathcal{L}_{0}t} \psi_{0}\right\}\psi_{0}\right]=1, \tag{12}\] where we have used the commutation relation of the zero mode \(\mathcal{L}_{0}\psi_{0}=0\) and also employed the normalization \(\text{Tr}[\psi_{0}\psi_{0}]/2^{L}=1\). Note that while \(\mathcal{L}_{0}\psi_{0}\) is not exactly zero for a finite system, it is exponentially small in system size and negligible in the computation of decay rate. At first order, \(\mathcal{L}_{V}\) appears once in the expansion \[A_{\infty,1}(t) \tag{13}\] With cyclic permutation within the trace, one can show that for arbitrary operators \(O_{1}\) and \(O_{2}\). Also, from the commutation relations, the first-order expansion is further simplified as \[A_{\infty,1}(t)=\frac{1}{2^{L}}\int_{0}^{t}dt^{\prime}\text{Tr}\left[\left\{(i \mathcal{L}_{V})\psi_{0}\right\}\psi_{0}\right]=0, \tag{14}\] which is traceless due to the cyclic property of trace: \(\text{Tr}\left[\left\{\mathcal{L}_{V}\psi_{0}\right\}\psi_{0}\right]=\text{ Tr}\left[\psi_{0}\left\{-\mathcal{L}_{V}\psi_{0}\right\}\right]=0\). Finally, for the second order correction, \(\mathcal{L}_{V}\) appears twice in the expansion \[A_{\infty,2}(t)\] \[=\frac{1}{2^{L}}\int_{0}^{t}dt^{\prime\prime}\int_{t^{\prime \prime}}^{t}dt^{\prime}\text{Tr}\left[\left\{e^{i\mathcal{L}_{0}(t-t^{\prime} )}(i\mathcal{L}_{V})e^{i\mathcal{L}_{0}(t^{\prime}-t^{\prime\prime})}\right.\right.\] \[\left.\left.\times(i\mathcal{L}_{V})e^{i\mathcal{L}_{0}t^{\prime \prime}}\psi_{0}\right\}\psi_{0}\right]. \tag{15}\] As we have learnt from the first order expansion, \(e^{i\mathcal{L}_{0}(t-t^{\prime})}\) and \(e^{i\mathcal{L}_{0}t^{\prime\prime}}\) contribute an overall factor 1. Then, Figure 16: Energy fluctuations in the results of the random state average with Trotter decomposition \(dt=0.2\), and for different system sizes. The energy difference is measured from the initial energy at \(t=0\). As the system size increases, the energy fluctuations decrease. Figure 15: Top panel: Autocorrelation function calculated by ED and by random state average with Trotter decomposition \(dt=0.1,0.2\). For \(dt=0.1\), the approximate results agree well with ED. With \(dt=0.2\), there are slight deviations from ED but still the key features of ED are captured. Bottom panel: Energy fluctuations for the random state average with Trotter decomposition. The energy difference is measured from the initial energy at \(t=0\). The fluctuations are larger for larger time step. Both \(dt=0.1,0.2\) do not show a steady heating, consistent with a high frequency driving related to \(2\pi/dt\gg 1\). one associates the first \(i\mathcal{L}_{V}\) with the last \(\psi_{0}\) by cyclic permutation. 
One obtains \[A_{\infty,2}(t)\] \[=-\frac{1}{2^{L}}\int_{0}^{t}dt^{\prime\prime}\int_{t^{\prime\prime }}^{t}dt^{\prime}\text{Tr}\left[\dot{\psi_{0}}(t^{\prime}-t^{\prime\prime}) \dot{\psi_{0}}\right]\] \[=-\frac{1}{2^{L}}\int_{0}^{t}dt^{\prime\prime}\int_{0}^{t}dt^{ \prime}\theta(t-t^{\prime}-t^{\prime\prime})\text{Tr}\left[\dot{\psi_{0}}(t^{ \prime})\dot{\psi_{0}}\right]\] \[=-\frac{t}{2^{L}}\int_{0}^{t}dt^{\prime}\left(1-\frac{t^{\prime} }{t}\right)\text{Tr}\left[\dot{\psi_{0}}(t^{\prime})\dot{\psi_{0}}\right], \tag{109}\] where we define \(\dot{\psi_{0}}=i\mathcal{L}_{V}\psi_{0}\) and \(\dot{\psi_{0}}(t)=e^{i\mathcal{L}_{0}t}\dot{\psi_{0}}\). In the third line, we shift \(t^{\prime}\to t^{\prime}+t^{\prime\prime}\) and impose the Heaviside theta function to preserve time order. On combining the above results, the autocorrelation function up to second order in \(V\) is \[A_{\infty}(t)\approx A_{\infty,0}(t)+A_{\infty,2}(t), \tag{110}\] where \[A_{\infty,0}(t) =1, \tag{111}\] \[A_{\infty,2}(t) =-\frac{t}{2^{L}}\int_{0}^{\infty}dt^{\prime\prime}\text{Tr} \left[\dot{\psi_{0}}(t^{\prime})\dot{\psi_{0}}\right] \tag{112}\] Note that we approximate the upper bound of the integral \(t\) by \(\infty\), and therefore the \((1-t^{\prime}/t)\) in the summation is replaced by \(1\). Since we study quantities where the lifetime is long, \(t\) is chosen to be a large number. In addition, \(\text{Tr}[\dot{\psi_{0}}(t^{\prime})\dot{\psi_{0}}]\) decays fast with a time scale that is much smaller than \(t\). Therefore, we can simply replace \(t\) by \(\infty\) in the integral. The autocorrelation function with decay rate \(\Gamma\) can be formulated as \(A_{\infty}(t)=e^{-\Gamma t}\approx(1-\Gamma t)\). By comparing this to the second-order expansion, we obtain the FGR decay rate \[\Gamma=\frac{1}{2^{L}}\int_{0}^{\infty}dt\ \text{Tr}[\dot{\psi_{0}}(t) \dot{\psi_{0}}(0)], \tag{113}\] which is (9) in the main text. Fig. 17 demonstrates the numerical computation of the infinite temperature autocorrelation \(B_{\infty}(t)=\text{Tr}[\dot{\psi_{0}}(t)\dot{\psi_{0}}]/2^{L}\), and the decay rate derived from it based on (9). The top panel validates the approximation in (112) where \((1-t^{\prime}/t)\) is replaced by \(1\). The numerical time integral is truncated at the minimum in the bottom panel to account for finite system size effects.
2307.00451
Dissipative Preparation of Many-Body Spin Steady States Using Trapped Ultracold Atoms
This article presents a dissipative method of creating a spin steady state, or a state whose spin expectation values approaches a fixed value over time, using a trapped gas of ultracold atoms coupled to a background BEC. The ultracold atoms are trapped in a double potential well embedded in a wide harmonic trap, which has a higher energy level than the double wells. The trapped atoms are then excited out of the double well trap into the harmonic trap using Raman lasers. Due to the coupling of the system to the background BEC, the atoms are then able to return to the double potential well by emitting an excitation into the background BEC, which serves as a reservoir of these excitations. By repeatedly coupling and uncoupling the trapped ultracold atoms and the background BEC over fixed intervals of time, the expectation value of the total spin of these atoms will, over time, reach a steady - state value.
Roland Cristopher F. Caballar
2023-07-02T01:59:20Z
http://arxiv.org/abs/2307.00451v5
# Dissipative Preparation of Many - Body Spin Steady States Using Trapped Ultracold Atoms ###### Abstract This article presents a dissipative method of creating a spin steady state, or a state whose spin expectation values approaches a fixed value over time, using a trapped gas of ultracold atoms coupled to a background BEC. The ultracold atoms are trapped in a double potential well embedded in a wide harmonic trap, which has a higher energy level than the double wells. The trapped atoms are then excited out of the double well trap into the harmonic trap using Raman lasers. Due to the coupling of the system to the background BEC, the atoms are then able to return to the double potential well by emitting an excitation into the background BEC, which serves as a reservoir of these excitations. By repeatedly coupling and uncoupling the trapped ultracold atoms and the background BEC over fixed intervals of time, the expectation value of the total spin of these atoms will, over time, reach a steady - state value. ## I Introduction The role of dissipation in quantum dynamics has been and continues to be an active area of research [1; 2; 3]. In particular, quantum dissipation has been used as a resource to prepare quantum states that are used in both quantum computing and quantum information [4; 5; 6; 7; 8]. One advantage of the use of dissipative methods in quantum state preparation is that by interacting with an environment with a much larger number of degrees of freedom, a quantum system will, over time, eventually attain a steady state with regards to some physical property, thus allowing for a minimal amount of control on the part of the experimenter. One particular dissipative quantum state preparation system involves the use of single trapped atoms which are coupled to a reservoir and whose ground states are coupled to their excited states via Raman lasers with a given detuning and Rabi frequency. Examples of these dissipative quantums state preparation schemes are described in Refs. [9; 10; 11; 12; 13; 14; 15; 16; 17], wherein individual atoms are trapped in optical fields. The atoms are excited from their ground states to one or more of their excited states, and they decay back to their ground states via spontaneous emission of photons into the optical trap, which act as a reservoir of these photons. Through this driven - dissipative mechanism, the atom then evolves over time towards a steady state, with steady state to which it evolves to dependent on the type of atom that is trapped, as well as the trap configuration (e. g. optical lattice, optical cavity or optical tweezers). The resulting states prepared are of interest in quantum computation and quantum information. However, it is also possible to use many - body systems such as trapped bosonic or fermionic atoms or Bose - Einstein Condensates (BECs) for dissipative quantum state preparation schemes, as shown in Refs. [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. The dissipative quantum state preparation schemes described in Refs. [19; 20; 27], in particular, are of interest because instead of using optical fields, they make use of superfluids or BECs as the bath or reservoir of excitations. This quantum state preparation scheme has the advantage of being able to prepare many - body quantum states which are of interest not just in quantum information and quantum computation, but also for more general purposes such as those produced in Refs. 
[23; 24; 25], wherein the resulting steady state of the initial many-body system consisting of spin-1/2 particles is a BEC, and that produced in Ref. [22], which is a p-wave superconductor prepared using a finite number of one-dimensional fermions. Finally, it should be noted that it is also possible, as demonstrated in Ref. [29], to formulate a dissipative quantum state preparation scheme wherein macroscopic systems such as mechanical resonators serve as reservoirs of excitations for a quantum system, in this case an ensemble of microwave photons, which results in the photons behaving coherently. The dissipative quantum state preparation schemes mentioned above are but a sampling of many others that have been proposed and implemented over the years. However, focusing on the dissipative quantum state preparation schemes formulated in Refs. [22; 23; 24; 25], we see that it is possible to induce collective quantum behavior in the form of Bose-Einstein condensation or superconductivity via dissipative mechanisms. For BEC preparation, in particular, these results are significant, considering that the standard method of preparing BECs via optical trapping and laser cooling [30] requires that the gas of atoms be isolated from the surrounding environment to reduce the risk of thermal losses. This in turn requires a high degree of control over the BEC preparation process. However, introducing dissipation as a dynamical resource significantly reduces the degree of control required of the experimenter during the process, since the dissipative dynamics can be used to drive
2308.08688
Lightweight Adaptation of Neural Language Models via Subspace Embedding
Traditional neural word embeddings usually depend on a rich and diverse vocabulary. However, language models tend to cover large vocabularies via the word embedding parameters; in particular, for multilingual language models these generally account for a significant part of the overall learning parameters. In this work, we present a new compact embedding structure to reduce the memory footprint of pre-trained language models with a sacrifice of up to 4% absolute accuracy. The embedding vectors are reconstructed from a set of subspace embeddings and an assignment procedure based on the contextual relationships among tokens from pre-trained language models. The subspace embedding structure is calibrated to masked language models and evaluated on similarity and textual entailment tasks, as well as single-sentence and paraphrase tasks. Our experimental evaluation shows that the subspace embeddings achieve compression rates beyond 99.8% in comparison with the original embeddings for the language models on the XNLI and GLUE benchmark suites.
Amit Kumar Jaiswal, Haiming Liu
2023-08-16T22:16:00Z
http://arxiv.org/abs/2308.08688v1
# Lightweight Adaptation of Neural Language Models via Subspace Embedding ###### Abstract. Traditional neural word embeddings usually depend on a rich and diverse vocabulary. However, language models tend to cover large vocabularies via the word embedding parameters; in particular, for multilingual language models these generally account for a significant part of the overall learning parameters. In this work, we present a new compact embedding structure to reduce the memory footprint of pre-trained language models with a sacrifice of up to 4% absolute accuracy. The embedding vectors are reconstructed from a set of subspace embeddings and an assignment procedure based on the contextual relationships among tokens from pre-trained language models. The subspace embedding structure is calibrated to masked language models and evaluated on similarity and textual entailment tasks, as well as single-sentence and paraphrase tasks. Our experimental evaluation shows that the subspace embeddings achieve compression rates beyond 99.8% in comparison with the original embeddings for the language models on the XNLI and GLUE benchmark suites. Word embedding, Language model, Natural language understanding + Footnote †: Work done when the first author was at UCL. 
We evaluate our compact embedding structure on English and multilingual datasets. Our main structure of the pre-trained language model for downstream tasks follows RoBERTa (Roh et al., 2017). Also, we employ XLM-R (Wang et al., 2019) for performance tests of subspace embedding on multilingual datasets. ## 2. Related Work **Word Embeddings:** In Word2vec, the vocabulary comprises words from the input data, which leads to the out-of-vocabulary (OOV) problem. However, certain language models (Kang et al., 2019; Wang et al., 2019) segment words into sub-tokens to learn word co-occurrence. Subsequently, attention-based models appeared that embed the semantics of longer sentences. This new generation of self-trained models is led by architectures such as ELMo (Kang et al., 2019), which collect the embeddings and hidden states of bidirectional language models to produce contextual embeddings. In the case of the attention-based language models (Kang et al., 2019; Wang et al., 2019; Wang et al., 2019) built on the transformer (Roh et al., 2017), this class of neural language models uses an attention mechanism to learn the context of the overall input sequence. Our work focuses on a compact representation of the contextual embeddings by means of subspace embeddings, where we divide the contextual embedding in a way that retains the generic context of the embedding. **Language Models:** Generally, tokenizers are employed in language models to divide a sequence into tokens. Traditional tokenizers are crafted to split a sequence into contextual entities such as characters, morphs, and words. Such tokenizers can be implemented straightforwardly; however, they are prone to out-of-vocabulary issues and require specialized knowledge to divide words into morphs. Existing work (Kang et al., 2019) shows that tokenizers can generate vocabularies by learning from the input data. To overcome the aforementioned problem, byte pair encoding (Kang et al., 2019) was introduced to cover all input cases (Kang et al., 2019), and it iteratively merges tokens into larger tokens. Canine (Canine, 2018) addresses the problems of traditional tokenisation by operating at the character level without human knowledge. Similarly, ByT5 (Kang et al., 2019) introduces token-free methods which encode input sequences without tokens and rely on contextual units. Our work presents an embedding structure that is independent of the choice of tokenizer. 
It differs from token-free mechanisms (Kang et al., 2019) in that our subspace embedding structure still requires a tokenizer, but the PLMs are no longer limited by the available tokens. As the embeddings play an important role in the language model, numerous approaches (Wang et al., 2019) have been introduced to compress the embeddings, where the size of the original embedding corresponds to the diversity of output tokens. In our approach, we use a lookup operation to rebuild the embedding without any further computation. ## 3. Subspace Embedding We present the formulation of our proposed approach and the methods of selecting subspace embeddings, including algorithmic descriptions of how the subspace embeddings are shared. In Fig. 1, we show that eight embedding vectors can be represented by six subspace embedding vectors. In this paper, we devise an embedding compression method through two algorithms. First, we describe how to assign arbitrarily dispersed subspace embeddings. Second, a cluster-based subspace embedding incorporates contextual information. ### Problem Settings Initially, we divide the embedding vectors horizontally into subspace embedding (SE) vectors which are shared among different embedding vectors. In our proposed embedding structure, the subspace embeddings must combine into distinct representations. We take two steps to calibrate the subspace embeddings so that they behave like the original embeddings. a) Consider \(E_{i}\), \(E_{j}\) to be the original embedding vectors and their subspace embedding vectors are \(\{v_{i}^{f}\}\), \(\{v_{j}^{f}\}\), \(\forall i,j\in\{1,2,...,D\}\), where \(f\in\{1,2,...,F\}\) such that \(\{v_{i}^{f}\}\neq\{v_{j}^{f}\}\) and \(i\neq j\). This step verifies that the partitioned embedding vectors are unique. b) Given a pair of tokens having similar contextual meanings, their subspace embeddings share more parts than a random pair. This assumption deals with the contextual mapping of subspace embeddings. Based on the aforementioned steps, we then construct a function that maps the original embeddings into subspace embeddings. We employ a one-to-one correspondence function \(\mathcal{F}:\mathcal{P}\rightarrow\mathcal{Q}\times\ldots\times\mathcal{Q}\) for transforming the original embeddings to SE vectors, where \(\mathcal{P}\in\{1,2,...,D\}\subset\mathbb{N}\) represents the set of embedding indices, and \(\mathcal{Q}=\{1,2,...,Q\}\subset\mathbb{N}\) describes the set of indices of each SE vector. This function can be realised in many ways to build subspace embedding vectors. Thus, the above mapping function can be generalised via the Cartesian product of functions as \(\mathcal{F}(n)=(c_{1}\times c_{2}\times\ldots\times c_{F})\underbrace{(n, \ldots,n)}_{f}\), where the function \(c_{f}:\mathcal{P}\rightarrow\mathcal{Q}\) gives the \(f\)-th subspace embedding index. As the cardinality of the \(f\)-ary Cartesian product is \(Q^{f}\), each subspace needs only \(D^{\frac{1}{f}}\) subspace embedding vectors. This construct can substantially reduce the number of embedding parameters, on a logarithmic scale. We consider the embedding dimension \(d\) to be allocated equally among the subspace embeddings, each of dimension \(d/f\). So, the original \(D\times d\) embedding table is replaced with \(f\) distinct \(Q\times(d/f)\) embedding tables. 
Specifically, the \(d\)-dimensional embedding vector for each token in the vocabulary is replaced by \(f\) distinct subspace embedding vectors \(\{v_{i}^{f}\}\), each of dimension \(d/f\), where \(v_{i}\) is drawn from the \(i\)-th fixed-size table of embedding vectors. Thus, the embedding representation can be formulated as \(v_{n}=\oplus_{f=1,\ldots,F}\,v_{c_{f}(n)}\), where \(v_{n}\) and \(v_{c_{f}(n)}\) are the corresponding embedding vectors and \(\oplus\) denotes the concatenation operation. ### Arbitrarily Dispersed Subspace Embedding We establish \(c_{f}\) to build subspace embeddings via a Cartesian product, which can generate up to \(Q^{f}\) embedding vectors. For the generated embeddings to be unique, the number of vectors in each subspace embedding, \(Q\), should be at least \(D^{1/f}\). An algorithmic description of arbitrarily assigning subspace embeddings in a sequential manner is reported in Algorithm 1. It repeatedly uses the modulo operation to generate the entire set of embeddings. Figure 1. Pictorial representation of subspace embedding. The language model with eight embedding vectors (leftmost) is divided into three subspace embedding vectors per token. The naming convention of the subspace embedding blocks uses identical letters where they share learning parameters throughout the embeddings. Based on the aforementioned assumption, we apply \(Q=\lceil D^{1/f}\rceil\), where \(\lceil\cdot\rceil\) is the ceiling function; this yields the most compact subspace embedding (the compressed form of an original embedding). In this view, the modulo operation corresponds to writing the index as a base-\(Q\) number: the transformation to a base-\(Q\) number is the one-to-one correspondence function, and each digit of the base (or radix) serves as a subspace embedding index. ``` 1:Input: \(D\) embeddings with dimension \(d\), and the number of subspace embeddings \(F\) 2:\(Q\leftarrow\lceil D^{1/F}\rceil\)\(\triangleright\) number of vectors in each subspace embedding 3:Initialise the \(f\)-th set of \(Q\) subspace embedding vectors \(\{\omega_{q}^{f}\in\mathbb{R}^{\frac{d}{F}}\}_{q=1}^{Q},\forall f\in\{1,\ldots,F\}\) 4:for\(n=1,2,\ldots,D\)do 5:for\(f=1,2,\ldots,F\)do 6:\(c_{f}(n)=\lfloor n/Q^{f-1}\rfloor\mod Q\) 7:endfor 8:\(\omega_{n}=\oplus_{f=1}^{F}\omega_{c_{f}(n)}^{f}\) 9:endfor 10:The incorporated embedding vectors are \(\{\omega_{n}\}_{n=1}^{D}\). ``` **Algorithm 1** Assign Subspace Embedding Arbitrarily ### Cluster-based Subspace Embedding This approach re-establishes the subspace embedding assignment based on contextual information from a pre-trained model. Recent advances [24] in exploiting contextual information stem from the attention-based model, namely the Transformer. It learns the entire context of an input sequence, where each token's embedding vector is mapped to a point in the embedding space that reflects its context. In Word2Vec [15], word vectors that are mapped close to each other tend to have similar contexts. So, if each token is given its context, we can improve the assignment heuristic using these contexts. In particular, for tokens that have a near-identical meaning, we consider that the two tokens can be represented with only small adjustments. Therefore, we can assign more subspace embeddings to be shared, and the similarity of each pair of tokens can be computed using a pre-trained model. The pre-trained model is employed to estimate the L2 distance between the embedding vectors. Our conjecture is that, while all subspace embeddings in the arbitrary scheme are assigned independently, tokens that share more subspace embeddings should have a smaller L2 distance. 
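To make the index arithmetic of Algorithm 1 above concrete, the following is a minimal Python sketch of the arbitrarily dispersed assignment and the lookup-and-concatenate reconstruction; the function names, the NumPy dependency, and the toy parameter values are illustrative choices of ours rather than the paper's code, and the sketch assumes \(d\) divisible by \(F\).

```
import numpy as np

def assign_subspace_indices(D, F):
    # Algorithm 1: write each token index n in base Q; digit f is the f-th subspace index.
    Q = int(np.ceil(D ** (1.0 / F)))                              # vectors per subspace table
    n = np.arange(D)
    c = np.stack([(n // Q**f) % Q for f in range(F)], axis=1)     # shape (D, F), entries in [0, Q)
    return Q, c

def build_embeddings(D, d, F, seed=0):
    # Reconstruct D full embeddings by concatenating F shared subspace vectors.
    Q, c = assign_subspace_indices(D, F)
    rng = np.random.default_rng(seed)
    tables = [rng.normal(size=(Q, d // F)) for _ in range(F)]     # F tables of Q x (d/F)
    full = np.concatenate([tables[f][c[:, f]] for f in range(F)], axis=1)   # v_n = concat_f table_f[c_f(n)]
    return full, tables

if __name__ == "__main__":
    D, d, F = 50_000, 512, 4                                      # toy sizes; Q = ceil(50000^(1/4)) = 15
    emb, tables = build_embeddings(D, d, F)
    print(emb.shape, sum(t.size for t in tables))                 # (50000, 512) built from only Q*d = 7680 parameters
```

In a real model the subspace tables would be learned parameters; the random initialisation here only illustrates how the full embedding matrix is rebuilt from the shared sub-vectors.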
The technique of assigning subspace embeddings to similar tokens follows the k-means clustering algorithm [1]. Using k-means, we treat the embedding vectors as the instances of this clustering algorithm, and the algorithm is applied iteratively to each subspace embedding vector. The purpose of the iterative k-means is to separate the instances that are assigned to similar subspace embeddings. As described in Algorithm 2, this mapping of subspace embeddings satisfies the second step (in Section 3.1), since the k-means algorithm is based on the L2 norm. ## 4. Experiments The experiments begin by substituting the original word embeddings of masked language modelling (MLM) [11; 13; 14] with our subspace embeddings to assess the impact of our proposed method. Other language modelling variants exist, such as causal language modelling and translation language modelling. **Dataset:** The language models are mainly trained with monolingual datasets and subsequently fine-tuned on certain downstream tasks. Our work employs the multilingual dataset from Web Crawl [27], from which we extract corpora for ten of the languages covered in XNLI [7]. In addition, we employ the monolingual datasets, the Books corpus [32] and English Wikipedia corpora2, to evaluate whether our proposed subspace embedding supports large vocabularies. Footnote 2: [https://linguatools.org/tools/corpora/wikipedia-monolingual-corpora/](https://linguatools.org/tools/corpora/wikipedia-monolingual-corpora/) ### Language Model Settings Our work employs the masked language modelling structure for the embedding network without using next-sentence prediction. The reason is that token prediction networks such as MLM require the language model's decoder to identify token representations. Several language models tie the final output weights to the input embedding weights. Embedding coupling [4] investigated how a decoder that is independent of the embeddings can be strengthened in terms of performance based on the decoder features. The coupled weights behave like the result of decoupling the decoder and the embedding. In our case, we trained the language models with coupled decoders. Furthermore, we substitute the embedding portion of the above network with the subspace embedding model. There are additional embeddings that capture external information, including token-type embeddings and positional embeddings; however, we do not substitute these embeddings with subspace embeddings. We adapt the implementations of the RoBERTa [14] and XLM-R [6] models based on an attention-based networks framework [29]. Similar to other language models, we employ tokenizers in our approach. However, the tokenizer utilised in our model offers the advantage of easily incorporating new vocabularies through the combination of subspace embeddings. Consequently, our proposed approach is immune to the OOV problem. Our embedding network employs the hyperparameters from RoBERTa with a masking token probability of 0.15. Our base model comprises eight transformer encoder layers with 512-dimensional embeddings, smaller than BERT-base. For the multilingual case, XLM-R is likewise reduced to eight layers, as for RoBERTa. The altered networks are denoted RoBERTa\({}_{\text{S}}\) and XLM-R\({}_{\text{S}}\), where the subscript S refers to subspace embedding. We present an arbitrarily assigned scenario of \(f\)-subspace embeddings, reported in Table 1. 
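Before turning to the results, and since Algorithm 2 itself is not reproduced above, here is only a rough sketch of one plausible reading of the cluster-based assignment described in Section 3.3: run k-means separately on each \(d/F\) slice of a pre-trained embedding matrix and use the cluster labels as subspace indices, so that tokens close in L2 distance share subspace vectors. The scikit-learn dependency, the slicing choice, and all names are our assumptions; the uniform-cluster-size variant used in the results is omitted.

```
import numpy as np
from sklearn.cluster import KMeans

def cluster_subspace_indices(pretrained_emb, F, Q, seed=0):
    # pretrained_emb: (D, d) embedding matrix taken from a pre-trained LM.
    # Returns a (D, F) integer matrix of subspace indices, one k-means per slice.
    D, d = pretrained_emb.shape
    assert d % F == 0, "this sketch assumes d divisible by F"
    w = d // F
    c = np.empty((D, F), dtype=np.int64)
    for f in range(F):
        slice_f = pretrained_emb[:, f * w:(f + 1) * w]
        km = KMeans(n_clusters=Q, n_init=10, random_state=seed + f).fit(slice_f)
        c[:, f] = km.labels_                      # tokens in the same cluster share this subspace vector
    return c

if __name__ == "__main__":
    fake_pretrained = np.random.default_rng(1).normal(size=(1000, 96))   # stand-in for real embeddings
    idx = cluster_subspace_indices(fake_pretrained, F=3, Q=50)
    print(idx.shape, idx.min(), idx.max())        # (1000, 3), labels in [0, Q)
```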
As shown, the number of embedding parameters in the altered models is substantially reduced relative to RoBERTa\({}_{\text{S}}\) and XLM-R\({}_{\text{S}}\). Our tokenizer's configuration and training settings follow (Kang et al., 2017; Chen et al., 2018). ### Benchmarks The evaluations of the altered neural language models are conducted on the GLUE (Zhu et al., 2017) benchmark, which comprises similarity, paraphrasing, single-sentence, and inference tasks. For multilingual language models, we employ the XNLI benchmark for evaluation, including fine-tuning the pre-trained XLM-R on both multi-language and cross-lingual tasks. **Results of Algorithm 1:** Our model RoBERTa\({}_{\text{S}}\) follows the base network of (Chen et al., 2018), where we substitute the input embedding component with the arbitrarily dispersed \(f\)-subspace embedding construct reported in Algorithm 1. We present the results of our models (the base model and \(f\)-subspace embeddings) on the GLUE benchmark. The number of SE vectors is set as determined above, \(Q=\lceil D^{1/f}\rceil\); only 4 SE vectors per subspace are required for \(f=8\) when \(D\) is 50,627. In Table 2, the row below \(f\) gives the number of embeddings, which is 50k for our base model. The results on the GLUE benchmark show that the \(f\)-SE models perform similarly to one another. However, the results of the SE models are somewhat lower than those of RoBERTa\({}_{\text{S}}\). This shows that the arbitrarily assigned dispersed subspace embeddings could not alleviate the distinctly entangled parts of the embedding. Also, we do not treat certain special tokens, including padding and separator tokens, specially when assigning the subspace embeddings, which degrades the performance for the aforementioned reasons. To address this problem, we devised a cluster-based allotment approach that employs contextual information from the pre-trained models. **Results of Algorithm 2:** We have shown above that our embedding compression method using arbitrarily dispersed subspace embeddings successfully lightens the original embeddings. However, in certain cases, the performance degrades due to the distinctly entangled parts of the subspace embeddings. Moreover, tokens situated in the latent space struggle to convey the context among them. To alleviate this context mismatch, we employ the pre-trained RoBERTa (Chen et al., 2018) model from (Xu et al., 2019). We use this pre-trained network with 768-d embedding vectors to apply our Algorithm 2 for clustering. Our evaluation on the GLUE benchmarks uses \(Q\) values of 50, 100, and 200 subspace embeddings to assign 3-SE vectors. The results are reported in Table 3. Standard k-means clustering is driven by the instances rather than the cluster size, i.e., different clusters may have different sizes. In our evaluation, we apply both naive k-means and uniformly allocated k-means as per Algorithm 2. The results show that the subspace embedding with uniform cluster size outperforms the variant with a small set of embeddings. In particular, even though a clustered 3-SE network has fewer parameters than a 2-SE network, it is superior in terms of performance on every GLUE benchmark. Our proposed approach shows performance comparable to the original embedding model and is superior on the SST-2 dataset. **Results on Multilingual Dataset:** Multilingual language models are resource intensive, especially during training, compared to the monolingual scenario. 
We use the XLM-R model based on the Unicoder (Kumar et al., 2017) to evaluate a cross-lingual transfer task. Our altered XLM-R\({}_{\text{S}}\) network has 250k embeddings, while the 3-subspace embedding uses 63 vectors per subspace and the clustered SE uses 128. On the English dataset, the performances are 74%, 72.6%, and 72.9% for XLM-R\({}_{\mathcal{S}}\), 3-SE, and clustered SE, respectively. The results on XNLI are improved by 2% on the cross-lingual transfer task, while the embeddings are compressed by over 99.95%. ## 5. Conclusion This paper introduced a novel compact embedding structure that lightens neural language models by training with far fewer parameters than the original embeddings. We devise two methods to assign shared subspace embeddings to the embedding vectors: the first allocates them sequentially using the modulo operation (Algorithm 1), and the second assigns dispersed subspace embeddings using a pre-trained language model that incorporates contextual information (Algorithm 2). The compressed subspace embedding reduces the number of parameters by over 99%, since the full set of embeddings can be generated exponentially via the Cartesian product. These lightweight (subspace) embeddings perform well on GLUE and XNLI and are comparable to the base results. Our evaluation is conducted by substituting the embeddings in MLM with \(f\)-subspace embeddings. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline NLMs & Vocabulary Size & \# Embeddings & \(\theta\) & \(\theta_{\text{v}}\) \\ \hline RoBERTa\({}_{\text{S}}\) & 50k & 50k & 51M & 25.7M \\ +2-SE & 50k & 225 & 26M & 115k \\ +3-SE & 50k & 37 & 26M & 18.9k \\ +8-SE & 50k & 4 & 26M & 2k \\ \hline XLM-R\({}_{\text{S}}\) & 250k & 250k & 154M & 128M \\ +3-SE & 250k & 63 & 26M & 32k \\ \hline \end{tabular} \end{table} Table 1. Description of the altered neural language models. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Dataset & Model & RoBERTa\({}_{\text{S}}\) (Ours) & +2-SE & +3-SE & +4-SE & +6-SE & +8-SE \\ \hline \multirow{3}{*}{\(\theta_{\text{S}}\)} & 1 & 2 & 3 & 4 & 6 & 8 \\ & 50k & 225 & 37 & 15 & 7 & 4 \\ \hline SST-2 (Shen et al., 2017) & 89.8 & 88.4 & 88.0 & 88.1 & 87.2 & 88.0 \\ Quora Questions\({}^{\text{S}}\) & 86.5 & 84.0 & 83.0 & 83.3 & 82.6 & 83.0 \\ MNLI (Shen et al., 2017) & 79.5 & 74.3 & 73.1 & 72.8 & 73.5 & 73.0 \\ QNLI (Kumar et al., 2017) & 88.1 & 84.0 & 83.4 & 84.1 & 84.1 & 83.0 \\ MRPC (Kumar et al., 2017) & 88.3 & 88.0 & 85.5 & 87.4 & 85.2 & 86.3 \\ RTE (Shen et al., 2017) & 72.8 & 66.9 & 67.8 & 70.0 & 67.4 & 67.8 \\ STS-B (Shen et al., 2017) & 88.0 & 79.2 & 77.3 & 78.4 & 79.5 & 76.4 \\ CoLA (Zhu et al., 2017) & 38.0 & 35.6 & 18.5 & 23.2 & 25.5 & 20.0 \\ \hline \end{tabular} \end{table} Table 2. Results of Arbitrarily Dispersed Subspace Embedding on GLUE. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Dataset & Model & RoBERTa\({}_{\text{S}}\) (Ours) & +2-SE & +3-SE & +3-SE & +3-SE & +3-SE \\ \hline \multirow{3}{*}{\(\theta_{\text{S}}\)} & 1 & 2 & 3 & 3 & 3 & 3 & 3 \\ & & & & & -0100 & -0200 & -050 & -0100 \\ \hline \multirow{3}{*}{\(\pi\)} & 25.7M & 115.8 & 18.9k & 104.6 & 154.6 & 25.6 & 51.8 \\ & - & 99.5 & 99.3 & 99.6 & 99.3 & 99.7 & 99.8 \\ \hline SST-2 (Shen et al., 2017) & 89.8 & 88.4 & 88.0 & 88.2 & 90.0 & 89.3 & 89.3 \\ Quora Question\({}^{\ast}\) & 86.5 & 84.0 & 83.0 & 84.7 & 85.6 & 84.5 & 84.6 \\ MNLI (Shen et al., 2017) & 79.5 & 74.3 & 73.1 & 75.9 & 77.5 & 75.8 & 77.2 \\ QNLI (Kumar et al., 2017) & 89.1 & 84.0 & 83.4 & 85.1 & 85.5 & 83.5 & 85.8 \\ MRPC (Kumar et al., 2017) & 88.3 & 88.0 & 85.5 & 87.3 & 88.6 & 87.7 & 87.3 \\ RTE (Shen et al., 2017) & 72.8 & 66.9 & 67.8 & 67.1 & 69.7 & 67.9 & 70.7 \\ STS-B (Shen et al., 2017) & 88.0 & 79.2 & 77.3 & 81.6 & 84.5 & 80.1 & 84.8 \\ CoLA (Zhu et al., 2017) & 38.0 & 35.6 & 18.5 & 37.5 & 34.9 & 33.6 & 36.7 \\ \hline \end{tabular} \end{table} Table 3. Results of Cluster-based Subspace Embedding on GLUE. Shaded columns in red and yellow colour denote the clustered SE using k-means, and uniform cluster size.
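As a quick sanity check on the \(\theta_{v}\) column of Table 1, note that the compressed embedding size is simply \(Q\cdot d\). The short script below (assuming \(d=512\) as stated for the base models, and assuming RoBERTa's 50,265-token and XLM-R's 250,002-token vocabularies, which are our own substitutions for the "50k" and "250k" entries) reproduces the reported counts.

```
import math

def se_embedding_params(D, d, F):
    # F tables of Q x (d/F) vectors amount to Q*d parameters in total.
    Q = math.ceil(D ** (1.0 / F))
    return Q, Q * d

if __name__ == "__main__":
    d = 512
    for name, D, F in [("RoBERTa_S +2-SE", 50_265, 2),
                       ("RoBERTa_S +3-SE", 50_265, 3),
                       ("RoBERTa_S +8-SE", 50_265, 8),
                       ("XLM-R_S   +3-SE", 250_002, 3)]:
        Q, params = se_embedding_params(D, d, F)
        print(f"{name}: Q={Q}, ~{params:,} embedding parameters")
    # prints Q = 225, 37, 4 and 63 with ~115k, ~18.9k, ~2k and ~32k parameters,
    # matching the theta_v column, versus 50k*512 ~ 25.7M for the uncompressed table.
```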
2305.01136
Combined effect of mutually frequency-detuned strong and weak drives on a two-level system: Envelope of the Rabi oscillations
Near-resonant ac-drive acting on a two-level system induces the Rabi oscillations of the level occupations. It is shown that an additional weak drive, properly frequency-detuned from the primary drive, causes a resonant response. This response manifests itself in the emergence of an envelope of the oscillations. At resonance, the inverse period of the envelope is proportional to the amplitude of the weak drive. The resonant condition reads: the difference of frequencies between the two drives is equal to the ac-splitting of quasilevels in the field of the strong drive. Technically, the resonance can be inferred from the analogy between the equations for the time-evolution of the spin amplitude and the Mathieu equation, which describes e.g. the parametric resonance.
M. E. Raikh
2023-05-02T01:03:28Z
http://arxiv.org/abs/2305.01136v1
Combined effect of mutually frequency-detuned strong and weak drives on a two-level system: Envelope of the Rabi oscillations ###### Abstract Near-resonant ac-drive acting on a two-level system induces the Rabi oscillations of the level occupations. It is shown that an additional weak drive, properly frequency-detuned from the primary drive, causes a resonant response. This response manifests itself in the emergence of an envelope of the oscillations. At resonance, the inverse period of the envelope is proportional to the _amplitude_ of the weak drive. The resonant condition reads: the difference of frequencies between the two drives is equal to the ac-splitting of quasilevels in the field of the strong drive. Technically, the resonance can be inferred from the analogy between the equations for the time-evolution of the spin amplitude and the Mathieu equation, which describes e.g. the parametric resonance. ## I Introduction Dynamics of a spin placed in a magnetic field, \(\mathbf{B}=\mathbf{z}_{0}B\), and subjected to a sinusoidal ac drive, \(\mathbf{b}=2b\mathbf{x}_{0}\cos\omega t\), is known in the tiniest detail since the celebrated papers [1; 2; 3; 4; 5]. Most prominent is the situation when \(\omega\) is close to the Zeeman splitting, \(\Delta_{Z}=\gamma B\), of the spin levels. Here \(\gamma\) is the gyromagnetic ratio. Then the probabilities of the \(\uparrow\) and \(\downarrow\) spin projections oscillate with the generalized Rabi frequency \[\Theta_{R}=\Big{[}\big{(}\Delta_{Z}-\omega\big{)}^{2}+\Omega_{R}^{2}\Big{]}^ {1/2}, \tag{1}\] where \(\Omega_{R}=\gamma b\). Concerning the \(x\) and \(y\) spin projections, their dynamics comprises three frequencies:[5]\(\omega\), and \(\omega\pm\Theta_{R}\) (Mollow triplet). The four underlying frequencies of the spin dynamics suggest the way in which the driven spin can be manipulated by the _secondary_ ac field. Namely, the frequency of the secondary field, \(\omega_{m}\), can either be low, of the order of \(\Omega_{R}\ll\omega\) (see e.g. Refs. [6; 7; 8; 9; 10]), or it can be high[11; 12; 13; 14; 15; 16; 17; 18; 19] and detuned from \(\omega\) approximately by \(\Omega_{R}\). Within the first technique, the modification of the primary Rabi oscillations by the secondary drive depends strongly on the relation between the magnitude and frequency of the secondary drive. In particular, a strong secondary drive slows down the Rabi oscillations[10]. When the secondary drive is weak, it can affect the Rabi oscillations only under the resonance condition \(\omega_{m}\approx\Omega_{R}\). Under this condition, the primary oscillations develop an envelope.[8; 10] In the language of quasienergies this effect can be interpreted as follows. While the primary drive causes the ac-splitting of the bare levels into the quasilevels, under the resonant secondary drive each of these quasilevels gets split additionally. Within the second technique, the effect of two near-resonant drives on the two-level system is much more dramatic. Theoretical studies[11; 12; 17; 18; 19] indicate that adding a weak second (probe) drive can convert the regime of the Rabi oscillations induced by the primary drive (pump) into a chaotic behavior of the observables. The underlying reason for chaos is that, in the presence of two drives, the Floquet theorem is lifted (if the pump and probe frequencies are incommensurate). 
Meanwhile, the behavior of fluorescence observed in experiments on bichromatically driven two-level systems[13; 14; 15; 16] indicates that the effect of two drives is much less dramatic than in theory. In the time domain, the probe drive leads to an additional modulation of the Rabi oscillations. In the frequency domain, the probe leads to the splitting of each individual peak of the Mollow triplet into daughter triplets. In the theoretical papers on the bichromatic drive the results are presented in the form of numerical curves calculated for certain sets of parameters. This is certainly justified for revealing chaos. But in the regimes when the effect of the secondary drive is modest, there are no analytical formulas predicting the modified behavior of the Rabi oscillations for given amplitudes and frequencies of the two drives. Such formulas are derived in the present paper. The reason why these formulas can be derived is that, within the rotating wave approximation (RWA), the Floquet theorem applies not at the primary drive frequency but rather at the _difference_ between the frequencies of the two drives. Our analytical treatment yields the resonance condition for which the effect of a weak secondary drive on the Rabi oscillations is most pronounced. ## II Evolution of the amplitudes of the spin components Denote by \(a_{1}\) and \(a_{2}\) the amplitudes of the \(\uparrow\) and \(\downarrow\) spin components. In the presence of two drives these amplitudes evolve with time as \[i\frac{da_{1}}{dt} =\frac{\Delta_{Z}}{2}a_{1}+\left[\lambda_{1}e^{-i\omega_{1}t}+ \lambda_{2}e^{-i\omega_{2}t}\right]a_{2}, \tag{2}\] \[i\frac{da_{2}}{dt} =-\frac{\Delta_{Z}}{2}a_{2}+\left[\lambda_{1}e^{i\omega_{1}t}+ \lambda_{2}e^{i\omega_{2}t}\right]a_{1}, \tag{3}\] where \(\omega_{1}\) and \(\omega_{2}\) are the frequencies of the primary and secondary drives, respectively, while \(\lambda_{1}\) and \(\lambda_{2}\) are the amplitudes of these drives in frequency units. For simplicity we neglect the phase shift between the two drives and thus choose \(\lambda_{1}\), \(\lambda_{2}\) to be real. We also assume that both drives are weak, namely, \(\lambda_{1},\lambda_{2}\ll\Delta_{Z}\). This justifies the replacement of \(\cos\omega_{1}t\) and \(\cos\omega_{2}t\) in Eqs. 2 and 3 by the exponents, which is the essence of the RWA. Incorporating the counter-rotating exponents amounts to a renormalization of \(\Delta_{Z}\) by the Bloch-Siegert shift[2] of the order of \(\frac{\lambda_{1}^{2}}{\Delta_{Z}},\frac{\lambda_{2}^{2}}{\Delta_{Z}}\ll \Delta_{Z}\). Following the standard procedure for a single-frequency drive, we introduce new variables \[A_{1}=a_{1}\exp\left(\frac{i\Delta_{Z}}{2}t\right),\ \ A_{2}=a_{2}\exp\left(- \frac{i\Delta_{Z}}{2}t\right), \tag{4}\] which allows us to exclude the high frequency, \(\Delta_{Z}\), from the system Eqs. 2 and 3, which then takes the form \[i\frac{dA_{1}}{dt} =\left[\lambda_{1}e^{-i\delta_{1}t}+\lambda_{2}e^{-i\delta_{2}t} \right]A_{2}, \tag{5}\] \[i\frac{dA_{2}}{dt} =\left[\lambda_{1}e^{i\delta_{1}t}+\lambda_{2}e^{i\delta_{2}t} \right]A_{1}, \tag{6}\] where the detunings \(\delta_{1}\) and \(\delta_{2}\) are defined as \[\delta_{1}=\Delta_{Z}-\omega_{1},\ \ \delta_{2}=\Delta_{Z}-\omega_{2}. \tag{7}\] Next we reduce the system Eqs. 5, 6 to a single second-order differential equation. Expressing \(A_{2}\) from Eq. 5 and substituting into Eq. 
6, we get \[-\frac{d^{2}A_{1}}{dt^{2}}-if(t)\frac{dA_{1}}{dt}\] \[=\left[\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{1}\lambda_{2}e^{ i(\delta_{1}-\delta_{2})t}+\lambda_{2}\lambda_{1}e^{-i(\delta_{1}-\delta_{2})t} \right]A_{1}, \tag{8}\] where the function \(f(t)\) is given by \[f(t)=\frac{\lambda_{1}\delta_{1}e^{i\delta_{1}t}+\lambda_{2} \delta_{2}e^{i\delta_{2}t}}{\lambda_{1}e^{i\delta_{1}t}+\lambda_{2}e^{i\delta _{2}t}}\] \[=\frac{\delta_{1}+\delta_{2}}{2}+\frac{\delta_{1}-\delta_{2}}{2} \Bigg{(}\frac{1-\frac{\lambda_{2}}{\lambda_{1}}e^{i(\delta_{2}-\delta_{1})t}}{ 1+\frac{\lambda_{2}}{\lambda_{1}}e^{i(\delta_{2}-\delta_{1})t}}\Bigg{)}. \tag{9}\] It is seen from the expression Eq. 9 that the function \(f(t)\) is periodic with a period \(2\pi/(\delta_{2}-\delta_{1})\). Equally, the right-hand side of Eq. 8 is periodic with the same period. We thus conclude that the Floquet theorem applies to Eq. (8). To highlight the similarity to the Mathieu equation, we exclude the first derivative from Eq. (8) by introducing the new variable \[\tilde{A}_{1}(t)=A_{1}(t)\exp\Biggl{[}\frac{i}{2}\int_{0}^{t}dt^{ \prime}f(t^{\prime})\Biggr{]}\] \[=A_{1}(t)\Biggl{[}1+\frac{\lambda_{2}}{\lambda_{1}}e^{i(\delta_{ 2}-\delta_{1})t}\Biggr{]}^{1/2}\exp\Biggl{(}\frac{i\delta_{1}t}{2}\Biggr{)}. \tag{10}\] Substituting Eq. 10 into Eq. 8 we arrive at the following equation for \(\tilde{A}_{1}\) \[-\frac{d^{2}\tilde{A}_{1}}{dt^{2}}-\frac{\tilde{A}_{1}}{4}\Biggl{\{} \delta_{1}^{2}+\delta_{2}^{2}-\left(\frac{\lambda_{2}\delta_{1}e^{i\delta_{2} t}+\lambda_{1}\delta_{2}e^{i\delta_{1}t}}{\lambda_{2}e^{i\delta_{2}t}+\lambda_{1}e^{i \delta_{1}t}}\right)^{2}\Biggr{\}}\] \[=\left[\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{1}\lambda_{2}e^{ i(\delta_{1}-\delta_{2})t}+\lambda_{2}\lambda_{1}e^{-i(\delta_{1}-\delta_{2})t} \right]\tilde{A}_{1}. \tag{11}\] Note that in the limit when the secondary drive is absent, \(\lambda_{2}=0\), Eq. 11 reproduces the Rabi oscillations in the field of the primary drive, namely, the solutions of Eq. 11 have the form \(\tilde{A}_{1}\propto\exp\left[\pm i\left(\lambda_{1}^{2}+\frac{\delta_{1}^{2} }{4}\right)^{1/2}t\right].\) ## III Weak secondary drive Within the adopted RWA, Eq. 11 applies for an arbitrary relation between the two drive amplitudes \(\lambda_{1}\) and \(\lambda_{2}\) and an arbitrary relation between the detunings \(\delta_{1}\) and \(\delta_{2}\), as long as they are much smaller than \(\omega\). Next we take the limit of a weak secondary drive \(\lambda_{2}\ll\lambda_{1}\). In the lowest order in the secondary drive we can neglect the term \(\lambda_{2}^{2}\) in the right-hand side. Concerning the left-hand side, we expand the ratio in the brackets as follows \[\frac{\lambda_{1}\delta_{2}e^{i\delta_{1}t}+\lambda_{2}\delta_{1} e^{i\delta_{2}t}}{\lambda_{1}e^{i\delta_{1}t}+\lambda_{2}e^{i\delta_{2}t}}\] \[\approx\delta_{2}\Biggl{[}1+\frac{\lambda_{2}}{\lambda_{1}} \Bigl{(}\frac{\delta_{1}}{\delta_{2}}-1\Bigr{)}e^{i(\delta_{2}-\delta_{1})t} \Biggr{]}. \tag{12}\] Figure 1: (Color online) Schematic illustration of a two-level system in a magnetic field driven by the ac fields with frequencies \(\omega_{1}\) and \(\omega_{2}\) close to the level separation, \(\Delta_{Z}\). A strong drive with magnitude \(\lambda_{1}\) causes an ac-splitting of the Zeeman levels, which manifests itself in the Rabi oscillations of the level occupations. A weak drive \(\lambda_{2}\ll\lambda_{1}\), properly detuned from the primary drive, see Eq. 
14, leads to an additional splitting, \(s_{1}\), of the ac-split levels, which, in turn, results in the envelope of the Rabi oscillations with a period \(\frac{2\pi}{s_{1}}\). With the above simplifications Eq. 11 takes the form \[-\frac{d^{2}\tilde{A}_{1}}{dt^{2}}-\Bigg{[}\lambda_{1}^{2}+\frac{ \delta_{1}^{2}}{4}+\frac{\lambda_{2}}{2\lambda_{1}}\delta_{2}(\delta_{1}-\delta _{2})e^{-i(\delta_{1}-\delta_{2})t}\Bigg{]}\tilde{A}_{1}\] \[=\Big{[}\lambda_{1}\lambda_{2}e^{i(\delta_{1}-\delta_{2})t}+ \lambda_{2}\lambda_{1}e^{-i(\delta_{1}-\delta_{2})t}\Big{]}\,\tilde{A}_{1}. \tag{13}\] There are two terms proportional to \(\lambda_{2}\) in Eq. 13. Assuming that \(\delta_{1}\) and \(\delta_{2}\) are of the same order, the term in the left-hand side can be estimated as \(\lambda_{2}\delta_{1}^{2}/\lambda_{1}\), while the term in the right-hand side is of the order of \(\lambda_{2}\lambda_{1}\). If \(\lambda_{1}\) is much greater than \(\delta_{1}\), which corresponds to the limit of developed Rabi oscillations, the term in the right-hand side dominates. Note now that, if one neglects the term in the left-hand side, Eq. 13 reduces to the classical Mathieu equation, which describes e.g. the electron motion in a weak one-dimensional potential. The time in Eq. 13 plays the role of coordinate. The sum \(\lambda_{1}^{2}+\frac{\delta_{1}^{2}}{4}\) plays the role of energy. Despite the similarity, there is a dramatic difference between Eq. 13 and the Mathieu equation, since the Mathieu equation describes the "forbidden gaps" or the domains of instabilities, while Eq. 13 does not. Still, as we will see below, the position of resonance captured by Eq. 13 can be found from the same "Bragg condition" as the position of the center of the forbidden gap, namely \[2\Bigg{(}\lambda_{1}^{2}+\frac{\delta_{1}^{2}}{4}\Bigg{)}^{1/2}=|\delta_{1}- \delta_{2}|. \tag{14}\] To trace how the resonance Eq. 14 emerges, we search for the solution of Eq. 13 in the Floquet form \[\tilde{A}_{1}(t)=f\exp{\{ist\}}+g\exp{\Big{\{}i(s+\delta_{1}-\delta_{2})t} \Big{\}}. \tag{15}\] Substituting this form into Eq. 13 and equating the coefficients in front of the two exponents, we arrive to the system \[\left(\lambda_{1}^{2}+\frac{\delta_{1}^{2}}{4}-s^{2}\right)f=- \lambda_{1}\lambda_{2}\Bigg{[}\frac{\delta_{2}(\delta_{1}-\delta_{2})}{2 \lambda_{1}^{2}}+1\Bigg{]}g,\] \[\left(\lambda_{1}^{2}+\frac{\delta_{1}^{2}}{4}-(s+\delta_{1}- \delta_{2})^{2}\right)g=-\lambda_{1}\lambda_{2}f. \tag{16}\] The brackets in the left-hand sides of the two equations coincide under the condition: \(s+\delta_{1}-\delta_{2}=-s\). Thus we set \[s=\frac{\delta_{2}-\delta_{1}}{2}+s_{1}, \tag{17}\] and assume that \(s_{1}\ll|\delta_{2}-\delta_{1}|\). Upon neglecting the \(s_{1}^{2}\)-term, the system Eq. 16 takes the form \[\Bigg{[}\lambda_{1}^{2}-\frac{\delta_{2}^{2}-2\delta_{2}\delta_{1 }}{4}-s_{1}(\delta_{2}-\delta_{1})\Bigg{]}f=\] \[-\lambda_{1}\lambda_{2}\Bigg{[}\frac{\delta_{2}(\delta_{1}-\delta _{2})}{2\lambda_{1}^{2}}+1\Bigg{]}g,\] \[\Bigg{[}\lambda_{1}^{2}-\frac{\delta_{2}^{2}-2\delta_{2}\delta_{1 }}{4}+s_{1}(\delta_{2}-\delta_{1})\Bigg{]}g=\] \[-\lambda_{1}\lambda_{2}f. \tag{18}\] Finally, upon multiplying the two last equations, we obtain the expression for the Floquet exponent \[s_{1}^{2}\left(\delta_{2}-\delta_{1}\right)^{2}=\] \[\left[\lambda_{1}^{2}-\frac{\delta_{2}^{2}}{4}+\frac{\delta_{1} \delta_{2}}{2}\right]^{2}-\lambda_{2}^{2}\left[\frac{\delta_{1}\delta_{2}- \delta_{2}^{2}}{2}+\lambda_{1}^{2}\right]. 
\tag{19}\] Note now that the first bracket on the right-hand side of Eq. 19 turns to zero when Eq. 14 is satisfied, i.e. at resonance. It is seen from Eq. 19 that the resonance manifests itself via the anomalous sensitivity of the Floquet exponent to the magnitude, \(\lambda_{2}\), of the secondary drive. It is also instructive to rewrite Eq. 19 in a different form by using the resonant condition Eq. 14. Substituting \(\lambda_{1}^{2}=\frac{1}{4}\left(\delta_{2}^{2}-2\delta_{1}\delta_{2}\right)\) into Eq. 19 we get \[s_{1}=\pm\frac{\lambda_{2}\delta_{2}}{2(\delta_{2}-\delta_{1})}=\pm\frac{ \lambda_{2}\delta_{2}}{4\left(\lambda_{1}^{2}+\frac{\delta_{2}^{2}}{4}\right)^ {1/2}}. \tag{20}\] In the second identity we have again used the resonant condition Eq. 14. Yet another way to cast Eq. 19 is to express \(\delta_{2}\) from the resonant condition. This yields \[s_{1}=\pm\frac{\lambda_{2}}{4\Theta_{R}}\left(\Delta_{Z}-\omega_{1}\pm 2\Theta_{R} \right). \tag{21}\] In the last identity we have returned to the original notations, generalized Rabi frequency and the detuning of the primary drive (see Eq. 1). Since we assumed that \(s_{1}\) is much smaller than the frequency spacing between the two drives, the criterion of applicability of Eq. 21 is \(\lambda_{2}\ll|\delta_{2}-\delta_{1}|\). Eq. 21 is our main result. It suggests that, upon tuning the frequency of the secondary drive to the resonance Eq. 14, the Floquet exponent, describing the drive-induced modification of the Rabi oscillations, increases _linearly_ with the amplitude of the secondary drive. ## IV Discussion (_i_). Upon increasing of \(\lambda_{2}\) the spectrum of the secondary Rabi oscillations becomes progressively richer due to emergence of the higher-order Bragg resonances for which the detunings satisfy the condition \[2\Bigg{(}\lambda_{1}^{2}+\frac{\delta_{1}^{2}}{4}\Bigg{)}^{1/2}=N|\delta_{1}- \delta_{2}| \tag{22}\] with higher \(N\). To trace the emergence of these resonances, we expand the ratio Eq. 12 to the second order in the ratio \(\frac{\lambda_{2}}{\lambda_{1}}\) \[\Bigg{(}\frac{\lambda_{2}\delta_{1}e^{i\delta_{2}t}+\lambda_{1} \delta_{2}e^{i\delta_{1}t}}{\lambda_{2}e^{i\delta_{2}t}+\lambda_{1}e^{i\delta_ {1}t}}\Bigg{)}^{2}\approx\delta_{2}^{2}\Bigg{[}1+2\frac{\lambda_{2}}{\lambda_ {1}}\Bigg{(}\frac{\delta_{1}}{\delta_{2}}-1\Bigg{)}e^{i(\delta_{2}-\delta_{1})t}\] \[+\Bigg{(}\frac{\lambda_{2}}{\lambda_{1}}\Bigg{)}^{2}\Bigg{(}\frac {\delta_{1}}{\delta_{2}}-1\Bigg{)}\Bigg{(}\frac{\delta_{1}}{\delta_{2}}-3 \Bigg{)}e^{2i(\delta_{2}-\delta_{1})t}\Bigg{]}. \tag{23}\] We see that the new term in the expansion gives rise to the second harmonics in the "effective potential" which causes the resonance with \(N=2\). (_ii_). Floquet exponent translates into an envelope. The situation addressed in the present paper is standard and was previously addressed in the literature, see e.g. Refs. [11; 12; 13; 14; 15; 16; 17; 18; 19]. We considered a two-level system driven by two ac fields with strongly different amplitudes. A new finding reported in the present paper is the "Bragg resonance" Eq. 14 which takes place when the difference of frequencies of the drives is equal to the ac-splitting of quasienergies in the field of the strong drive. Interplay of the two drives can be interpreted as a modulation of the amplitude of a primary drive with a frequency \(\omega_{1}-\omega_{2}=\delta_{2}-\delta_{1}\), which couples the ac-split levels. The result of this coupling is the secondary splitting of quasienergies. 
Most importantly, at resonance, the magnitude of this secondary splitting is _linear_ in the amplitude of the weak drive. Emergence of the Bragg resonance is illustrated schematically in the figure. Physically, the Bragg resonance manifests itself in the modulation of the primary Rabi oscillations with a frequency, \(s_{1}\), proportional to the amplitude, \(\lambda_{2}\), of the weak drive, see Eq. 21. It should be emphasized that the _depth_ of modulation is _independent_ of \(\lambda_{2}\). Indeed, this depth is determined by the interference of the amplitudes \(f\) and \(g\) in Eq. 15. At resonance, the ratio of these amplitudes can be expressed from the system Eq. 18 as follows \[\frac{f}{g}=-\frac{s_{1}(\delta_{2}-\delta_{1})}{\lambda_{1}\lambda_{2}}. \tag{24}\] Substituting the Floquet exponent from Eq. 20 we find \[\frac{f}{g}=\pm\frac{\delta_{2}}{2\lambda_{1}}, \tag{25}\] i.e. the amplitude \(\lambda_{2}\) drops out. In other words, no matter how weak the secondary drive is, at resonance it leads to a modulation of the primary Rabi oscillations with a depth \(\sim 1\). If the detuning, \(\delta_{2}\), of the secondary drive misses the Bragg condition, Eq. 14, by a small quantity, \(\Delta\delta_{2}\), the modulation of the Rabi oscillations gradually vanishes. As can be seen from the system Eq. 18, the Floquet exponent changes with \(\Delta\delta_{2}\) as \[s_{1}(\Delta\delta_{2})=\Big{[}s_{1}(0)^{2}+\frac{(\Delta\delta_{2})^{2}}{4} \Big{]}^{1/2}, \tag{26}\] i.e. similarly to the primary Rabi oscillations. This suggests that the dependence of the Floquet exponent on the magnitude of the secondary drive gradually transforms from linear to quadratic. Note that the resonance Eq. 14 was not uncovered in the previous theoretical studies. The reason is that these studies attempted to incorporate finite lifetimes of the levels from the very beginning. Then, the \(2\times 2\) system Eqs. 2, 3 transforms into the \(4\times 4\) system for the elements of the density matrix, which necessarily complicates the analytical treatment. (_iii_). Clearly, the solution of Eq. 11 exhibits anomalous behavior in a particular case when the amplitudes of the two drives are equal: \(\lambda_{1}=\lambda_{2}=\lambda\), while their frequencies are detuned _symmetrically_ with respect to the inter-level distance, \(\omega\), i.e. \(\delta_{2}=-\delta_{1}=\delta\). In this particular case Eq. 11 takes the form \[-\frac{d^{2}\tilde{A}_{1}}{dt^{2}}-\frac{\tilde{A}_{1}}{4}\Bigg{(}\delta^{2}+ 16\lambda^{2}\cos^{2}\delta t+\frac{\delta^{2}}{\cos^{2}\delta t}\Bigg{)}=0. \tag{27}\] From the form of Eq. 27 it can be concluded that the solution is singular in the vicinity of the time moments, \(t_{n}=\frac{(2n+1)\pi}{2\delta}\), where it can be viewed as a Schrödinger equation in the attractive potential \(-\frac{1}{4(t-t_{n})^{2}}\). The form of the singular solution is \(\tilde{A}_{1}\propto(t-t_{n})^{1/2}\). Note that, within the RWA, Eq. 27 is exact. The only short time scale which can "cure" the singularity, \(\sim\Delta_{Z}^{-1}\ll\lambda^{-1},\delta^{-1}\), is related to the violation of the RWA. Certainly, any asymmetry, \((\Delta\lambda)\), in the amplitudes of the two drives would cut off the singularity at \((t-t_{n})\lesssim\frac{(\Delta\lambda)}{\lambda}\).
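As a numerical illustration, the RWA system, Eqs. 5 and 6, can be integrated directly. The short sketch below (SciPy assumed; the parameter values are arbitrary choices of ours) places the weak drive on the Bragg resonance of Eq. 14 and prints the envelope frequency \(s_{1}\) predicted by Eq. 20, which sets the slow modulation of \(|A_{1}|^{2}\) on top of the fast Rabi oscillations.

```
import numpy as np
from scipy.integrate import solve_ivp

lam1, lam2, delta1 = 1.0, 0.05, 0.3                        # strong drive, weak drive, primary detuning
delta2 = delta1 + 2.0 * np.sqrt(lam1**2 + delta1**2 / 4)   # Bragg condition, Eq. 14
s1 = lam2 * delta2 / (2.0 * (delta2 - delta1))             # predicted envelope frequency, Eq. 20

def rhs(t, y):
    # Eqs. 5 and 6: i dA1/dt = g(t) A2,  i dA2/dt = g*(t) A1
    A1, A2 = y
    g = lam1 * np.exp(-1j * delta1 * t) + lam2 * np.exp(-1j * delta2 * t)
    return [-1j * g * A2, -1j * np.conj(g) * A1]

T = 4 * np.pi / abs(s1)                                    # a couple of envelope periods
sol = solve_ivp(rhs, (0.0, T), [1.0 + 0j, 0.0 + 0j],
                t_eval=np.linspace(0.0, T, 4000), rtol=1e-8, atol=1e-10)
occ = np.abs(sol.y[0])**2                                  # occupation amplitude squared of the "up" level
print(f"predicted envelope period 2*pi/|s1| = {2 * np.pi / abs(s1):.1f}")
print(f"range of |A1|^2 over the run: {occ.min():.3f} .. {occ.max():.3f}")
```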
2307.14424
Verifiable measurement-based quantum random sampling with trapped ions
Quantum computers are now on the brink of outperforming their classical counterparts. One way to demonstrate the advantage of quantum computation is through quantum random sampling performed on quantum computing devices. However, existing tools for verifying that a quantum device indeed performed the classically intractable sampling task are either impractical or not scalable to the quantum advantage regime. The verification problem thus remains an outstanding challenge. Here, we experimentally demonstrate efficiently verifiable quantum random sampling in the measurement-based model of quantum computation on a trapped-ion quantum processor. We create and sample from random cluster states, which are at the heart of measurement-based computing, up to a size of 4 x 4 qubits. By exploiting the structure of these states, we are able to recycle qubits during the computation to sample from entangled cluster states that are larger than the qubit register. We then efficiently estimate the fidelity to verify the prepared states -- in single instances and on average -- and compare our results to cross-entropy benchmarking. Finally, we study the effect of experimental noise on the certificates. Our results and techniques provide a feasible path toward a verified demonstration of a quantum advantage.
Martin Ringbauer, Marcel Hinsche, Thomas Feldker, Paul K. Faehrmann, Juani Bermejo-Vega, Claire Edmunds, Lukas Postler, Roman Stricker, Christian D. Marciniak, Michael Meth, Ivan Pogorelov, Rainer Blatt, Philipp Schindler, Jens Eisert, Thomas Monz, Dominik Hangleiter
2023-07-26T18:00:03Z
http://arxiv.org/abs/2307.14424v2
# Verifiable measurement-based quantum random sampling with trapped ions ###### Abstract Quantum computers are now on the brink of outperforming their classical counterparts. One way to demonstrate the advantage of quantum computation is through quantum random sampling performed on quantum computing devices. However, existing tools for verifying that a quantum device indeed performed the classically intractable sampling task are either impractical or not scalable to the quantum advantage regime. The verification problem thus remains an outstanding challenge. Here, we experimentally demonstrate efficiently verifiable quantum random sampling in the measurement-based model of quantum computation on a trapped-ion quantum processor. We create random cluster states, which are at the heart of measurement-based computing, up to a size of \(4\times 4\) qubits. Moreover, by exploiting the structure of these states, we are able to recycle qubits during the computation to sample from entangled cluster states that are larger than the qubit register. We then efficiently estimate the fidelity to verify the prepared states--in single instances and on average--and compare our results to cross-entropy benchmarking. Finally, we study the effect of experimental noise on the certificates. Our results and techniques provide a feasible path toward a verified demonstration of a quantum advantage. + Footnote †: preprint: APS/123-QED In _quantum random sampling_, a quantum device is used to produce samples from the probability distribution generated by a random quantum computation [1]. This is a particularly challenging task for a classical computer asymptotically [2; 3; 4] and in practice [5; 6] and thus at the center of recent demonstrations of a quantum advantage [7; 8; 9; 10]. A key challenge for such experiments, however, is to verify that the produced samples indeed originate from the probability distribution generated by the correct random quantum computation. Verification based only on classical samples from the device is fundamentally inefficient [11]. In practice, the verification problem has been approached using so-called linear _cross-entropy benchmarking_ (XEB) [7; 12]. The corresponding _XEB score_ is obtained by averaging the ideal probabilities corresponding to the observed experimental samples. XEB is appealing since it has been argued that even achieving any non-trivial XEB score might be a classically computationally intractable task [13; 14] and that it can be used to sample-efficiently estimate the _quantum fidelity_ of the experimental quantum state [7; 15]. However, XEB requires a classical simulation of the implemented circuits to obtain the ideal output distribution. The computational run-time of estimating XEB from samples thus scales exponentially, rendering it practically infeasible in the quantum advantage regime. Moreover, it is not always a good measure of the quantum fidelity [16; 17; 18]. Another way classical verification of quantum devices has been approached is via interactive proof systems [19; 20], albeit at the cost of large device overheads [21; 22]. Hence, classical approaches to verification have limited applicability for devices operating in the quantum advantage regime. These challenges raise the question of whether there are quantum verification techniques that could be used to _efficiently verify_ quantum random sampling experiments, even when their simulation is beyond the computational capabilities of classical devices. 
Answering this question in the affirmative, we turn to a different universal model of quantum computation--_measurement-based quantum computing_ (MBQC) [23; 24]. In contrast to the circuit model, a computation in MBQC proceeds through measurements, instead of unitary operations, applied sequentially to an entangled _cluster state_[24; 2]. With appropriately randomized initial state preparation, cluster states are a source of random samples appropriate for demonstrating a quantum advantage via random sampling [25; 26; 27]. Crucially, though, each cluster state is fully determined by a small set of so-called stabilizer operators. By measuring the stabilizer operators using well-characterized single-qubit measurements, preparations of these cluster states can be efficiently verified [28; 29; 30; 31; 32; 33]. Here, we experimentally demonstrate efficiently verifiable quantum random sampling in the MBQC model in two _trapped-ion quantum processors_ (TIQP). While cluster state generation in TIQP has previously been limited to a size of \(2\times 2\)[35], we overcome this limitation with a two-fold approach. First, we use pairwise addressed Molmer-Sorensen entangling operations [34, 36] in a fully connected linear chain to enable the efficient generation of clusters up to a size of \(4\times 4\) qubits. Second, we make use of spectroscopic decoupling and optical pumping [37] to measure and reset qubits mid-sequence in order to recycle them. In this way, we are able to sequentially measure rows of the cluster and then reuse the measured qubits to prepare a new row of the cluster, while maintaining entanglement with the remaining qubits, see Fig. 1(c). This allows us to sample from a cluster state on a lattice that is larger than the size of the physical qubit register. This combination of techniques provides a feasible path towards generating large-scale entangled cluster states using trapped ions. We then estimate the fidelity of the experimental cluster states in order to verify those states. Specifically, we apply _direct fidelity estimation_[32, 28] to estimate the single-instance fidelity of a fixed cluster state, and by extending the protocol, the average fidelity of random cluster states. Direct (average) fidelity estimation thus provides us with a unified framework for verification and benchmarking of MBQC, analogously to XEB. However, in contrast to XEB, the fidelity estimation approach has several major advantages: First, it is efficient in terms of both sample and computational complexity. Second, it requires knowledge only of the measurement noise as opposed to the noise properties of all gates as required for XEB [16, 17, 18]. Finally, fidelity estimation provides us with a rigorous bound on the quality of the samples from a fixed quantum state. In order to assess the performance of the fidelity-derived certificates, we compare them to the available--but inefficient--classical means of certification of the samples, which is still possible in our proof-of-principle demonstration. In the single-instance case, we compare the experimental performance of the single-instance fidelity estimate to the empirical total-variation distance of the sampled distribution. In the average case, we compare the average fidelity estimate to the average XEB score. Additionally, we study the effect of native noise sources on the different measures of quality. 
**Sampling and verification protocols.** In the circuit model, natural examples of random computations are, for instance, circuits composed of Haar-random two-qubit gates [38], or composed of native entangling gates and random single-qubit gates [7]. In contrast, in MBQC a computation is performed using single-qubit rotations around the \(Z\)-axis on a cluster state and measurements in the Hadamard basis [39]. This leads to a natural notion of random MBQC wherein those single-qubit \(Z\)-rotations are applied with angles chosen randomly from an appropriate discretization of the unit circle [26, 27]. The minimal choice of discretization such that MBQC is computationally universal consists of eight evenly spaced angles, corresponding to powers of the \(T\) gate. Specifically, the MBQC random sampling protocol we apply is the following [26, 27] (see Fig. 1, and Section S3 of the Supplementary Information (SI) for explicit circuits): 1. Prepare a cluster state on \(N=n\times m\) qubits on a rectangular lattice by preparing each qubit in the \(\ket{+}\) state and applying controlled-\(Z\) gates between all neighbors. 2. Apply single-qubit rotations \(Z(\beta)=\mathrm{e}^{-\mathrm{i}\beta Z/2}\) with random angles \(\beta\in\{0,\frac{\pi}{4},\dots,\frac{7\pi}{4}\}\) to every qubit. 3. Measure all qubits in the Hadamard basis. Figure 1: **Overview of the experiment.** **(a) Sketch of the ion trap quantum processor.** Strings of up to 16 ions are trapped in a linear chain. Any single ion or pair of ions can be individually addressed by means of steerable, tightly focused laser beams (dark red) to apply resonant operations \(\mathds{R}_{j}\) or Molmer-Sorensen entangling gates \(\mathrm{MS}_{i,j}\). Global detection, cooling (blue), and repumping (pink) beams are used to reset part of the qubit register in-sequence [34]. **(b) Implemented cluster states.** Cluster states with local rotation angles \(\beta_{i}\in\{0,\frac{\pi}{4},\ldots,\frac{7\pi}{4}\}\) up to a size of \(4\times 4\) qubits are created in the qubit register. Each cluster state is defined by its \(N\) stabilizers \(S_{k},k=1,\ldots,N\). **(c) Recycling of qubits.** Using sub-register reset of qubits, we prepare cluster states that are larger than the qubit register. For example, using four ions, we prepare cluster states of size \(2\times 3\). **(d) Single-instance verification.** In order to verify a single cluster state preparation with fixed rotation angles \(\beta\), we measure it in different bases. To perform fully efficient fidelity estimation we measure uniformly random elements of its stabilizer group, which is obtained by drawing a random product of the \(N\) stabilizers \(S_{k}\). To sample from the output distribution, we measure in the \(X\)-basis. These samples are verified in small instances by the empirical total-variation distance (TVD). **(e) Average-case verification.** To assess the average quality of the cluster state preparations, we perform measurements on cluster states with random rotations. By measuring a random element of the stabilizer group of each random cluster state, we obtain an estimate of the average fidelity. From the samples from random cluster states in the \(X\)-basis, we compute the cross-entropy benchmark (XEB) by averaging the ideal probabilities \(p_{\beta}(x)\) corresponding to the samples \(x\) and the cluster with angles \(\beta\).
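For very small clusters, the three protocol steps can be mocked up directly with a dense statevector. The following NumPy sketch (an illustration added here, not code from the experiment) prepares \(\ket{+}^{N}\), applies controlled-\(Z\) phases along the lattice edges, applies the random \(Z(\beta)\) rotations, and samples bitstrings in the Hadamard basis; it is only practical for a handful of qubits.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_random_cluster(n, m, shots=100):
    """Steps 1-3 of the MBQC random sampling protocol on an n x m lattice (small n*m only)."""
    N = n * m
    dim = 2 ** N
    idx = lambda r, c: r * m + c
    # Basis-state bit table; qubit 0 is the most significant bit of the basis index.
    bits = (np.arange(dim)[:, None] >> np.arange(N)[::-1]) & 1

    # Step 1: |+>^N, then controlled-Z between nearest neighbours (diagonal phases).
    psi = np.full(dim, 2 ** (-N / 2), dtype=complex)
    for r in range(n):
        for c in range(m):
            for r2, c2 in ((r, c + 1), (r + 1, c)):
                if r2 < n and c2 < m:
                    psi *= (-1.0) ** (bits[:, idx(r, c)] * bits[:, idx(r2, c2)])

    # Step 2: random rotations Z(beta) = exp(-i beta Z / 2), beta in {0, pi/4, ..., 7pi/4}.
    beta = rng.integers(0, 8, size=N) * np.pi / 4
    for q in range(N):
        psi *= np.exp(-1j * beta[q] * (1 - 2 * bits[:, q]) / 2)

    # Step 3: Hadamard-basis measurement = apply H to every qubit, then sample bitstrings.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    psi = psi.reshape([2] * N)
    for q in range(N):
        psi = np.moveaxis(np.tensordot(H, psi, axes=([1], [q])), 0, q)
    probs = np.abs(psi.reshape(-1)) ** 2
    samples = rng.choice(dim, size=shots, p=probs / probs.sum())
    return beta, probs, samples

beta, probs, samples = sample_random_cluster(2, 3)
print("rotation angles (multiples of pi/4):", np.round(beta / (np.pi / 4)).astype(int))
print("first ten samples as bitstrings:", [format(int(s), "06b") for s in samples[:10]])
```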
There is strong complexity-theoretic evidence that classically simulating this protocol for \(m\in O(n)\) up to constant total-variation distance error is intractable [26; 27]. This evidence lives up to the same standard as for universal circuit sampling in the circuit model [40]. In both cases, already producing samples from a quantum state with a non-vanishing fidelity is likely classically intractable [16; 41]. Using a variant of direct fidelity estimation (DFE), we assess the quality of both single and random instances of MBQC circuits. We estimate the fidelity \(F(\rho,|\psi\rangle\langle\psi|)=\langle\psi|\rho|\psi\rangle\) of a fixed experimental state \(\rho\) by measuring random operators from the _stabilizer group_ of the random cluster state \(|\psi\rangle\) and averaging the results. The stabilizer group is the group generated by the \(N\) stabilizers of the random cluster. Each stabilizer is the product of a rotated \(X\)-operator--given by \(Z(\beta)XZ(-\beta)\)--and \(Z\)-operators on the neighboring sites, giving rise to a characteristic star shape on the square lattice, see Fig. 1(b). Importantly, all elements of the stabilizer group are products of single-qubit operators. Our trust in the fidelity estimate therefore only depends on our ability to reliably perform single-qubit measurements, which we verify. In order to measure the average fidelity over the set of cluster states, we prepare random cluster states and for each state measure a random element of its stabilizer group. We then average the results to obtain an estimate of the average fidelity. On a high level, fidelity estimation thus exploits our ability to measure the experimental state in different bases. It requires a number of experimental state preparations that is _independent_ of the size of the system, making it scalable to arbitrary system sizes, see Methods for details. We note that we also measured a witness for the fidelity [29] and find that it is not practical in a scalable way, as we detail in Section S1 of the SI. Given the relatively small system sizes of the experiments in this work, we are also able to directly compute non-scalable measures of quality that make use of the classical samples only. This enables us to compare fidelity estimation with inefficient classical verification methods in different scenarios. To classically assess the quality of samples from a fixed experimental state preparation, we use the _total-variation distance_ (TVD) \(d_{\mathrm{TV}}(P,Q)=\sum_{x}|P(x)-Q(x)|/2\). The TVD quantifies the optimal probability of distinguishing the experimentally sampled distribution \(Q\) and the ideal one for a noiseless cluster \(P\). Note that the TVD is the classical analog of the trace distance \(d_{\mathrm{Tr}}(\rho,|\psi\rangle\langle\psi|)=\mathrm{Tr}(|\rho-|\psi\rangle \langle\psi||)/2\), which quantifies the optimal probability of distinguishing the sampled quantum states \(\rho\) and \(|\psi\rangle\langle\psi|\). The fidelity \(F\) upper-bounds the trace distance via the Fuchs-van de Graaf inequality [42] and therefore the TVD of the sampled distributions as \[d_{\mathrm{TV}}\leq d_{\mathrm{Tr}}\leq\sqrt{1-F}. \tag{1}\] The root infidelity \(\sqrt{1-F}\) can therefore be used to certify the classical samples from \(\rho\). We note that it is a priori not clear how tight this bound is in an experimental scenario and how experimental noise affects the different verification methods. 
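The certificate of Eq. (1) is straightforward to apply in post-processing: given a fidelity estimate, \(\sqrt{1-F}\) bounds the TVD of the classical samples without ever reconstructing the output distribution. A minimal sketch follows; the number at the end is illustrative only.

```python
import numpy as np
from collections import Counter

def empirical_tvd(samples, ideal_probs):
    """TVD between the empirical distribution of integer-valued samples and the ideal distribution.
    This estimator needs exponentially many samples to be accurate, so it is only usable for
    small instances, mirroring the discussion above."""
    ideal = np.asarray(ideal_probs, dtype=float)
    emp = np.zeros_like(ideal)
    for x, c in Counter(samples).items():
        emp[x] = c / len(samples)
    return 0.5 * np.abs(emp - ideal).sum()

def tvd_certificate(fidelity_estimate):
    """Eq. (1): d_TV <= d_Tr <= sqrt(1 - F), so a fidelity estimate certifies the samples."""
    return float(np.sqrt(max(0.0, 1.0 - fidelity_estimate)))

# Illustrative number only: a fidelity estimate of 0.92 certifies d_TV <= ~0.28.
print(tvd_certificate(0.92))
```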
In order to classically assess the average quality of the quantum device, we estimate the linear XEB fidelity between \(Q\) and \(P\), which is defined as Figure 2: **Sketch of the circuit with qubit recycling.****(a) Recycling.** After detection, a measured qubit is either still in the \(|1\rangle\) state (“dark” outcome) or in one of the two S levels (“bright” outcome). We reset it to the \(|0\rangle\) state by first applying an addressed \(\pi\)-pulse (1) on the \(|S^{\prime}\rangle\rightarrow|D^{\prime}\rangle\)-transition. A subsequent global \(854\,\mathrm{nm}\) quench pulse (2) transfers population from all D-levels to the P\({}_{3/2}\) manifold, from which (3) spontaneous decay occurs, preferentially to the \(|0\rangle\) state in the S manifold. We repeat this process twice, which is sufficient to return about 99% of the population to the \(|0\rangle\) state. **(b) Circuit.** The individual qubits are prepared in a product state depending on the random angles \(\beta_{i}\) and entangled via \(XX\) interactions and some single-qubit gates (white boxes) to create a cluster state; see the SI, Section S3 for details. The measurement of the qubits is achieved by exciting the \(P\leftrightarrow S\) transition. In order to perform a circuit with recycling, a coherent \(\pi\)-pulse on the \(S\to D^{\prime}\) transition (denoted by H\({}_{S}\)) is applied to ‘hide’ the qubits which should not be measured in the D-manifold. After the measurement, the chain is cooled using polarization-gradient cooling. The reset makes use of local pulses on the measured qubit that transfer the remaining population of \(|S^{\prime}\rangle\) to the D\({}_{5/2}\)-manifold (denoted by P) and global pulses that transfer the population of that manifold back to \(|0\rangle\). Prior to the reset, all unmeasured qubits are ‘hidden’ in the S\({}_{1/2}\)-manifold. For this, the population which was in \(|0\rangle\) prior to the measurement is coherently transferred back to \(|S\rangle\) via a \(\pi\)-pulse (H\({}_{S}^{-1}\)), and the population which is in \(|1\rangle\) is transferred to \(|S^{\prime}\rangle\) via a \(\pi\)-pulse on the \(D\to S^{\prime}\) transition (H\({}_{D}\)). After the reset procedure (a), a \(\pi\)-pulse (H\({}_{D}^{-1}\)) is applied to the unmeasured qubits to transfer the population which was previously in \(|1\rangle\) back from \(S^{\prime}\). \(2^{n}\sum_{x}Q(x)P(x)-1\)[1]. The average XEB fidelity over the random cluster states measures the average quantum fidelity in the regime of low noise [16; 17; 18], see Section S5 of the SI for details. **Experimental implementation.** We implement the random MBQC sampling and verification protocols in two ion-trap quantum processors. Quantum information is encoded in the S\({}_{1/2}\) ground state and D\({}_{5/2}\) excited state of up to 16 \({}^{40}\)Ca\({}^{+}\) ions confined in a linear Paul trap [36; 37]. We use these devices to implement two sets of experiments. First, we generate rectangular \(n\times m\) random cluster states of up to 16 ions by appropriately entangling the respective ions in a linear chain using pairwise addressed Molmer-Sorensen-gates [34; 36]. In a second set of experiments on a different device, we make use of spectroscopic decoupling and optical pumping to recycle qubits to implement a more qubit-efficient way to sample from large-scale entangled cluster states. By construction, the 2D cluster states require entangling gates between neighboring qubits only. 
As a consequence, when generating the cluster from top to bottom, once the first row has been entangled to the second, we can measure the qubits of the first row. Once measured, these qubits can be reset to the ground state, prepared in their appropriate initial states and entangled as the third row of the cluster state, and so on. Due to the local entanglement structure of the cluster state, the measurement statistics obtained in this way are identical to the statistics that would be obtained from preparing and measuring the full cluster state at once. Experimentally, we make use of in-sequence readout capabilities [43] using an EMCCD camera to read out a subset of the qubits, while spectroscopically decoupling the remaining qubits from the readout beams, see Fig. 2. After the readout, we re-cool the ion string using a combination of Doppler cooling and polarization-gradient cooling for a total of \(3\)\(\mathrm{ms}\). Then we employ two rounds of optical pumping using addressed \(729\)\(\mathrm{nm}\) pulses in combination with a global \(854\)\(\mathrm{nm}\) quench beam to reset the qubits to the \(|0\rangle\) ground state [37], while the remaining qubits are spectroscopically decoupled. This completes the reset and we can now prepare the measured qubits in their new states and entangle them to the remaining qubits of the cluster, see Fig. 2. This procedure enables us to sample from entangled quantum states with more qubits than the physical register size of the used quantum processor. For every state, we perform sampling and verification measurements. We measure the state in the Hadamard basis in order to perform sampling. For verification, we measure a random element of its stabilizer group. When verifying a single instance of a state preparation, we repeat this procedure for a fixed state and then estimate the fidelity from the random stabilizer measurements and the TVD from the classical samples. To estimate the average performance of the device, we repeat the procedure for random states and estimate the average fidelity and the average XEB fidelity, see Fig. 1(d,e). Finally, for the \(2\times 2\) cluster, we study the effect of increasing global (local) dephasing noise on the verification performance by adding small (un)-correlated random \(Z\)-rotations before and after each entangling gate. **Results.** We first measure the fidelity and TVD of single random cluster state preparations for various cluster sizes. The results demonstrate that the root-infidelity provides meaningful upper bounds on the TVD, see Fig. 3. Importantly, while the efficiently measurable and computable root infidelity estimate is guaranteed to bound the TVD per Eq. (1), these scalable bounds are not tight. This is seen in Fig. 3 as a gap between the root infidelity upper bound and the measured TVD values. Indeed, it is expected that reproducing the full quantum state (as measured by the fidelity) is a more stringent requirement than merely reproducing the outcome distribution in one particular measurement basis (as measured by the TVD). Hence, the efficient quantum methods require higher fidelities for the corresponding certificate to meet the quantum advantage threshold. Notably, above the cluster size of \(3\times 3\) qubits, empirically estimating the TVD with sufficient accuracy is practically infeasible due to the exponentially growing state space. 
In the case in which recycling is Figure 3: **Experimental results for single-instance verification.** Root infidelity estimate \(\sqrt{1-F}\) (hexagons), and empirical TVD (stars) for single instances of random MBQC cluster states with recycling (blue) and without (pink). Note that the horizontal axis is labelled with the cluster size \(n\times m\) and scaled with qubit number \(n\,m\). The root infidelity upper-bounds the TVD per Eq. (1). Colored error bars represent the \(3\sigma\) interval of the statistical error. Uncorrelated measurement noise reduces or increases the measured state fidelity compared to the true fidelity asymmetrically depending on its value, such that the shown values are lower bounds to the true state fidelity, see the Methods section for details. The worst-case behaviour of the measurement noise is represented by gray error bars. In the non-recycling experiment, the register size is increased between the \(2\times 3\) and the \(3\times 3\) instance, leading to a decrease in the local gate fidelities. Modeling the noise as local depolarizing noise after each entangling gate (dotted lines), we obtain effective local Pauli error probabilities after the two-qubit gates of 5.3%, 2.6%, and 1.0%, for the recycling data, for the large-register non-recycling data, and the small register non-recycling data, respectively; see Section S5 of the SI. The shaded green area is the acceptance region corresponding to an infidelity threshold of 8.6% arising in the rigorous hardness argument as sketched in Section S4 of the SI. Since the accuracy of the TVD estimate scales with the system dimension already for cluster sizes of \(4\times 3\) and \(4\times 4\) infeasible amount of samples would be required for an accurate estimate, and hence these are not shown. See Table S1 of the SI for experimental details. used, we see the same qualitative behavior, although the overall root infidelities are higher. This is likely due to imperfect re-cooling, which only cools the system to low motional occupation of \(\bar{n}\sim 2\) phonons. While the _Moulmer-Sorensen_ (MS) gate is insensitive to the motional occupation to first order [44], higher phonon number leads to a larger sensitivity to calibration errors. Moreover, the recooling process takes \(3\;\mathrm{ms}\), during which the system experiences some dephasing. Faster re-cooling [45] and/or cooling closer to the motional ground state could improve the state fidelities. Fig. 4 shows the results of the fidelity and TVD measurements for an increasing amount of noise on the \(2\times 2\) cluster state in comparison to numerical simulations. We observe an increasing gap between TVD and upper bound from the root infidelity estimate (cf. Eq. (1)) with the amount of noise in a fixed quantum circuit. These results indicate that output distributions of states subject to a significant amount of dephasing noise may still have a TVD well below the root infidelity. Comparing the experimental results with the simulations also allows us to deduce the natural noise floor in the experiment. We then measure the fidelity of cluster state preparations, averaged over the random circuits and show the results in Fig. 5. We compare the fidelity estimates to the classical estimates of fidelity via XEB. We observe a consistently larger variance of the XEB estimate of the fidelity than of the direct fidelity estimate. Moreover, the XEB estimate deviates from the actual fidelity for the \(2\times 5\) cluster. 
This may be due to the fact that the XEB fidelity depends on the type and strength of the experimental noise as well as the random circuit ensemble [16, 17, 18]. Hence, care must be taken when using the XEB as an estimator of the fidelity. **Discussion and conclusion.** We conclude that direct (average) fidelity estimation provides an efficient and scalable means of certifying both single instances and the average quality of measurement-based computations. This is the case since the sample complexity of the fidelity estimate for arbitrary generalized stabilizer states is independent of the size of the system and the postprocessing is efficient. Larger systems can therefore be verified with the same number of experiments as we have performed. More generally, our results demonstrate that the measurement-based model of quantum computation provides a viable path toward efficient verification of quantum random sampling experiments, which is not known to be possible in the circuit model. In particular, all known methods for fidelity estimation [28, 46] in general scale exponentially with the number of qubits. We also note that, although MBQC is formally equivalent to the circuit model, relating a quantum circuit to an MBQC requires a space-time mapping and a feedforward procedure. Hence, our verification protocol at the level of the cluster state has no direct analog in circuit-based computations. While the experiments in this work are still far from the quantum advantage regime, we have successfully demonstrated how to use qubit recycling to perform large-scale MBQC with a qubit number that can Figure 4: **Single-instance verification with artificially added phase noise.** Root infidelity estimate \(\sqrt{1-F}\) (hexagons) and empirical total-variation distance \(d_{\mathrm{TV}}\) (stars) of a \(2\times 2\) cluster state with artificially introduced local (pink) and global (green) phase noise—\(Z\)-rotations with rotation angle drawn from a Gaussian distribution with variance \(\sigma^{2}\)—before and after Moulmer-Sorensen gate applications as a function of the noise strength \(\sigma\), see Methods for details. Solid (dashed) lines show simulated root infidelity (total-variation distance) for the respective types of noise. The experimental data (top axis) is shifted with respect to the simulations (bottom axis) due to the fact that there is residual noise when no artificial noise is introduced. The value of the relative shift given by \(0.045\pi\) (dashed vertical line) provides an estimate for the natural noise strength. Colored error bars represent the \(3\sigma\) interval of the statistical error. The systematic measurement error of the fidelity estimate is represented by gray error bars. Figure 5: **Experimental results for average performance verification.** Average fidelity estimate from direct fidelity estimation (DFE) (pink hexagons), from linear XEB (triangles), and from logarithmic (log) XEB (diamonds, see Methods for the definition) using \(1000\) random cluster states and \(50\) shots per state. Based on calibration data for the gate fidelities of single-qubit gates \(f_{1Q}=99.8\%\), two-qubit gates \(f_{2Q}=97.5\pm 0.5\%\), and measurements \(f_{M}=99.85\%\), we compute a prediction for the fidelity (gray shaded line). We extract an effective local Pauli error probability of \(1.7\%\) (dotted line), see Section S5 of the SI. Colored error bars represent the statistical \(3\sigma\) error. 
For uncorrelated measurement noise, the fidelity estimate provides a lower bound to the true state fidelity. Gray error bars represent the worst-case systematic measurement error. be quadratically larger than the used ion register. This will enable trapped-ion quantum processors comprising on the order of 100 ions and depth 50 to achieve a fully verified quantum advantage in sampling from cluster states with more than \(50\times 50\) nodes. Indeed, a full \(n\times m\) cluster state could even be prepared using recycling with only \(n+1\) qubits at the expense of an additional linear depth overhead. Quantum advantage aside, the random \(n\)-qubit measurement-based computations obtained from random cluster states generate a unitary \(2\)-design [27] and can thereby serve as an average-performance benchmark [47]. In MBQC, fidelity estimation thus allows us to scalably benchmark single-instance as well as the average-case performance of arbitrary computations. Besides trapped ions, several other platforms are compelling candidates for demonstrating a verifiable quantum advantage via random cluster state sampling. Examples include arrays of Rydberg atoms in optical tweezers, where the creation of stabilizer states has recently been demonstrated [48]. Another leading platform for cluster state generation is photonics [49], and continuous-variable optical systems where cluster states with up to 30 000 nodes have been experimentally prepared [50; 51]. Currently, these states are still Gaussian states and therefore not useful for quantum computing, but it is intriguing to think about how the non-standard topologies of continuous-variable cluster states might be exploited. Traditionally, such continuous variable systems have been used for boson sampling, rather than quantum circuit sampling. While boson sampling is not a universal model for computation, its efficient verification is possible for both photon-number [52] and Gaussian [53] input states. In practice, however, the verification measurements are entirely different in type compared to the sampling experiments, requiring a different apparatus. In contrast, for verifying MBQC states as performed in this work, the difference between sampling and verification is only local basis rotations. This makes MBQC a particularly compelling candidate for verifiable quantum random sampling. ## Methods ### Verification protocols MBQC with cluster states is amenable to various types of verification. In particular, we can perform single-instance verification, that is, verification of a single quantum state using many copies of that state. We also perform average verification, that is, an assessment of the quality of state preparations averaged over the ensemble of measurement-based computations defined by the random choices of single-qubit rotation angles \(\beta\). We distinguish classical means of verification in which we only make use of classical samples from the cluster state measured in a fixed (the Hadamard) basis, and quantum means of verification in which we measure the cluster state in various different bases. #### Single-instance verification In order to perform single-instance verification we apply direct fidelity estimation [28], which uses single-qubit measurements on preparations of the target state \(\ket{\psi}\). 
Since the target state vector \(\ket{\psi}\) for our random sampling problem is a locally rotated stabilizer state, with stabilizer operators \(S_{i}\), \(\rho=\ket{\psi}\bra{\psi}\) is the projector onto the joint \(+1\)-eigenspace of its \(N\) stabilizers. We can therefore expand \(\rho\) as the uniform superposition over the elements of its stabilizer group \(\mathcal{S}=\langle S_{1},\ldots,S_{N}\rangle\), where \(\langle S_{1},\ldots,S_{N}\rangle\) denotes the multiplicative group generated by \(S_{1},\ldots,S_{N}\). The fidelity can then be expressed as \[F=\frac{1}{2^{N}}\sum_{s\in\mathcal{S}}\langle s\rangle_{\rho}=\frac{1}{2^{N}}\sum_{s\in\mathcal{S}}\sum_{\sigma=\pm 1}\langle\pi_{s}^{\sigma}\rangle_{\rho}\cdot\sigma, \tag{2}\] where \(s=\pi_{s}^{+}-\pi_{s}^{-}\) is the eigendecomposition of the stabilizer \(s\) into its \(\pm 1\) subspaces, and \(\langle\,\cdot\,\rangle_{\rho}=\operatorname{Tr}[\rho\,\cdot\,]\) denotes the expectation value. This suggests a simple verification protocol where in each run a uniformly random element of \(\mathcal{S}\) is measured on \(\rho\). Averaging over the measurement outcomes \(\sigma\) then gives an unbiased estimate of the fidelity according to Eq. (2). Since the measurement outcomes \(\sigma\) are bounded by \(1\) in absolute value, we can estimate the average up to error \(\epsilon\) using a number \(M\) of measurements from \(\mathcal{S}\) that scales as \(1/\epsilon^{2}\) and is independent of the number of qubits. We also directly estimate the TVD between the empirical distribution and the ideal distribution. Note that estimating the TVD is sample-inefficient since the empirical probabilities need to be estimated, requiring exponentially many samples [11]. It is also computationally inefficient since the ideal probabilities need to be computed. #### Average-case verification We measure the average quality of the cluster state preparations \(\rho_{\beta}\) by their average state fidelity \[\overline{F}\coloneqq\mathbb{E}_{\beta}[\langle\psi_{\beta}|\rho_{\beta}|\psi_{\beta}\rangle] \tag{3}\] with the generalized cluster state \(\ket{\psi_{\beta}}\) with random angles \(\beta\in\{0,\frac{\pi}{4},\ldots,\frac{7\pi}{4}\}^{n\times m}\). Here, \(\mathbb{E}_{\beta}[\,\cdot\,]\) denotes the expectation value over random \(\beta\in[8]^{nm}\). In order to classically estimate the average state fidelity, one can make use of _cross-entropy benchmarking (XEB)_ as proposed by Arute _et al._[7] and Boixo _et al._[12]. XEB makes use of the classical samples from a distribution \(Q\) and aims to measure how distinct \(Q\) is from a target distribution \(P\). The linear and logarithmic XEB fidelities between \(Q\) and \(P\) are defined as \[f_{\text{lin}}(Q,P) \coloneqq 2^{n}\sum_{x}Q(x)P(x)-1, \tag{4}\] \[f_{\text{log}}(Q,P) \coloneqq -\sum_{x}Q(x)\log P(x), \tag{5}\] respectively. Letting \(P_{\beta}\) be the output distribution of \(\ket{\psi_{\beta}}\) and \(Q_{\beta}\) the output distribution of \(\rho_{\beta}\) after Hadamard-basis measurements, we can estimate the average state fidelity from the average linear (logarithmic) XEB fidelity \[\overline{f}_{\text{lin}\,(\text{log})}\coloneqq\mathbb{E}_{\beta}\left[f_{\text{lin}\,(\text{log})}(Q_{\beta},P_{\beta})\right], \tag{6}\] assuming that the total noise affecting the experimental state preparation \(\rho_{\beta}\) is not correlated with \(\ket{\psi_{\beta}}\bra{\psi_{\beta}}\).
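A self-contained numerical sketch of the estimator in Eq. (2) is given below for a toy \(2\times 2\) rotated cluster state. The global depolarizing noise model and all parameter values are assumptions made purely for illustration; the construction of the stabilizers \(S_{k}\) follows the description above (a rotated \(X\) on each site times \(Z\) on its neighbors).

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def rotated_cluster(n, m, beta):
    """State vector and stabilizer generators S_k of an n x m cluster with Z(beta_k) rotations."""
    N = n * m
    idx = lambda r, c: r * m + c
    edges = [(idx(r, c), idx(r2, c2)) for r in range(n) for c in range(m)
             for r2, c2 in ((r, c + 1), (r + 1, c)) if r2 < n and c2 < m]
    bits = (np.arange(2 ** N)[:, None] >> np.arange(N)[::-1]) & 1
    psi = np.full(2 ** N, 2 ** (-N / 2), dtype=complex)            # |+>^N
    for a, b in edges:                                              # CZ on every edge
        psi *= (-1.0) ** (bits[:, a] * bits[:, b])
    for q in range(N):                                              # local Z(beta) rotations
        psi *= np.exp(-1j * beta[q] * (1 - 2 * bits[:, q]) / 2)
    gens = []
    for k in range(N):   # S_k = rotated X on site k, i.e. Z(beta) X Z(-beta), times Z on neighbours
        Rz = np.diag(np.exp(-1j * beta[k] * np.array([1, -1]) / 2))
        factors = [I2] * N
        factors[k] = Rz @ X @ Rz.conj().T
        for a, b in edges:
            if k in (a, b):
                factors[b if a == k else a] = Z
        gens.append(kron_all(factors))
    return psi, gens

def dfe_estimate(rho, gens, shots=4000):
    """Eq. (2): measure uniformly random stabilizer-group elements on rho, average the +/-1 outcomes."""
    outcomes = []
    for _ in range(shots):
        s = np.eye(gens[0].shape[0], dtype=complex)
        for k in np.flatnonzero(rng.integers(0, 2, size=len(gens))):
            s = s @ gens[k]
        p_plus = np.real(np.trace(rho @ s) + 1) / 2                 # Tr[rho pi_s^+]
        outcomes.append(1 if rng.random() < p_plus else -1)
    return float(np.mean(outcomes))

# Toy 2 x 2 check with (assumed) global depolarizing noise of strength p.
beta = rng.integers(0, 8, size=4) * np.pi / 4
psi, gens = rotated_cluster(2, 2, beta)
d = len(psi)
p = 0.1
rho = (1 - p) * np.outer(psi, psi.conj()) + p * np.eye(d) / d
print("exact fidelity:", np.real(psi.conj() @ rho @ psi))
print("DFE estimate  :", dfe_estimate(rho, gens))
```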
In order to estimate the (average) XEB fidelities, we need to compute the ideal output probabilities \(P_{\beta}(x)\) and average those over the observed samples \(x\). This renders the XEB fidelities a computationally inefficient estimator of the fidelity. They are, however, _sample-efficient_ estimators [32], provided that the target distribution \(P_{\beta}\) satisfies the expected exponential shape for deep random quantum circuits (or larger cluster states). That is, to achieve an additive estimation error \(\epsilon\), a polynomial number of samples in \(n\) and \(1/\epsilon\) is required. In Section S5 of the SI, we provide the details of the estimation procedure. To date, XEB is the only available means of practically verifying (on average or in the single-instance) universal random quantum circuits. Here, we observe that in the measurement-based model of quantum computations _fully efficient_ (i.e., computationally and sample-efficiently) average-case verification is possible using single-qubit measurements. In fact, we observe that direct fidelity estimation can be extended to measure the average fidelity of random MBQC state preparations. To this end, we observe that the average state fidelity (3) can be expressed analogously to Eq. (2) as \[\overline{F}=\frac{1}{2^{nm}}\frac{1}{8^{nm}}\sum_{\beta\in\frac{\pi}{4}\cdot[8]^{nm}}\sum_{s_{\beta}\in\mathcal{S}_{\beta}}\sum_{\sigma=\pm 1}\langle\pi_{s_{\beta}}^{\sigma}\rangle_{\rho_{\beta}}\cdot\sigma, \tag{7}\] where \(\mathcal{S}_{\beta}\) denotes the stabilizer group of the locally rotated cluster state \(|\psi_{\beta}\rangle\) with rotation angles \(\beta\), \(\pi_{s_{\beta}}^{\sigma}\) is the projector onto the \(\sigma\)-eigenspace of \(s_{\beta}\in\mathcal{S}_{\beta}\). We also let \([8]=\{1,2,\ldots,8\}\) and \([k]^{l}=[k]\times\cdots\times[k]\) (\(l\) times). Hence, in order to estimate the average state fidelity with respect to the choice of \(\beta\), we simply need to sample uniformly random rotation angles \(\beta\), and elements \(s_{\beta}\) from the stabilizer group \(\mathcal{S}_{\beta}\) and then measure \(s_{\beta}\) on the state preparation \(\rho_{\beta}\) of \(|\psi_{\beta}\rangle\), yielding outcome \(\sigma\in\{\pm 1\}\). Averaging over those outcomes yields an estimator of the average state fidelity with the same sample complexity as direct fidelity estimation has for a single instance. As discussed below, the only assumption required to trust the validity of the result is that the noise in the local single-qubit measurements does not behave adversarially. Direct fidelity estimation and direct average fidelity estimation thus provide a unified method for efficiently assessing the single-instance quality and the average quality of MBQC state preparations. ### Finite sampling and error bars When performing direct fidelity estimation of a fixed cluster state, the simplest protocol is to sample an element \(s\in\mathcal{S}\) of the stabilizer group uniformly at random and measure \(s\) once; cf. Eq. (2). In this case, the samples are distributed binomially with ideal probability \(p=\sum_{s}\langle\pi_{s}^{\sigma}\rangle_{\rho}/2^{N}\), and the error on the mean estimation is given by the standard deviation of the observed binomial distribution. However, in practice, it is much cheaper to repeat a measurement of a stabilizer than to measure a new stabilizer, which requires a different measurement setting. This is why we estimate the fidelity according to the following protocol.
We sample \(K\) stabilizers uniformly at random and measure each of them \(M\) times, obtaining an empirical estimate of the conditional expectation value \(\mathbb{E}[\sigma|s]=\sum_{\sigma=\pm 1}\mathrm{Tr}[\rho\pi_{s}^{\sigma}]\sigma\). In Section S6 of the SI, we show that the variance of the fidelity estimator \(\hat{F}=(KM)^{-1}\sum_{i=1}^{K}\sum_{j=1}^{M}\sigma_{i,j}\), where \(\sigma_{i,j}\) is the outcome of measuring stabilizer \(s_{i}\) the \(j^{\text{th}}\) time, is given by \[\mathrm{Var}[\hat{F}]=\frac{4}{KM}(\mathbb{E}[p_{s}](1-\mathbb{E} [p_{s}]))+\frac{4}{K}\left(1-\frac{1}{M}\right)\mathrm{Var}[p_{s}]. \tag{8}\] Here, the expectation value and variance are taken over \(s\in\mathcal{S}\) and \(p_{s}=\mathrm{Tr}[\rho\pi_{s}^{+1}]\), respectively. Furthermore, the same results carry over to the average fidelity estimate, since sampling from the stabilizer group \(\mathcal{S}\) of a single cluster state is now replaced by sampling a random choice of angles \(\beta\), and random element of the corresponding stabilizer group \(\mathcal{S}_{\beta}\), not altering the variance. Eq. (8) gives rise to an optimal choice of \(K\) and \(M\) for a fixed total number of shots \(K\cdot M\), depending on the expectation value and variance of the stabilizer values \(p_{s}\) and the experimental trade-off between repetitions of the same measurement and changing the measurement setting. In particular, if the distinct elements of the stabilizer group have a small variance over the imperfect state preparation \(\rho\), a larger choice of \(M\) might be advantageous. In practice, for the instances in which we have abundant data, we subsample the data in order to remain in the situation \(M=1\) of Eq. (2), while in the case of sparse data, we make use of a larger number of shots \(M\) per stabilizer. The variance of the estimate of the XEB fidelity is also given by the law of total variance, generalizing Eq. (8), and spelled out in detail in Section S6 of the SI. Finally, for the TVD, we estimate the error using bootstrapping by resampling given the observed distribution. Specifically, we repeatedly sample from the empirical distribution the same number of times as the experiment and compute the TVD of the samples to the sampled distribution. The resulting TVD follows a Gaussian distribution of which we show the \(3\sigma\) interval estimated from \(1000\) iterations. ### Measurement errors A key assumption for the efficient verification of the cluster states prepared here is the availability of accurate, well-characterized single-qubit measurements. A deviation in the measurement directly translates into a deviation in the fidelity estimate, and hence a high measurement error in the worst case translates into a high error in the resulting fidelity estimate. Because the single-qubit measurements we use comprise single-qubit gates followed by readout in a fixed basis, the measurement error has two main contributions: (i) imperfections in the single-qubit rotations for the basis choice, and (ii) imperfections in the readout. The single-qubit gate errors are well characterized by randomized benchmarking, showing an average single-qubit Clifford error rate of \(3\pm 2\cdot 10^{-4}\)[34] for the recycling device and \(14\pm 1\cdot 10^{-4}\)[36] for the second device. The native \(Z\) measurement is then performed by scattering photons on the short-lived \(\mathrm{S}_{1/2}\leftrightarrow\mathrm{P}_{1/2}\) transition. 
Ions in the \(|0\rangle\) state will scatter photons, while ions in the \(|1\rangle\) state remain dark. Hence, there are two competing contributions to the readout error. On the one hand, long measurement times suffer from amplitude damping noise due to spontaneous decay of the \(|1\rangle\) state (lifetime \(\sim 1.15\mathrm{s}\)) during readout. On the other hand, for short readout times, the Poisson distributions for the two outcomes will start to overlap, leading to discrimination errors. In the experiments presented here, the second contribution is suppressed to well below \(10^{-5}\) by using measurement times of 1ms for the recycling device and 2ms on the non-recycling device, leaving only a spontaneous decay error of \(<1\cdot 10^{-3}\)[37] for the recycling device and \(<2\cdot 10^{-3}\) for the non-recycling device. Hence, the worst-case readout error is \(<1.5\cdot 10^{-3}\) per qubit for the recycling device and \(<3.5\cdot 10^{-3}\) per qubit for the second device. Given the single-qubit readout error \(e_{1}\), the overall measurement error on an \(n\)-qubit device is then given by \(e_{M}=1-(1-e_{1})^{n}\). Given a true pre-measurement state fidelity \(F\), we consider the effect of the measurement errors on the estimated fidelity \(\hat{F}\). In the one extreme case, the measurement errors flip the sign of the stabilizers with value \(+1\) on the pre-measurement state, but keep the sign of those with a \(-1\) outcome, resulting in a reduced state fidelity \(\hat{F}_{\mathrm{min}}=2((1+F)/2-e_{M}\cdot(1+F)/2)-1\). In the other extreme case, they flip the sign of only the stabilizers with value \(-1\) on the pre-measurement state yielding \(\hat{F}_{\mathrm{max}}=2((1+F)/2+e_{M}\cdot(1-F)/2)-1\). This defines the worst-case error interval for \(\hat{F}\) as \([(\hat{F}-e_{M})/(1-e_{M}),(\hat{F}+e_{M})/(1-e_{M})]\). If on the other hand the measurement errors are benign, i.e., uncorrelated from the circuit errors, they will flip all stabilizers regardless of their value on the pre-measurement state with equal probability. In this case, the measured fidelity satisfies \(\hat{F}=F\cdot(1-2e_{M})\) so that we can deduce the true fidelity \(F\) from the measured fidelity and the measurement error. Note that in this case, the measured state fidelity is always a lower bound to the true state fidelity. ### Noisy circuits In order to study the influence of experimental noise on the reliability and tightness of our bounds on the TVD, we artificially induce dephasing noise on the \(2\times 2\) cluster. This simulates a reduced spin-coherence time, which could come from laser phase noise or magnetic field noise. These are the dominant error sources in the experiment. To this end, we pick a fixed instance of the \(2\times 2\) cluster and add small random \(Z\) rotations on all qubits at roughly equidistant time steps. Specifically, we apply virtual \(Z\) gates (i.e., realized in software as an appropriate phase shift on all subsequent gate operations) after the initial local state preparation gates, and again after each MS gate, see Fig. 6. In each run of the experiment (with 50 shots each), we randomly pick rotation angles for the virtual \(Z\) gates from a normal distribution with 0 mean and standard deviation \(\sigma\). Here \(\sigma\) is a measure of the noise strength and corresponds to a local phase-flip probability of \(\xi/2\), where \(\xi=1-e^{-\sigma^{2}/2}\). 
If we want to engineer global, correlated noise, we use the same angle for all \(Z\) gates in a given "time-step", whereas for engineering local, uncorrelated noise we pick each angle independently. We then average these random choices over 50 instances for the fidelity estimate and 150 instances for the TVD. This averaging turns the random phase shifts into independent (correlated) dephasing channels in the case of local (global) noise. This effectively appears as single-qubit depolarizing noise after every two-qubit gate with a local Pauli error probability of \(3\gamma/4\), where \(\gamma=1-e^{-0.310\sigma^{2}}\), where the constant was obtained from a numerical fit to simulated data, see Section S5 of the SI for details. **Acknowledgements.** D.H. acknowledges funding from the U.S. Department of Defense through a QuICS Hartree fellowship. This work was completed while D. H. was visiting the Simons Institute for the Theory of Computing. The Berlin team acknowledges funding from the BMBF (DAQC, MUNIQC-ATOMS), the DFG (specifically EI 519/21-1 on paradigmatic quantum devices, but also CRC 183), the Einstein Foundation (Einstein Research Unit), the BMWi (EniQmA, PlanQK) and the Studienstiftung des Deutschen Volkes. This research is also part of the Munich Quantum Valley (K-8), which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. It has also received funding from the EU's Horizon 2020 research and innovation programme under the Quantum Flagship projects PASQuanS2 and Millenion. JBV acknowledges funding from EU Horizon 2020, Marie Sklodowska-Curie GA. Nr. 754446 - UGR Research and Knowledge Transfer Fund Athenea3i; Digital Horizon Europe project FQaCIA, GA. Nr. 1010/07558., and FEDER/Junta de Andalucia program A.FQM.752.UGR20. The Innsbruck team acknowledges support by the Austrian Science Fund (FWF), through the SFB BeyondC (FWF Project No. F7109) and the EU QuantERA project T-NiSQ (I-6001), and the Institut fur Quanteninformation GmbH. We also acknowledge funding Figure 6: **Noisy circuits for the \(2\times 2\) cluster.** Dephasing noise is simulated by adding random (virtual) \(Z\) rotations on all qubits after the initial state preparation and after each MS gate, see Methods. This amounts to roughly equidistant time steps. The rotation angles for the \(Z\) rotations are drawn randomly from a normal distribution with zero mean and standard deviation \(\sigma\in[0,0.2\pi]\) every 50 shots. For correlated noise, the parameters in each time step are chosen equally and for uncorrelated noise, they are chosen independently. from the EU H2020-FETFLAG-2018-03 under Grant Agreement No. 820495, by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via US Army Research Office (ARO) grant No. W911NF-20-1-0007, and the US Air Force Office of Scientific Research (AFOSR) via IOE Grant No. FA9550-19-1-7044 LASCEM. This research was funded by the European Union under Horizon Europe Programme--Grant Agreement 101080086--NeQST. Funded by the European Union (ERC, QUDITS, 101039522). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.
2306.03271
Dual self-distillation of U-shaped networks for 3D medical image segmentation
U-shaped networks and its variants have demonstrated exceptional results for medical image segmentation. In this paper, we propose a novel dual self-distillation (DSD) framework for U-shaped networks for 3D medical image segmentation. DSD distills knowledge from the ground-truth segmentation labels to the decoder layers and also between the encoder and decoder layers of a single U-shaped network. DSD is a generalized training strategy that could be attached to the backbone architecture of any U-shaped network to further improve its segmentation performance. We attached DSD on two state-of-the-art U-shaped backbones, and extensive experiments on two public 3D medical image segmentation datasets (cardiac substructure and brain tumor) demonstrated significant improvement over those backbones. On average, after attaching DSD to the U-shaped backbones, we observed an improvement of 4.25% and 3.15% in Dice similarity score for cardiac substructure and brain tumor segmentation respectively.
Soumyanil Banerjee, Ming Dong, Carri Glide-Hurst
2023-06-05T21:41:00Z
http://arxiv.org/abs/2306.03271v1
# Dual self-distillation of U-shaped networks for 3D medical image segmentation ###### Abstract U-shaped networks and its variants have demonstrated exceptional results for medical image segmentation. In this paper, we propose a novel dual self-distillation (DSD) framework for U-shaped networks for 3D medical image segmentation. DSD distills knowledge from the ground-truth segmentation labels to the decoder layers and also between the encoder and decoder layers of a single U-shaped network. DSD is a generalized training strategy that could be attached to the backbone architecture of any U-shaped network to further improve its segmentation performance. We attached DSD on two state-of-the-art U-shaped backbones, and extensive experiments on two public 3D medical image segmentation datasets (cardiac substructure and brain tumor) demonstrated significant improvement over those backbones. On average, after attaching DSD to the U-shaped backbones, we observed an improvement of 4.25% and 3.15% in Dice similarity score for cardiac substructure and brain tumor segmentation respectively. Keywords:Dual self-distillation U-shaped networks segmentation. ## 1 Introduction Deep learning algorithms such as Convolutional Neural Networks (CNNs) have proven to be extremely useful in performing medical image segmentation [16], which is a very important and challenging task in medical image analysis [13, 23]. One of the breakthrough algorithms which produced state-of-the-art results for end-to-end 2D and 3D medical image segmentation task is the U-Net [19] and the 3D U-Net [5], respectively. These U-shaped architectures consist of a CNN-based contracting encoder to capture the context of the input image and a CNN-based expanding decoder to localize the object in the image. Skip-connections are included between the encoder and decoder to concatenate the feature maps from encoder layers to the corresponding decoder layers. These skip-connections allow U-Nets to use the fine-grained details learned from the encoder blocks and construct a localized image in the decoder blocks. More recently, vision transformers (ViT) [6] have been used in the encoder path of U-shaped networks [8, 9]. The self-attention mechanism of transformers helped the models capture the long-range dependencies between regions of the image. Knowledge distillation is the process by which a large pre-trained model that acts as the teacher network can transfer its knowledge to a smaller, lightweight model that acts as the student network during its training [10]. It was first proposed to reduce the inference time by using a lightweight model while maintaining similar accuracy as the large pre-trained model. Recently, knowledge distillation has been used to improve the performance of lightweight networks for semantic segmentation [14] including medical image segmentation tasks [18]. The need of transferring knowledge from a large teacher network to a smaller student network was later eliminated by the process of self-distillation [25]. Self-distillation uses the deepest layer of a single model which acts as the teacher network to distill the knowledge to the shallower layers of the same model which acts as the student network. Self-distillation has been applied for numerous computer vision tasks such as image classification [25, 24] and object detection [26]. A rigorous theoretical explanation of knowledge and self-distillation was recently provided in [1]. 
In this paper, we propose a novel 3D dual self-distillation (DSD) framework that could be attached to any U-shaped image segmentation backbone. In DSD, the deepest encoder and decoder of the U-shaped backbones act as the teacher network for the shallower encoders and decoders which act as student networks. We discovered that there is "dark knowledge" [1, 10, 20] in the output of the deepest encoder which is at the bottom of the contracting encoder path of the U-shaped network. The "dark knowledge" is also present in the output of the deepest decoder which is at the top of the expanding decoder path of the network. Thus, in our DSD framework, this "dark knowledge" distills in a bottom-up manner on the encoder side and in a reverse top-down manner on the decoder side of a U-shaped backbone. Additionally, DSD also includes the distillation of the knowledge from the ground-truth labels to the decoder layers of the U-shaped network, which is a process known as deep supervision in medical image segmentation [7]. Thus, DSD leverages the benefits of deep supervision by overcoming optimization difficulties and achieving faster convergence [7]. Our major contributions are: (i) To the best of our knowledge, this is the first application of self-distillation to U-shaped networks for medical image segmentation. Our novel design of DSD between encoders and decoders could be generalized to any U-shaped segmentation backbone. (ii) Our proposed DSD is a more general approach to improve the segmentation performance of any U-shaped backbone, and the widely-adopted deep supervision for medical image segmentation is a special case of our model. (iii) We performed extensive experiments on two public 3D medical image segmentation datasets (one with cardiac substructures and the other with brain tumors), with DSD attached to two state-of-the-art U-shaped backbones (one ViT-based and the other CNN-based encoder) and demonstrated significant quantitative and qualitative improvements over those backbones without DSD. ## 2 Method In the following sections, we provide a detailed explanation of our proposed DSD framework as shown in Fig. 4. ### U-shaped backbone The U-shaped backbone maps an input image \(I\) (\(I\in\mathbb{R}^{C\times H\times W\times D}\) with \(H\), \(W\), \(D\) denoting the height, width and depth of a 3D input image, respectively, and \(C\) denoting the number of imaging modalities/sequences such as CT or multi-modal MRI) to the ground-truth (GT) labels \(G\) (\(G\in\mathbb{R}^{K\times H\times W\times D}\) with \(K\) classes). The most common loss function used by a U-shaped network [8, 9] is the Dice Cross-Entropy (CE) loss (\(L_{DCE}\)), which is a compound loss function defined for 3D multi-class image segmentation as follows: \[\begin{split} L_{DCE}^{Y}&=L_{dice}^{Y}+L_{CE}^{Y}\\ &=1-\frac{2}{K}\sum_{k=1}^{K}\frac{\sum_{p=1}^{N}G_{p,k}Y_{p,k}}{\sum_{p=1}^{N}G_{p,k}^{2}+\sum_{p=1}^{N}Y_{p,k}^{2}}-\frac{1}{N}\sum_{p=1}^{N}\sum_{k=1}^{K}G_{p,k}\log Y_{p,k}\end{split} \tag{1}\] where \(G_{p,k}\in\mathbb{R}^{K\times H\times W\times D}\) denotes the one-hot encoded ground-truth labels and \(Y_{p,k}\in\mathbb{R}^{K\times H\times W\times D}\) denotes the probability (softmax) output of the network for class \(k\) at pixel \(p\). \(N=H*W*D\) denotes the total number of pixels in input image \(I\) and ground-truth labels \(G\).
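A minimal PyTorch-style sketch of the compound loss in Eq. (1) is given below; it is an illustration of the formula rather than the authors' implementation, and it assumes a single volume with the class axis first, i.e. \(Y\) and \(G\) of shape \((K,H,W,D)\).

```python
import torch

def dice_ce_loss(Y: torch.Tensor, G: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Compound Dice + cross-entropy loss of Eq. (1).

    Y: softmax output of the network, shape (K, H, W, D).
    G: one-hot encoded ground-truth labels, same shape.
    """
    K = Y.shape[0]
    Yf = Y.reshape(K, -1)                      # flatten the N = H*W*D voxels
    Gf = G.reshape(K, -1)
    # Dice term: 1 - (2/K) * sum_k (sum_p G Y) / (sum_p G^2 + sum_p Y^2)
    num = (Gf * Yf).sum(dim=1)
    den = (Gf ** 2).sum(dim=1) + (Yf ** 2).sum(dim=1) + eps
    dice = 1.0 - (2.0 / K) * (num / den).sum()
    # Cross-entropy term: -(1/N) * sum_p sum_k G log Y
    ce = -(Gf * torch.log(Yf.clamp_min(eps))).sum(dim=0).mean()
    return dice + ce

# Tiny smoke test with random tensors (shapes chosen only for illustration).
K, H, W, D = 3, 8, 8, 8
logits = torch.randn(K, H, W, D)
Y = torch.softmax(logits, dim=0)
G = torch.nn.functional.one_hot(torch.randint(0, K, (H, W, D)), K).permute(3, 0, 1, 2).float()
print(dice_ce_loss(Y, G))
```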
The Dice loss \(L_{dice}^{Y}\) measures the pixel-wise similarity while the cross-entropy loss \(L_{CE}^{Y}\) measures the pixel-wise difference in distributions between the network output \(Y\) and ground-truth labels \(G\). This loss is back-propagated to the network to update the weights of the encoders and decoders. Figure 1: Self-distillation demonstrated with an U-shaped network for 3D medical image segmentation. \(Y\) and \(G\) indicate the softmax of network output and one-hot encoded ground-truth (GT) labels respectively. \(E_{i}|_{i=1}^{Z}\) and \(D_{i}|_{i=1}^{Z}\) denote the output of the bottleneck module on the encoder and decoder side respectively. \(T\) and \(S\) denote the teacher and student probability distributions respectively. All dashed lines shown are only used during training and removed during inference. ### Bottleneck Module The bottleneck module shown by the red colored boxes in Fig. 4 constitutes an integral component of our proposed framework, which converts the feature maps \(F\) (\(F\in\mathbb{R}^{K^{\prime}\times H^{\prime}\times W^{\prime}\times D^{\prime}}\)) obtained from different encoder and decoder layers to a probability distribution \(P\) (\(P\in\mathbb{R}^{K\times H\times W\times D}\)) of the same shape as the network output \(Y\). For the feature maps \(F\), \(K^{\prime},H^{\prime},W^{\prime}\) and \(D^{\prime}\) denote the number of channels, height, width and depth respectively at a given layer and they vary depending on the position of the encoder (Encoder \(i|_{i=1}^{Z}\)) and decoder (Decoder \(i|_{i=1}^{Z}\)) layers. The bottleneck module consist of three layers: (i) 1D convolution layer: this layer changes the number of channels of feature maps obtained from the encoder and decoder layers to match the number of output classes \(K\) (from \(F\in\mathbb{R}^{K^{\prime}\times H^{\prime}\times W^{\prime}\times D^{\prime}}\) to \(F^{\prime}\in\mathbb{R}^{K\times H^{\prime}\times W^{\prime}\times D^{\prime}}\)), (ii) Deconvolution layer: this layer upsamples the feature maps obtained from the 1D convolution layer to generate logits (\(L\)) that match the dimension of the output \(Y\) (from \(F^{\prime}\in\mathbb{R}^{K\times H^{\prime}\times W^{\prime}\times D^{\prime}}\) to \(L\in\mathbb{R}^{K\times H\times W\times D}\)), (iii) Softmax layer: this layer converts the logits \(L\) (\(L\in\mathbb{R}^{K\times H\times W\times D}\)) to soft labels which is a probability distribution \(P\) of the same shape as \(L\), i.e. \(P_{p,k}=\frac{\exp^{L_{p,k}/\tau}}{\sum_{j=1}^{K}\exp^{L_{p,j}/\tau}}\), where \(\tau\) (\(\tau>1\)) denotes the temperature to generate the soft labels, and \(p\in N\) and \(k\in K\) are indices for pixels and classes, respectively. ### U-shaped backbone with Dual Self-distillation (DSD) We propose a novel dual self-distillation (DSD) framework for U-shaped backbones as shown by the purple and blue dashed arrows in Fig. 4. Our DSD framework consists of two main components. (i) **Distillation from ground-truth labels**: the first part (purple dashed arrows in Fig. 4) is the distillation of knowledge from the ground-truth labels \(G\) to each decoder of the U-shaped network. This process is known as deep supervision in medical image segmentation [7]. For our DSD framework, we calculate the Dice Cross-Entropy (DCE) loss (Eq. 1) between each decoder layer's softmax output \(D_{i}|_{i=1}^{Z}\) and the ground-truth labels \(G\). 
This loss is defined as: \[\begin{split} L_{DS}&=L_{DCE}^{Y}+\eta\sum_{i=1}^{Z}L_{DCE}^{D_{i}}=L_{DCE}^{Y}\\ &+\eta\sum_{i=1}^{Z}\left(1-\frac{2}{K}\sum_{k=1}^{K}\frac{\sum_{p=1}^{N}G_{p,k}D_{i_{p,k}}}{\sum_{p=1}^{N}G_{p,k}^{2}+\sum_{p=1}^{N}D_{i_{p,k}}^{2}}-\frac{1}{N}\sum_{p=1}^{N}\sum_{k=1}^{K}G_{p,k}\log D_{i_{p,k}}\right)\end{split} \tag{2}\] where \(L_{DS}\) denotes the deep supervision loss, \(Z\) denotes the number of decoders in the U-shaped architecture, \(L_{DCE}^{D_{i}}\) denotes the Dice cross-entropy loss between the \(i^{th}\) decoder and \(G\), and \(\eta\) denotes the coefficient that controls the amount of supervision from \(G\) to \(D_{i}|_{i=1}^{Z}\). (ii) **Distillation between encoder and decoder layers:** the second part (blue dashed arrows in Fig. 1) is the distillation of knowledge between encoder and decoder layers of the U-shaped network. On the encoder side, the deepest encoder (Encoder Z) forms the teacher network to the shallower encoders (Encoder 1, 2,..., (Z-1)) which form the student networks. We reverse the order of teacher and student on the decoder side due to the deconvolution operation. Hence, the deepest decoder (Decoder 1) forms the teacher network to the shallower decoders (Decoder 2, 3,..., Z) which form the student networks. For all the teacher-student pairs in the encoders and decoders of the U-shaped network, we compute the pixel-wise Kullback-Leibler (KL) divergence [12] between the output probability distributions (softmax) of teacher and student as follows: \[\begin{split} L_{KL}&=\alpha_{1}\sum_{i=1}^{Z-1}KL(E_{i},E_{Z})+\alpha_{2}\sum_{i=2}^{Z}KL(D_{i},D_{1})\\ &=\alpha_{1}\sum_{i=1}^{Z-1}\left(\frac{1}{N}\sum_{p=1}^{N}\sum_{k=1}^{K}E_{Z_{p,k}}\log\frac{E_{Z_{p,k}}}{E_{i_{p,k}}}\right)+\alpha_{2}\sum_{i=2}^{Z}\left(\frac{1}{N}\sum_{p=1}^{N}\sum_{k=1}^{K}D_{1_{p,k}}\log\frac{D_{1_{p,k}}}{D_{i_{p,k}}}\right)\end{split} \tag{3}\] where \(KL(P^{S},P^{T})\) is the KL divergence between student (\(P^{S}\)) and teacher (\(P^{T}\)) probability distributions, \(E_{i}\) and \(D_{i}\) are the \(i^{th}\) shallow encoder's and decoder's (student's) softmax outputs (\(P^{S}\)), respectively, \(E_{Z}\) and \(D_{1}\) are the deepest encoder's and decoder's (teacher's) softmax outputs (\(P^{T}\)), respectively, \(Z\) is the number of encoders and decoders, \(K\) denotes the number of classes of the ground-truth labels and \(N\) denotes the total number of pixels. We define our proposed dual self-distillation loss \(L_{DSD}\) as follows: \[L_{DSD}=L_{DCE}^{Y}+\eta\sum_{i=1}^{Z}L_{DCE}^{D_{i}}+\alpha_{1}\sum_{i=1}^{Z-1}KL(E_{i},E_{Z})+\alpha_{2}\sum_{i=2}^{Z}KL(D_{i},D_{1}) \tag{4}\] where \(\alpha_{1}\) and \(\alpha_{2}\) denote the coefficients that control the amount of self-distillation between the encoder and decoder layers, respectively. Note that our generalized DSD framework reduces to deep supervision when \(\alpha_{1},\alpha_{2}=0\). ## 3 Experiments and Results We attached our dual self-distillation (DSD) framework to two state-of-the-art U-shaped backbones and applied it to two benchmark datasets for medical image segmentation tasks, specifically whole heart and brain tumor segmentation. ### Datasets **MMWHS dataset (Heart)** - High-resolution 3D CT angiography datasets of 20 patients from the Multi-Modal Whole Heart Segmentation (MMWHS) dataset [15, 27, 28], with 7 classes of ground-truth labels of cardiac substructures, were used. We split the data into a train/validation/test set of 12/4/4 patients. 
**MSD dataset (Brain)** - The brain tumor segmentation task [3, 4, 17] was used from the Medical Segmentation Decathlon (MSD) dataset [2, 21]. This task comprised of 484 patients having multi-modal multi-site MRI data with 3 classes of ground truth labels and 4-channel multi-modal input (FLAIR, T1w, T1gd, T2w). We split the data into a train/validation/test set of 388/72/24 patients. ### Experimental setup and implementation details In our experiments, we selected the UNETR [9] and nnU-Net [11] as our U-shaped backbones and attached the DSD framework to them. These backbones are selected because they have recently shown promising and state-of-the-art results for several medical image segmentation tasks. For all experiments, the training was performed including the background and evaluated only on the foreground classes. All DSD experiments were performed with \(\eta=1\) and \(\alpha_{1},\alpha_{2}=1\). The temperature (\(\tau\)) used to generate the soft labels in our DSD experiments was 3. These hyperparameters were empirically decided based on the performance on the validation set. The experiments were conducted with PyTorch v1.12 and MONAI v0.9 framework using a NVIDIA Quadro RTX 6000 GPU. Quantitative evaluations between predicted and ground truth segmentation regions were performed using the Dice similarity coefficient (Dice score) [22] and \(95^{th}\) percentile of the Hausdorff distance (HD95 in mm)[9, 11]. ### Ablation Study Table 1 presents the results of an ablation study with both UNETR and nnU-Net on MMWHS testing set comparing the following components of our DSD framework: (i) Deep Supervision (DS: \(\eta=1\), \(\alpha_{1},\alpha_{2}=0\) in Eq. 4): DSD in this case is reduced to Deep Supervision. (ii) Self-Distillation between Encoders (SDE: \(\eta=1\), \(\alpha_{1}=1\), \(\alpha_{2}=0\) in Eq. 4): This shows the effect of self-distillation only between the encoder layers in DSD. (iii) Self-Distillation between Decoders (SDD: \(\eta=1\), \(\alpha_{1}=0\), \(\alpha_{2}=1\) in Eq. 4): This shows the effect of self-distillation only between the decoder layers in DSD. (iv) Dual Self-Distillation (DSD: \(\eta=1\), \(\alpha_{1}=1\), \(\alpha_{2}=1\) in Eq. 4): This shows the effect of our proposed dual self-distillation. Clearly, dual self-distillation achieves the best performance. As the original self-distillation implementation used the L2 loss between the feature maps of the teacher and student [25, 24], we applied self-distillation from the feature maps of encoders and decoders. However, no noticeable improvement in performance was observed with this strategy, likely because we are performing pixel-level classification where the loss (Eq. 4) from each pixel is back-propagated to update the network weights, and hence the loss from feature maps become redundant. This is different from the image-level classification tasks [25, 24], where \begin{table} \begin{tabular}{l|c c|c c|c c|c c} \hline Study & \multicolumn{2}{c|}{DS} & \multicolumn{2}{c|}{SDE} & \multicolumn{2}{c|}{SDD} & \multicolumn{2}{c}{DSD} \\ settings & & & & & & & \\ \hline Network & Dice & HD95 & Dice & HD95 & Dice & HD95 & Dice & HD95 \\ \hline UNETR & 82.4 & 15.4 & 83.6 & 12.8 & 83.8 & 11.6 & **84.9** & **8.7** \\ nnU-Net & 85.2 & 16.3 & 85.5 & 14.8 & 87.2 & 13.2 & **87.9** & **9.2** \\ \hline \end{tabular} \end{table} Table 1: The mean Dice score and HD95 of all 7 cardiac substructures in MMWHS testing set obtained by different components of our DSD framework. 
The highest Dice score (%) and lowest HD95 (in mm) are marked in **bold**. the loss from feature maps help improve the classification performance. Thus, feature map-based distillation was not included in our remaining experiments. ### Prediction with MMWHS dataset Table 2 summarizes the Dice score and HD95 of the 7 classes for the CT angiography (CTA) MMWHS dataset on the testing set using UNETR and nnU-Net as the backbone along with the DSD framework. DSD outperforms the UNETR backbone with 4.2% increase in Dice score and 8.4 mm decrease in HD95 and outperforms the nnU-Net backbone with 4.3% increase in Dice score and 10.6 mm decrease in HD95. DSD also significantly reduced the variance of Dice score and HD95 for the 4 patients in the testing set. A qualitative comparison between the ground-truth (GT) labels and prediction with UNETR backbone, UNETR with DSD, nnU-Net backbone and nnU-Net with DSD on a 2D CT slice is shown in Fig. 2 (top row) and 3D volume rendering (bottom row). \begin{table} \begin{tabular}{c|c c|c c||c c|c c} \hline Network & \multicolumn{2}{c|}{UNETR} & \multicolumn{2}{c||}{UNETR with DSD} & \multicolumn{2}{c|}{nnU-Net} & \multicolumn{2}{c}{nnU-Net with DSD} \\ \hline Structure & Dice & HD95 & Dice & HD95 & Dice & HD95 & Dice & HD95 \\ \hline MYO & 85.0\(\pm\)4.7 & 5.0\(\pm\)4.9 & 85.8\(\pm\)4.6 & 3.8\(\pm\)3.4 & 83.7\(\pm\)5.7 & 20.7\(\pm\)37.7 & 88.5\(\pm\)1.8 & 1.6\(\pm\)0.3 \\ LA & 83.6\(\pm\)10.7 & 13.6\(\pm\)18.9 & 88.6\(\pm\)8.2 & 3.6\(\pm\)2.0 & 89.1\(\pm\)6.2 & 3.8\(\pm\)2.0 & 88.6\(\pm\)7.1 & 3.9\(\pm\)2.5 \\ LV & 86.9\(\pm\)10.8 & 5.9\(\pm\)4.0 & 90.9\(\pm\)3.7 & 3.9\(\pm\)2.2 & 89.3\(\pm\)5.3 & 18.0\(\pm\)27.8 & 92.6\(\pm\)3.5 & 2.1\(\pm\)0.8 \\ RA & 76.7\(\pm\)12.8 & 22.0\(\pm\)30.9 & 80.9\(\pm\)9.9 & 8.6\(\pm\)7.9 & 77.2\(\pm\)10.6 & 28.0\(\pm\)19.3 & 83.5\(\pm\)7.9 & 4.1\(\pm\)2.0 \\ RV & 83.2\(\pm\)10.0 & 22.3\(\pm\)34.6 & 83.2\(\pm\)9.6 & 5.6\(\pm\)3.2 & 80.1\(\pm\)14.7 & 26.1\(\pm\)43.6 & 87.8\(\pm\)6.7 & 27.5\(\pm\)50.0 \\ AA & 78.6\(\pm\)24.0 & 13.3\(\pm\)13.3 & 87.5\(\pm\)8.7 & 16.0\(\pm\)16.9 & 87.2\(\pm\)13.2 & 16.2\(\pm\)23.0 & 92.1\(\pm\)5.2 & 4.0\(\pm\)3.4 \\ PA & 71.2\(\pm\)15.1 & 37.4\(\pm\)21.3 & 77.5\(\pm\)13.3 & 19.3\(\pm\)26.1 & 78.4\(\pm\)13.0 & 25.7\(\pm\)29.1 & 82.0\(\pm\)8.8 & 21.1\(\pm\)25.6 \\ \hline mean\(\pm\)std & 80.7\(\pm\)13.2 & 17.1\(\pm\)21.6 & **84.9\(\pm\)8.9** & **8.7\(\pm\)12.4** & 83.6\(\pm\)10.4 & 19.8\(\pm\)26.7 & **87.9\(\pm\)6.7** & **9.2\(\pm\)21.2** \\ \hline \end{tabular} \end{table} Table 2: Quantitative comparison on high-resolution cardiac CTA MMWHS dataset with UNETR and nnU-Net. The mean and standard deviation of Dice score (unitless) and HD95 (mm) are shown for each sub-structure. The highest Dice score and lowest HD95 are marked in **bold**. Label abbreviations are provided in Fig. 2. Figure 2: Qualitative comparison of an axial slice with ground-truth (GT) labels (on CTA) and predictions with UNETR and nnU-Net, highlighting the improved segmentations with DSD. Additional visualizations are provided in supplementary material. ### Prediction with Brain Tumor dataset Table 3 summarizes the prediction results for the MSD dataset using UNETR and nnU-Net as backbone along with the DSD framework. DSD outperforms the UNETR backbone with 3.6% increase in Dice score and 9.2 mm decrease in HD95 and outperforms the nnU-Net backbone with 2.7% increase in Dice score and 3.8 mm decrease in HD95. 
Similar to the observation in the MMWHS dataset, DSD significantly reduces the variance of Dice score and HD95 for the 24 patients in the testing set. A qualitative comparison between the ground-truth (GT) labels and predictions with UNETR backbone, UNETR with DSD, nnU-Net backbone and nnU-Net with DSD on a 2D slice (on FLAIR MRI) is shown in Fig. 3. ## 4 Conclusion In this paper, we introduced a novel dual self-distillation framework which could be attached into any U-shaped backbones for medical image segmentation. We incorporated our DSD framework into UNETR and nnU-Net and evaluated it on two benchmark datasets with promising results. Our results demonstrated that DSD is a generalized training strategy that could further boost the segmentation performance of these U-shaped networks. **This work is not peer-reviewed**. \begin{table} \begin{tabular}{c|c c|c c||c c} \hline Network & \multicolumn{2}{c|}{UNETR} & \multicolumn{2}{c||}{UNETR with DSD} & \multicolumn{2}{c|}{nnU-Net} & \multicolumn{2}{c}{nnU-Net with DSD} \\ \hline Structure & Dice & HD95 & Dice & HD95 & Dice & HD95 & Dice & HD95 \\ \hline WT & 67.6\(\pm\)15.5 48.4\(\pm\)29.4 & 74.5\(\pm\)13.8 & 29.9\(\pm\)28.4 & 75.7\(\pm\)13.3 25.7\(\pm\)27.9 & 78.5\(\pm\)11.1 & 19.0\(\pm\)25.2 \\ ET & 56.1\(\pm\)30.2 18.8\(\pm\)26.5 & 58.7\(\pm\)28.6 & 13.5\(\pm\)21.2 & 65.1\(\pm\)28.4 18.8\(\pm\)26.6 & 67.8\(\pm\)28.5 & 15.7\(\pm\)24.1 \\ TC & 81.4\(\pm\)14.0 14.8\(\pm\)26.7 & 82.8\(\pm\)9.7 & 10.9\(\pm\)22.7 & 81.8\(\pm\)15.5 10.9\(\pm\)25.2 & 84.4\(\pm\)10.1 & 9.6\(\pm\)22.8 \\ \hline mean\(\pm\)std & 68.4\(\pm\)23.3 27.3\(\pm\)31.1 & **72.0\(\pm\)21.4** **18.1\(\pm\)25.4** & 74.2\(\pm\)21.1 18.5\(\pm\)26.9 & **76.9\(\pm\)19.6** **14.7\(\pm\)24.0** \\ \hline \end{tabular} \end{table} Table 3: Quantitative comparison for brain tumor segmentation task of MSD dataset with UNETR and nnU-Net. The mean and standard deviation of Dice score and HD95 are shown for each sub-structure. The highest Dice score (%) and lowest HD95 (in mm) are marked in **bold**. Label abbreviations are provided in Fig. 3. Figure 3: Qualitative comparison of an axial slice with ground-truth (GT) labels (on FLAIR MRI) and predictions from UNETR and nnU-Net, highlighting the improvements with DSD. Additional visualizations are provided in supplementary material. ## 5 Acknowledgement Research reported in this publication was partially supported by the National Institute of Health under Award Number NIHR01HL153720. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors are in collaboration with Modus Medical Devices and GE Healthcare.
2310.11193
Thermally induced localization of dopants in a magnetic spin ladder
I unveil a novel variant of Anderson localization. This emergent phenomenon pertains to the motion of a dopant in a thermal spin lattice, rendered localized by thermal fluctuations. This is in stark contrast to the intrinsic origin of localization for quenched disorder. The system of interest consists of spin-$1/2$ particles organized in a two-leg ladder with nearest neighbor Ising interactions $J$. The motion of a hole -- the dopant -- is initialized by suddenly removing a spin from the thermal spin ensemble, which then moves along the ladder via nearest neighbor hopping $t$. I find that the hole remains \emph{localized} for all values of $J/t$ and for \emph{all} nonzero temperatures. The origin is an effective disorder potential seen by the hole and induced by thermal spin fluctuations. Its length scale is found to match with the underlying spin-spin correlation length at low temperatures. For ferromagnetic couplings ($J<0$), the associated localization length of the hole increases with decreasing temperature and becomes proportional to the correlation length at low temperatures, asymptotically delocalizing at low temperatures. For antiferromagnetic couplings ($J>0$), there is a smooth crossover between thermal localization at high temperatures to localization driven by the antiferromagnetic order at low temperatures. At infinite temperatures, the dynamics becomes independent of the sign of the spin coupling, whereby the localization length is a universal function of $|J|/t$, diverging as $(t/J)^{2}$ for $|J| \ll t$. Finally, I analyze a setup with Rydberg-dressed atoms, which naturally realizes finite range Ising interactions, accessible in current experimental setups. I show that the discovered localization phenomenon can be probed on experimentally accessible length- and timescales, providing a strong testing ground for my predictions.
K. Knakkergaard Nielsen
2023-10-17T12:15:00Z
http://arxiv.org/abs/2310.11193v2
# Thermally induced localization of dopants in a magnetic spin ladder ###### Abstract I unveil a novel variant of Anderson localization. This emergent phenomenon pertains to the motion of a dopant in a thermal spin lattice, rendered localized by thermal fluctuations. This is in stark contrast to the intrinsic origin of localization for quenched disorder. The system of interest consists of spin-\(1/2\) particles organized in a two-leg ladder with nearest neighbor Ising interactions \(J\). The motion of a hole - the dopant - is initialized by suddenly removing a spin from the thermal spin ensemble, which then moves along the ladder via nearest neighbor hopping \(t\). I find that the hole remains _localized_ for all values of \(J/t\) and for _all_ nonzero temperatures. The origin is an effective disorder potential seen by the hole and induced by thermal spin fluctuations. Its length scale is found to match with the underlying spin-spin correlation length at low temperatures. For ferromagnetic couplings (\(J<0\)), the associated localization length of the hole increases with decreasing temperature and becomes proportional to the correlation length at low temperatures, asymptotically delocalizing at low temperatures. For antiferromagnetic couplings (\(J>0\)), there is a smooth crossover between thermal localization at high temperatures to localization driven by the antiferromagnetic order at low temperatures. At infinite temperatures, the dynamics becomes independent of the sign of the spin coupling, whereby the localization length is a universal function of \(|J|/t\), diverging as \((t/|J|)^{5/3}\) for \(|J|\ll t\). Finally, I analyze a setup with Rydberg-dressed atoms, which naturally realizes finite range Ising interactions, accessible in current experimental setups. I show the discovered localization phenomenon can be probed on experimentally accessible length- and timescales, providing a strong testing ground for my predictions. ## I Introduction The motion of dopants in magnetic spin lattices is crucial to our understanding of strongly correlated materials. The possible formation of polaronic quasiparticles and their induced interactions are believed [1; 2; 3] to be deeply connected to high-temperature superconductivity [4]. Indeed, the behavior of such magnetic polarons in antiferromagnetic lattices [5] has been shown to compare very well with exact diagonalization studies at zero temperature [6; 7; 8]. Furthermore, exciting new experiments has enabled the direct observation of dopant motion [9], made possible by the quantum simulation of Fermi-Hubbard-type models [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26] combined with single-site resolution techniques [27; 28; 29; 30]. The observed dynamics was successfully explained [31] by the correlated formation and propagation of magnetic polarons [32], in which the dopant eventually slows down and moves with a greatly reduced propagation speed. Despite these recent successes, there is still debate about the accuracy of this quasiparticle description [33; 34; 35]. The observation of such propagation dynamics [9], as well as the measurement of the spatial structures appearing around dopants [19], is a major new vantage point for our microscopic understanding of these systems. Indeed, previous work has mainly focused either on macroscopic observables such as currents driven by extrinsic force fields, spectroscopic measurements [36; 37; 38; 39], or Ramsey interferometry [40; 41; 42; 43; 44]. 
While the measurement of currents gives invaluable insights into e.g. the physics of topological systems [45; 46], and spectral analyses gives access to some aspects of the appearing quasiparticles, it does not offer us detailed knowledge about their propagation. In particular, it does not provide a deep and microscopic understanding of the impact of the order - or lack thereof - of the environment. Recently, theoretical studies of dopant motion in thermal spin lattices [47; 48; 49] has ventured into this new paradigm. While some evidence of delocalization above the Neel temperature in a spin Ising environment [49] and hints of diffusive behavior at intermediate timescales at infinite temperatures has been seen for a more generic Fermi-Hubbard setup [47], these studies were limited to fairly short evolution times and/or system sizes. As a result, the nature of the propagation on long timescales remains unsettled. Intrigued by these investigations, I study the motion of a dopant in a thermal Ising spin ensemble. In particular, I consider a mixed-dimensional model in a two-leg ladder [50; 51; 52]. Here, the doped hole is allowed to move _only along_ the ladder with nearest neighbor hopping \(t\), while the spin-\(1/2\) particles are assumed to couple with Ising-type nearest neighbor spin couplings \(J\) [Fig. 1(a)]. I investigate both ferro- (\(J<0\)) and antiferromagnetic (\(J>0\)) couplings. The non-equilibrium motion of a hole at zero temperature in these two scenarios features highly distinct behaviors. Indeed, while the hole for antiferromagnetic couplings is localized due to a confining string potential [52], it moves completely freely in the ferromagnetic phase. However, at any nonzero temperatures the system is disordered due to its one-dimensional nature. As a result, I find that the hole is _always_ localized at any value of \(J/t\) and at _any nonzero temperature_. Exemplary results for the localization length is shown in Fig. 1(b). This furthermore reveals that the increasing localization length on the ferromagnetic side becomes proportional to the underlying spin-spin correlation length of the ensemble at low temperatures. The underlying reason for the localization is traced back to an omnipresent _disorder potential_, whose strength increases for increasing temperature [see Fig. 1(c)]. At low temperatures and ferromagnetic couplings, the underlying length scale of the disorder potential is the spin-spin correlation length [Fig. 1(d)], giving rise to the simple proportionality between the localization and correlation lengths in this regime. Moreover, as infinite temperatures are reached, the disorder potential reaches its full strength, and the hole motion no longer depends on the sign of the magnetic interactions. In this universal regime, the localization length becomes a universal function of \(|J|/t\), diverging in a power-law fashion as \((t/|J|)^{5/3}\) for \(|J|/t\lesssim 2\). The computation of these results rests on a combination of two precise approaches. First, for a specific spin realization, I determine numerically exactly the non-equilibrium hole motion. Second, using large-scale Monte Carlo sampling of the thermal ensemble, I determine the appropriate thermal average of these pure state evolutions. I am, hereby, able to go to large system sizes of \(800\) sites and arbitrarily long evolution times, giving very small estimated statistical errors. 
The present study establishes rare insights into how the disorder in the underlying spin lattice crucially impacts the motion of dopants. Moreover, it describes an abrupt change in their qualitative characteristics. Indeed, for ferromagnetic spin couplings, the dopant behaves as a free particle at zero temperature, but as soon as the phase transition at \(T=0\) is crossed, it completely loses its quasiparticle character, and its motion becomes localized. This localization phenomenon can be considered to be a variant of Anderson localization [53; 54; 55], which very generically describes the localization of waves in disordered media. I emphasize, however, that the origin of the localization does not come from _quenched_ disorder from e.g. a random distribution of onsite energies [53], but is an _emergent_ property of the system itself [56] arising at nonzero temperatures due to thermally induced spin fluctuations. It is worth noting, however, that the currently discovered localization would strictly speaking occur simultaneously with regular Anderson localization in one dimension for any realistic medium that would feature a nonzero disorder strength. In this context, I stress that the localization length due to thermal spin fluctuations found in the present analysis should very easily be orders of magnitude smaller than the one arising due to Anderson localization, and, therefore, completely dominate the phenomenology. Finally, to showcase the possibility of detecting this localization phenomenon experimentally, I analyze a setup with Rydberg-dressed atoms in a two-leg optical lattice that supports finite-range Ising-type interactions [57], already demonstrated experimentally [12]. Here, I find that the localization can be probed on realistic timescales and system sizes, providing a strong testing ground for the predicted results. The Article is organized as follows. In Sec. II, I describe the overall setup, including a description of the system Hamiltonian, as well as the thermal initial state of the spin ensemble, and finally the exact computation of the non-equilibrium hole motion for a specific spin realization in Sec. II.1. In Sec. III, I describe the universal regime of infinite temperatures. In Sec. IV, I go away from this universal limit and give a detailed analysis of the propagation across a wide range of temperatures. Finally, in Sec. V, I analyze the Rydberg-dressed atom setup, before I conclude in Sec. VI. Throughout the Article, I work in units where the reduced Planck constant, \(\hbar\), and the lattice spacing are set to \(1\). ## II Setup I consider a system of spin-\(1/2\) particles placed along a two-leg ladder, described by a \(t\)-\(J\) model with nearest neighbor Ising interactions, \[\hat{H}=-t\sum_{\langle\mathbf{i},\mathbf{j}\rangle_{\parallel},\sigma}\left[\tilde{c}^{\dagger}_{\mathbf{i}\sigma}\tilde{c}_{\mathbf{j}\sigma}+\tilde{c}^{\dagger}_{\mathbf{j}\sigma}\tilde{c}_{\mathbf{i}\sigma}\right]+J\sum_{\langle\mathbf{i},\mathbf{j}\rangle}\hat{S}^{(z)}_{\mathbf{i}}\hat{S}^{(z)}_{\mathbf{j}}. \tag{1}\] The hopping is constrained through the operator \(\tilde{c}^{\dagger}_{\mathbf{i}\sigma}=\hat{c}^{\dagger}_{\mathbf{i}\sigma}(1-\hat{n}_{\mathbf{i}})\), such that at most a single spin resides on each site. I, furthermore, assume a mixed-dimensional setup in which the spins are only allowed to hop along the ladder. I will analyze both antiferromagnetic (\(J>0\)) and ferromagnetic (\(J<0\)) spin-coupling cases. 
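As a point of reference for the thermal ensemble introduced below, the Ising part of Eq. (1) acts as a classical energy functional on a spin configuration of the ladder and fixes the Boltzmann weights of the spin realizations. The following minimal sketch (illustrative function name, spin values \(\pm 1/2\), open boundaries along the legs assumed) evaluates this energy:

```python
import numpy as np

def ising_ladder_energy(sigma, J):
    """Classical Ising energy J * sum_<i,j> S_i^z S_j^z of a two-leg ladder.

    sigma: array of shape (2, N) with entries +0.5 / -0.5 (spin-up / spin-down);
    open boundaries along the legs are assumed for simplicity.
    """
    leg_bonds = np.sum(sigma[:, :-1] * sigma[:, 1:])   # bonds along each leg
    rung_bonds = np.sum(sigma[0, :] * sigma[1, :])     # bonds across the ladder
    return J * (leg_bonds + rung_bonds)

# Boltzmann weight of a spin realization at inverse temperature beta,
# as it enters the thermal average over configurations:
# weight = np.exp(-beta * ising_ladder_energy(sigma, J))
```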
To have an efficient description of the hole and spin excitation degrees of freedom, I employ a Holstein-Primakoff transformation on top of the _ferromagnetic Figure 1: (a) Motion of a single hole (green circle) along a spin ladder consisting of spin-\(\uparrow\) (red balls) and spin-\(\downarrow\) (blue balls), with hopping \(t\). The spins are assumed to feature Ising-type nearest neighbor spin couplings \(J\). The hole is found to localize for any nonzero temperature. In (b), the associated localization length at temperature \(k_{B}T/J\) is plotted as a function of the spin-spin correlation length \(\xi_{1}(k_{B}T/J)\). For ferromagnetic couplings (FM, \(J<0\)), the localization length is linear in \(\xi_{1}\) at low temperatures. For antiferromagnetic couplings (AFM, \(J>0\)), the localization length decreases and reaches a plateau at low temperatures due to a confining string potential emerging in that regime. The localization across all temperatures is traced back to an emergent disorder potential \(V(x)\) experiences by the hole, whose variance shown in (c) is linear for all temperatures \(T=1/k_{B}\beta\). (d) The associated length scale of the potential \(x_{0}\) is plotted as a function of \(\xi_{1}\), showing a linear dependency at large \(\xi_{1}\), explaining the low temperature results in (b) on the FM side. ground state \(|\mathrm{FM}\rangle=|..\uparrow\uparrow\uparrow..\rangle\), with all spins pointing up. As a result, the Hamiltonian \(\hat{H}=\hat{H}_{t}+\hat{H}_{J}\) may be written in terms of the hopping, \[\hat{H}_{t}=t\,\sum_{\langle\mathbf{i},\mathbf{j}\rangle_{\parallel}} \!\!\!\left[\hat{h}_{\mathbf{j}}^{\dagger}F(\hat{h}_{\mathbf{i}},\hat{s}_{ \mathbf{i}})F(\hat{h}_{\mathbf{j}},\hat{s}_{\mathbf{j}})\hat{h}_{\mathbf{i}}\right. \\ +\hat{h}_{\mathbf{j}}^{\dagger}\hat{s}_{\mathbf{i}}^{\dagger}F( \hat{h}_{\mathbf{i}},\hat{s}_{\mathbf{i}})F(\hat{h}_{\mathbf{j}},\hat{s}_{ \mathbf{j}})\hat{s}_{\mathbf{j}}\hat{h}_{\mathbf{i}}\right]+\mathrm{H.c.}, \tag{2}\] and the spin coupling \[\hat{H}_{J}=J\sum_{\langle\mathbf{i},\mathbf{j}\rangle}\left[\frac{1}{2}- \hat{s}_{\mathbf{i}}^{\dagger}\hat{s}_{\mathbf{i}}\right]\!\left[\frac{1}{2} \!-\!\hat{s}_{\mathbf{j}}^{\dagger}\hat{s}_{\mathbf{j}}\right]\!\left[1\!- \!\hat{h}_{\mathbf{i}}^{\dagger}\hat{h}_{\mathbf{i}}\right]\!\left[1\!-\! \hat{h}_{\mathbf{j}}^{\dagger}\hat{h}_{\mathbf{j}}\right]\!. \tag{3}\] Here, the spin excitation operator \(\hat{s}_{\mathbf{i}}^{\dagger}\) is bosonic, and creates a spin-\(\downarrow\) on site \(\mathbf{i}\). Also, the hole is created by the operator \(\hat{h}_{\mathbf{i}}^{\dagger}\), and inherits the statistics of the underlying spins, be it fermionic _or_ bosonic [52]. In the hopping Hamiltonian \(\hat{H}_{t}\), the operator \(F(\hat{h},\hat{s})=\sqrt{1-\hat{s}^{\dagger}\hat{s}-\hat{h}^{\dagger}\hat{h}}\) ensures the single-occupancy constraint. The two terms in the bracket of \(\hat{H}_{t}\) describe distinct hopping events. The first term describes a hole hopping from site \(\mathbf{i}\) to \(\mathbf{j}\) in the absence of a spin excitation on site \(\mathbf{j}\). The second term, on the contrary, describes this hopping in the presence of a spin excitation, whereby the hole and spin excitation swap places. While the Holstein-Primakoff transformation slightly complicates the expression for the Hamiltonian, it makes it much easier to write down concise expressions for the non-equilibrium wave functions to come. I assume that the system is initially thermalized in the absence of holes, i.e. 
precisely at half-filling. The resulting partition function \(Z_{0}=\mathrm{tr}[e^{-\beta\hat{H}_{J}}]\), along with the spin-spin correlator, \[C_{z}(d)=4\,\langle\hat{S}^{(z)}_{\mathbf{i}}\hat{S}^{(z)}_{\mathbf{i}+d\hat{\mathbf{x}}}\rangle,\] can be evaluated exactly with transfer-matrix techniques (the partition function is given in Appendix A). At any nonzero temperature the correlations decay exponentially with the distance \(d\), which defines the spin-spin correlation length \(\xi_{1}(\beta J)\) [Eq. (5)] used throughout the Article. The motion of the hole is initialized by suddenly removing a spin at the origin from this thermal ensemble, after which the density operator \(\hat{\rho}(\tau)\) of the doped system evolves under the full Hamiltonian \(\hat{H}\). The probability of finding the hole at site \(x\) along the ladder at time \(\tau\) is, hence, simply the thermal average of the probabilities \(|C_{\mathbf{\sigma}}(x,\tau)|^{2}\) of finding it there for a given realization \(\mathbf{\sigma}\) of the surrounding spins, with \(\sigma_{0}\) denoting the state of the removed spin, \[P(x,\tau)=\mathrm{tr}\left[\hat{h}_{1,x}^{\dagger}\hat{h}_{1,x}\hat{\rho}(\tau)\right]=\sum_{\sigma_{0},\mathbf{\sigma}}\frac{e^{-\beta E_{J}(\sigma_{0},\mathbf{\sigma})}}{Z_{0}}|C_{\mathbf{\sigma}}(x,\tau)|^{2}. \tag{10}\] In this manner, the problem of describing the motion of the hole has now been reduced to finding the probability amplitudes \(C_{\mathbf{\sigma}}(x,\tau)\) for a given spin realization \(\mathbf{\sigma}\) and then performing the sum in Eq. (10). While this is hardly feasible to do exactly, I employ a standard Metropolis-Hastings algorithm [60, 61] to perform accurate sampling of the sum, from which the root-mean-square distance is calculated \[x_{\mathrm{rms}}(\tau)=\left[\sum_{x}x^{2}P(x,\tau)\right]^{1/2}. 
\tag{11}\] To be able to perform this calculation, I describe how the probability amplitudes \(C_{\mathbf{\sigma}}(x,\tau)\) are determined numerically exactly in the following subsection, by which we can accurately describe the motion of the hole at essentially any temperature. ### Determining the probability amplitudes While the task of determining \(C_{\mathbf{\sigma}}(x,\tau)\) may seem daunting at first, we are significantly helped by the fact that the hole only moves along the ladder. In fact, by expressing the non-equilibrium wave function in terms of the retarded and advanced states [31, 62], \(|\Psi_{\mathbf{\sigma}}(\tau)\rangle=|\Psi_{\mathbf{\sigma}}^{\mathrm{R}}(\tau)\rangle+|\Psi_{\mathbf{\sigma}}^{\mathrm{A}}(\tau)\rangle=\mathrm{e}^{-\eta|\tau|}[\theta(\tau)\,|\Psi_{\mathbf{\sigma}}(\tau)\rangle+\theta(-\tau)\,|\Psi_{\mathbf{\sigma}}(\tau)\rangle]\), I express the Schrödinger equation, \(i\partial_{\tau}\,|\Psi_{\mathbf{\sigma}}(\tau)\rangle=\hat{H}\,|\Psi_{\mathbf{\sigma}}(\tau)\rangle\), in frequency space, \[(\omega+i\eta)\,|\Psi_{\mathbf{\sigma}}^{\mathrm{R}}(\omega)\rangle=+i\,|\Psi_{\mathbf{\sigma}}(\tau=0)\rangle+\hat{H}\,|\Psi_{\mathbf{\sigma}}^{\mathrm{R}}(\omega)\rangle\,. \tag{12}\] Here, \(\eta\) is a positive infinitesimal. Denoting the probability amplitudes of \(|\Psi_{\mathbf{\sigma}}^{\mathrm{R}}(\omega)\rangle\) as \(R_{\mathbf{\sigma}}(x,\omega)\) and using that the advanced state simply has the complex conjugated terms of the retarded state, \(|\Psi_{\mathbf{\sigma}}^{\mathrm{A}}(\omega)\rangle=[|\Psi_{\mathbf{\sigma}}^{\mathrm{R}}(\omega)\rangle]^{*}\), then shows that \(C_{\mathbf{\sigma}}(x,\tau)\) can be retrieved as the Fourier transform \[C_{\mathbf{\sigma}}(x,\tau)=\int\frac{d\omega}{2\pi}\mathrm{e}^{-i(\omega+i\eta)\tau}\times 2\mathrm{Re}[R_{\mathbf{\sigma}}(x,\omega)]. \tag{13}\] Crucially, the amplitudes \(R_{\mathbf{\sigma}}(x,\omega)\) satisfy a set of equations of motion, \[[\omega+i\eta]R_{\mathbf{\sigma}}(x,\omega)=i\delta_{x,0}+V_{\mathbf{\sigma}}(x)R_{\mathbf{\sigma}}(x,\omega)+t\left[R_{\mathbf{\sigma}}(x-1,\omega)+R_{\mathbf{\sigma}}(x+1,\omega)\right], \tag{14}\] which may be solved recursively, as has been detailed recently in similar contexts [62, 52, 63]. Here, \(V_{\mathbf{\sigma}}(x)\) designates the magnetic potential experienced by the hole as it moves through the lattice. This arises because the motion of the hole changes the magnetic energy of the underlying spin lattice. Put another way, as the hole moves through the ladder it breaks up a series of spin bonds and creates new ones, as illustrated in Fig. 2. The effective potential, \(V_{\mathbf{\sigma}}(x)=V_{\mathbf{\sigma},\|}(x)+V_{\mathbf{\sigma},\perp}(x)\), can be decomposed in terms of an intra-leg potential \[V_{\mathbf{\sigma},\|}(x)=J[\sigma_{1,1}\sigma_{1,-1}-\sigma_{1,x}\sigma_{1,x+1}],\ x>0,\] \[V_{\mathbf{\sigma},\|}(x)=J[\sigma_{1,1}\sigma_{1,-1}-\sigma_{1,x}\sigma_{1,x-1}],\ x<0, \tag{15}\] and a trans-leg potential \[V_{\mathbf{\sigma},\perp}(x)=J\sum_{j=+1}^{x}\sigma_{1,j}[\sigma_{2,j-1}-\sigma_{2,j}],\ x>0,\] \[V_{\mathbf{\sigma},\perp}(x)=J\sum_{j=-1}^{x}\sigma_{1,j}[\sigma_{2,j+1}-\sigma_{2,j}],\ x<0. \tag{16}\] Here, the indices of the spins \(\sigma=\pm 1/2\equiv\uparrow,\downarrow\) refer to their positions _before_ the hole has started to move. In Eq. 
(15), the term \(J\sigma_{1,1}\sigma_{1,-1}\) refers to the spin-bond energy arising around the origin as the hole has moved, while the term \(J\sigma_{1,x}\sigma_{1,x+1}\) for \(x>0\) is the energy of the bond broken up by the hole once it has moved to position \(x\). These are shown in light red and light blue in Fig. 2. Similarly, the two terms in the summand of Eq. (16) correspond to the energies \(J\sigma_{1,j}\sigma_{2,j-1}\), \(J\sigma_{1,j}\sigma_{2,j}\) of the newly established and broken bonds every time the hole hops, shown in dark red and dark blue in Fig. 2. By finally defining the recursion function \(f_{\mathbf{\sigma}}(x,\omega)\) through the relations \[R_{\mathbf{\sigma}}(x+1,\omega)=tf_{\mathbf{\sigma}}(x+1,\omega)\,R_{\mathbf{\sigma}}(x,\omega),\ x\geq 0,\] \[R_{\mathbf{\sigma}}(x-1,\omega)=tf_{\mathbf{\sigma}}(x-1,\omega)\,R_{\mathbf{\sigma}}(x,\omega),\ x\leq 0, \tag{17}\] leads to the recursive solutions \[f_{\mathbf{\sigma}}(x,\omega)=\frac{1}{\omega+i\eta-V_{\mathbf{\sigma}}(x)-t^{2}f_{\mathbf{\sigma}}(x+1,\omega)},\ x>0,\] \[f_{\mathbf{\sigma}}(x,\omega)=\frac{1}{\omega+i\eta-V_{\mathbf{\sigma}}(x)-t^{2}f_{\mathbf{\sigma}}(x-1,\omega)},\ x<0. \tag{18}\] Inserting this into the equations of motion for \(x=0\) yields the lowest order amplitude \[R_{\mathbf{\sigma}}(0,\omega)=\frac{i}{\omega+i\eta-V_{\mathbf{\sigma}}(0)-t^{2}[f(-1,\omega)+f(1,\omega)]}, \tag{19}\] which may be identified simply as the retarded hole Green's function for the spin realization \(\mathbf{\sigma}\). Figure 2: As the hole moves through the ladder (top to bottom), it breaks up spin bonds across (dark blue) and along (light blue) the ladder. In the same manner, new spin bonds are created across (dark red) and along (light red) the ladder. The effective magnetic potential experienced by the hole, hereby, arises by subtracting the energy of the broken spin bonds and adding the energy of the newly formed ones. The higher-order amplitudes \[R_{\mathbf{\sigma}}(x,\omega)=t^{x}\prod_{j=+1}^{x}f_{\mathbf{\sigma}}(j,\omega)\times R_{\mathbf{\sigma}}(0,\omega),\ x>0,\] \[R_{\mathbf{\sigma}}(x,\omega)=t^{|x|}\prod_{j=-1}^{x}f_{\mathbf{\sigma}}(j,\omega)\times R_{\mathbf{\sigma}}(0,\omega),\ x<0, \tag{20}\] are found by using the recursive structure in Eq. (17). Finally, using the Fourier transform in Eq. (13), \(C_{\mathbf{\sigma}}(x,\tau)\) is found. ## III Infinite temperature limit In the limit of infinite temperature, \(\beta J=J/k_{B}T\to 0\), the partition function simply becomes the number of spin configurations \(Z_{0}=2^{2N}\), where \(N\) is the number of sites in each leg. Furthermore, the terms in Eq. (10) all have the same statistical weight \[P(x,\tau)\rightarrow\frac{1}{2^{2N-1}}\sum_{\mathbf{\sigma}}|C_{\mathbf{\sigma}}(x,\tau)|^{2}. \tag{21}\] As a result, we need to describe how the hole moves in a completely _random_ spin ensemble. As was previously noticed in the context of Bethe lattices [32], the resulting potential experienced by the hole, \(V_{\mathbf{\sigma}}(x)\), becomes a disordered potential. In fact, in any hop, the potential changes _at random_ by an amount \(|J|/2\). This is detailed in Fig. 3. The equations of motion in Eq. (14) now become very reminiscent of the 1D Anderson model for _Anderson localization_ [53]. However, contrary to the original model, the potential is correlated from site to site, as is also apparent from Fig. 
3, _and_ the trans-leg potential \(V_{\mathbf{\sigma},\perp}(x)\) becomes arbitrarily large at large \(x\). This is in contrast to the usual case studied in Anderson localization, where some constant width for the disordered potential is usually assumed. In fact, the potential performs a classical random walk in its allowed values. As a result, its variance, \[\mathrm{Var}[V_{\mathbf{\sigma}}(x)]=\frac{J^{2}}{8}\left[|x|+1\right], \tag{22}\] scales linearly in \(|x|\), as shown explicitly in Appendix B. Due to the differences with the usual Anderson model, it is a priori not clear whether the hole will localize or not in this specific kind of disorder potential. By closer inspection of the probabilistic behavior of the potential sketched in Fig. 3, it becomes clear that the behavior at infinite temperature does not depend on the sign of the spin coupling, \(J\), and the motion of the hole becomes universal in this limit. The only remaining parameter in the system is \(|J|/t\). For each value of this ratio, I, thus, generate \(N_{s}\) samples by using the probabilistic update rules for the potential shown in Fig. 3. For \(|J|/t\geq 1.6\), I generate \(N_{s}=400\) samples. For smaller values of \(|J|/t\), the computation time dramatically increases, and I, therefore, use smaller sample sizes, though always with \(N_{s}\geq 92\). For each of the generated realizations, I compute \(C_{\mathbf{\sigma}}(x,\tau)\) up to as large times as \(\tau=300/t\) and from there the rms distance (Eq. (11)). An example of the rms distance dynamics is given in Fig. 4(a) for three indicated values of the spin coupling. For all of these, we clearly see that the hole remains _localized_, stalling at a finite distance from its origin. This is further backed up by the underlying hole density distribution \(P(x,\tau)\) shown in Fig. 4(b) for indicated times. This explicitly shows that the hole remains exponentially localized. I, thus, define the _localization_ length as the long-time asymptote of the rms distance. This is plotted in Fig. 4(c) for a wide range of spin couplings. At intermediate to small spin couplings of \(|J|<2t\), I find very good agreement with a power-law behavior \[l_{\mathrm{loc}}\rightarrow\left[\frac{\alpha t}{|J|}\right]^{5/3}, \tag{23}\] with \(\alpha\simeq 6.7\). This power-law behavior strongly suggests that the hole will remain localized at any value of \(|J|/t\), analogous to the fact that a particle moving in a one-dimensional random potential is localized for any disorder strength, \(W\); the hole only asymptotically moves ballistically in the extreme limit of \(|J|/t\to 0\). I note that the scaling behavior in the usual 1D Anderson model to a very good approximation goes as \((t/W)^{2}\) [54], such that the localization in the present case is actually _stronger_, since the exponent is smaller. I attribute this to the fact that the strength of the disorder in the present case goes up with the distance to the origin [Eq. (22)]. More generally, I find that the localization length is well described by \[l_{\mathrm{loc}}\simeq\left[1+\gamma\left(\frac{\alpha t}{|J|}\right)^{-5/2}\right]^{2/3}\left[\frac{\alpha t}{|J|}\right]^{5/3}, \tag{24}\] with \(\gamma\simeq 1.48\), shown as a pink line in Fig. 4(c). Figure 3: At infinite temperature, the magnetic potential is random. (a) In each hop (arrows), the potential across the ladder remains unchanged with probability \(P=1/2\) (left), or goes down (middle) or up (right) by \(J/2\) with probability \(P=1/4\). 
A purple ball indicates that it does not matter, whether that spin is \(\uparrow\) or \(\downarrow\). (b) The potential along the ladder, \(V_{\parallel}(x)\), can only take the values \(0,\pm J/2\), and the change depends on whether \(V_{\parallel}(x)=0\) (left), \(V_{\parallel}(x)=-J/2\) (middle), or \(V_{\parallel}(x)=J/2\) (right). ## IV Finite temperature behavior For finite temperatures, I employ a standard Metropolis-Hastings Monte Carlo algorithm [60; 61] to generate a total of \(N_{s}=2000\) samples for every investigated value of \(k_{B}T/|J|\). This sampling, in particular, uses single spin flip dynamics. I find that I can greatly decrease the statistical error by generating a much larger set of samples, from which the \(2000\) samples used in the computation are drawn. I typically take \(1\) in every \(60.000\) samples. I benchmark the sampling by computing the average magnetization per spin, which in the true thermodynamic sample is identically \(0\). I require this to be on the order of or less than \(10^{-3}\). I also compare the estimated average energy to the exact result \(\langle\hat{H}\rangle=-\beta^{-1}\partial_{\beta}\ln Z_{0}\) calculated from the partition function found in Appendix A, and find these to Figure 4: (a) Rms distance of the hole versus time for indicated values of the spin coupling on a log-log scale, with shaded lines showing the estimated standard error. This shows an initial ballistic behavior with expansion speed \(\sqrt{2}t\) (black line), before being localized on long timescales. (b) Hole density \(P(x,\tau)\) at indicated times for the same values of \(|J|/t\) as in (a), showing exponential localization of the hole. Mind the change of scale between the plots. (c) Extracted localization length \(l_{\rm loc}\) as the long-time asymptote of the rms distance as a function of \(|J|/t\). The errorbars indicate the estimated standard error. For large spin couplings, \(l_{\rm loc}\) approaches a nonzero value of \(\simeq 1.3\), whereas I find a power-law behavior \((t/|J|)^{5/3}\) for intermediate to small spin couplings (black line). The pink line is a guess of the general behavior [Eq. (24)]. Figure 5: (a) Rms distance of the hole versus time for indicated values of the temperature for \(|J|/t=2.5\), with shaded lines showing the estimated standard error. The black line again shows ballistic behavior with expansion speed \(\sqrt{2}t\). (b) Hole density \(P(x,\tau)\) at indicated times for the same values of \(J/k_{B}T\) as in (a), again showing exponential localization of the hole. Mind the change of scale between the plots. (c) Localization length \(l_{\rm loc}\) versus \(J/k_{B}T\) for indicated values of \(|J|/t\). The estimated standard errors are smaller than the dot size. On the antiferromagnetic side, \(J/k_{B}T>0\), \(l_{\rm loc}\) decreases and eventually approaches the zero temperature value indicated by the lines to the right. The asymptotic value for \(|J|/t=20\) is below the scale of the plot. On the ferromagnetic side, \(J/k_{B}T<0\), \(l_{\rm loc}\) increases, but _remains finite_ for any temperature on a length scale that is larger than the spin-spin correlation length (black line). agree within a difference of \(10^{-3}\) across all investigated temperatures. In Fig. 5(a), I compare the rms distance dynamics for \(|J|/t=2.5\) at \(|J|/k_{B}T=2\) to the infinite temperature limit. Although the hole spreads out significantly more on the ferromagnetic side, it _remains localized_ at this intermediate temperature. 
I support this further by showing the hole density distribution in Fig. 5(b), which again shows exponential localization of the hole to its origin. In fact, in Fig. 5(c) I show the localization length across a broad range of temperatures and values of \(|J|/t\), revealing that the hole remains localized for all investigated temperatures and interactions. This shows that the localization phenomenon discovered in the previous section at infinite temperatures is a robust effect and seems to happen as long as the temperature is nonzero. The underlying reason for this robustness, I believe, is that the system, due to its one-dimensional geometry, is always disordered. Therefore, on length scales longer than the spin-spin correlation length \(\xi_{1}(\beta J)\) [see Eq. (5)], the hole still sees a randomized potential \(V(x)\). To check this intuition, I compare the extracted localization length to the correlation length in Fig. 5(b). Indeed, we see that the correlation length sets a lower bound for how localized, the hole can be. Moreover, the effect of decreasing temperature is also seen to accelerate when the correlation length starts to exceed \(1\). To get a better understanding of the above effects, I next replot the localization length as a function of the correlation length. This is shown in Fig. 1(b). This reveals that at low temperatures, corresponding to \(\xi_{1}(\beta J)\gg 1\), the localization length becomes linear in the correlation length for ferromagnetic coupling, \[l_{\rm loc}(\beta J)=\alpha\times\xi_{1}(\beta J). \tag{25}\] The analysis additionally unveils that the prefactor \(\alpha\)_increases with decreasing_\(|J|/t\). In this manner, the hole motion only delocalizes in the asymptotic limit of zero temperature. Here, all spins align at \(T=0\) and the magnetic potential obtained in Eqs. (15) and (16) vanishes identically, whereby the hole is free to move ballistically through the system. On the antiferromagnetic side, the localization length is conversely seen to decrease. The reason is that at zero temperature, the accompanying magnetic potential defined in Eqs. (15) and (16) increases linearly with distance, \(V(x)=J/2[|x|+1]\), as also obtained previously [52], which localizes the hole more strongly than in the disordered case. Moreover, the decrease in localization length is seen to be very rapid at low \(\xi_{1}\), but quickly saturates as \(\xi_{1}\gg l_{\rm loc}\). This is also intuitively clear, since the correlation length sets the typical length scale over the system is ordered. Therefore, if the localization length is much smaller than the correlation length, it does not see the long-range disorder. To quantitatively understand the appearance of the correlation length as a dominant length scale for the hole dynamics, I then calculate the variance of the magnetic potential for finite temperatures on the ferromagnetic side. Examples of this are shown in Fig. 1(c). This provides conclusive evidence that its variance \[{\rm Var}[V(x)]=\frac{J^{2}}{8}\frac{|x|}{x_{0}(\beta J)}+b(\beta J), \tag{26}\] remains linear in \(|x|\) at any temperature, not just at infinite temperatures. The associated length scale \(x_{0}(\beta J)\) starts out at \(x_{0}(0)=1\) from Eq. (22) at infinite temperatures. Away from this limit, it is clearly seen in Fig. 1(c) to increase with decreasing temperature. In fact, I show in Fig. 
1(d) that this length scale is closely related to the correlation length, with \[\begin{array}{l}x_{0}(\beta J)=1+[\xi_{1}(\beta J)]^{2},\ \ \xi_{1}(\beta J)\ll 1,\\ x_{0}(\beta J)=1+\xi_{1}(\beta J),\ \ \ \ \xi_{1}(\beta J)\gg 1,\end{array} \tag{27}\] The crossover between the two behaviors is very rapid and happens around \(\xi_{1}(\beta J)=1\), corresponding to \(k_{B}T=-J\). The linear relation valid at low temperatures, hereby, explains the simple proportionality found in Fig. 1(b) and Eq. (25). Finally, it is worth noting that in contrast to the usual cases considered in Anderson localization, the disordered potential in this case is actually _biased_. In particular, \(\langle V_{\mathbf{\sigma},\perp}(x)\rangle\propto|Jx|>0\) for any nonzero and finite temperature, \(0<T<\infty\), as detailed in Appendix C. This increasing bias of the potential should contribute to the localization effect. However, in the two extreme limits of \(T=0\) and \(T=\infty\), the mean value vanishes \(\langle V_{\mathbf{\sigma},\perp}(x)\rangle=0\), meaning that the slope of \(\langle V_{\mathbf{\sigma},\perp}(x)\rangle\) varies _non-monotonically_ with temperature. Had this effect stood alone, the hole would delocalize both at zero and infinite temperatures on the ferromagnetic side. Therefore, this does not account for the monotonic behavior found in Fig. 5, which instead comes from the disordered character of the magnetic potential as described above. ## V Detection in optical lattices with Rydberg-Dressed atoms In this section, I describe how the discovered localization phenomenon can be detected using current experimental setups with Rydberg-dressed atoms [12]. Such a setup natively implements finite range density-density interactions, \[\hat{H}_{J}=\frac{1}{2}\sum_{\mathbf{i}\neq\mathbf{j}}J_{\mathbf{ij}}\hat{n}_ {\mathbf{i}\uparrow}\hat{n}_{\mathbf{j}\uparrow}, \tag{28}\] of the internal atomic state \(|\uparrow\rangle\) that is being dressed by a higher-lying Rydberg state via an optical light field. Here, \(J_{\mathbf{ij}}=J_{0}/(1+(|\mathbf{i}-\mathbf{j}|/r_{c})^{6})\) takes on a soft-core shape, with \(r_{c}\) the soft-core size [57]. The \(|\downarrow\rangle\) state remains uncoupled from the light field and does not experience the interaction. Furthermore, an interstate Feshbach resonance may be used to drive the system into the Mott-insulating phase, such that there is at most a single spin on each site. Crucially important, the associated onsite interaction \(U\) between \(|\downarrow\rangle\) and \(|\uparrow\rangle\) can be increased independently of the light-induced interaction \(J_{\mathbf{ij}}\). As a result, low-energy spin-exchange interactions \(\propto 4t^{2}/U\)[3] can be made negligible compared to the interactions of Eq. (28) on the investigated timescales. The density-density interaction in Eq. (28) can equivalently be thought of as asymmetric finite-range Ising interactions. Doping the system with holes and allowing the spins to tunnel along the ladder with rate \(t\), hereby, realizes a modified Ising \(t\)-\(J\) model that can be used to test the predictions made in this Article. In particular, we can express Eq. 
(28) in terms of spin excitation and hole operators as \[\hat{H}_{J}=\frac{1}{2}\sum_{\mathbf{i}\neq\mathbf{j}}J_{\mathbf{ij}}\left[1-\hat{s}_{\mathbf{i}}^{\dagger}\hat{s}_{\mathbf{i}}\right]\left[1-\hat{s}_{\mathbf{j}}^{\dagger}\hat{s}_{\mathbf{j}}\right]\left[1-\hat{h}_{\mathbf{i}}^{\dagger}\hat{h}_{\mathbf{i}}\right]\left[1-\hat{h}_{\mathbf{j}}^{\dagger}\hat{h}_{\mathbf{j}}\right], \tag{29}\] and I will then analyze the motion of a hole starting out at \(\mathbf{i}=\mathbf{0}\). While precise experimental control of the temperature is generally difficult, we can take an alternative route to investigate the propagation of the hole in an effectively disordered medium. In particular, the system can be initialized with a hole at \(\mathbf{i}=\mathbf{0}\) by applying a strong repulsive light field to that site [9]. Moreover, I assume that the spins are initially all polarized into the non-interacting \(\ket{\downarrow}\) state, such that \(\ket{\Psi_{\pi/2}}=\prod_{\mathbf{i}\neq\mathbf{0}}\hat{c}_{\mathbf{i}\downarrow}^{\dagger}\ket{0}\). Then, a depolarizing field can be applied to mix the \(\ket{\uparrow}\) and \(\ket{\downarrow}\) states on each site with a specified mixing angle \(\theta\) \[\ket{\Psi_{\theta}}=\prod_{\mathbf{i}\neq\mathbf{0}}\left[\cos(\theta)\hat{c}_{\mathbf{i}\uparrow}^{\dagger}+\sin(\theta)\hat{c}_{\mathbf{i}\downarrow}^{\dagger}\right]\ket{0}. \tag{30}\] With this as the initial state for a given mixing angle \(\theta\), the light field on site \(\mathbf{i}=\mathbf{0}\) can be turned off such that the hole is now allowed to tunnel _along the ladder_, as described by \(\hat{H}_{t}\) in Eq. (2). The ability to turn off hopping between the legs relies on an additional energy offset between the legs [51]. Although this at face value is different from the nonzero temperatures considered previously in the Article, the dynamics of the hole can be described in a completely equivalent manner. In particular, the probability of finding the hole at site \(x\) at time \(\tau\) \[P(x,\tau)=\bra{\Psi_{\theta}}e^{+i\hat{H}\tau}\hat{h}_{1,x}^{\dagger}\hat{h}_{1,x}e^{-i\hat{H}\tau}\ket{\Psi_{\theta}}=\sum_{\boldsymbol{\sigma}}p_{\boldsymbol{\sigma}}(\theta)|C_{\boldsymbol{\sigma}}(x,\tau)|^{2}, \tag{31}\] takes on exactly the same form as Eq. (10) for the nonzero temperature case. The probabilities \(p_{\boldsymbol{\sigma}}(\theta)\) are now, however, not given by the thermal statistics, but by a binomial distribution depending on the number of spin-\(\downarrow\) atoms, \(N_{\downarrow}(\boldsymbol{\sigma})\), in the spin realization \(\boldsymbol{\sigma}\) \[p_{\boldsymbol{\sigma}}(\theta)=[\sin^{2}\theta]^{N_{\downarrow}(\boldsymbol{\sigma})}[\cos^{2}\theta]^{N-N_{\downarrow}(\boldsymbol{\sigma})}. \tag{32}\] As a result, such an experimental setup simulates the thermally induced localization phenomenon described in the previous sections. Here, the ferromagnetic states correspond to \(\theta=0,\pi/2\), whereas the infinite temperature limit corresponds to \(\theta=\pi/4\). In between, there is strictly speaking no one-to-one correspondence with a specific temperature, but the variation of \(\theta\) in the interval \([0,\pi/2]\) qualitatively describes the same behavior as a varying temperature. Moreover, as the hole hops through the system, it experiences a magnetic potential akin to Eqs. (15) and (16). 
Specifically, for a given initial spin realization \(\boldsymbol{\sigma}\), a certain subset of the sites \(S_{\uparrow}(\boldsymbol{\sigma},0)\) contains spin-\(\uparrow\) atoms. This leads to the overall energy offset \[V_{0}=\frac{1}{2}\sum_{\mathbf{i},\mathbf{j}\in S_{\uparrow}(\boldsymbol{ \sigma},0)}J_{\mathbf{i}\mathbf{j}}. \tag{33}\] As the hole hops, the surpassed spin hops one site in the opposite direction. As a result, the subset of sites \(S_{\uparrow}(\boldsymbol{\sigma},x)\) with spin-\(\uparrow\) depends on the position of the hole \(x\). The resulting magnetic potential is then simply the magnetic energy differences \[V_{\boldsymbol{\sigma}}(x)=\frac{1}{2}\sum_{\mathbf{i},\mathbf{j}\in S_{ \uparrow}(\boldsymbol{\sigma},x)}J_{\mathbf{i}\mathbf{j}}-V_{0}. \tag{34}\] experienced as the hole moves through the system. With this at hand, the computation of the hole dynamics now follows the same recipe as in Sec. IV. In this case, I assume a finite size of the system with hard-wall boundary conditions and a total length \(N=41\) to properly describe a feasible experimental setup. The Metropolis-Hastings algorithm again uses \(N_{s}=2000\) samples, and is benchmarked by comparing the achieved magnetization per spin to the exact value of \([\cos^{2}\theta-\sin^{2}\theta]/2\). I find agreement within \(1\%\) for any value of \(\theta\). Figure 6(a) shows the resulting dynamics of the rms distance for indicated points on the Bloch sphere. Here, the north and south poles correspond to all spins pointing up and down, respectively, such that the mixing angle \(\theta\) is nothing but half the polar angle on the Bloch sphere. The dynamics is qualitatively similar to the cases shown in Figs. 4(a) and 5(a). The only major difference is that the free motion of the hole at \(\theta=\pi/2\) now Figure 6: (a) Rms distance versus time for \(J_{0}=10t\) and indicated mixing angles shown on the Bloch sphere (inset), corresponding to \(\theta=\pi/2,0.7\pi/2,0.6\pi/2,0.4\pi/2\) for black, red, green and blue respectively. For \(\theta=\pi/2\) (black), the hole is free to propagate and only attains an oscillatory behavior due to the finite size of the system (total length of \(N=41\)). (b) Long-time average of rms distance for indicated values of \(J_{0}\) as a function of the mixing angle \(\theta\), following the black line on the Bloch sphere in (a). This shows a minimum between \(\theta=\pi/8\) and \(\theta=\pi/4\) due to localization. The estimated standard errors are smaller than the dot size and are omitted for clarity. I use a soft-core size of \(r_{c}=1\). becomes oscillatory due to the finite size of the system. We see that as the mixing angle goes away from \(\theta=\pi/2\), the hole starts to localize. This is shown in more detail in Fig. 6(b), where the long-time average of the rms distance is plotted as a function of the mixing angle. At \(\theta=0,\pi/2\), the average rms distance of the hole settles around half the distance to the edge of the system. However, as the \(50\)-\(50\) spin mixing at \(\theta=\pi/4\) is approached, this dramatically decreases and reaches a minimum around \(\theta=0.75\pi/4\). This is a direct signature of localization of the hole. Indeed, the observed localization length around the minimum is no longer sensitive to the system size. One may reasonably wonder why the minimum is not located exactly at \(\theta=\pi/4\). The reason is that the spin interactions in Eq. 
(28) are not symmetric in spin-\(\uparrow\) and -\(\downarrow\), and indeed vanish identically for the latter states. Moreover, the limiting values at the top and bottom of the Bloch sphere, \(\theta=0,\pi/2\) respectively, do not perfectly coincide. This is because the repulsive interactions of the spin-\(\uparrow\) atoms in the case of \(\theta=0\) favor the hole not moving all the way out to the edge of the system. This is a very minor effect that only shows up at the sites just before the edge. Crucially, this analysis directly shows that the localization can be probed on reasonably short timescales of just \(\tau=5/t\). This is highly important for the considered experimental protocol, because the Rydberg-dressed spin-\(\uparrow\) atoms inherit some of the decay of the high-lying Rydberg state. Here, it is also beneficial that the localization can be probed on the side where there is a majority of spin-\(\downarrow\) atoms (\(\pi/4<\theta<\pi/2\)), making this inherent decay less severe. This analysis, thus, establishes that the thermally induced localization discovered in the present Article may be realistically probed in current experimental platforms using Rydberg-dressed atoms. Moreover, I emphasize that this phenomenon should show up in any system that has polarized interactions, like the Ising cases considered here, and short-range hopping of a dopant. This suggests that one could also come up with a protocol using dipolar gases in optical lattices [64; 65] or trapped ions [66], in which a similar localization could happen. ## VI Conclusions and Outlook In this Article, I have described a novel localization phenomenon of dopants in Ising-type magnetic spin ladders. The effect arises _not_ due to inherent disorder in the system Hamiltonian, but as an _emergent_ phenomenon [56] due to thermal spin fluctuations. In particular, since the system is one-dimensional, it is disordered at any nonzero temperature. Therefore, even for ferromagnetic spin couplings for which one might expect the hole to delocalize completely, I show that it _remains_ localized across a huge range of spin couplings \(J/t\) and temperatures \(k_{B}T/J\). The effect is traced back to a _disorder potential_ experienced by the hole, whose strength increases towards infinite temperatures. In this infinite temperature limit, the dynamics no longer depends on the sign of the spin interactions and becomes a universal function of \(|J|/t\). Moreover, I showcased how the localization phenomenon may be explored using current experimental platforms with Rydberg-dressed atoms in optical lattices. The results strongly suggest that in general disordered Ising environments, dopants moving in one spatial direction will remain localized. As a result, the system will be an insulator provided that the inter-dopant spacing is large compared to the localization length. This emphasizes the possibility of a _reversed_ metal-insulator transition at the Curie temperature \(T_{C}\), such that dopants are localized _above_ \(T_{C}\), and delocalized _below_ \(T_{C}\), at least in the absence of regular Anderson localization. While this transition occurs at zero temperature, \(T_{C}=0\), in the current one-dimensional setup, it would be interesting to study the same scenario in two and three spatial dimensions, in which the Curie temperature is nonzero. Additionally, it is important to analyze the robustness of the localization phenomenon going away from the idealized models considered in the present Article.
For example, does the same phenomenology arise when the dopants are allowed to move in two or three dimensions? Here, studies of dopant motion in a _non-interacting_ two-dimensional spin lattice at infinite temperatures [47; 48] shows that there are crucial qualitative differences. In particular, these results suggest diffusive motion of the dopant due to lack of path interferences in the disordered medium, which is in contrast to the ballistic behavior obtained for the one-dimensional motion in the present setup in this limit of \(J/t\to 0\). Turning on spin interactions in such a two-dimensional setup [49], and carefully analyzing the long-time dynamics for a broad range of spin interactions could help to answer this question. Along the same lines, one could also analyze what happens in the presence of flip-flop spin interaction terms, which should be amenable to matrix product states approaches. From a perturbative point of view, it would be difficult to see why a small admixing of such terms should immediately destroy localization. Therefore, there could be very intriguing behaviors in a similar two-leg ladder setup in e.g. an XXZ model, as one tunes the anisotropy of the spin couplings towards the isotropic Heisenberg model. ###### Acknowledgements. The author thanks Pascal Weckesser, Johannes Zeiher, Pavel Kos, and J. Ignacio Cirac for valuable discussions. A special thanks to Pascal Weckesser for providing valuable references. This Article was supported by the Carlsberg Foundation through a Carlsberg Internationalisation Fellowship. ## Appendix A Thermodynamics in the absence of dopants The thermodynamics of the system at half filling can be studied using a transfer matrix technique closely related to the analysis of a single chain. The Hamiltonian of the system is simply the nearest neighbor Ising Hamiltonian \[\hat{H}_{J}=J\sum_{\langle\mathbf{i},\mathbf{j}\rangle}\hat{S}_{\mathbf{i}}^{ (z)}\hat{S}_{\mathbf{j}}^{(z)}\rightarrow|J|\sum_{\langle\mathbf{i},\mathbf{j }\rangle}\hat{S}_{\mathbf{i}}^{(z)}\hat{S}_{\mathbf{j}}^{(z)}. \tag{29}\] In the last expression, I perform a local rotation on every second site \(\hat{S}_{3}^{(z)}\rightarrow-\hat{S}_{3}^{(z)}\) for ferromagnetic couplings, \(J<0\). This shows that the thermodynamics is equivalent for antiferro- and ferromagnetic couplings. Denoting the spin configurations in legs 1 and 2 respectively \(\mathbf{\sigma}_{1}\) and \(\mathbf{\sigma}_{2}\), we get the partition function in the canonical ensemble \[Z_{0} = \mathrm{tr}[e^{-\beta\hat{H}_{J}}] \tag{30}\] \[= \sum_{\mathbf{\sigma}_{1},\mathbf{\sigma}_{2}}e^{\beta|J|\sigma_{1,1} \sigma_{1,2}}e^{\beta|J|\sigma_{2,1}\sigma_{2,2}}e^{\beta|J|\sigma_{1,1}\sigma_ {2,1}}e^{\beta|J|\sigma_{1,2}\sigma_{2,2}}\times\] \[\cdots\times e^{\beta|J|\sigma_{1,N}\sigma_{1,1}}e^{\beta|J| \sigma_{2,N}\sigma_{2,1}}\] for a system of length \(N\) with periodic boundary conditions. 
Defining the \(4\times 4\) transfer matrix \[V_{\sigma_{1,1},\sigma_{1,2}}^{\sigma_{2,1},\sigma_{2,2}}=e^{\beta|J|\left[ \sigma_{1,1}\sigma_{1,2}+\sigma_{2,1}\sigma_{2,2}+\sigma_{1,1}\sigma_{2,1}/2 +\sigma_{1,1}\sigma_{2,1}/2\right]}, \tag{31}\] we can then write the partition function much more concisely as \[Z_{0} = \mathrm{tr}[e^{-\beta\hat{H}}]=\sum_{\mathbf{\sigma}_{1},\mathbf{\sigma} _{2}}V_{\sigma_{1,1},\sigma_{1,2}}^{\sigma_{2,1},\sigma_{2,2}}\times\cdots \times V_{\sigma_{1,N},\sigma_{1,1}}^{\sigma_{2,N},\sigma_{2,1}} \tag{32}\] \[= \sum_{\{n\}_{l=1}^{N}}V_{\eta_{1},\eta_{2}}V_{\eta_{2},\eta_{3}} \times\cdots\times V_{\eta_{N},\eta_{1}}=\mathrm{tr}[V^{N}].\] In the second line, I used the states \(\left|\sigma_{1},\sigma_{2}\right\rangle\) in the ordered basis \(\{\left|\uparrow\uparrow\right\rangle,\left|\downarrow\uparrow\right\rangle, \left|\uparrow\downarrow\right\rangle,\left|\downarrow\downarrow\right\rangle\}\), such that \(\eta=1,2,3,4\) refers to these elements respectively. The rows and columns of \(V\) correspond to different values of \(\left(\sigma_{1,1},\sigma_{2,1}\right)\) and \(\left(\sigma_{1,2},\sigma_{2,2}\right)\), respectively. Hence, \[V=\begin{bmatrix}e^{+3\beta|J|/4}&1&1&e^{-\beta|J|/4}\\ 1&e^{+\beta|J|/4}&e^{-3\beta|J|/4}&1\\ 1&e^{-3\beta|J|/4}&e^{+\beta|J|/4}&1\\ e^{-\beta|J|/4}&1&1&e^{+3\beta|J|/4}\end{bmatrix}. \tag{33}\] The problem has now been reduced to finding the \(4\) eigenvalues, \(v_{1},\ldots,v_{4}\), of the transfer matrix \(V\). In fact, letting \(v_{1}\) be the largest eigenvalue, we get \[Z_{0} = \mathrm{tr}[e^{-\beta\hat{H}_{J}}]=\mathrm{tr}[V^{N}]=\sum_{j} \left\langle v_{j}\right|V^{N}\left|v_{j}\right\rangle=\sum_{j}v_{j}^{N} \tag{34}\] \[= v_{1}^{N}\left[1+\sum_{j=2}\left(\frac{v_{j}}{v_{1}}\right)^{N} \right]\to v_{1}^{N},\] as \(N\rightarrow\infty\). So we only need the largest eigenvalue \(v_{1}\). From here, the free energy per spin is (there are \(2N\) spins) \[F_{0}=-\frac{1}{2\beta N}\ln Z_{0}=-\frac{1}{2\beta}\ln v_{1}. \tag{35}\] Diagonalizing a \(4\times 4\) matrix is not trivial, however, since it in general means that we have to solve a fourth order characteristic polynomial. However, we may use that the Hamiltonian does not couple the triplet \(\{\left|\uparrow\uparrow\right\rangle,\left|\downarrow\downarrow\right\rangle, \left(\left|\uparrow\downarrow\right\rangle+\left|\downarrow\uparrow\right\rangle \right)/\sqrt{2}\}\) and singlet \(\{((\left|\uparrow\downarrow\right\rangle-\left|\downarrow\uparrow\right\rangle )/\sqrt{2}\}\) subspaces. Transforming from the former to the latter basis is done via \[U=\begin{bmatrix}1&0&0&0\\ 0&0&2^{-1/2}&2^{-1/2}\\ 0&0&2^{-1/2}&-2^{-1/2}\\ 0&1&0&0\end{bmatrix}. \tag{36}\] Expressing the transfer matrix in the triplet-singlet basis yields \[\tilde{V}=U^{\dagger}VU=\begin{bmatrix}e^{+3\beta|J|/4}&e^{-\beta|J|/4}&\sqrt {2}&0\\ e^{-\beta|J|/4}&e^{+3\beta|J|/4}&\sqrt{2}&0\\ \sqrt{2}&\sqrt{2}&e^{+\beta|J|/4}+e^{-3\beta|J|/4}&0\\ 0&0&0&e^{+\beta|J|/4}-e^{-3\beta|J|/4}.\end{bmatrix} \tag{37}\] Diagonalizing the remaining \(3\times 3\) matrix, the eigenvectors are \[\left|v_{i}\right\rangle=\frac{1}{\sqrt{A_{i}}}\begin{bmatrix}\frac{v_{i}-(e^{ \beta|J|/4}+e^{-3\beta|J|/4})}{\frac{v_{i}-(e^{\beta|J|/4}+e^{-3\beta|J|/4})}{ 2\sqrt{2}}}\\ 0\end{bmatrix},\;i=1,2\] \[\left|v_{3}\right\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix}1\\ -1\\ 0\\ 0\end{bmatrix},\left|v_{4}\right\rangle=\begin{bmatrix}0\\ 0\\ 0\\ 1\end{bmatrix}, \tag{38}\] with \(A_{i}=([v_{i}-(e^{\beta|J|/4}+e^{-3\beta|J|/4})]^{2}+4)/4\). 
The eigenvalues are \[v_{1} = 2\cosh\frac{1}{2}\beta|J|\,\cosh\frac{1}{4}\beta|J| \tag{39}\] \[\quad+2\sqrt{\left(\cosh\frac{1}{2}\beta|J|\,\cosh\frac{1}{4} \beta|J|\right)^{2}-\sinh^{2}\frac{1}{2}\beta|J|},\] \[v_{2} = 2\cosh\frac{1}{2}\beta|J|\,\cosh\frac{1}{4}\beta|J|\] \[\quad-2\sqrt{\left(\cosh\frac{1}{2}\beta|J|\,\cosh\frac{1}{4} \beta|J|\right)^{2}-\sinh^{2}\frac{1}{2}\beta|J|},\] \[v_{3} = 2e^{\beta|J|/4}\sinh\frac{1}{2}\beta|J|,v_{4}=2e^{-\beta|J|/4} \sinh\frac{1}{2}\beta|J|. \tag{40}\] I find that \(v_{1}\) is the largest eigenvalue for any value of \(\beta J\). The free energy of the system \(F_{0}=-\frac{1}{2\beta}\ln v_{1}\), hereby, correctly approaches the ground state energy \(-3|J|/8\), at zero temperature: \(\beta|J|\rightarrow\infty\). Finally, we need the spin-spin correlation function \[C_{z}(d)=4\,\langle\hat{S}_{1,1}^{(z)}\hat{S}_{1,1+d}^{(z)}\rangle \tag{112}\] to compare with the localization length of the hole. I use a similar method to the above to find an analytic solution. First, note that \[Z_{0}C_{z}(d) =\mathrm{tr}\left[4\hat{S}_{1,1}^{(z)}\hat{S}_{1,1+d}^{(z)}e^{- \beta\hat{H}_{J}}\right]\] \[=\mathrm{tr}\left[4\hat{S}_{1,1}^{(z)}\prod_{j=1}^{d-1}4\langle \hat{S}_{1,1+j}^{(z)}\rangle^{2}\hat{S}_{1,1+d}^{(z)}e^{-\beta\hat{H}_{J}}\right]\] \[=\mathrm{tr}\left[\prod_{j=1}^{d}(4\hat{S}_{1,j}^{(z)}\hat{S}_{1, j+1}^{(z)})e^{-\beta\hat{H}_{J}}\right]. \tag{113}\] Here, I use that \((\hat{S}_{1,1+j}^{(z)})^{2}=1/4\) for all the Ising eigenstates. Expressing the above equation in terms of these eigenstates \(|\mathbf{\sigma}_{1},\mathbf{\sigma}_{2}\rangle\), thus, yields \[ZC_{z}(d) =\sum_{\mathbf{\sigma}_{1},\mathbf{\sigma}_{2}}(4\sigma_{1,1}\sigma_{1,2 })V_{\sigma_{1,1},\sigma_{1,2}}^{\sigma_{2,1},\sigma_{2,2}}\times\ldots\] \[\quad\times(4\sigma_{1,d}\sigma_{1,d+1})V_{\sigma_{1,d},\sigma_{1,d+1}}^{\sigma_{2,d},\sigma_{2,d+1}}\times V_{\sigma_{1,d+1},\sigma_{1,d+2}}^{ \sigma_{2,d+1},\sigma_{2,d+2}}\times\] \[\quad\cdots\times V_{\sigma_{1,N},\sigma_{1,1}}^{\sigma_{2,N}, \sigma_{2,1}}=\mathrm{tr}[C^{d}V^{N-d}] \tag{114}\] In the last equality, I let \(C_{\sigma_{1,1},\sigma_{1,2}}^{\sigma_{2,1},\sigma_{2,2}}=(4\sigma_{1,1} \sigma_{1,2})V_{\sigma_{1,1},\sigma_{1,2}}^{\sigma_{2,1},\sigma_{2,2}}\). The correlator matrix \[C=\begin{bmatrix}e^{+3\beta|J|/4}&-1&1&-e^{-\beta|J|/4}\\ -1&e^{+\beta|J|/4}&-e^{-3\beta|J|/4}&1\\ 1&-e^{-3\beta|J|/4}&e^{+\beta|J|/4}&-1\\ -e^{-\beta|J|/4}&1&-1&e^{+3\beta|J|/4}\end{bmatrix}. \tag{115}\] is very similar to the transfer matrix, and simply attains sign flip with respect to \(V\), whenever \(\sigma_{1,1}\) and \(\sigma_{1,2}\) differ in sign. I also transform this matrix to the singlet-triplet basis \[\tilde{C}=U^{\dagger}CU=\begin{bmatrix}e^{+3\beta|J|/4}&-e^{-\beta|J|/4}&0&- \sqrt{2}\\ -e^{-\beta|J|/4}&e^{+3\beta|J|/4}&0&\sqrt{2}\\ 0&0&e^{+\beta|J|/4}-e^{-3\beta|J|/4}&0\\ -\sqrt{2}&\sqrt{2}&0&e^{+\beta|J|/4}+e^{-3\beta|J|/4}\end{bmatrix} \tag{116}\] The eigenvectors of \(\tilde{C}\) are closely tied to those of \(\tilde{V}\). I get \[|c_{i}\rangle =\frac{1}{\sqrt{A_{i}}}\left[-\frac{\frac{v_{i}-(e^{\beta|J|/4}+e ^{-3\beta|J|/4})}{2\sqrt{2}}}{2\sqrt{2}}\right],\;i=1,2\] \[|c_{3}\rangle =\frac{1}{\sqrt{2}}\begin{bmatrix}1\\ 1\\ 0\\ 0\end{bmatrix},|c_{4}\rangle=\begin{bmatrix}0\\ 0\\ 1\\ 0\end{bmatrix}. \tag{117}\] The corresponding eigenvalues are the same as for the transfer matrix: \(c_{i}=v_{i}\) for \(i=1,2,3,4\). I am now ready to calculate the spin-spin correlation function. 
I get \[Z_{0}C_{z}(d) =\mathrm{tr}[C^{d}V^{N-d}]=\mathrm{tr}[\tilde{C}^{d}\tilde{V}^{N- d}]\] \[=\sum_{i}\left\langle v_{i}\right|\tilde{C}^{d}\tilde{V}^{N-d} \left|v_{i}\right\rangle\] \[=\sum_{i,j}\left\langle v_{i}\right|\tilde{C}^{d}\left|c_{j} \right\rangle\left\langle c_{j}\right|\tilde{V}^{N-d}\left|v_{i}\right\rangle\] \[=\sum_{i,j}c_{j}^{d}v_{i}^{N-d}|\left\langle v_{i}|c_{j}\right\rangle |^{2}\] \[\to v_{1}^{N-d}\sum_{j}c_{j}^{d}|\left\langle v_{1}|c_{j} \right\rangle|^{2}. \tag{118}\] In the last expression, I use that for \(d\ll N\), the largest eigenvalue of \(V\), i.e. \(v_{1}\), will completely dominate. Now, we simply need to get the overlaps \(|\left\langle v_{1}|c_{j}\right\rangle|^{2}\). It turns out that only \(\left\langle v_{1}|c_{3}\right\rangle\) and \(\left\langle v_{1}|c_{4}\right\rangle\) are nonzero. These yield \[C_{z}^{(1)} =|\left\langle v_{1}|c_{3}\right\rangle|^{2}=\frac{[v_{1}-(e^{ \beta|J|/4}+e^{-3\beta|J|/4})]^{2}}{[v_{1}-(e^{\beta|J|/4}+e^{-3\beta|J|/4})]^ {2}+4},\] \[C_{z}^{(2)} =|\left\langle v_{1}|c_{4}\right\rangle|^{2}=\frac{4}{[v_{1}-(e^{ \beta|J|/4}+e^{-3\beta|J|/4})]^{2}+4}. \tag{119}\] Since \(Z_{0}=v_{1}^{N}\), we finally get \[C_{z}(d) =v_{1}^{-d}\sum_{j}c_{j}^{d}|\left\langle v_{1}|c_{j}\right\rangle| ^{2}\] \[=\frac{[v_{1}-(e^{\beta|J|/4}+e^{-3\beta|J|/4})]^{2}}{[v_{1}-(e^{ \beta|J|/4}+e^{-3\beta|J|/4})]^{2}+4}\left[\frac{v_{3}}{v_{1}}\right]^{d}\] \[+\frac{4}{[v_{1}-(e^{\beta|J|/4}+e^{-3\beta|J|/4})]^{2}+4}\left[ \frac{v_{4}}{v_{1}}\right]^{d}\] \[=C_{z}^{(1)}e^{-d/\xi_{1}(\beta J)}+C_{z}^{(2)}e^{-d/\xi_{2}(\beta J)} \tag{120}\] giving a sum of two exponentially decaying terms. The correlation lengths are \[\xi_{1}(\beta J) = \left[\ln\left(\frac{v_{1}}{v_{3}}\right)\right]^{-1}\] \[\xi_{2}(\beta J) = \left[\ln\left(\frac{v_{1}}{v_{4}}\right)\right]^{-1}. \tag{101}\] I note that \(\xi_{1}>\xi_{2}\) for any temperature. Inserting \(v_{1},v_{3}\) in the upper line leads to the expression in Eq. (5) of the main text. ## Appendix B Disordered potential at infinite temperature Here, I derive the probability distribution of the magnetic potential \(V(x)\). After \(|x|\) hops, the possible values of the _transpose_ potential are \[V_{\perp}(x)=n\frac{J}{2},\ n\in\{-|x|,-|x|+1,\ldots,|x|\}. \tag{102}\] I want to calculate what the probabilities \(P(V_{\perp}(x)=nJ/2)\) are. To do so, the change in the potential may be described by the transition operator \[T=\sum_{n}\left[\frac{1}{2}\left|n\right>\left<n\right|+\frac{1}{4}\left|n+1 \right>\left<n\right|+\frac{1}{4}\left|n-1\right>\left<n\right|\right]. \tag{103}\] Here, \(\left|n\right>\) denotes the outcome \(nJ/2\). The probability of \(nJ/2\) after \(|x|\) hops is, therefore, \[P\left(V_{\perp}(x)=n\frac{J}{2}\right)=\left<n\right|T_{\perp}^{|x|}\left|0 \right>. \tag{104}\] To calculate this transition element, it is beneficial to use the eigenvectors of \(T\). In particular, we let \[\left|n\right>=\frac{1}{\sqrt{N}}\sum_{k}e^{ikn}\left|k\right>. \tag{105}\] Here, \(k\in(-\pi,\pi]\). The transition operator is diagonal in these vectors \[T_{\perp}=\sum_{k}t_{k}\left|k\right>\left<k\right|, \tag{106}\] with \(t_{k}=[1+\cos(k)]/2=\cos^{2}(k/2)\). Now, \[P\left(V_{\perp}(x)=n\frac{J}{2}\right)=\left<n\right|T_{\perp}^ {|x|}\left|0\right>\] \[=\sum_{k,q}\left<n\right|q\right>\left<q\right|T^{|x|}\left|k \right>\left<k|0\right>\] \[=\sum_{k}\left<n\right|k\right>t_{k}^{|x|}\left<k|0\right>= \frac{1}{N}\sum_{k}e^{-ikn}t_{k}^{|x|}. 
\tag{107}\] We may turn this into an integral, yielding \[P\left(V_{\perp}(x)=n\frac{J}{2}\right)=\frac{1}{N}\sum_{k}e^{- ikn}t_{k}^{|x|}\] \[\rightarrow\int_{-\pi}^{\pi}\frac{dk}{2\pi}e^{-ikn}t_{k}^{|x|}= \int_{0}^{\pi}\frac{dk}{\pi}\cos(kn)t_{k}^{|x|}. \tag{108}\] For any value of \(n\) and \(x\) this allows us to get the probabilities. Furthermore, we also compute the variance of the potential. Explicitly, \[\mathrm{Var}[V_{\perp}(x)] = \sum_{-|x|}^{|x|}P\left(V_{\perp}(x)=n\frac{J}{2}\right)\left(n \frac{J}{2}\right)^{2} \tag{109}\] \[= \frac{J^{2}}{4}\int_{0}^{\pi}\frac{dk}{\pi}\left[\sum_{-|x|}^{|x| }n^{2}\cos(kn)\right]t_{k}^{|x|}.\] The sum may be evaluated using Wolfram Alpha to yield \[\sum_{-|x|}^{|x|}n^{2}\cos(kn)=|x|\cos(k|x|)\left[\cot^{2}(k/2)+x+1\right]\] \[-\frac{1}{2}\cot(k/2)\sin(k|x|)\left[\cot^{2}(k/2)-2x^{2}+1\right]. \tag{110}\] Inserting this above gives the very simple result \[\mathrm{Var}[V_{\perp}(x)]=\frac{J^{2}}{8}|x|. \tag{111}\] Since the frustation potential performs a (classical) random walk, the variance scales linearly in \(|x|\). Let us, equivalently, determine the probability distribution for the intra-leg potential \(V_{\parallel}(x)\). I note that the change in this potential from site to site is equivalent to the transfer matrix \[T_{\parallel}=\frac{1}{4}\left[\left|+1\right>\left<0\right|+\left|-1\right> \left<0\right|\right]+\frac{1}{2}\sum_{n=-1}^{+1}\left|n\right>\left<n\right|. \tag{112}\] To determine the probability distribution in this case, given by \(\left<n\right|T_{\parallel}^{|x|}\left|0\right>\), we note that \[T_{\parallel}^{|x|}\left|0\right>=\frac{1}{2}\left|0\right>+\frac{1}{4}\left[ \left|+1\right>+\left|-1\right>\right], \tag{113}\] for any integer \(x\neq 0\). So \(P(V_{\parallel}(x)=0)=1/2\), and \(P(V_{\parallel}(x)=\pm J/2)=1/4\) for any \(x\neq 0\). The variance is then simply \[\mathrm{Var}[V_{\parallel}(x)]=\sum_{n=-1}^{+1}P\left(V_{\parallel}(x)=n\frac{ J}{2}\right)\left(n\frac{J}{2}\right)^{2}=\frac{J^{2}}{8}. \tag{114}\] Since the trans- and intraleg potentials are uncorrelated, their variances add \[\mathrm{Var}[V(x)]=\frac{J^{2}}{8}\left[\left|x\right|+1\right]. \tag{115}\] This shows very explicitly that \(V_{\perp}(x)\) dominates the distribution for large \(|x|\). ## Appendix C Bias of magnetic potential In this appendix, I briefly analyze the mean value of the magnetic potential, and show that it is biased to positive values. I also characterize its temperature dependency and show that it varies non-monotonically with temperature, vanishing at zero _as well_ as at infinite temperature. The trans-leg potential has a linearly growing mean value with distance \[\langle V_{\mathbf{\sigma},\perp}(x)\rangle = |J||x|\big{[}\langle\sigma_{1,0}\sigma_{2,j}\rangle-\langle\sigma_{ 1,0}\sigma_{2,-1}\rangle\big{]},J<0,\] \[\langle V_{\mathbf{\sigma},\perp}(x)\rangle = J|x|\big{[}|\langle\sigma_{1,0}\sigma_{2,-1}\rangle|+|\langle \sigma_{1,0}\sigma_{2,j}\rangle|\big{]},J>0. \tag{10}\] In the first line (FM couplings), I use that \(\langle\sigma_{1,0}\sigma_{2,j}\rangle>\langle\sigma_{1,0}\sigma_{2,-1} \rangle>0\), since neighboring spins are more correlated than nearest neighbors. In the second line (AFM couplings), I use that the nearest and next-nearest neighbor correlators are negative and positive, respectively. While this rather simple effect gives rise to an overall localizing effect, it does not explain the monotonic behavior of the temperature dependence of the localization observed in Fig. 5. 
For ferromagnetic couplings in particular, this mean value vanishes _both_ at zero and infinite temperatures, and, therefore, shows a non-monotonic behavior.
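As a quick, purely illustrative numerical cross-check (not taken from the Article) of the infinite-temperature disorder statistics of Appendix B, the following sketch evaluates the distribution \(P(V_{\perp}(x)=nJ/2)\) from the integral representation of Eq. (108), using the eigenvalues \(t_{k}=\cos^{2}(k/2)\) of the transition operator in Eq. (106), and verifies that the resulting variance grows as \(J^{2}|x|/8\), Eq. (111).

```python
import numpy as np

def p_vperp(n, x_hops, n_k=20000):
    """P(V_perp = n*J/2) after |x| hops via the k-integral of Eq. (108),
    evaluated with a simple midpoint rule on (0, pi)."""
    k = (np.arange(n_k) + 0.5) * np.pi / n_k
    t_k = np.cos(k / 2.0) ** 2                       # eigenvalues of T, Eq. (106)
    return np.mean(np.cos(k * n) * t_k ** abs(x_hops))

def var_vperp(x_hops, J=1.0):
    """Variance of the trans-leg potential; should match J^2 |x| / 8, Eq. (111)."""
    ns = np.arange(-abs(x_hops), abs(x_hops) + 1)
    probs = np.array([p_vperp(n, x_hops) for n in ns])
    return float(np.sum(probs * (ns * J / 2.0) ** 2))

if __name__ == "__main__":
    for x in (1, 5, 20):
        print(x, var_vperp(x), x / 8.0)              # numerical vs analytic, J = 1
```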
2304.02716
Dynamic Optimization and Optimal Control of Hydrogen Blending Operations in Natural Gas Networks
We present a dynamic model for the optimal control problem (OCP) of hydrogen blending into natural gas pipeline networks subject to inequality constraints. The dynamic model is derived using the first principles partial differential equations (PDEs) for the transport of heterogeneous gas mixtures through long distance pipes. Hydrogen concentration is tracked together with the pressure and mass flow dynamics within the pipelines, as well as mixing and compatibility conditions at nodes, actuation by compressors, and injection of hydrogen or natural gas into the system or withdrawal of the mixture from the network. We implement a lumped parameter approximation to reduce the full PDE model to a differential algebraic equation (DAE) system that can be easily discretized and solved using nonlinear optimization or programming (NLP) solvers. We examine a temporal discretization that is advantageous for time-periodic boundary conditions, parameters, and inequality constraint bound values. The method is applied to solve case studies for a single pipe and a multi-pipe network with time-varying parameters in order to explore how mixing of heterogeneous gases affects pipeline transient optimization.
Saif R. Kazi, Kaarthik Sundar, Anatoly Zlotnik
2023-04-05T19:34:00Z
http://arxiv.org/abs/2304.02716v3
# Dynamic Optimization and Optimal Control of Hydrogen Blending Operations in Natural Gas Networks ###### Abstract We present a dynamic model for the optimal control of hydrogen blending into natural gas pipeline networks subject to inequality constraints. The dynamic model is derived using the first principles partial differential equations (PDEs) for the transport of heterogeneous gas mixtures through long distance pipes. Hydrogen concentration is tracked together with the pressure and mass flow dynamics within the pipelines, as well as mixing and compatibility conditions at nodes, actuation by compressors, and injection of hydrogen or natural gas into the system or withdrawal of the mixture from the network. We implement a lumped parameter approximation to reduce the full PDE model to a differential algebraic equation (DAE) system that can be easily discretized and solved using nonlinear optimization or programming (NLP) solvers. We examine a temporal discretization that is advantageous for time-periodic boundary conditions, parameters, and inequality constraint bound values. The method is applied to solve case studies for a single pipe and a multi-pipe network with time-varying parameters in order to explore how mixing of heterogeneous gases affects pipeline transient optimization. ## I Introduction The transition away from reliance on fossil fuels towards renewable and sustainable energy systems has inspired the development of new technologies for the production, transport, and utilization of energy. At present, a large fraction of primary energy consumption for heating and power is still sustained by the extraction, pipeline transport, and combustion of natural gas. Because hydrogen has high heating value per mass, can be produced using electrolysis powered by renewable electricity, and results in no emissions other than water when burned, efforts are underway to blend hydrogen into existing natural gas pipelines so these systems can also support the energy transition. The physical and chemical differences between hydrogen and natural gas significantly affect pipeline flow transients [1], energy capacity [2], and economics [3]. With hydrogen blending, time-varying injections are made into a pipeline that mainly carries natural gas to consumers with variable consumption [4]. The gas composition must be monitored and optimized to account for energy content and pipeline operating efficiency, and optimal control is needed to handle the amplified transients caused by complex flow physics [5]. Previous academic and industrial research found challenges for hydrogen injection into natural gas pipelines [6], for both pipes [7] and compressor turbomachinery [8]. Recent studies focus on simulating the complex dynamics that arise [9, 10]. The development of optimal control methods for managing the mixing of heterogeneous physical flows on networks defined by nonlinear PDEs has received less attention, particularly in the case of inequality constraints. In this study, we redevelop optimal control methods created for gas pipeline systems [11] to account for injection of hydrogen and natural gas (H\({}_{2}\)-NG) throughout a network with complex time-dependent boundary conditions and inequality constraints. In particular, we free certain boundary conditions for flow and maximize an objective that represents the economic value created by transporting the gas mixture from suppliers to consumers. 
We describe methods for approximating the PDE system with a DAE system and then discretizing the latter to a nonlinear program. The control scheme is applied to manage simple time-varying boundary concentration changes in a single pipe subject to constant inequalities on state variables, resulting in highly non-trivial solutions. The problem is also extended to a test pipeline network with three compressor stations and a loop. The rest of the paper is as follows. In Section II, we describe the physical modeling of natural gas and hydrogen mixtures flowing through a pipe, as well as handling of boundary conditions, approximation by lumped parameters, and time discretization. In Section III the modeling is extended to mixing throughout a pipeline network, including discussion of spatial discretization of the graph, nodal compatibility conditions, and energy content. The optimal control problem is defined in Section IV by adding an objective function that accounts for the value created by the flow allocation and the cost of operations. Section V provides implementation details and displays the results of case studies for a single pipe and a small test network, and we conclude in Section VI. ## II Dynamic Heterogeneous Flow in a Pipe ### _PDE System_ Isothermal Transport of an H\({}_{2}\)-NG mixture in a pipe without transients that create waves or shocks can be ap proximated by the system of partial differential equations \[\frac{\partial\rho_{H_{2}}}{\partial t}+\frac{\partial\phi_{H_{2}}}{ \partial x}=0, \tag{1a}\] \[\frac{\partial\rho_{NG}}{\partial t}+\frac{\partial\phi_{NG}}{ \partial x}=0,\] (1b) \[\frac{\partial\phi}{\partial t}+\frac{\partial P}{\partial x}=- \frac{\lambda}{2D}\frac{\phi|\phi|}{\rho}, \tag{1c}\] where \(\rho_{H_{2}}\) and \(\rho_{NG}\) denote the partial densities, and \(\phi_{H_{2}}\) and \(\phi_{NG}\) denote the partial mass flux, of H\({}_{2}\) and NG respectively [1]. Here \(P\) denotes the total pressure of the mixture and \(\lambda\) and \(D\) are friction factor and pipe diameter parameters. ### _Assumptions_ The total density \(\rho\) and flux \(\phi\) are assumed to be sums of component-wise partial densities \(\rho_{k}\) and fluxes \(\phi_{k}\): \[\rho_{H_{2}}+\rho_{NG}=\rho, \tag{2a}\] \[\phi_{H_{2}}+\phi_{NG}=\phi. \tag{2b}\] This holds true for ideal gas mixtures with additive partial pressures, and enables simplified PDE system modeling. The ideal gas assumption relates the total pressure in terms of partial densities as: \[P =p_{H_{2}}+p_{NG}, \tag{2c}\] \[P =a_{H_{2}}^{2}p_{H_{2}}+a_{NG}^{2}\rho_{NG}, \tag{2d}\] where \(a_{H_{2}}\) and \(a_{NG}\) are speed(s) of sound in pure H\({}_{2}\) and NG respectively. For ideal gases, the speed of sound is given by \(a=\sqrt{\frac{RT}{M}}\) where \(R\), \(T\), and \(M\) are universal gas constant, temperature, and molecular mass. The equation of state is \[P=\left(a_{H_{2}}^{2}\frac{\rho_{H_{2}}}{\rho_{H_{2}}+\rho_{NG}}+a_{NG}^{2} \frac{\rho_{NG}}{\rho_{H_{2}}+\rho_{NG}}\right)\left(\rho_{H_{2}}+\rho_{NG} \right). \tag{2e}\] The ratio of partial density to the total density, \(\rho_{H_{2}}/(\rho_{H_{2}}+\rho_{NG})\), is the mass fraction of H\({}_{2}\) in the gas mixture, which we denote by \(\eta\), and write equation (2e) as \[P=\left(a_{H_{2}}^{2}\eta+a_{NG}^{2}(1-\eta)\right)\rho=a^{2}\rho. \tag{2f}\] We make another assumption regarding the flux derivative \(\partial\phi/\partial t\) to provide for numerical stability. 
In most practical cases, this term is negligible compared to the pressure gradient term \(\partial P/\partial x\). For ideal gas mixtures, the ratio of these two terms is approximately \(\mathcal{O}(u/a)\) where \(u/a\) is the ratio of gas velocity and sound speed in the mixture. Typical gas velocity values of \(u\in[1,10]\) m/s are much smaller than the speed of sound \(a\in[350,1000]\) m/s. ### _Simplified PDE System_ Using the assumptions, we simplify the original PDE system (1) to the system \[\frac{\partial\rho_{H_{2}}}{\partial t}+\frac{\partial}{\partial x }(\eta\phi)=0 \tag{3a}\] \[\frac{\partial\rho_{NG}}{\partial t}+\frac{\partial}{\partial x }((1-\eta)\phi)=0\] (3b) \[\frac{da^{2}\rho}{dx}=-\frac{\lambda}{2D}\frac{\phi|\phi|}{\rho} \tag{3c}\] ### _Lumped Model_ For a short pipe of length \(L\), the system can be integrated over the length variable from \(x=0\) to \(x=L\) to yield: \[L\frac{d\rho_{H_{2}}}{dt}+\eta^{L}\phi^{L}-\eta^{0}\phi^{0}=0 \tag{4a}\] \[L\frac{d\bar{\rho}_{NG}}{dt}+(1-\eta^{L})\phi^{L}-(1-\eta^{0}) \phi^{0}=0\] (4b) \[a^{2}(L)\rho(L)-a^{2}(0)\rho(0)=-\frac{\lambda L}{2D}\frac{\bar{ \phi}|\bar{\phi}|}{\bar{\rho}} \tag{4c}\] Note that the partial densities and the sound speed are evaluated at the nodes whereas the flux and concentration variables are defined at the end points of the pipe edge. The right side of equation (1c) is approximated using the average values for the variables \(\bar{\rho}=(\rho(L)+\rho(0))/2\) and \(\bar{\phi}=(\phi^{L}+\phi^{0})/2\) denote the average values for density and flux respectively. ### _Time Discretization_ A simple first order forward finite difference formula is used for time discretization on the interval \([0,T^{f}]\) using \(N\) points \(T^{S}=\{t_{n}\}_{n=1}^{N}\) defined by \(t_{n}=\Delta T(n-1)\) for \(n=1,2,\ldots,N\). The resulting approximation is \[\frac{d\rho_{H_{2}}}{dt}\bigg{|}_{t_{n}} \approx\frac{\left(\rho_{H_{2}}^{L}\Big{|}_{t_{n+1}}+\rho_{H_{2}}^ {0}\Big{|}_{t_{n+1}}\right)-\left(\rho_{H_{2}}^{L}\Big{|}_{t_{n}}+\rho_{H_{2}} ^{0}\Big{|}_{t_{n}}\right)}{2\Delta T} \tag{5a}\] \[\frac{d\bar{\rho}_{NG}}{dt}\bigg{|}_{t_{n}} \approx\frac{\left(\rho_{NG}^{L}\big{|}_{t_{n+1}}+\rho_{NG}^{0} \big{|}_{t_{n+1}}\right)-\left(\rho_{NG}^{L}\big{|}_{t_{n}}+\rho_{NG}^{0} \big{|}_{t_{n}}\right)}{2\Delta T} \tag{5b}\] ### _Boundary Conditions_ Instead of specifying initial boundary conditions for the differential equation system, we impose cyclic or periodic boundary conditions on the state variables, of form \[x(T^{f})-x(0)=0, \tag{6}\] where \(x\) denotes any of the partial densities \(\rho_{H_{2}}\) or \(\rho_{NG}\) and \(T^{f}\) is the final time point. Rather that explicitly enforcing the boundary condition equation (6), we rewrite the time derivative for the final time step as \[\frac{d\bar{x}}{dt}\bigg{|}_{t_{N}}\approx\frac{x(0)-x(t_{N})}{\Delta T} \tag{7}\] This formulation implicitly includes the cyclic condition and reduces the number of variables and constraints, which improves the overall numerical conditioning of the resulting nonlinear program. ### _Non-dimensional system of equations_ As is common practice [12], the variables and equations in the model are non-dimensionalized for numerical purposes. Standard or nominal values for length, pressure, and velocity are chosen and denoted by \(l_{0},p_{0}\) and \(\eta_{0}=1\), respectively. 
Nominal values are also derived for wave speed, \(a_{0}=\sqrt{a_{H_{2}}\cdot a_{NG}}\approx 672\) m s\({}^{-1}\), and flow speed, \(v_{0}=a_{0}/M\), where \(M\) denotes the mach value for gas velocity. For the purpose of this study, we use a nominal value of \(M=1/300\), which results in \(v_{0}\approx 2.24\) m s\({}^{-1}\). Nominal density is chosen as \(\rho_{0}=p_{0}/a_{0}^{2}\) and nominal flow and flux are \(f_{0}=\rho_{0}A_{0}v_{0}\) and \(\phi_{0}=\rho_{0}v_{0}\), where nominal pipe cross section area is \(A_{0}=1\). Setting \(\hat{\rho}=\rho/\rho_{0},\hat{a}=a/a_{0},\hat{L}=L/l_{0},\hat{D}=D/l_{0}\) and \(\hat{\phi}=\phi/\phi_{0}\), we find that equation (4) reduces to the system \[L\frac{d\hat{\rho}_{H_{2}}}{dt}+\frac{\eta_{0}\phi_{0}}{\rho_{0}l _{0}}\left(\hat{\eta}^{L}\hat{\phi}^{L}-\hat{\eta}^{0}\hat{\phi}^{0}\right)=0, \tag{8a}\] \[\hat{L}\frac{d\hat{\rho}_{NG}}{dt}+\frac{\eta_{0}\phi_{0}}{\rho_{ 0}l_{0}}\left((1-\hat{\eta}^{L})\hat{\phi}^{L}-(1-\hat{\eta}^{0})\hat{\phi}^{0 }\right)=0,\] (8b) \[\hat{a}^{2}(\hat{L})\hat{\rho}(\hat{L})-\hat{a}^{2}(0)\hat{\rho} (0)=-\left(\frac{\phi_{0}^{2}}{a_{0}^{2}\rho_{0}^{2}}\right)\left(\frac{ \lambda\hat{L}}{2\hat{D}}\right)\frac{\hat{\phi}|\hat{\phi}|}{\hat{\rho}}. \tag{8c}\] By defining \[\kappa\triangleq\frac{\eta_{0}\phi_{0}}{\rho_{0}l_{0}}=\frac{v_{0}}{l_{0}} \text{ and }\beta\triangleq\left(\frac{\phi_{0}^{2}}{a_{0}^{2}\rho_{0}^{2}} \cdot\frac{\lambda\hat{L}}{2\hat{D}}\right)=\frac{1}{M^{2}}\left(\frac{\lambda \hat{L}}{2\hat{D}}\right), \tag{9}\] the system is rewritten in terms of the flow \(\hat{f}=\hat{\phi}\hat{A}\) as \[\frac{d\hat{\rho}_{H_{2}}^{\hat{\rho}_{H_{2}}}}{dt}+\frac{\kappa}{ \hat{L}\hat{A}}\left(\hat{\eta}^{L}\hat{f}^{L}-\hat{\eta}^{0}\hat{f}^{0}\right) =0, \tag{10a}\] \[\frac{d\hat{\rho}_{NG}^{\hat{\rho}_{NG}}}{dt}+\frac{\kappa}{\hat{ L}\hat{A}}\left((1-\hat{\eta}^{L})\hat{f}^{L}-(1-\hat{\eta}^{0})\hat{f}^{0} \right)=0,\] (10b) \[\hat{a}^{2}(\hat{L})\hat{\rho}(\hat{L})-\hat{a}^{2}(0)\hat{\rho} (0)=-\beta\frac{\hat{\phi}|\hat{\phi}|}{\hat{\rho}}. \tag{10c}\] _In subsequent discussions, for ease of presentation, we omit the hat symbol that designates non-dimensional quantities with the understanding that all quantities are dimensionless._ ## III Dynamic Heterogeneous Flow in a Network The network model consists of pipes and compressors that connect nodes that are subject to injection and withdrawal flows, or non-dispatchable nodes. The following graph-based notations are used to formulate the network-wide dynamic model for optimal transport of mixed gas. 
**Sets:** * graph of a pipeline network * sets of nodes, pipes, compressors, respectively * sets of slack, injection and withdrawal nodes * the pipe connecting nodes \(i\) and \(j\) * the compressor between nodes \(i\) and \(j\) **Variables:** _Node:_ * concentration of H\({}_{2}\) (after mixing) at node \(i\) * speed of sound in gas at node \(i\) * partial density of H\({}_{2}\) or NG at node \(i\) * supply and withdrawal flows at injection \(i\in\mathbb{I}\) and withdrawal \(i\in\mathbb{W}\) nodes, respectively * energy delivered to withdrawal node \(i\in\mathbb{W}\) _Edge:_ * flow through pipe \((i,j)\) at end points * concentration of H\({}_{2}\) in pipe \((i,j)\) at end points * flow through a compressor \((i,j)\) * compressor ratio in compressor \((i,j)\) **Parameters:** * friction factor of pipe \((i,j)\) * cross-sectional area of pipe \((i,j)\) * length and diameter of pipe \((i,j)\) * effective resistance of pipe \((i,j)\) * concentration of injection at node \(i\in\mathbb{I}\cup\mathbb{S}\) ### _Spatial Discretization_ Because the lumped model formulation equation (4) is valid for short pipes (\(\lesssim\)10 km) and pipelines in gas networks usually span 10 to 100 km, the pipelines in the network are discretized uniformly into short pipes of fixed length (\(\Delta L\)) as shown in Fig. 1.
Fig. 1: Single Discretized Pipe
### _Node Balance_ For \(j\in\mathbb{N},t\in T^{s}\), we enforce flow balance for hydrogen and natural gas according to \[\sum_{k\in\partial_{j}^{-}}\gamma_{ij}^{0}f_{jk}-\sum_{i\in\partial_{j}^{+}}\gamma_{ij}^{L}f_{ij}=\sum_{m\in\partial_{j}^{g}}\eta_{j}^{s}q_{j}^{s}-\sum_{m\in\partial_{j}^{g}}\eta_{j}q_{j}^{w}, \tag{11a}\] \[\sum_{k\in\partial_{j}^{-}}(1-\gamma_{ij}^{0})f_{jk}-\sum_{i\in\partial_{j}^{+}}(1-\gamma_{ij}^{L})f_{ij}\] \[=\sum_{m\in\partial_{j}^{g}}(1-\eta_{j}^{s})q_{j}^{s}-\sum_{m\in\partial_{j}^{g}}(1-\eta_{j})q_{j}^{w}. \tag{11b}\] Here \(\partial_{j}^{+}\) and \(\partial_{j}^{-}\) are the sets of nodes connected to node \(j\) by outgoing and incoming edges, respectively. Adding together equations (11a) and (11b) yields an equation for the total nodal mass balance. ### _Energy Demand_ We define a variable (\(g^{E}\)) for the energy content of the withdrawal gas flow. The energy value of the withdrawal flow is determined using the expression \[g_{i}^{E}=(\eta_{i}R_{H_{2}}+(1-\eta_{i})R_{NG})q_{i}^{w},\quad\forall i\in\mathbb{W}, \tag{12}\] where \(R_{H_{2}}\) and \(R_{NG}\) are calorific values for H\({}_{2}\) and NG, respectively. Instead of specifying a fixed value or profile for withdrawal demand flow (\(q^{w}\)), we either specify the withdrawal energy demand (\(\bar{g}^{E}\)) or impose an upper bound on the non-negative variable as maximum energy demand by \[0\leq g_{i}^{E}\leq g_{i}^{E,max}\quad\forall i\in\mathbb{W} \tag{13}\] ### _Additional Equations_ We impose additional compatibility equations for all \(t\in T^{s}\) that enforce compatibility conditions, determine the local sound speed, and delimit pressure within acceptable levels. These are of the following form.
**Node Concentration:** For \(i\in\mathbb{N}\) \[\eta_{i}=\frac{\rho_{H_{2},i}}{(\rho_{H_{2},i}+\rho_{NG,i})}, \tag{14a}\] **Node-Edge Continuity:** For \(ij\in\mathbb{P}\cup\mathbb{C}\) \[\gamma_{ij}^{0}-\eta_{i}=0, \tag{14b}\] **Node Sound Speed:** For \(i\in\mathbb{N}\) \[a_{i}^{2}=a_{H_{2}}^{2}\eta_{i}+a_{NG}^{2}(1-\eta_{i}), \tag{14c}\] **Slack Pressure:** For \(i\in\mathbb{S}\) \[a_{i}^{2}(\rho_{H_{2},i}+\rho_{NG,i})=p_{i}^{s}(\text{fixed}), \tag{14d}\] **Pressure Bounds:** For \(i\in\mathbb{N}\) \[p_{i}^{min}\leq a_{i}^{2}(\rho_{H_{2},i}+\rho_{NG,i})\leq p_{i}^{max}. \tag{14e}\] ## IV Optimal Control Problem ### _Objective Cost Function_ The objective function for the optimal control problem (OCP) consists of the economic value generated by the pipeline operator through sales of energy and purchases of gas constituents, minus the operating cost of compressors. The objective function is of the form \[R=\sum_{t\in T_{s}}\left(\sum_{i\in I}(\eta_{i}^{s}c_{H_{2},i}+(1-\eta_{i}^{s})c_{NG,i})q_{i}^{s,t}-\sum_{i\in W}C^{E}g_{i,t}^{E}\right), \tag{15a}\] where \(c_{H_{2}}\) and \(c_{NG}\) denote supplier offer prices for H\({}_{2}\) and NG, and \(C^{E}\) denotes the price that consumers bid for energy delivered through the mixed gas in $/MW. The operating cost for each compressor is determined using an adiabatic compression work type expression [13], of the form \[W_{c}=\left(\frac{286.76\cdot\kappa_{ij}\cdot T}{G_{ij}(\kappa_{ij}-1)}\right)\left(\alpha_{ij}^{m}-1\right)\left|f_{ij}^{c}\right|. \tag{15b}\] We make an approximation that the terms in the left bracket, comprised of physical parameters such as specific heat capacity ratio and specific gravity, remain constant, and denote this value by \(K\). The compressor cost is obtained by multiplying the total compressor work by a constant electricity price of \(\zeta=0.07\) $/kWh, resulting in \[R_{c}=K\cdot\zeta\cdot\sum_{t_{k}}\sum_{ij\in C}f_{ij}^{c,t}\left(\sqrt{\alpha_{ij}^{t}}-1\right). \tag{15c}\] The total objective function is a weighted sum of the economic cost and the operating cost with a weight \(\xi\), of the form \[\text{min}\ \ \mathbb{T}^{\text{cost}}=\xi R+(1-\xi)R_{c}. \tag{15d}\] The scaling parameter can be used to prioritize maximizing gas delivery or minimizing the operating cost. For \(\xi=0.5\), the objective reduces to an equally weighted sum of the two terms. ### _Problem Formulation_ The complete optimization problem is formulated as: \(\texttt{OPT}\triangleq\min\ \mathbb{T}^{\text{cost}}\) in equation (15d), subject to equation (10) (transport equations), equation (11) (H\({}_{2}\) and NG nodal balance), equations (12)–(13) (energy calculation), and equation (14) (network equations). ## V Results The dynamic model is used to solve examples for a single pipe and an 8-node case network subject to variable boundary conditions and/or constraint bounds. ### _Implementation Details_ Problem OPT is solved for both test cases using an NLP solver, KNITRO [14], in the Julia-based modeling language JuMP [15]. The problems are solved for a 24-hour time period, i.e., \(T_{f}=24\) hours, in a sequential way. At first, the steady-state problem is solved by setting \(\Delta T=T_{f}\), which reduces the time discretization to a single point.
Thereafter, the steady-state solution is used as the initial guess for the full time discretization (\(\Delta T=0.5\) or \(1.0\) hour) problem. ### _Single pipe_ Consider a single pipe of length \(L=30\) km, diameter \(D=0.9144m\), and friction factor \(f=0.01\) connected with a compressor near the injection node as shown in Fig. 1. We choose a discretization length of \(\Delta L=10\) km to formulate the lumped element reduced model. The injection concentration \(\eta^{s}\) is specified as a time varying sinusoidal function \[\eta^{s}=\eta_{0}^{s}+\delta sin(2\pi\nu/T), \tag{16}\] with \(\eta_{0}^{s}=0.1\), \(\delta=0.05\), and \(\nu=2\). The function in equation (16) is shown as the node 1 concentration in Fig. 2a. Pressure at the slack injection node 1 is fixed at \(p^{s}=4.337\) MPa. The energy limit at the withdrawal is set at \(g^{E,max}=8000\) MJ/s, and we bound the flow through the compressor at \(f^{c}\leq 150\) kg/s. The pressure limits at the nodes are set at \(p^{min}=3.0\) MPa and \(p^{max}=6.0\) MPa. The results from the optimization are plotted in Fig. 2 with profiles for density, concentration, compressor ratio, and nodal pressure. We first observe that the profiles are periodic with two cycles (\(\nu=2.0\)) in the 24 hour time period, and therefore discuss the trends in the first cycle (\(T=0\) to 12 hours) only. In Fig. 2a, the H\({}_{2}\) concentrations at the three main nodes are plotted with time. The concentrations at node 1 (green) and 2 overlap because they are only connected by a compressor, whereas the concentration at the withdrawal node 3 (orange) is shifted to the right or lags behind because of the advective transport effect over the long pipe. The nodal density \(\rho\) plotted in Fig. 2b is the sum of partial densities of H\({}_{2}\) and NG. The density at node 1 (green) follows a smooth sinusoid similar to the injection node concentration because the pressure at node 1 (slack) is fixed and the two quantities are related by equation (2f). Nodal density depends on both the concentration and pressure at the node, which we observe in Figures (a)a, (b)b, and (d)d where there is a sharp increase in compressor pressure outlet (node 2) around \(t=3.5\) hours, which increases until it reaches the upper bound \(p=6.0\) MPa at \(t=5.5\) hours and then begins to decrease. In contrast to that, the densities at compressor outlet and withdrawal node vary slowly, and reached their maxima at \(t=7.5\) hours (see Fig. (b)b). Observe also that the density at the withdrawal node (node 3) has a smaller local maximum near \(t=9.5\) hours that results because the minimum in nodal concentration at the withdrawal node occurs at that time. The pressure drop across the pipe seen in Fig. (d)d appears higher when the node concentrations are high, such as at \(t=2.5\) hours or when injection and withdrawal flows are high, as at \(t=10.5\) hours. Fig. 3 shows the variation in injection and withdrawal flow along with the energy delivered at the withdrawal node. Because the optimization solver seeks to maximize the energy delivered at the withdrawal node (see equation (d)d), the amount of energy delivered is equal to its upper bound limit at 8000 MJ/s, except for a short duration between \(T=9.0\) to 10.5 hours. This is because the H\({}_{2}\) injection concentration in Fig. (a)a is so low that the energy content of the gas cannot meet the desired value of 8000 MJ/s without violating the compressor flow bound of 150 kg/s (see Fig. (c)c). 
Notice that the injection flow and energy flow both increase sharply at around \(T=2.5\) hours, but the energy injection begins to decrease subsequently because of the decrease in injection concentration. This is consistent with the pressure and node concentration profiles in Fig. (d)d and Fig. (a)a, respectively. Similarly, the withdrawal flow in Fig. (b)b is lower when nodal H\({}_{2}\) concentration is high and increases when the nodal concentrations begin to decrease at \(T=3.5\) hours, until it is unable to match the maximum energy demand at \(T=9.0\) hours, when it sharply decreases to minimize pressure drop, resulting in the compressor activity shown in Fig. (c)c. The withdrawal flow subsequently begins to slowly increase again to maximize the energy supply when the withdrawal node concentration begins increasing at \(T=10.0\) hours. Another important observation is that the maximum energy demand at the withdrawal node is met for more of the time than the energy flowing into node 1 exceeds the desired delivery rate at node 3.
Fig. 2: Results for single pipe optimization - I
Fig. 3: Results for single pipe optimization - II
### _8-node network_ Next, we demonstrate our model using an 8-node gas network example that features a loop, as shown in Figure 4. The network has one slack node (J1), two injections (J1 and J7), and two withdrawals (J3 and J5) along with three compressors. The pressure and injection concentration at slack node J1 are fixed at \(p=5.0\) MPa and \(\eta^{s}=0.8\). Meanwhile, the injection concentration at node J7 is varied as a sinusoid of the form in equation (16), similar to the single-pipe case, with \(\eta_{0}^{s}=0.05\), \(\delta=0.01\), and \(\nu=1\). We use a coarser time discretization of \(\Delta T=1.0\) hour in this example, which results in 7536 variables and 7392 equality constraints. The compressor flow bounds are set at \(f^{c,1}\leq 275\) kg/s, \(f^{c,2}\leq 260\) kg/s and \(f^{c,3}\leq 140\) kg/s. The pressure bounds and maximum energy demand are the same as in the single pipe example. The optimized variables in the model are plotted in Fig. 5 and Fig. 6. Unlike in the single pipe case, the profiles are less periodic and more non-uniform. A plausible explanation is that the nonlinearity of the nodal balance constraints in equation (11) and the presence of two sources with different H\({}_{2}\) injection concentrations cause complex interactions with delays. Nonetheless, we do observe that there is no gas flow through compressor C2, and that compressor C3 is inactive in the optimal solution since the pressure at the withdrawal node J5 is at its lower bound. We also observe that the constraint in the network that is always active is the upper bound on the flow through compressor C3 (\(f^{c,3}=140\) kg/s). The pressures at J6 and J2 follow a similar trend where the compressor C1 is used to increase the pressure of injected gas from J1 (see Fig. (d)d and Fig. (c)c). The H\({}_{2}\) concentration at node J4 (cyan) is determined by the nodal mixing of gas with fixed concentration from node J2 (yellow) and gas with varying concentration from node J3 (pink) as seen in Fig. (a)a. In Fig. (b)b, the withdrawal flow at node J3 is sinusoidal, which balances the sinusoidal node concentration in Fig. (a)a and results in constant withdrawal energy at 8000 MJ/s.
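To give a sense of how the case studies above can be reproduced, the following is a minimal sketch (my own illustration in Python, not the JuMP/KNITRO implementation described in the Implementation Details) of the sinusoidal injection profile of equation (16) and of the time-discretized, non-dimensional lumped equations (10a)–(10c) with the periodic wrap of equation (7), written as residual functions that a generic NLP solver could enforce as equality constraints for every pipe segment. Segment layout, nodal balance, bounds handling, and the sound-speed ratio `r_a` are simplifications or placeholders.

```python
import numpy as np

def eta_injection(t_hours, eta0=0.1, delta=0.05, nu=2, period=24.0):
    """Sinusoidal H2 injection concentration of equation (16)."""
    return eta0 + delta * np.sin(2 * np.pi * nu * t_hours / period)

def segment_residuals(rho_h2, rho_ng, f_in, f_out, eta_in, eta_out,
                      kappa, beta, L, A, dT, r_a=2.63):
    """Residuals of the non-dimensional lumped equations (10a)-(10c) for one
    short pipe segment at N periodic time samples (column 0 = inlet x=0,
    column 1 = outlet x=L).  r_a is an assumed ratio a_H2/a_NG used to build
    the non-dimensional sound speed and is only a placeholder value.
    """
    n = len(f_in)
    nxt = (np.arange(n) + 1) % n                 # periodic wrap, equation (7)

    rho_bar_h2 = rho_h2.mean(axis=1)             # (rho^0 + rho^L) / 2
    rho_bar_ng = rho_ng.mean(axis=1)
    d_h2 = (rho_bar_h2[nxt] - rho_bar_h2) / dT
    d_ng = (rho_bar_ng[nxt] - rho_bar_ng) / dT
    r1 = d_h2 + kappa / (L * A) * (eta_out * f_out - eta_in * f_in)              # (10a)
    r2 = d_ng + kappa / (L * A) * ((1 - eta_out) * f_out - (1 - eta_in) * f_in)  # (10b)

    rho = rho_h2 + rho_ng
    eta_node = rho_h2 / rho
    a2_hat = eta_node * r_a + (1 - eta_node) / r_a   # non-dimensional a^2(eta)
    phi_bar = 0.5 * (f_in + f_out) / A
    rho_bar = rho.mean(axis=1)
    r3 = (a2_hat[:, 1] * rho[:, 1] - a2_hat[:, 0] * rho[:, 0]
          + beta * phi_bar * np.abs(phi_bar) / rho_bar)                          # (10c)
    return r1, r2, r3
```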
Fig. 4: 8-node gas network example
Fig. 5: Results for 8-node network - I
## VI Conclusion and Future Work We examined the challenging problem of model-predictive optimal control of hydrogen blending into natural gas pipeline networks subject to inequality constraints. Our numerical method employs a lumped parameter approximation to reduce the associated partial differential equations to a differential algebraic equation system that can be easily discretized and solved using nonlinear optimization solvers. We employ a circular time-discretization that is advantageous for time-periodic boundary conditions, parameters, and inequality constraint bound values. The key advance in this study is the optimization of time-varying withdrawals from the network, where the running objective function is formulated in terms of energy delivery to consumers and mass injection by suppliers of natural gas or hydrogen. This leads to a conceptually and computationally well-posed formulation, which enables numerical resolution of the complex behavior that arises from nonlinear interactions of inhomogeneous gas flows mixing throughout a pipeline network.
2306.13062
Named entity recognition in resumes
Named entity recognition (NER) is used to extract information such as names and dates from various documents and texts. It is important to extract education and work experience information from resumes in order to filter them. Considering the fact that all information in a resume has to be entered into the company's system manually, automating this process will save companies time. In this study, a deep learning-based semi-automatic named entity recognition system has been implemented with a focus on resumes in the field of IT. Firstly, resumes of employees from five different IT-related fields have been annotated. Six transformer-based pre-trained models have been adapted to the named entity recognition problem using the annotated data. These models have been selected from among popular models in the natural language processing field. The obtained system can recognize eight different entity types, which are city, date, degree, diploma major, job title, language, country and skill. The models used in the experiments are compared using micro, macro and weighted F1 scores, and the performance of the methods was evaluated. Taking these scores into account, for the test set the best micro and weighted F1 scores are obtained by RoBERTa and the best macro F1 score is obtained by the Electra model.
Ege Kesim, Aysu Deliahmetoglu
2023-06-22T17:30:37Z
http://arxiv.org/abs/2306.13062v1
# Özgeçmişlerde varlık isimlerinin tanınması (Named entity recognition in resumes) ###### Abstract Named entity recognition (NER) is used to extract information such as names and dates from various documents and texts. It is important to extract education and work experience information from resumes in order to filter them. Considering the fact that all information in a resume has to be entered into the company's system manually, automating this process will save companies time. In this study, a deep learning-based semi-automatic named entity recognition system has been implemented with a focus on resumes in the field of IT. Firstly, resumes of employees from five different IT-related fields have been annotated. Six transformer-based pre-trained models have been adapted to the named entity recognition problem using the annotated data. These models have been selected from among popular models in the natural language processing field. The obtained system can recognize eight different entity types, which are city, date, degree, diploma major, job title, language, country and skill. The models used in the experiments are compared using micro, macro and weighted F1 scores, and the performance of the methods was evaluated. Taking these scores into account, for the test set the best micro and weighted F1 scores are obtained by RoBERTa and the best macro F1 score is obtained by the Electra model. _Özet_ -- Named entity recognition (NER) is used to extract various pieces of information, such as names and dates, from documents and texts. Extracting information about people's education levels and previous work experience from resumes is important so that the resumes can be filtered. Considering that the information in every resume has to be entered into companies' systems manually, doing this automatically saves companies time. In this study, a deep learning-based semi-automatic named entity recognition system has been implemented specifically for resumes in the IT field. First, the resumes of candidates applying in five different areas of the IT sector were annotated. Six different pre-trained transformer-based models were adapted to named entity recognition using the annotated data. These models were selected from among popular models used in different natural language processing problems. The resulting system can recognize eight different entity types: city, date, degree, diploma major, job title, language, country and skill. The models used in the experiments were compared using micro, macro and weighted F1 scores, and the performance of the methods was evaluated. Taking these scores into account, for the test set the best micro and weighted F1 scores were obtained by RoBERTa and the best macro F1 score by the Electra model. _Anahtar Kelimeler_ -- Named entity recognition, natural language processing, human resources, BERT, transformer-based language models _Keywords -- Named entity recognition, natural language processing, human resources, BERT, transformer based language models_ ## I Introduction Named entity recognition is the problem of finding the names of various entities, such as addresses, person names and company names, that appear in texts. Named entity recognition is frequently used in natural language processing, especially for finding various words that occur in documents. BiLSTM-CRF models are frequently used for the named entity recognition problem. The BiLSTM-CRF model has given successful results for named entity recognition [1,2].
There are also studies in which the BERT model is frozen and used only as a word representation model [2], or is fine-tuned and thereby adapted to named entity recognition [3,4]. Named entity recognition is also used in the human resources sector. Because companies receive a large number of job applications, processing and storing the candidates' resumes are important problems. After the important information in resumes has been extracted using named entity recognition, filtering can be performed according to this information, or only this required information, rather than the whole resume, can be transferred to the systems and stored. Although named entity recognition is a popular problem, named entity recognition studies in this area are limited because open-source resume datasets are scarce. Most studies collect resumes over the internet and use them. In their work, Gaur et al. [5] developed a CNN-BiLSTM based model that can recognize 3 different entity types using the education section of resumes. Zue and Wang [6] developed a CNN-BiLSTM-CRF based model that can recognize 19 different entity types using 5000 resumes and showed that this model is better than using only BiLSTM-CRF. Apart from these, since a dataset consisting of Chinese resumes exists in the literature, there are also studies carried out with Chinese resumes [7,8]. In this study, named entity recognition has been performed using English resumes. Samples selected from the resume data kept in the organization's HR databases have been anonymized and generalized, and a unique dataset in the IT field has been created. Working in five different fields of the IT sector
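As a concrete and purely illustrative sketch of the model-adaptation step described above (not the authors' code; the model name, BIO label scheme, dataset objects `train_ds`/`eval_ds`, and hyperparameters are placeholders), a pre-trained transformer can be fine-tuned for token-level NER over the eight entity types listed in the abstract using the Hugging Face `transformers` library; evaluation with micro, macro, and weighted F1 scores would be added on top of this skeleton.

```python
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          TrainingArguments, Trainer)

# Hypothetical BIO label scheme over the eight entity types from the paper.
ENTITIES = ["CITY", "DATE", "DEGREE", "DIPLOMA_MAJOR", "JOB_TITLE",
            "LANGUAGE", "COUNTRY", "SKILL"]
LABELS = ["O"] + [f"{p}-{e}" for e in ENTITIES for p in ("B", "I")]

MODEL_NAME = "roberta-base"  # placeholder; the paper compares six such models
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS))

def tokenize_and_align(example):
    """Align word-level BIO label ids with sub-word tokens; special tokens get -100,
    and every sub-token simply inherits its word's label."""
    enc = tokenizer(example["tokens"], is_split_into_words=True, truncation=True)
    enc["labels"] = [-100 if wid is None else example["ner_tags"][wid]
                     for wid in enc.word_ids()]
    return enc

# `train_ds` / `eval_ds` stand in for the annotated resume dataset (not public):
# trainer = Trainer(
#     model=model,
#     args=TrainingArguments(output_dir="ner-resume", num_train_epochs=3),
#     train_dataset=train_ds.map(tokenize_and_align),
#     eval_dataset=eval_ds.map(tokenize_and_align))
# trainer.train()
```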
2302.01191
Noncommutative $C^*$-algebra Net: Learning Neural Networks with Powerful Product Structure in $C^*$-algebra
We propose a new generalization of neural network parameter spaces with noncommutative $C^*$-algebra, which possesses a rich noncommutative structure of products. We show that this noncommutative structure induces powerful effects in learning neural networks. Our framework has a wide range of applications, such as learning multiple related neural networks simultaneously with interactions and learning equivariant features with respect to group actions. Numerical experiments illustrate the validity of our framework and its potential power.
Ryuichiro Hataya, Yuka Hashimoto
2023-01-26T14:35:37Z
http://arxiv.org/abs/2302.01191v2
Noncommutative \(C^{*}\)-algebra Net: Learning Neural Networks with Powerful Product Structure in \(C^{*}\)-algebra ###### Abstract We propose a new generalization of neural networks with noncommutative \(C^{*}\)-algebra. An important feature of \(C^{*}\)-algebras is their noncommutative structure of products, but the existing \(C^{*}\)-algebra net frameworks have only considered commutative \(C^{*}\)-algebras. We show that this noncommutative structure of \(C^{*}\)-algebras induces powerful effects in learning neural networks. Our framework has a wide range of applications, such as learning multiple related neural networks simultaneously with interactions and learning invariant features with respect to group actions. We also show the validity of our framework numerically, which illustrates its potential power. ## 1 Introduction Generalization of the parameter space of neural networks from real numbers to others has attracted researchers for decades. Although using real-valued parameters is standard and straightforward for real-valued data, it may be more suitable to adopt parameters of complex numbers (Hirose, 1992; Nishikawa et al., 2005; Amin et al., 2008; Yadav et al., 2005; Trabelsi et al., 2018; Lee et al., 2022) or quaternion numbers Nitta (1995); Arena et al. (1997); Zhu et al. (2018); Gaudet and Maida (2018) for data in signal processing, computer vision, and robotics domains, among others. Clifford-algebra, the generalization of these numbers, allows more flexible geometrical processing of data, and thus is applied to neural networks Pearson and Bisset (1994); Buchholz (2005); Buchholz and Sommer (2008) to handle rich geometric relationships in data (Rivera-Rovelo et al., 2010; Zang et al., 2022; Brandstetter et al., 2022). Different from these approaches focusing on the geometric perspective of parameter values, another direction of generalization is to use function-valued parameters Rossi and Conan-Guez (2005); Thind et al. (2022), which broadens the applications of neural networks to functional data. Hashimoto et al. (2022) proposed generalizing neural network parameters to \(C^{*}\)-algebra, which is called \(C^{*}\)-algebra net. They showed that multiple related neural networks can be continuously combined into a single \(C^{*}\)-algebra net. For example, networks for the same task with different training datasets or different initial parameters can be combined continuously, which enables the construction of infinitely many networks and efficient learning using shared information among them. Such interaction among networks is also applicable to learning from related tasks, such as ensemble learning (Dong et al., 2020; Ganaie et al., 2022) and multitask learning (Zhang and Yang, 2022). However, because the product structure in the \(C^{*}\)-algebra that Hashimoto et al. (2021) focused on is commutative, they needed specially designed loss functions to induce the interaction. In this paper, we propose a new generalization of neural networks with noncommutative \(C^{*}\)-algebra, which also generalizes the framework of Hashimoto et al. (2022). \(C^{*}\)-algebra is a generalization of the space of complex numbers Murphy (1990); Hashimoto et al. (2021). Typical examples of \(C^{*}\)-algebras include the space of diagonal matrices and the space of (not necessarily diagonal) matrices. An important feature of \(C^{*}\)-algebra is the product structure. 
Analogous to the case of complex numbers, for any two elements \(a\) and \(b\) in a \(C^{*}\)-algebra, we can calculate the product \(a\cdot b\). However, unlike the case of complex numbers, the product is not always commutative, i.e., \(a\cdot b=b\cdot a\) is not always satisfied. For example, whereas the product of two diagonal matrices commutes, the product of two nondiagonal matrices does not commute. In the case of diagonal matrices, the product is just described by the product of each diagonal element, and there are no interactions with the other elements. On the other hand, the product of two nondiagonal matrices is described by the sum of the products between different elements in the matrices, which induces interactions with the other elements. For neural networks, the interactions induced by the noncommutative \(C^{*}\)-algebras correspond to interactions among multiple neural networks. Because the interactions are encoded in the structure of the network, we do not need to design loss functions to make the networks interact. Instead, the interactions required for them are implicitly and automatically learned through the noncommutative product structure in \(C^{*}\)-algebras. Therefore, our framework, which takes advantage of noncommutative \(C^{*}\)-algebras enables us to go beyond existing frameworks of neural networks. Our framework of the noncommutative \(C^{*}\)-algebra net is general and has a wide range of applications, not limited to the above case of matrices. For example, by setting the \(C^{*}\)-algebra as a group \(C^{*}\)-algebra, we can construct a group equivariant neural network. In this case, we can naturally generalize real-valued parameters of neural networks to those inducing group convolution. We can also set the \(C^{*}\)-algebra as the \(C^{*}\)-algebra of bounded linear operators on a function space, which can be applied to analyzing functional data. Our main contributions are summarized as follows: * We generalize the commutative \(C^{*}\)-algebra net proposed by Hashimoto et al. [2022] to noncommutative \(C^{*}\)-algebra, which enables us to take advantage of the noncommutative product structure in the \(C^{*}\)-algebra when learning neural networks. * We show a wide range of applications of our framework, inducing interactions among networks and learning invariant features with respect to group actions. * We empirically illustrate the validity of noncommutative \(C^{*}\)-algebra nets, including interactions among neural networks. We emphasize that \(C^{*}\)-algebra is a powerful tool for neural networks, and our work provides a lot of important perspectives about its application. ## 2 Background In this section, we review the mathematical background of \(C^{*}\)-algebra required for this paper and the existing \(C^{*}\)-algebra net. For more theoretical details of the \(C^{*}\)-algebra, see, for example, Murphy [1990]. ### \(C^{*}\)-algebra \(C^{*}\)-algebra is a generalization of the space of complex values. It has structures of the product, involution \({}^{*}\), and norm. **Definition 1** (\(C^{*}\)-algebra): _A set \(\mathcal{A}\) is called a \(C^{*}\)-algebra if it satisfies the following conditions: 1. 
\(\mathcal{A}\) is an algebra over \(\mathbb{C}\) and equipped with a bijection \((\cdot)^{*}:\mathcal{A}\rightarrow\mathcal{A}\) that satisfies the following conditions for \(\alpha,\beta\in\mathbb{C}\) and \(c,d\in\mathcal{A}\):_ * \((\alpha c+\beta d)^{*}=\overline{\alpha}c^{*}+\overline{\beta}d^{*}\)_,_ * \((cd)^{*}=d^{*}c^{*}\)_,_ * \((c^{*})^{*}=c\)_._ 2. \(\mathcal{A}\) _is a normed space with_ \(\|\cdot\|\)_, and for_ \(c,d\in\mathcal{A}\)_,_ \(\|cd\|\leq\|c\|\,\|d\|\) _holds. In addition,_ \(\mathcal{A}\) _is complete with respect to_ \(\|\cdot\|\)_._ 3. _For_ \(c\in\mathcal{A}\)_,_ \(\|c^{*}c\|=\|c\|^{2}\) _holds._ The product structure in \(C^{*}\)-algebras can be both commutative and noncommutative. **Example 1** (Commutative \(C^{*}\)-algebra): _Let \(\mathcal{A}\) be the space of continuous functions on a compact Hausdorff space \(\mathcal{Z}\). We can regard \(\mathcal{A}\) as a \(C^{*}\)-algebra by setting_ * _Product: Pointwise product of two functions, i.e., for_ \(a_{1},a_{2}\in\mathcal{A}\)_,_ \(a_{1}a_{2}(z)=a_{1}(z)a_{2}(z)\)_._ * _Involution: Pointwise complex conjugate, i.e., for_ \(a\in\mathcal{A}\)_,_ \(a^{*}(z)=\overline{a(z)}\)_._ * _Norm: Sup norm, i.e., for_ \(a\in\mathcal{A}\)_,_ \(\|a\|=\sup_{z\in\mathcal{Z}}|a(z)|\)_._ _In this case, the product in \(\mathcal{A}\) is commutative._ **Example 2** (Noncommutative \(C^{*}\)-algebra): _Let \(\mathcal{A}\) be the space of bounded linear operators on a Hilbert space \(\mathcal{H}\), which is denoted by \(\mathcal{B}(\mathcal{H})\). We can regard \(\mathcal{A}\) as a \(C^{*}\)-algebra by setting_ * _Product: Composition of two operators,_ * _Involution: Adjoint of an operator,_ * _Norm: Operator norm of an operator, i.e., for_ \(a\in\mathcal{A}\)_,_ \(\|a\|=\sup_{v\in\mathcal{H},\|v\|_{\mathcal{H}}=1}\|av\|_{\mathcal{H}}\)_. Here,_ \(\|\cdot\|_{\mathcal{H}}\) _is the norm in_ \(\mathcal{H}\)_. In this case, the product in_ \(\mathcal{A}\) _is noncommutative. Note that if_ \(\mathcal{H}\) _is a_ \(d\)_-dimensional space for a finite natural number_ \(d\)_, then elements in_ \(\mathcal{A}\) _are_ \(d\) _by_ \(d\) _matrices._ **Example 3** (Group \(C^{*}\)-algebra): _The group \(C^{*}\)-algebra on a group \(G\), which is denoted as \(C^{*}(G)\), is the set of maps from \(G\) to \(\mathbb{C}\) equipped with the following product, involution, and norm:_ * _Product:_ \((a\cdot b)(g)=\int_{G}a(h)b(h^{-1}g)\mathrm{d}\lambda(h)\) _for_ \(g\in G\)_,_ * _Involution:_ \(a^{*}(g)=\Delta(g^{-1})\overline{a(g^{-1})}\) _for_ \(g\in G\)_,_ * _Norm:_ \(\|a\|=\sup_{[\pi]\in G}\|\pi(a)\|\)_,_ _where \(\Delta(g)\) is a positive number satisfying \(\lambda(Eg)=\Delta(g)\lambda(E)\) for the Haar measure \(\lambda\) on \(G\). In addition, \(\hat{G}\) is the set of equivalence classes of irreducible unitary representations of \(G\). Note that if \(G\) is discrete, then \(\lambda\) is the counting measure on \(G\). In this paper, we focus mainly on the product structure of \(C^{*}(G)\). For details of the Haar measure and representations of groups, see Kirillov (1976). If \(G=\mathbb{Z}/p\mathbb{Z}\), then \(C^{*}(G)\) is \(C^{*}\)-isomorphic to the \(C^{*}\)-algebra of circulant matrices (Hashimoto et al., 2023). Note also that if \(G\) is noncommutative, then \(C^{*}(G)\) can also be noncommutative._ ### \(C^{*}\)-algebra net Hashimoto et al. (2022) proposed generalizing real-valued neural network parameters to commutative \(C^{*}\)-algebra-valued ones. Here, we briefly review the existing (commutative) \(C^{*}\)-algebra net. 
Let \(\mathcal{A}=C(\mathcal{Z})\), the commutative \(C^{*}\)-algebra of continuous functions on a compact Hausdorff space \(\mathcal{Z}\). Let \(H\) be the depth of the network and \(N_{0},\ldots,N_{H}\) be the width of each layer. For \(i=1,\ldots,H\), set \(W_{i}:\mathcal{A}^{N_{i-1}}\to\mathcal{A}^{N_{i}}\) as an Affine transformation defined with an \(N_{i}\times N_{i-1}\)\(\mathcal{A}\)-valued matrix and an \(\mathcal{A}\)-valued bias vector in \(\mathcal{A}^{N_{i}}\). In addition, set a nonlinear activation function \(\sigma_{i}:\mathcal{A}^{N_{i}}\to\mathcal{A}^{N_{i}}\). The commutative \(C^{*}\)-algebra net \(f:\mathcal{A}^{N_{0}}\to\mathcal{A}^{N_{H}}\) is defined as \[f=\sigma_{H}\circ W_{H}\circ\cdots\circ\sigma_{1}\circ W_{1}. \tag{1}\] By generalizing neural network parameters to functions, we can combine multiple standard (real-valued) neural networks continuously, which enables them to learn efficiently. We show an example of commutative \(C^{*}\)-nets below. To simplify the notation, we focus on the case where the network does not have biases. However, the same arguments are valid for the case where the network has biases. #### 2.2.1 The case of diagonal matrices If \(\mathcal{Z}\) is a finite set, then \(\mathcal{A}=\{a\in\mathbb{C}^{d\times d}\;\mid\;a\text{ is a diagonal matrix}\}\). The \(C^{*}\)-algebra net \(f\) on \(\mathcal{A}\) corresponds to \(d\) separate real or complex-valued sub-models. Indeed, denote by \(x^{j}\) the vector composed of the \(j\)th diagonal elements of \(x\in\mathcal{A}^{N}\), which is defined as the vector in \(\mathbb{C}^{N}\) whose \(k\)th element is the \(j\)th diagonal element of the \(\mathcal{A}\)-valued \(k\)th element of \(x\). Assume the activation function \(\sigma_{i}:\mathcal{A}^{N}\to\mathcal{A}^{N}\) is defined as \(\sigma_{i}(x)^{j}=\tilde{\sigma}_{i}(x^{j})\) for some \(\tilde{\sigma}_{i}:\mathbb{C}^{N}\to\mathbb{C}^{N}\). Since the \(j\)th diagonal element of \(a_{1}a_{2}\) for \(a_{1},a_{2}\in\mathcal{A}\) is the product of the \(j\)th element if \(a_{1}\) and \(a_{2}\), we have \[f(x)^{j}=\tilde{\sigma}_{H}\circ W_{H}^{j}\circ\cdots\circ\tilde{\sigma}_{1} \circ W_{1}^{j}, \tag{2}\] where \(W_{i}^{j}\in\mathbb{C}^{N_{i}\times N_{i-1}}\) is the matrix whose \((k,l)\)-entry is the \(j\)th diagonal of the \((k,l)\)-entry of \(W_{i}\in\mathcal{A}^{N_{i}\times N_{i-1}}\). Figure 1 (a) schematically shows the \(C^{*}\)-algebra net over diagonal matrices. Figure 1: Difference between commutative and noncommutative \(C^{*}\)-algebra nets. Noncommutative \(C^{*}\)-algebra Net Although the existing \(C^{*}\)-algebra net provides a framework for applying \(C^{*}\)-algebra to neural networks, it focuses on commutative \(C^{*}\)-algebras, whose product structure is simple. Therefore, we generalize the existing commutative \(C^{*}\)-algebra net to noncommutative \(C^{*}\)-algebra. Since the product structures in noncommutative \(C^{*}\)-algebras are more complicated than those in commutative \(C^{*}\)-algebras, they enable neural networks to learn features of data more efficiently. For example, if we focus on the \(C^{*}\)-algebra of matrices, then the neural network parameters describe interactions between multiple real-valued sub-models (see Section 3.1.1). Let \(\mathcal{A}\) be a general \(C^{*}\)-algebra and consider the network \(f\) in the same form as Equation (1). We emphasize that in our framework, the choice of \(\mathcal{A}\) is not restricted to a commutative \(C^{*}\)-algebra. 
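To make the distinction concrete before turning to the examples below, the following is a minimal NumPy sketch of a single \(\mathcal{A}\)-valued linear map with \(\mathcal{A}=\mathbb{R}^{d\times d}\) (real entries for simplicity); the dimensions, the placement of each sub-model's data on its diagonal slot, and the scale of the off-diagonal entries are illustrative assumptions rather than the implementation used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N_in, N_out = 3, 4, 2          # d sub-models, A = R^{d x d} for simplicity

# An A-valued weight matrix W in A^{N_out x N_in}: array of shape (N_out, N_in, d, d).
idx = np.arange(d)
W_diag = np.zeros((N_out, N_in, d, d))
W_diag[:, :, idx, idx] = rng.normal(size=(N_out, N_in, d))     # diagonal (commutative) case
W_full = W_diag + 0.01 * rng.normal(size=(N_out, N_in, d, d))  # small off-diagonal interactions

# An A-valued input x in A^{N_in}: here each sub-model j carries its data on the (j, j) entry.
x = np.zeros((N_in, d, d))
x[:, idx, idx] = rng.normal(size=(N_in, d))

def apply(W, x):
    """(W x)_k = sum_l W_{k,l} x_l, where each W_{k,l} x_l is a d x d matrix product."""
    return np.einsum('klij,ljm->kim', W, x)

y_diag, y_full = apply(W_diag, x), apply(W_full, x)
print(np.abs(y_diag).round(2))
print(np.abs(y_full).round(2))
```

With `W_diag`, the \(j\)th diagonal entry of the output depends only on the \(j\)th sub-model, whereas `W_full` mixes information across sub-models through its off-diagonal entries, which is exactly the interaction discussed above.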
We list examples of \(\mathcal{A}\) and their validity for learning neural networks below. ### Examples of \(C^{*}\)-algebras for neural networks As mentioned in the previous section, we focus on the case where the network does not have biases for simplification in this subsection. #### 3.1.1 Nondiagonal matrices Let \(\mathcal{A}=\mathbb{C}^{d\times d}\). Note that \(\mathcal{A}\) is a noncommutative \(C^{*}\)-algebra. In this case, unlike the network (2), the \(j\)th diagonal element of \(a_{1}a_{2}a_{3}\) for \(a_{1},a_{2},a_{3}\in\mathcal{A}\) depends not only on the \(j\)th diagonal element of \(a_{2}\), but also the other diagonal elements of \(a_{2}\). Thus, \(f(x)^{j}\) depends not only on the sub-model corresponding to \(j\)th diagonal element discussed in Section 2.2.1, but also on other sub-models. The nondiagonal elements in \(\mathcal{A}\) induce interactions between \(d\) real or complex-valued sub-models. In practice, to regard the nondiagonal elements as factors of interactions, their values should be small compared to the diagonal elements. We will see the effect of the nondiagonal elements in \(\mathcal{A}\) numerically in Section 4.1. Figure 1 (b) schematically shows the \(C^{*}\)-algebra net over nondiagonal matrices. #### 3.1.2 Block diagonal matrices Let \(\mathcal{A}=\{a\in\mathbb{C}^{d\times d}\,\mid\,a=\mathrm{diag}(\mathbf{a}_{ 1},\ldots,\mathbf{a}_{m}),\;\mathbf{a}_{i}\in\mathbb{C}^{d_{i}\times d_{i}}\}\). The product of two block diagonal matrices \(a=\mathrm{diag}(\mathbf{a}_{1},\ldots,\mathbf{a}_{m})\) and \(b=\mathrm{diag}(\mathbf{b}_{1},\ldots,\mathbf{b}_{m})\) can be written as \[ab=\mathrm{diag}(\mathbf{a}_{1}\mathbf{b}_{1},\ldots,\mathbf{a}_{m}\mathbf{b} _{m}).\] In a similar manner to Section 2.2.1, we denote by \(\mathbf{x}^{j}\) the \(N\) by \(d_{j}\) matrix composed of the \(j\)th diagonal blocks of \(x\in\mathcal{A}^{N}\). Assume the activation function \(\sigma_{i}:\mathcal{A}^{N}\to\mathcal{A}^{N}\) is defined as \(\sigma_{i}(x)=\mathrm{diag}(\tilde{\boldsymbol{\sigma}}_{i}^{1}(\mathbf{x}^{1} ),\ldots,\tilde{\boldsymbol{\sigma}}_{i}^{m}(\mathbf{x}^{m}))\) for some \(\tilde{\boldsymbol{\sigma}}_{i,j}:\mathbb{C}^{N\times d_{j}}\to\mathbb{C}^{N \times d_{j}}\). Then, we have \[\mathbf{f}(\mathbf{x})^{j}=\tilde{\boldsymbol{\sigma}}_{H}^{j}\circ\mathbf{W} _{H}^{j}\circ\cdots\circ\tilde{\boldsymbol{\sigma}}_{1}^{j}\circ\mathbf{W}_{1} ^{j}, \tag{3}\] where \(\mathbf{W}_{i}^{j}\in(\mathbb{C}^{d_{j}\times d_{j}})^{N_{i}\times N_{i-1}}\) is the block matrix whose \((k,l)\)-entry is the \(j\)th block diagonal of the \((k,l)\)-entry of \(W_{i}\in\mathcal{A}^{N_{i}\times N_{i-1}}\). In this case, we have \(m\) groups of sub-models, each of which is composed of interacting \(d_{j}\) sub-models mentioned in Section 3.1.1. Indeed, the block diagonal case generalizes the diagonal and nondiagonal cases stated in Sections 2.2.1 and 3.1.1. If \(d_{j}=1\) for all \(j=1,\ldots,m\), then the network (3) is reduced to the network (2) with diagonal matrices. If \(m=1\) and \(d_{1}=d\), then the network (3) is reduced to the network with \(d\) by \(d\) nondiagonal matrices. #### 3.1.3 Circulant matrices Let \(\mathcal{A}=\{a\in\mathbb{C}^{d\times d}\,\mid\,a\text{ is a circulant matrix}\}\). Here, a circulant matrix \(a\) is the matrix represented as \[a=\begin{bmatrix}a_{1}&a_{d}&\cdots&a_{2}\\ a_{2}&a_{1}&\cdots&a_{3}\\ &\ddots&\ddots&\\ a_{d}&a_{d-1}&\cdots&a_{1}\end{bmatrix}\] for \(a_{1},\ldots,a_{d}\in\mathbb{C}\). 
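As a concrete illustration of this structure, for \(d=2\) the product of two circulant matrices is again circulant, its entries are the cyclic convolution of \((a_{1},a_{2})\) and \((b_{1},b_{2})\), and the two factors commute:

\[\begin{bmatrix}a_{1}&a_{2}\\ a_{2}&a_{1}\end{bmatrix}\begin{bmatrix}b_{1}&b_{2}\\ b_{2}&b_{1}\end{bmatrix}=\begin{bmatrix}a_{1}b_{1}+a_{2}b_{2}&a_{1}b_{2}+a_{2}b_{1}\\ a_{1}b_{2}+a_{2}b_{1}&a_{1}b_{1}+a_{2}b_{2}\end{bmatrix}=\begin{bmatrix}b_{1}&b_{2}\\ b_{2}&b_{1}\end{bmatrix}\begin{bmatrix}a_{1}&a_{2}\\ a_{2}&a_{1}\end{bmatrix}.\]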
Note that in this case, \(\mathcal{A}\) is commutative. Circulant matrices are diagonalized by the discrete Fourier matrix as follows [10]. We denote by \(F\) the discrete Fourier transform matrix, whose \((i,j)\)-entry is \(\omega^{(i-1)(j-1)}/\sqrt{p}\), where \(\omega=\mathrm{e}^{2\pi\sqrt{-1}/d}\). **Lemma 1**: _Any circulant matrix \(a\) is decomposed as \(a=F\Lambda_{a}F^{*}\), where_ \[\Lambda_{a}=\mathrm{diag}\,\bigg{(}\sum_{i=1}^{d}a_{i}\omega^{(i-1)\cdot 0}, \ldots,\sum_{i=1}^{d}a_{i}\omega^{(i-1)(d-1)}\bigg{)}.\] Since \(ab=F\Lambda_{a}\Lambda_{b}F^{*}\) for \(a,b\in\mathcal{A}\), the product of \(a\) and \(b\) corresponds to the multiplication of each Fourier component of \(a\) and \(b\). Assume the activation function \(\sigma_{i}:\mathcal{A}^{N}\to\mathcal{A}^{N}\) is defined such that \((F^{*}\sigma_{i}(x)F)^{j}\) equals to \(\hat{\hat{\sigma}}_{i}((FxF^{*})^{j})\) for some \(\hat{\hat{\sigma}}_{i}:\mathbb{C}^{N}\to\mathbb{C}^{N}\). Then, we obtain the network \[(F^{*}f(x)F)^{j}=\hat{\hat{\sigma}}_{H}\circ\hat{W}_{H}^{\,j}\circ\cdots\circ \hat{\hat{\sigma}}_{1}\circ\hat{W}_{1}^{j}, \tag{4}\] where \(\hat{W}_{i}^{j}\in\mathbb{C}^{N_{i}\times N_{i-1}}\) is the matrix whose \((k,l)\)-entry is \((Fw_{i,k,l}F^{*})^{j}\), where \(w_{i,k,l}\) is the the \((k,l)\)-entry of \(W_{i}\in\mathcal{A}^{N_{i}\times N_{i-1}}\). The \(j\)th sub-model of the network (4) corresponds to the network of the \(j\)th Fourier component. **Remark 1**: _The \(j\)th sub-model of the network (4) does not interact with those of other Fourier components than the \(j\)th component. This fact corresponds to the fact that \(\mathcal{A}\) is commutative in this case. Analogous to the case in Section 3.1.1, if we set \(\mathcal{A}\) as noncirculant matrices, then we obtain interactions between sub-models corresponding to different Fourier components._ #### 3.1.4 Group \(C^{*}\)-algebra on a symmetric group Let \(G\) be the symmetric group on the set \(\{1,\ldots,d\}\) and let \(\mathcal{A}=C^{*}(G)\). Note that since \(G\) is noncomuutative, \(C^{*}(G)\) is also noncommutative. Then, the output \(f(x)\in\mathcal{A}^{N_{H}}\) is the \(\mathbb{C}^{N_{H}}\)-valued map on \(G\). Using the product structure introduced in Example 3, we can construct a network that takes the permutation of data into account. Indeed, an element \(w\in\mathcal{A}\) of a weight matrix \(W\in\mathcal{A}^{N_{i-1}\times N_{i}}\) is a function on \(G\). Thus, \(w(g)\) describes the weight corresponding to the permutation \(g\in G\). Since the product of \(x\in C^{*}(G)\) and \(w\) is defined as \(wx(g)=\sum_{h\in G}w(h)x(h^{-1}g)\), by applying \(W\), all the weights corresponding to the permutations affect the input. For example, let \(z\in\mathbb{R}^{d}\) and set \(x\in C^{*}(G)\) as \(x(g)=g\cdot z\), where \(g\cdot z\) is the action of \(g\) on \(z\), i.e., the permutation of \(z\) with respect to \(g\). Then, we can input all the patterns of permutations of \(z\) simultaneously, and by virtue of the product structure in \(C^{*}(G)\), the network is learned with the interaction among these permutations. Regarding the output, if the network is learned so that the outputs \(y\) become constant functions on \(G\), i.e., \(y(g)=c\) for some constant \(c\), then it means that \(c\) is invariant with respect to \(g\), i.e., invariant with respect to the permutation. We will numerically investigate the application of the group \(C^{*}\)-algebra net to permutation invariant problems in Section 4.2. 
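The following self-contained sketch evaluates the product in \(C^{*}(G)\) for the symmetric group \(G=S_{3}\) and numerically checks the equivariance property discussed in the remark below; the choice of \(S_{3}\), the scalar-valued random weight function, and the particular permutation action \(g\cdot z\) are assumptions made only for illustration.

```python
from itertools import permutations
import numpy as np

G = list(permutations(range(3)))                     # the symmetric group S_3

def compose(g, h):                                   # (g o h)(i) = g[h[i]]
    return tuple(g[h[i]] for i in range(len(h)))

def inverse(g):
    inv = [0] * len(g)
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def group_product(w, x):
    """(w * x)(g) = sum_h w(h) x(h^{-1} g), the product in C*(G) for discrete G."""
    return {g: sum(w[h] * x[compose(inverse(h), g)] for h in G) for g in G}

# x encodes all permutations of a feature vector z: x(g) = g . z (one choice of action)
z = np.array([1.0, 2.0, 3.0])
x = {g: z[list(g)] for g in G}
rng = np.random.default_rng(0)
w = {g: rng.normal() for g in G}                     # a function-valued weight w : G -> R
y = group_product(w, x)

# equivariance check: replacing x(g) by x(gh) shifts the output in the same way
h0 = G[1]
x_shift = {g: x[compose(g, h0)] for g in G}
y_shift = group_product(w, x_shift)
assert all(np.allclose(y_shift[g], y[compose(g, h0)]) for g in G)
```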
**Remark 2**: _If the activation function \(\sigma\) is defined as \(\sigma(x)(g)=\sigma(x(g))\), i.e., applied elementwisely to \(x\), then the network \(f\) is permutation equivariant. That is, even if the input \(x(g)\) is replaced by \(x(gh)\) for some \(h\in G\), the output \(f(x)(g)\) is replaced by \(f(x)(gh)\). This is because the product in \(C^{*}(G)\) is defined as a convolution. This feature of the convolution has been studied for group equivariant neural networks (Lenssen et al., 2018; Cohen et al., 2019; Sannai et al., 2021; Sonoda et al., 2022). The above setting of the \(C^{*}\)-algebra net provides us with a design of group equivariant networks from the perspective of \(C^{*}\)-algebra._ #### 3.1.5 Bounded linear operators on a Hilbert space For functional data, we can also set \(\mathcal{A}\) as an infinite-dimensional space. Using infinite-dimensional \(C^{*}\)-algebra for analyzing functional data has been proposed (Hashimoto et al., 2021). We can also adopt this idea for neural networks. Let \(\mathcal{A}=\mathcal{B}(L^{2}(\Omega))\) for a measure space \(\Omega\). Set \(\mathcal{A}_{0}=\{a\in\mathcal{A}\mid a\text{ is a multiplication operator}\}\). Here, a multiplication operator \(a\) is a linear operator that is defined as \(av=v\cdot u\) for some \(u\in L^{\infty}(\Omega)\). The space \(\mathcal{A}_{0}\) is a generalization of the space of diagonal matrices to the infinite-dimensional space. If we restrict elements of weight matrices to \(\mathcal{A}_{0}\), then we obtain infinitely many sub-models without interactions. Since outputs are in \(\mathcal{A}_{0}^{N_{H}}\), we can obtain functional data as outputs. Similar to the case of matrices (see Section 3.1.1), by setting elements of weight matrices as elements in \(\mathcal{A}\), we can take advantage of interactions among infinitely many sub-models. ### Approximation of functions with interactions by \(C^{*}\)-algebra net We observe what kind of functions the \(C^{*}\)-algebra net can approximate. We focus on the case of \(\mathcal{A}=\mathbb{C}^{d\times d}\). Consider a shallow network \(f:\mathcal{A}^{N_{0}}\to\mathcal{A}\) defined as \(f(x)=W_{2}^{*}\sigma(W_{1}x+b)\), where \(W_{1}\in\mathcal{A}^{N_{1}\times N_{0}}\), \(W_{2}\in\mathcal{A}^{N_{1}}\), and \(b\in\mathcal{A}^{N_{1}}\). Let \(\tilde{f}:\mathcal{A}^{N_{0}}\to\mathcal{A}\) be the function in the form of \(\tilde{f}(x)=[\sum_{j=1}^{d}f_{kj}(x^{l})]_{kl}\), where \(f_{kj}:\mathbb{C}^{N_{0}d}\to\mathbb{R}\). Here, we abuse the notation and denote by \(x^{l}\in\mathbb{C}^{N_{0}d}\) the \(l\)th column of \(x\) regarded as an \(N_{0}d\) by \(d\) matrix. Assume \(f_{kj}\) is represented as \[f_{kj}(x)=\int_{\mathbb{R}}\int_{\mathbb{R}^{N_{0}d}}T_{kj}(w,b)\sigma(w^{*}x+ b)\mathrm{d}w\,\mathrm{d}b \tag{5}\] for some \(T_{kj}:\mathbb{R}^{N_{0}d}\times\mathbb{R}\to\mathbb{R}\). By the theory of the ridgelet transform, such \(T_{kj}\) exists for most realistic settings [10, 11]. For example, if \(f_{kj}\) and its Fourier transform is in \(L^{1}(\mathbb{R}^{N_{0}d})\) and \(\sigma\) is the ReLU function, then \(f_{kj}\) has a representation of Equation (5). We discretize Equation (5) by replacing the Lebesgue measures with \(\sum_{i=1}^{N_{1}}\delta_{w_{ij}}\) and \(\sum_{i=1}^{N_{1}}\delta_{b_{ij}}\), where \(\delta_{w}\) is the Dirac measure centered at \(w\). 
Then, the \((k,l)\)-entry of \(\tilde{f}(x)\) is written as \[\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\sigma(w_{ij}^{*}x^{l}+b_{ ij}).\] Setting the \(i\)-th element of \(W_{2}\in\mathcal{A}^{N_{1}}\) as \([T_{kj}(w_{ij},b_{ij})]_{kj}\), the \((i,m)\)-entry of \(W_{1}\in\mathcal{A}^{N_{1}\times N_{0}}\) as \([(w_{i,j})_{md+l}]_{jl}\), the \(i\)th element of \(b\in\mathcal{A}^{N_{1}}\) as \([b_{j}]_{jl}\), we obtain \[\sum_{j=1}^{d}\sum_{i=1}^{N_{1}}T_{kj}(w_{ij},b_{ij})\sigma(w_{ij}^{*}x^{l}+b_ {ij})=(W_{2}^{k})^{*}\sigma(W_{1}x^{l}+b^{l}),\] which is the \((k,l)\)-entry of \(f(x)\). **Remark 3**: _As we discussed in Sections 2.2.1 and 3.1.1, a \(C^{*}\)-algebra net over matrices can be regarded as \(d\) interacting sub-models. The above argument insists the \(l\)th column of \(f(x)\) and \(\tilde{f}(x)\) depends only on \(x^{l}\). Thus, in this case, if we input data \(x^{l}\) corresponding to the \(l\)th sub-model, then the output is obtained as the \(l\)th column of the \(\mathcal{A}\)-valued output \(f(x)\). On the other hand, the weight matrices \(W_{1}\) and \(W_{2}\) and the bias \(b\) are used commonly in providing the outputs for any sub-model, i.e., \(W_{1}\), \(W_{2}\), and \(b\) are learned using data corresponding to all the sub-models. Therefore, \(W_{1}\), \(W_{2}\), and \(b\) induce interactions among the sub-models._ ## 4 Experiments In this section, we numerically demonstrate the abilities of noncommutative \(C^{*}\)-algebra nets using nondiagonal \(C^{*}\)-algebra nets over matrices and group \(C^{*}\)-algebra nets. We use \(C^{*}\)-algebra-valued multi-layered perceptrons (MLPs) to simplify the experiments. However, they can be naturally extended to other neural networks, such as convolutional neural networks. The models were implemented with JAX[Bradbury et al., 2018]. Experiments were conducted on an AMD EPYC 7543 CPU and an NVIDIA A-100 GPU with CUDA 11.7. See Section 6.1 for additional information on experiments. ### \(C^{*}\)-algebra nets over matrices In a noncommutative \(C^{*}\)-algebra net over matrices consisting of nondiagonal-matrix parameters, each sub-model is expected to interact with others and thus improve performance compared with its commutative counterpart consisting of diagonal matrices. We demonstrate the effectiveness of such interaction using image classification and neural implicit representation (NIR) tasks. See Section 3.1.1 for the notations. When training the \(j\)th sub-model (\(j=1,2,\ldots,d\)), an original \(N_{0}\)-dimensional input data point \(\mathbf{x}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{N_{0}}]\in\mathbb{R}^{N_{0}}\) is converted to its corresponding representation \(x\in\mathcal{A}^{N_{0}}=\mathbb{R}^{N_{0}\times d\times d}\) such that \(x_{i,j,j}=\mathbf{x}_{i}\) for \(i=1,2,\ldots,N_{0}\) and \(0\) otherwise. The loss to its \(N_{H}\)-dimensional output of a \(C^{*}\)-algebra net \(y\in\mathcal{A}^{N_{H}}\) and the target \(t\in\mathcal{A}^{N_{H}}\) is computed as \(\ell(y_{:,j,j},t_{:,j,j})+\frac{1}{2}\sum_{k,(l\neq j)}(y_{k,j,l}^{2}+y_{k,l, j}^{2})\), where \(\ell\) is a certain loss function; we use the mean squared error (MSE) for image classification and the Huber loss for NIR. The second and third terms suppress the nondiagonal elements of the outputs to \(0\). In both examples, we use leaky-ReLU as an activation function and apply it only to the diagonal elements of pre-activations. 
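A minimal sketch of this input lifting and of the loss with the off-diagonal suppression terms is given below; the helper names and the use of plain NumPy (instead of the JAX implementation used for the experiments) are assumptions for illustration.

```python
import numpy as np

def lift_input(x_vec, j, d):
    """Lift an N0-dimensional sample to A^{N0} with A = R^{d x d}: the data of
    sub-model j are placed on the (j, j) entries, all other entries are zero."""
    x = np.zeros((x_vec.shape[0], d, d))
    x[:, j, j] = x_vec
    return x

def sub_model_loss(y, t, j, base_loss):
    """base_loss on the j-th diagonal of the output, plus a penalty pushing the
    j-th row and column off-diagonal outputs towards zero (cf. Section 4.1)."""
    d = y.shape[-1]
    mask = np.ones(d, dtype=bool)
    mask[j] = False
    off_diag = 0.5 * ((y[:, j, mask] ** 2).sum() + (y[:, mask, j] ** 2).sum())
    return base_loss(y[:, j, j], t[:, j, j]) + off_diag

mse = lambda a, b: ((a - b) ** 2).mean()   # the MSE used for image classification
```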
#### 4.1.1 Image classification We conduct experiments of image classification tasks using MNIST [Le Cun et al., 1998], Kuzushiji-MNIST [Clanuwat et al., 2018], and Fashion-MNIST [Xiao et al., 2017], which are composed of 10-class \(28\times 28\) gray-scale images. Each sub-model is trained on a mutually exclusive subset sampled from the original training data and then evaluated on the entire test data. Each subset is sampled to be balanced, i.e., each class has the same number of training samples. As a baseline, we use a commutative \(C^{*}\)-algebra net over diagonal matrices, which consists of the same sub-models but cannot interact with other sub-models. Both noncommutative and commutative models share hyperparameters: the number of layers was set to \(4\), the hidden size was set to \(128\), and the models were trained for \(30\) epochs. Table 1 shows the average test accuracy over sub-models. As can be seen, a noncommutative \(C^{*}\)-algebra net consistently outperforms its commutative counterpart, which is significant, particularly when the number of sub-models is \(40\). Note that when the number of sub-models is \(40\), the size of the training dataset for each sub-model is \(40\) times smaller than the original one, and thus, the commutative \(C^{*}\)-algebra net fails to learn. Nevertheless, the noncommutative \(C^{*}\)-algebra net retains performance mostly. These results suggest that sub-models share knowledge through interaction. Additionally, Table 2 illustrates that related tasks help performance improvement through interaction. Specifically, we prepare five sub-models per dataset, one of MNIST, Kuzushiji-MNIST, and Fashion-MNIST, and train a total of 15 sub-models simultaneously. In addition to the commutative \(C^{*}\)-algebra net, where sub-models have no interaction, and the noncommutative \(C^{*}\)-algebra net, where each sub-model can interact with any other sub-models, we use a block-diagonal noncommutative \(C^{*}\)-algebra net (see Section 3.1.2), where each sub-model can only interact with other sub-models trained on the same dataset. Table 2 shows that the fully noncommutative \(C^{*}\)-algebra net surpasses the block-diagonal one on Kuzushiji-MNIST and Fashion-MNIST, implying that not only intra-task interaction but also inter-task interaction helps performance gain. Note that each dataset is subsampled so that every class has the same number of samples, so it is not possible to compare the values of Tables 1 and 2. #### 4.1.2 Neural implicit representation In the next experiment, we use a \(C^{*}\)-algebra net over matrices to learn implicit representations of 2D images that map each pixel coordinate to its RGB colors (Sitzmann et al., 2020; Xie et al., 2022). Specifically, an input coordinate in \([0,1]^{2}\) is transformed into a random Fourier features in \([-1,1]^{320}\) and then converted into its \(C^{*}\)-algebraic representation over matrices as an input to a \(C^{*}\)-algebra net over matrices. 
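For reference, one common way to build such random Fourier features is sketched below; the Gaussian frequency matrix and its scale are assumptions (the text above only fixes the input in \([0,1]^{2}\) and the 320-dimensional output in \([-1,1]\)).

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(160, 2))   # random frequencies; the scale is a guess

def fourier_features(coords):
    """Map pixel coordinates in [0, 1]^2 to 320 features in [-1, 1]."""
    proj = 2.0 * np.pi * coords @ B.T                               # (n, 160)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)    # (n, 320)
```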
\begin{table} \begin{tabular}{l c c c} \hline \hline Dataset & \# sub-models & Commutative & Noncommutative \\ & & \(C^{*}\)-algebra net & \(C^{*}\)-algebra net \\ & & (baseline) & \\ \hline \multirow{4}{*}{MNIST} & 5 & \(0.963\pm 0.003\) & \(0.970\pm 0.003\) \\ & 10 & \(0.937\pm 0.004\) & \(0.956\pm 0.002\) \\ & 20 & \(0.817\pm 0.018\) & \(0.932\pm 0.003\) \\ & 40 & \(0.107\pm 0.008\) & \(0.858\pm 0.004\) \\ \hline \multirow{4}{*}{Kuzushiji-MNIST} & 5 & \(0.839\pm 0.002\) & \(0.858\pm 0.003\) \\ & 10 & \(0.770\pm 0.006\) & \(0.813\pm 0.006\) \\ \cline{1-1} & 20 & \(0.577\pm 0.024\) & \(0.746\pm 0.008\) \\ \cline{1-1} & 40 & \(0.101\pm 0.004\) & \(0.577\pm 0.010\) \\ \hline \multirow{4}{*}{Fashion-MNIST} & 5 & \(0.861\pm 0.002\) & \(0.868\pm 0.002\) \\ \cline{1-1} & 10 & \(0.837\pm 0.002\) & \(0.852\pm 0.004\) \\ \cline{1-1} & 20 & \(0.740\pm 0.007\) & \(0.829\pm 0.004\) \\ \cline{1-1} & 40 & \(0.103\pm 0.010\) & \(0.782\pm 0.005\) \\ \hline \hline \end{tabular} \end{table} Table 1: Average test accuracy over sub-models of commutative and noncommutative \(C^{*}\)-algebra nets over matrices on test datasets. Interactions between sub-models that the noncommutative \(C^{*}\)-algebra net introduces improve performance significantly when the number of sub-models is 40. Similar to the image classification task, we compare noncommutative NIRs with commutative NIRs, using the following hyperparameters: the number of layers is set to 6 and the hidden dimension to 256. These NIRs learn \(128\times 128\)-pixel images of ukiyo-e pictures from The Metropolitan Museum of Art1 and photographs of cats from the AFHQ dataset [Choi et al., 2020]. Footnote 1: [https://www.metmuseum.org/art/the-collection](https://www.metmuseum.org/art/the-collection) Figure 2 (top) shows the curves of the average PSNR (Peak Signal-to-Noise Ratio) of sub-models corresponding to the image below. Both commutative and noncommutative \(C^{*}\)-algebra nets consist of five sub-models trained on five ukiyo-e pictures (see also Figure 6). The PSNR, the quality measure, of the noncommutative NIR grows faster, and correspondingly, it learns the details of the ground truth images faster than its commutative version (Figure 2 bottom). Noticeably, the noncommutative representations reproduce colors even at the early stage of learning, whereas the commutative ones remain monochrome after 500 iterations of training. Along with the similar trends observed in the pictures of cats (Figure 3), these results further emphasize the effectiveness of the interaction. Longer-term results are presented in Figure 7. This NIR for 2D images can be extended to represent 3D models. Figure 4 shows synthesized views of 3D implicit representations using the same \(C^{*}\)-algebra MLPs trained on three 3D chairs from the ShapeNet dataset [Chang et al., 2015]. The presented poses are unseen during training. Again, the noncommutative NIR reconstructs the chair models with less noisy artifacts, indicating that interaction helps efficient learning. See Sections 6.1 and 6.2 for details and results. ### Group \(C^{*}\)-algebra nets As another experimental example of \(C^{*}\)-algebra nets, we showcase group \(C^{*}\)-algebra nets, which we introduced in Section 3.1.4. The group \(C^{*}\)-algebra nets take functions on a symmetric group as input and return functions on the group as output. 
\begin{table} \begin{tabular}{l l c c} \hline \hline Dataset & Commutative & Block-diagonal & Noncommutative \\ & \(C^{*}\)-algebra net & noncommutative & \(C^{*}\)-algebra net \\ & & \(C^{*}\)-algebra net & \\ \hline MNIST & & & \\ K-MNIST & & & \\ F-MNIST & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Average test accuracy over five sub-models simultaneously trained on the three datasets. The (fully) noncommutative \(C^{*}\)-algebra net outperforms the block-diagonal noncommutative \(C^{*}\)-algebra net on Kuzushiji-MNIST (K-MNIST) and Fashion-MNIST (F-MNIST), indicating that the interaction can leverage related tasks. Refer to Section 3.1.4 for notations. A group \(C^{*}\)-algebra net is trained on data \(\{(x,y)\in\mathcal{A}^{N_{0}}\times\mathcal{A}^{N_{H}}\}\), where \(x\) and \(y\) are \(N_{0}\)- and \(N_{H}\)-dimensional vector-valued functions. Practically, such functions may be represented as tensors, e.g., \(x\in\mathbb{C}^{N_{0}\times\#G}\), where \(\#G\) is the size of \(G\). Using the product between functions explained in Section 3.1.4 and element-wise addition, a linear layer, and consequently an MLP, on \(\mathcal{A}\) can be constructed. Following the \(C^{*}\)-algebra nets over matrices, we use leaky ReLU for activations. One of the simplest tasks for the group \(C^{*}\)-algebra nets is to learn permutation-invariant representations, e.g., predicting the sum of given \(d\) digits. In this case, \(x\) is a function that outputs permutations of features of \(d\) digits, and \(y(g)\) is an identity function that returns their sum for all \(g\in G\). In this experiment, we use features of MNIST digits extracted by a pre-trained CNN as 32-dimensional vectors. Digits are selected so that their sum is less than 10 to simplify the problem, and the model is trained to classify the sum of the given digits using the cross-entropy loss. We set the number of layers to 4 and the hidden dimension to 32. For comparison, we prepare a permutation-invariant DeepSet model (Zaheer et al., 2017), which uses sum pooling for permutation invariance and contains the same number of floating-point parameters as the group \(C^{*}\)-algebra net. Figure 2: Average PSNR of implicit representations of the image below (top) and reconstructions of the ground truth image at every 100 iterations (bottom). The noncommutative \(C^{*}\)-algebra net learns the geometry and colors of the image faster than its commutative counterpart. Table 3 displays the results of the task with various training dataset sizes when \(d=3\). What stands out in the table is that the group \(C^{*}\)-algebra net consistently outperforms the DeepSet model by large margins, especially when the number of training data is limited. Additionally, as can be found in Figure 5, the group \(C^{*}\)-algebra net converges much faster than the DeepSet model. These results suggest that the inductive biases introduced by the product structure in the group \(C^{*}\)-algebra net are effective. ## 5 Conclusion and Discussion In this paper, we have generalized the space of neural network parameters to noncommutative \(C^{*}\)-algebras. Their rich product structures bring powerful properties to neural networks. For example, a \(C^{*}\)-algebra net over nondiagonal matrices enables its sub-models to interact, and a group \(C^{*}\)-algebra net learns permutation-equivariant features. 
We have empirically demonstrated the validity of these properties in various tasks, image classification, neural implicit representation, and sum-of-digits tasks. Although Section 4 experimentally showed that noncommutative \(C^{*}\)-algebra nets outperformed the baselines, practical consideration of noncommutative \(C^{*}\)-algebra nets may arise in their computational complexities. In particular, the Figure 3: Ground truth images and their implicit representations of commutative and noncommutative \(C^{*}\)-algebra nets after 500 iterations of training. The noncommutative \(C^{*}\)-algebra net reproduces colors more faithfully. \(C^{*}\)-algebra net over matrices used in the experiments requires \(O(d^{2})\) space complexity for the number of sub-models \(d\), which limits the possible number of sub-models and their interactions. This complexity could be alleviated by, for example, parameter sharing or introducing structures to nondiagonal elements by an analogy between self-attentions and their efficient variants. The design of such structures may be data-dependent, and we leave it for future research. Another important and interesting research direction is an application of infinite-dimensional \(C^{*}\)-algebras. In this paper, we focused mainly on finite-dimensional \(C^{*}\)-algebras. We showed that the product structure in \(C^{*}\)-algebras is a powerful tool for neural networks, for example, learning with interactions and group equivariance (or invariance) even for the finite-dimensional case. Infinite-dimensional \(C^{*}\)-algebra allows us to analyze functional data. Practical applications of our framework to functional data with infinite-dimensional Figure 4: Synthesized views of 3D implicit representations of commutative and noncommutative \(C^{*}\)-algebra nets after 5000 iterations of training. The noncommutative \(C^{*}\)-algebra net can produce finer details. Note that the commutative \(C^{*}\)-algebra net could not synthesize the chair on the left. \begin{table} \begin{tabular}{c c c} \hline \hline Dataset size & DeepSet & Group \(C^{*}\)-algebra net \\ \hline 1k & \(0.413\pm 0.031\) & \(0.777\pm 0.009\) \\ 5k & \(0.807\pm 0.031\) & \(0.921\pm 0.002\) \\ 10k & \(0.878\pm 0.009\) & \(0.944\pm 0.005\) \\ 50k & \(0.904\pm 0.007\) & \(0.971\pm 0.001\) \\ \hline \hline \end{tabular} \end{table} Table 3: Average test accuracy of a DeepSet model and a group \(C^{*}\)-algebra net on test data of the sum-of-digits task after 100 epochs of training. The group \(C^{*}\)-algebra net can learn from fewer data. \(C^{*}\)-algebras are also our future work. Our framework with noncommutative \(C^{*}\)-algebras has a wide range of applications, and we believe that our framework opens up a new approach to learning neural networks. ## Acknowledgement We would like to thank Dr. Tomohiro Hayase for a helpful and constructive discussion about the application of \(C^{*}\)-algebra to neural networks. ## 6 Supplemental Material ### Implementation details We implemented \(C^{*}\)-algebra nets using JAX[1] with equinox[13] and optax[1]. Throughout the experiments, we used the Adam optimizer [14] with a learning rate of \(1.0\times 10^{-4}\), except for the 3D NIR experiment, where Adam's initial learning rate was set to \(1.0\times 10^{-3}\). We set the batch size to 32, except for the 2D NIR, where each batch consisted of all pixels, and 3D NIR, where a batch size of 4 was used. 
The implementation of 3D neural implicit representation (Section 4.1.2) is based on a simple NeRF-like model and its renderer in Tancik et al. [2021]. For training, 25 views of each 3D chair from the ShapeNet dataset [14] are adopted with their \(64\times 64\) pixel reference images. The same \(C^{*}\)-algebra MLPs with the 2D experiments were used, except for the hyperparameters: the number of layers of four and the hidden dimensional size of 128. Figure 5: Average test accuracy curves of a DeepSet model and a group \(C^{*}\)-algebra net trained on 10k data of the sum-of-digits task. The group \(C^{*}\)-algebra net can learn more efficiently and effectively. The permutation-invariant DeepSet model used in Section 4.2 processes each data sample with a four-layer MLP with hyperbolic tangent activation, sum-pooling, and a linear classifier. Although we tried leaky ReLU activation as the group \(C^{*}\)-algebra net, this setting yielded sub-optimal results. The hidden dimension of the MLP was set to 84 to match the number of floating-point-number parameters equal to that of the group \(C^{*}\)-algebra net. ### Additional results Figures 6 and 7 present the additional figures of 2D INRs (Section 4.1.2). Figure 6 is an ukiyo-e counterpart of Figure 3 in the main text. Again, the noncommutative \(C^{*}\)-algebra net learns color details faster than the commutative one. Figure 7 shows average PSNR curves over three NIRs of the image initialized with different random states for 5,000 iterations. Although it is not as effective as the beginning stage, the noncommutative \(C^{*}\)-algebra net still outperforms the commutative one after the convergence. \begin{table} \begin{tabular}{c c} \hline Commutative \(C^{*}\)-algebra net & Noncommutative \(C^{*}\)-algebra net \\ \hline \(18.40\pm 4.30\) & \(25.22\pm 1.45\) \\ \hline \hline \end{tabular} \end{table} Table 4: Average PSNR over synthesized views. The specified poses of the views are unseen during training. Figure 8: Synthesized views of implicit representations of a chair. Figure 7: Average PSNR over implicit representations of the image of commutative and noncommutative \(C^{*}\)-algebra nets trained on five cat pictures (top) and reconstructions of the ground truth image at every 500 iterations (bottom).
2303.07811
ICICLE: Interpretable Class Incremental Continual Learning
Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free approach that adopts a prototypical part-based approach. It consists of three crucial novelties: interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; proximity-based prototype initialization strategy dedicated to the fine-grained setting; and task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces the interpretability concept drift and outperforms the existing exemplar-free methods of common class-incremental learning when applied to concept-based models.
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartłomiej Twardowski
2023-03-14T11:31:45Z
http://arxiv.org/abs/2303.07811v2
# ICICLE: Interpretable Class Incremental Continual Learning ###### Abstract Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free approach that adopts a prototypical part-based approach. It consists of three crucial novelties: interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; proximity-based prototype initialization strategy dedicated to the fine-grained setting; and task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces the interpretability concept drift and outperforms the existing exemplar-free methods of common class-incremental learning when applied to concept-based models. We make the code available. ## 1 Introduction With the growing use of deep learning models in diverse domains, including robotics [10], medical imaging [17], and autonomous driving [43], there is a pressing need to develop models that can adapt to ever-changing conditions and learn new tasks from non-stationary data. However, a significant challenge with neural networks is their tendency to suffer from _catastrophic forgetting_ [26, 30, 44], where performance on previous tasks deteriorates rapidly as new ones are acquired. Continual Learning (CL) [19] has emerged as a promising technique to address this challenge by enabling models to learn new tasks without forgetting those learned before. While existing CL approaches significantly reduce catastrophic forgetting, they are often difficult for humans to understand. It is especially problematic because deep networks often predict the right answer for the wrong reason (the "Clever Hans" phenomenon), leading to excellent performance in training but poor performance in practice [60]. This results in serious societal problems that deeply affect health, freedom, racial bias, and safety [11]. As a result, some initial steps were taken in the literature to introduce explainable post hoc methods into the CL setup [32, 52, 61]. However, explaining black boxes, rather than replacing them with interpretable (self-explainable) models, can escalate the problem by providing misleading or false characterizations [66] or adding unnecessary authority to the black box [12]. Figure 1: We process the input image (top left) through the network and visualize how its specific areas are similar to one of the prototypes. The interpretability concept drift occurs when such a similarity map differs between tasks. ICICLE performs best, preserving similarity maps better than the other continual learning methods. Therefore, there is a clear need for innovative machine-learning models that are inherently interpretable [11]. To the best of our knowledge, no interpretable CL approach has been proposed so far. In this work, we introduce Interpretable Class-Incremental Learning (ICICLE), an interpretable approach to class-incremental learning based on prototypical parts methodology. 
Similarly to _This looks like that_ reasoning [16], ICICLE learns a set of prototypical parts representing reference concepts derived from the training data and makes predictions by comparing the input image parts to the learned prototypes. However, the knowledge transfer between tasks in continual learning poses new challenges for interpretability. Mainly because the rationale behind model predictions may change over time, leading to _interpretability concept drift_ and making explanations inconsistent (see Figure 1 and Table 1). Therefore, ICICLE contains multiple mechanisms to prevent this drift and, at the same time, obtain satisfactory results. First, we propose an interpretability regularization suited for prototypical part-based models to retain previously gained knowledge while maintaining model plasticity. It ensures that previously learned prototypical parts are similarly activated within the current task data, which makes explanations consistent over time. Moreover, considering the fine-grained nature of considered datasets, we introduce proximity-based prototype initialization for a new task. It searches for representative concepts within the new task data close to previously learned concepts, allowing the model to recognize high-level features of the new task and focusing on tuning details. Thirdly, to overcome task-recency bias in class-incremental learning scenarios, we propose a simple yet effective method that balances the logits of all tasks based on the last task data. Finally, we reduce multi-stage training while preserving user-friendly positive reasoning. We evaluate ICICLE on two datasets, CUB-200-2011 [80] and Stanford Cars [46], and conduct exhaustive ablations to demonstrate the effectiveness of our approach compared to the standard CL approaches. We show that this problem is challenging but opens up a promising new area of research that can further advance our understanding of CL methods. Our contributions can be summarized as follows: * We are the first to introduce interpretable class-incremental learning and propose a new method ICICLE, based on prototypical part methodology. * We propose interpretability regularization that prevents interpretability concept drift without using exemplars. * We define a dedicated prototype initialization strategy and a method compensating for task-recency bias. ## 2 Related Works **Continual Learning and Class Incremental Learning** Existing continual learning methods can be broadly categorized into three types: replay-based, architecture-based, and regularization-based methods [19, 54]. Replay-based methods either save a small amount of data from previously seen tasks [5, 15] or generate synthetic data with a generative model [82, 89]. The replay data can be used during training together with the current data, such as in iCaRL [64] and LUCIR [37], or to constrain the gradient direction while training, such as in AGEM [14]. Architecture-based methods activate different subsets of network parameters for different tasks by allowing model parameters to grow linearly with the number of tasks. Previous works following this strategy include DER [85], Piggyback [50], PackNet [51]. Regularization-based methods add an additional regularization term derived from knowledge of previous tasks to the training loss. This can be done by either regularizing the weight space, which constrains important parameters [75, 78], or the functional space, which constrains predictions or intermediate features [23, 38]. 
EWC [44], MAS [3], REWC [49], SI [88], and RWalk [13] constrain the importance of network parameters to prevent forgetting. Methods such as LWF [48], LWM [21], and BiC [84] leverage knowledge distillation to regularize features or predictions. Class-incremental learning (class-IL) is the most challenging scenario where the classifier learns new classes sequentially. The model needs to maintain good performance on all classes seen so far [79]. Two types of evaluation methods are defined [54]: task-agnostic (no access to task-ID during inference, e.g., BiC [84]) and task-aware (task-ID is given during inference, e.g., HAT [74]). \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{IoU} \\ \cline{2-5} & task 1 & task 2 & task 3 & mean \\ \hline \hline Finetuning & 0.115 & 0.149 & 0.260 & 0.151 \\ EWC & 0.192 & 0.481 & 0.467 & 0.334 \\ LWF & 0.221 & 0.193 & 0.077 & 0.188 \\ LWM & 0.332 & 0.312 & 0.322 & 0.325 \\ \hline ICICLE & **0.705** & **0.753** & **0.742** & **0.728** \\ \hline \end{tabular} \end{table} Table 1: Quantitative results for interpretability concept drift presented in Figure 1. We compute IoU between similarities obtained after each task and after incremental tasks. E.g., in column “task 1”, we calculate IoU between similarity maps of task one prototypes after each learning episode. **Explainable Artificial Intelligence** In the field of deep learning explanations, two types of models have been explored: post hoc and self-explainable. Post hoc models explain the reasoning process of black-box methods, including saliency maps [53, 63, 72, 73, 76], concept activation vectors [18, 29, 40, 45, 86], counterfactual examples [1, 31, 56, 59, 83], and image perturbation analysis [7, 24, 25, 65]. Self-explainable models, on the other hand, aim to make the decision process more transparent and have attracted significant attention [4, 9]. Recently, researchers have focused on enhancing the concept of prototypical parts introduced in ProtoPNet [16] to represent the activation patterns of networks. Several extensions have been proposed, including TesNet [81] and Deformable ProtoPNet [22], which exploit orthogonality in prototype construction. ProtoPShare [70], ProtoTree [57], and ProtoPool [69] reduce the number of prototypes used in classification. Other methods consider hierarchical classification with prototypes [33], prototypical part transformation [47], and knowledge distillation techniques from prototypes [39]. Prototype-based solutions have been widely adopted in various applications such as medical imaging [2, 6, 41, 68, 77], time-series analysis [28], graph classification [67, 90], sequence learning [55], and semantic segmentation [71]. In this work, we adapt the prototype mechanism to class incremental learning. ## 3 Methods The fundamental aim of our approach is to increase the interpretability in the class-incremental scenario. For this purpose, we adapt the prototypical parts [16], which directly participate in the model computation, making explanations faithful to the classification decision. To make this work self-contained, we first recall the prototypical parts methodology, and then we describe how we adapt it to the class-incremental scenario. ### Prototypical parts methodology Architecture. The original implementation of prototypical parts [16] introduces an additional prototypical part layer \(g\) preceded by a backbone convolutional network \(f\) with an add-on \(f_{A}\) and followed by the fully connected layer \(h\). 
The \(f_{A}\) add-on consists of two \(1\times 1\) convolutional layers and a sigmoid activation at the end, translating the convolutional output to a prototypical part space. The prototypical part layer \(g\) consists of \(K\) prototypes \(p_{i}\in\mathbb{R}^{D}\) per class, and their assignment is handled by the fully connected layer \(h\). If prototype \(p_{i}\) is assigned to class \(c\), then \(h_{ci}=1\); otherwise, it is set to \(-0.5\). Figure 2: The architecture of our ICICLE with separate prototypical part layers \(g^{t}\) for each task \(t\). In this example, prototypes of classes _Prothonotary Warbler_ and _Cardinal_ belong to task \(t-1\), while prototypes of _Wilson Warbler_ and _Sooty Albatross_ to task \(t\). Layers \(g^{t}\) are preceded by shared backbone \(f\), add-on \(f_{A}\), and sigmoid. Moreover, they are followed by the last layers \(h^{t}\) with weight \(h_{ci}^{t}=1\) if prototype \(p_{i}\) is assigned to class \(c\) and equals \(0\) otherwise. Inference. Given an input image \(x\), the backbone \(f\) generates its representation \(f(x)\) of shape \(H\times W\times D\), where \(H\) and \(W\) are the height and width of the representation obtained at the last convolutional layer, and \(D\) is the number of channels. This representation is translated by \(f_{A}\) to a prototypical part space, again of size \(H\times W\times D\). Then, each prototypical part \(p_{i}\) is compared to each of the \(H\times W\) representation vectors to calculate the maximum similarity (i.e., the maximal activation of this prototype on the analyzed image) \(\max_{j\in\{1..HW\}}sim(p_{i},z_{j})\), where \(sim(p_{i},z_{j})=\log\frac{\|z_{j}-p_{i}\|_{2}^{2}+1}{\|z_{j}-p_{i}\|_{2}^{2}+\eta}\) and \(\eta\ll 1\). To obtain the final predictions, we push those values through the fully connected (and appropriately initialized) layer \(h\). Training. Training is divided into three optimization phases: warm-up, joint learning, and convex optimization of the last layer. The first phase trains the add-on \(f_{A}\) and the prototypical part layer \(g\). The second phase learns \(f_{A}\), \(g\), and the backbone network \(f\). The last phase fine-tunes the fully-connected layer \(h\). Training is conducted with the cross-entropy loss supported by two regularizations, the cluster and separation costs [16]. The cluster cost encourages each training image to have a latent patch close to at least one prototype of its class. In contrast, the separation cost encourages every latent patch of a training image to stay away from the prototypes of the remaining classes. ### ICICLE Significant modifications of architecture and training are required to apply the prototypical parts methodology to class-incremental learning (the inference is identical). This is mostly because incremental learning makes considerably different assumptions. It assumes \(T\) tasks \((C^{1},X^{1}),(C^{2},X^{2}),\ldots,(C^{T},X^{T})\), where each task \(t\) contains classes \(C^{t}\) and training set \(X^{t}\). Moreover, during task \(t\), only the \(X^{t}\) training data are available, as we consider the exemplar-free setup, where it is prohibited to save any data from previous tasks (no replay buffer is allowed). Architecture. As in the baseline model, ICICLE comprises the backbone \(f\) and add-on \(f_{A}\). However, it does not use one fixed prototypical part layer \(g\) and one fully-connected layer \(h\). Instead, it introduces a prototypical part layer \(g^{t}\) and a fully-connected layer \(h^{t}\) for each successive task. 
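Below is a small NumPy sketch of the prototypical part layer and the positive-only last layer \(h^{t}\) described above (the backbone and add-on are abstracted into a precomputed feature map); the tensor shapes, the value of \(\eta\), and the helper names are illustrative assumptions.

```python
import numpy as np

def prototype_layer(feat, prototypes, eta=1e-4):
    """feat: (H, W, D) output of the add-on; prototypes: (M, D) prototypical parts.
    Returns, for every prototype, its maximal similarity over all H*W patches,
    with sim(p, z) = log((||z - p||^2 + 1) / (||z - p||^2 + eta))."""
    H, W, D = feat.shape
    patches = feat.reshape(-1, D)                                        # (H*W, D)
    d2 = ((patches[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)   # (H*W, M)
    sims = np.log((d2 + 1.0) / (d2 + eta))
    return sims.max(axis=0)                                              # (M,)

def class_logits(max_sims, class_of_prototype, num_classes):
    """Positive-only last layer: class c sums the similarities of its own prototypes
    (h_ci = 1 for assigned prototypes and 0 otherwise, as in the ICICLE variant)."""
    logits = np.zeros(num_classes)
    np.add.at(logits, class_of_prototype, max_sims)
    return logits

# toy usage: 2 classes with 10 prototypes each on a 7x7x64 feature map
rng = np.random.default_rng(0)
feat, protos = rng.random((7, 7, 64)), rng.random((20, 64))
proto_class = np.repeat(np.arange(2), 10)
logits = class_logits(prototype_layer(feat, protos), proto_class, num_classes=2)
```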
Layers \(g^{t}\) consist of \(M^{t}\) prototypical parts, where \(M^{t}=K\cdot C^{t}\) and \(K\) is the number of prototypes per class. On the other hand, layer \(h^{t}\) has weight \(h^{t}_{ci}=1\) if prototype \(p_{i}\) is assigned to class \(c\), and it is set to \(0\) otherwise. We eliminated negative weights (\(-0.5\)) from the last layer because multi-stage training is not beneficial for a class-incremental scenario (see Figure 6). Training. To prevent catastrophic forgetting, ICICLE modifies the loss function of the baseline solution. Additionally, it introduces three mechanisms: interpretability regularization, proximity-based prototype initialization, and task-recency bias compensation. Regarding the baseline loss function, the cross-entropy is calculated on the full output of the model, including logits from classes learned in previous tasks. However, the cluster and separation costs are only calculated within the \(g^{t}\) head. Interpretability regularization. Knowledge distillation [35] is one of the strong regularization methods applied to prevent forgetting [48]. However, the results obtained by its straightforward application are not satisfactory and lead to significant interpretation drift (see Figure 1 and Section 5). Therefore, we introduce an additional regularization cost \(L_{IR}\) (see Figure 3), inspired by [39], that minimizes the changes in the similarities for the prototypical parts of the previous tasks. It is defined as: \[L_{IR}=\sum_{i=0}^{H}\sum_{j=0}^{W}|sim(p^{t-1},z_{i,j}^{t})-sim(p^{t},z_{i,j}^{t})|\cdot S_{i,j} \tag{1}\] where \(sim(p^{t-1},z_{i,j}^{t})\) is computed for the model stored before training task \(t\), and \(S\) is a binary mask of size \(H\times W\), indicating the representation pixels with the highest similarity (\(\gamma\) quantile of those pixels). Such similarity distillation gives higher plasticity when learning a new task but, at the same time, reduces the interpretability drift. Figure 3: Our interpretability regularization aims to minimize the changes in the prototype similarities. It takes a prototype \(p^{t-1}\) of previous tasks and an image from task \(t\), selects the image area with the highest similarity to this prototype (binary mask \(S\)), and punishes the model for any changes in this area caused by training task \(t\). 
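A sketch of how Equation (1) can be evaluated is shown below; the quantile used to build the binary mask \(S\), the plain NumPy tensors, and the helper names are assumptions standing in for the actual PyTorch implementation.

```python
import numpy as np

def similarity_map(feat, p, eta=1e-4):
    """sim(p, z_{i,j}) for every spatial location of an (H, W, D) feature map."""
    d2 = ((feat - p) ** 2).sum(-1)
    return np.log((d2 + 1.0) / (d2 + eta))

def interpretability_regularization(feat_old, feat_new, prototypes_prev, q=0.75):
    """L_IR: penalize changes of previous-task prototype similarities on the
    most activated locations (binary mask S from a quantile threshold).
    feat_old comes from the model stored before task t, feat_new from the current one."""
    loss = 0.0
    for p in prototypes_prev:
        sim_old = similarity_map(feat_old, p)
        sim_new = similarity_map(feat_new, p)
        S = sim_old >= np.quantile(sim_old, q)      # highest-similarity pixels only
        loss += np.abs(sim_old - sim_new)[S].sum()
    return loss
```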
That is why, after training the final task, we compensate for this using \(T-1\) constants obtained from the last task's data. More precisely, for each task \(t\), we take the logit values before Softmax and calculate a scalar \(c_{t}\) so that the shifted logits \(\hat{y}_{t}=y_{t}+c_{t}\) are equalized between tasks.

## 4 Experimental Setup

We evaluate our approach on the CUB-200-2011 [80] and Stanford Cars [46] datasets to classify 200 bird species and 196 car models, respectively. We consider 4, 10, and 20 task learning scenarios for birds and 4, 7, and 14 task scenarios for cars. As the backbone \(f\), we take ResNet-34 [34] without the last layer and pre-trained on ImageNet [20]. We set the number of prototypes per class to \(10\). Moreover, we use prototypical parts of size \(1\times 1\times 256\) and \(1\times 1\times 128\) for birds and cars, respectively. The weights of the CE, cluster, separation, and distillation costs in the loss function equal \(1.0\), \(0.8\), \(-0.08\), and \(0.01\). For the distillation mask, we keep the \(1/49\) fraction of representation pixels with the highest similarity. For proximity-based initialization, we use \(\alpha=0.5\). For task-recency bias compensation, we take \(c_{t}\) which changes the predictions on the last validation set by less than \(10\%\). As the implementation framework, we use FACIL [54] based on the PyTorch library1. Details on the experimental setup and code (to be released upon acceptance) are provided in the Supplementary Materials. Footnote 1: [https://pytorch.org](https://pytorch.org)

## 5 Results

Performance. We evaluated the effectiveness of ICICLE by comparing it with commonly used exemplar-free baseline methods in class-incremental learning, including LWF [48], LWM [21], and EWC [44]2. Additionally, Fine-tuning and Freezing of the feature extractor (the latter not trained at all) are provided as baselines. We also report multi-task learning, where the various tasks are learned jointly, as an upper bound. To do so, we analyzed task-aware and task-agnostic accuracy for each task after the last one (Table 3) and the aggregated incremental average accuracies after learning the last task in scenarios involving 4, 10, and 20 tasks for CUB (Table 2) and 4, 7, and 14 tasks for Stanford Cars (Supplementary Materials). All methods use the same feature extractor network architecture and ProtoPNet for prototypical part-based learning. Our method outperformed the baseline methods in all cases, indicating its superior performance for prototypical part-based learning in a continual manner.

Figure 4: We introduce a new proximity-based prototype initialization. It starts by passing training samples of task \(t\) through the network (green dots) and choosing representations closest to existing prototypes (violet diamonds). This results in many points, which we cluster (purple circles) to obtain the initial locations of task \(t\) prototypical parts (yellow diamonds). Such initialization (bottom right) is preferred over random initialization (top right), where new prototypes can be created far from the old ones, even though they are only slightly different.

Figure 5: When the model learns task \(t\), the similarities to the prototypes of previous tasks drop and are significantly lower than those of new tasks (upper plot). That is why, after training the final task, we compensate it with \(T-1\) calculated constants. As a result, the similarities obtained by prototypes of all tasks are roughly equalized.
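To make the proximity-based initialization of Figure 4 concrete, a possible sketch is shown below; the use of scikit-learn's KMeans, the similarity helper, and all tensor shapes are illustrative assumptions and not the exact implementation.

```python
# Hedged sketch of proximity-based prototype initialization (assumes torch and scikit-learn).
import torch
from sklearn.cluster import KMeans

def proximity_init(patches, old_prototypes, n_new_protos, alpha=0.5, eta=1e-4):
    """patches: N x D latent patch vectors z_j^t from task-t training images;
    old_prototypes: P x D prototypes of previous tasks;
    returns n_new_protos x D initial prototypes for the new task."""
    d2 = torch.cdist(patches, old_prototypes) ** 2            # N x P squared distances
    sim = torch.log((d2 + 1) / (d2 + eta))
    best = sim.max(dim=1).values                               # similarity to the closest old prototype
    keep = best >= torch.quantile(best, alpha)                 # candidates from the alpha quantile
    candidates = patches[keep].numpy()
    km = KMeans(n_clusters=n_new_protos, init="k-means++", n_init=10).fit(candidates)
    return torch.tensor(km.cluster_centers_, dtype=patches.dtype)

# toy usage: 500 patches of dimension 256, 20 old prototypes, 40 new prototypes (K * |C^t|)
new_protos = proximity_init(torch.rand(500, 256), torch.rand(20, 256), n_new_protos=40)
print(new_protos.shape)                                        # torch.Size([40, 256])
```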
ICICLE retains knowledge from previous tasks better, which results in a more balanced accuracy between tasks and higher accuracy for the first task compared to all other approaches. However, despite the significant improvement, our approach still has room for improvement compared to the upper bound of multi-task training. With a larger number of tasks, the forgetting of the model is higher, resulting in poorer results, which may be attributed to the number of details that prototypes need to capture to classify a task correctly. Furthermore, we have noticed that freezing is a robust baseline for the task-aware scenario because of the model's fixed nature and pretrained backbone.

Interpretability. To evaluate whether, and by how much, the graphical representation of a prototype's concept has changed, we use the IoU metric [71]. IoU measures the overlap of the prototype's visual representation (as in Figure 1) between the task in which it was learned and all the following tasks. Freezing is superior in preserving the prototypical information because all weights from previous tasks are fixed. Among the methods that allow changes to the backbone and the previously learned prototypes, ICICLE is superior over all baselines, as shown in Table 1. ICICLE keeps the interpretable prototypes consistent thanks to the interpretability regularization distilling already learned concepts.

### Ablation study and analysis

This section focuses on an in-depth ablation study of ICICLE. Initially, we outline the reasoning behind the modifications made to the ProtoPNet architecture. We then explore how various components affect the model's results. We also examine what kind of data should be regularized during the new task-learning episode and how different initialization techniques can impact the performance. Finally, we show that ICICLE is general and can be used with different concept-based architectures such as TesNet [81].

Why changes in ProtoPNet architecture and training are needed? ProtoPNet in the last training stage (the last-layer convex optimization) aims to fine-tune the positive connections and regularize the negative ones to be 0. As a result, the converged model returns interpretations in the form of positive reasoning, desired by the end users [11]. In the CL setting, the last step of training changes the negative connections in a different manner (see Figure 6). On the other hand, in an exemplar-free continual learning scenario, conducting the last-layer learning phase at the end of the training is unfeasible. That is why we modified ProtoPNet's last layer and retained only positive connections initialized to 1, eliminating the need for the convex optimization step. \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{avg. inc. task-aware accuracy} & \multicolumn{3}{c|}{Avg. inc.
task-agnostic accuracy} \\ \cline{2-7} & 4 Tasks & 10 Tasks & 20 tasks & 4 tasks & 10 tasks & 20 tasks \\ \hline \hline Freezing & \(0.560\pm 0.027\) & \(0.531\pm 0.042\) & \(0.452\pm 0.055\) & \(0.309\pm 0.024\) & \(0.115\pm 0.028\) & \(0.078\pm 0.004\) \\ Finetuning & \(0.229\pm 0.005\) & \(0.129\pm 0.017\) & \(0.147\pm 0.021\) & \(0.177\pm 0.006\) & \(0.072\pm 0.008\) & \(0.044\pm 0.006\) \\ EWC & \(0.445\pm 0.012\) & \(0.288\pm 0.034\) & \(0.188\pm 0.031\) & \(0.213\pm 0.008\) & \(0.095\pm 0.007\) & \(0.046\pm 0.011\) \\ LWM & \(0.452\pm 0.023\) & \(0.294\pm 0.032\) & \(0.226\pm 0.025\) & \(0.180\pm 0.028\) & \(0.090\pm 0.011\) & \(0.044\pm 0.008\) \\ LWF & \(0.301\pm 0.048\) & \(0.175\pm 0.028\) & \(0.129\pm 0.023\) & \(0.219\pm 0.019\) & \(0.078\pm 0.008\) & \(0.072\pm 0.008\) \\ \hline ICICLE & \(\mathbf{0.654\pm 0.011}\) & \(\mathbf{0.602\pm 0.035}\) & \(\mathbf{0.497\pm 0.099}\) & \(\mathbf{0.350\pm 0.053}\) & \(\mathbf{0.185\pm 0.005}\) & \(\mathbf{0.099\pm 0.003}\) \\ \hline Multi-task & \(0.858\pm 0.005\) & \(0.905\pm 0.012\) & \(0.935\pm 0.019\) & \(0.499\pm 0.009\) & \(0.196\pm 0.017\) & \(0.148\pm 0.009\) \\ \hline \end{tabular} \end{table} Table 2: Average incremental accuracy comparison for different numbers of tasks on CUB-200-2011, demonstrating the negative impact of the high number of tasks to be learned on models’ performance. Despite this trend, ICICLE outperforms the baseline methods across all task numbers. \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{Task-aware accuracy} & \multicolumn{3}{c|}{Task-agnostic accuracy} \\ \cline{2-10} & Task 1 & Task 2 & Task 3 & Task 4 & Task 1 & Task 2 & Task 3 & Task 4 \\ \hline \hline Freizing & \(\mathbf{0.806\pm 0.024}\) & \(0.462\pm 0.037\) & \(0.517\pm 0.041\) & \(0.455\pm 0.027\) & \(\mathbf{0.570\pm 0.031}\) & \(0.195\pm 0.017\) & \(0.258\pm 0.019\) & \(0.213\pm 0.020\) \\ Finetuning & \(0.007\pm 0.004\) & \(0.016\pm 0.008\) & \(0.032\pm 0.009\) & \(0.759\pm 0.019\) & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) & \(0.759\pm 0.019\) \\ EWC & \(0.244\pm 0.024\) & \(0.378\pm 0.072\) & \(0.539\pm 0.043\) & \(0.602\pm 0.054\) & \(0.001\pm 0.001\) & \(0.059\pm 0.004\) & \(0.267\pm 0.031\) & \(0.527\pm 0.051\) \\ LWF & \(0.169\pm 0.046\) & \(0.119\pm 0.008\) & \(0.235\pm 0.017\) & \(0.743\pm 0.061\) & \(0.158\pm 0.035\) & \(0.003\pm 0.002\) & \(0.018\pm 0.003\) & \(0.537\pm 0.142\) \\ LWM & \(0.195\pm 0.012\) & \(0.412\pm 0.014\) & \(0.430\pm 0.028\) & \(\mathbf{0.72\pm 0.011}\) & \(0.027\pm 0.024\) & \(0.023\pm 0.020\) & \(0.085\pm 0.005\) & \(\mathbf{0.772\pm 0.037}\) \\ \hline ICICLE & \(0.523\pm 0.020\) & \(\mathbf{0.663\pm 0.053}\) & \(\mathbf{0.709\pm 0.038}\) & \(0.723\pm 0.002\) & \(0.233\pm 0.014\) & \(\mathbf{0.365\pm 0.021}\) & \(\mathbf{0.314\pm 0.011}\) & \(0.486\pm 0.021\) \\ \hline \end{tabular} \end{table} Table 3: Comparison of task accuracies for modified ProtoPNet architecture in a class-incremental learning scenario after 4 tasks train on CUB-200-2011 dataset, averaged over 3 runs with standard error of the mean. Our ICICLE outperforms baseline methods and achieves the best results for all previous incremental tasks, demonstrating its ability to maintain prior knowledge while learning new tasks. Freezing due to the weight fixation cannot properly learn new tasks. 
What is the influence of each of the introduced components?Table 4 presents the influence of different components of our approach on the final model's average incremental accuracy in the CUB-200-2011 dataset with four tasks split scenario. Combining all the components resulted in the best-performing model. Our results show that compensation of task-recency bias helps in task-agnostic evaluation and gives additional improvement of \(4.5\%\). However, most of the accuracy improvements were attributed to interpretability regularization and proximity initialization. Notably, task-recency bias compensation significantly improved the performance of task one classes compared to an approach without it, from 0.028 to 0.255 in a task-agnostic scenario, as detailed in the Supplementary Materials. Where should we perform interpretability regularization?The ProtoPNet model's prototypical layer can be regularized in three different ways: feature regularization on add-on layer representations, regularization of distances between prototypical parts and latent data vectors, and similarity-based regularization. The strictest approach is feature regularization, which does not allow the model to change how it represents data from a new task, resulting in significantly reduced model plasticity. When distances are regularized, the model can change its representation to maintain the same distance from the prototype on the surface of the sphere. On the other hand, similarity-based regularization allows the model to retain key knowledge from previous tasks by preserving only the information related to specific features that are close to the prototypical parts in the latent space, allowing for greater flexibility in exchange for forgetting irrelevant features. Therefore, we stick to interpretability regularization in ICICLE, which is based on similarities and maintains the essential knowledge from previous tasks while retaining high plasticity to learn new ones. Figure 7 illustrates these three approaches and their comparison in terms of average incremental accuracy for ProtoPNet only with regularization (without changing initialization): \(0.507\), \(0.535\), and \(0.559\) in task-aware and \(0.261\), \(0.230\), and \(0.280\) in task-agnostic scenarios for feature, distance, and similarity-based regularization, respectively, on the CUB-200-2011 dataset with four tasks scenarios. What is the influence of hyperparameters in interpretability regularization?In Figure 8 and Figure 9 the influence of \(\lambda\) and mask percentile threshold in the interpretability regularization on average incremental accuracies are presented. We use CUB-200-2011 datasets with four tasks split setting. For this dataset, the results reveal that the regularization of only the maximum prototypical similarity is the most effective (Figure 9). Regarding \(\lambda_{IR}\), a value that is too small leads to high network plasticity, increased forgetting, and poor results, while a value that is too large reduces model plasticity and may not represent new knowledge well. \begin{table} \begin{tabular}{|c|c|c||c|c|} \hline Regularization & Initialization & Compensation & \multicolumn{1}{c|}{Taw acc.} & \multicolumn{1}{c|}{Tag acc.} \\ \hline & & & 0.216 & 0.182 \\ ✓ & & & 0.559 & 0.280 \\ ✓ & ✓ & & **0.654** & 0.335 \\ ✓ & ✓ & ✓ & **0.654** & **0.350** \\ \hline \end{tabular} \end{table} Table 4: Influence of different novel components on the average incremental accuracy in four tasks learning scenario. 
Combination of all components results in the best-performing model. Figure 6: Average weight of positive and negative connections per class in 4 task learning scenario. Unbalanced and strong negative connections between tasks result in undesired properties in terms of the model’s interpretability. Figure 7: Visualization of three possible approaches to interpretability similarity and their influence on the model plasticity. Only similarity-based regularization takes into account how a given image part corresponds to a prototypical part. If it is close then the similarity value is high and small changes in the distance results in a great decrease in similarity. While latent vectors that are distant from prototypical parts can more freely be changed by the model to better represents the current task data. Other approaches are limiting the models’ plasticity treating each latent representation of the image part as equally important. Which way is the best to initialize new prototypical parts?In this ablation part, we investigate the optimal strategy for initializing prototypical parts at the beginning of a new task in the ProtoPNet model. We evaluate our initialization method, which initializes the parts in close proximity to existing prototypes, against three other approaches: random initialization, clustering of all image part representations, and clustering of only distant latent vectors. Results are presented in Table 5. The proximity initialization method outperforms the distant strategy, as the latter tends to assign prototypical parts to latent vectors that correspond to the background of the images, resulting in learning irrelevant concepts that can easily activate on other task data, as shown in the Supplementary Materials. Does ICICLE generalize to other architectures?Lastly, we show that ICICLE generalizes to other concept-based architecture. We demonstrate that using a TesNet model [81], and provide results in Table 6, where ICICLE obtains the best results. The average incremental accuracy of ICICLE with TesNet is even better than ProtoPNet for both task-aware and task-agnostic evaluation. ## 6 Conclusions and future work This work proposes a novel approach called ICICLE for interpretable class incremental learning. ICICLE is based on prototypical parts and incorporates interpretability regularization, proximity initialization, and compensation for task-recency bias. The proposed method outperforms classical class-incremental learning methods applied for prototypical part-based networks in terms of task-aware and task-agnostic accuracies while maintaining prototype interpretability. We also conducted ablation studies and multiple analyses to justify our choices and highlight the challenges associated with combining interpretable concepts with CL. This work is expected to inspire research on XAI and CL. Moving forward, we plan to explore methods suitable for single-class incremental learning with interpretable models. We also intend to investigate how other interpretable architectures, such as B-COS [8], can be adapted to the class incremental learning scenario. Limitations.Our work limits itself only to prototypical part methods, that are suited for fine-grained image recognition and derives all of their drawbacks previously discussed in [27, 36, 42, 58, 69]. Additionally, as we consider only an exemplar-free scenario, we do not analyze how having a replay buffer would influence the method's performance. 
Impact.ICICLE highlights that traditional exemplar-free approaches for continual learning are not well suited for gray-box models that utilize concepts for predictions. This finding has implications for the development of continual learning methods, as they must balance the need for generality with the need to be adapted to specific architectures. Furthermore, it has an impact on the field of concept-based models and explainable AI, demonstrating the need for further research on CL methods for XAI. In some cases, practitioners who know that their system will need to learn new tasks continuously may choose to use black-box models and explainers rather than interpretable models, sacrificing the \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline & Freezing & Finentuning & EWC & LWM & LWF & ICICLE \\ \hline Taw Acc. & \(0.637\) & \(0.355\) & \(0.592\) & \(0.648\) & \(0.581\) & **0.746** \\ Tag Acc. & \(0.222\) & \(0.183\) & \(0.272\) & \(0.252\) & \(0.205\) & **0.362** \\ \hline \end{tabular} \end{table} Table 6: Results for four task learning scenario on CUB-200-2011 dataset with TesNet [81] as a concept-based architecture. The table shows the versatility of the ICICLE approach for interpretable models. \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline Initialization type & Random & Distant & All & Proximity \\ \hline Task aware acc. & \(0.559\) & \(0.592\) & \(0.626\) & **0.654** \\ Task agnostic acc. & \(0.280\) & \(0.290\) & \(0.297\) & **0.335** \\ \hline \end{tabular} \end{table} Table 5: Comparison of different initialization strategies for prototypical parts. Our proximity initialization of new task prototypes is superior. Figure 8: Influence of the \(\lambda_{IR}\) in the interpretability regularization. Figure 9: Influence of \(\gamma\) in the interpretability regularization. Notice that regularizing only in the place of maximum similarity is the most beneficial for ICICLE for the four task learning scenario in CUB-200-2011. fidelity of explanations for improved model performance. ## Acknowledgements Joost van de Weijer and Bartlomiej Twardowski acknowledge the support from the Spanish Government funding for projects PID2019-104174GB-I00, TED2021-132513B-I00, and grant RYC2021-032765-I, also the support from the European Commission under the Horizon 2020 Programme, funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR. Dawid Rymarczyk is supported in whole or in part by National Science Centre, Poland Grant No. 2022/45/N/ST6/04147. For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.". He received an incentive scholarship from the funds of the program Excellence Initiative - Research University at the Jagiellonian University in Krakow. Bartosz Zielinski is supported by the National Centre of Science (Poland) Grant No. 2021/41/B/ST6/01370.
2308.12808
Decreasing the mean subtree order by adding $k$ edges
The mean subtree order of a given graph $G$, denoted $\mu(G)$, is the average number of vertices in a subtree of $G$. Let $G$ be a connected graph. Chin, Gordon, MacPhee, and Vincent [J. Graph Theory, 89(4): 413-438, 2018] conjectured that if $H$ is a proper spanning supergraph of $G$, then $\mu(H) > \mu(G)$. Cameron and Mol [J. Graph Theory, 96(3): 403-413, 2021] disproved this conjecture by showing that there are infinitely many pairs of graphs $H$ and $G$ with $H\supset G$, $V(H)=V(G)$ and $|E(H)|= |E(G)|+1$ such that $\mu(H) < \mu(G)$. They also conjectured that for every positive integer $k$, there exists a pair of graphs $G$ and $H$ with $H\supset G$, $V(H)=V(G)$ and $|E(H)| = |E(G)| +k$ such that $\mu(H) < \mu(G)$. Furthermore, they proposed that $\mu(K_m+nK_1) < \mu(K_{m, n})$ provided $n\gg m$. In this note, we confirm these two conjectures.
Stijn Cambie, Guantao Chen, Yanli Hao, Nizamettin Tokar
2023-08-24T14:10:32Z
http://arxiv.org/abs/2308.12808v1
# Decreasing the mean subtree order by adding \(k\) edges ###### Abstract The _mean subtree order_ of a given graph \(G\), denoted \(\mu(G)\), is the average number of vertices in a subtree of \(G\). Let \(G\) be a connected graph. Chin, Gordon, MacPhee, and Vincent [J. Graph Theory, 89(4): 413-438, 2018] conjectured that if \(H\) is a proper spanning supergraph of \(G\), then \(\mu(H)>\mu(G)\). Cameron and Mol [J. Graph Theory, 96(3): 403-413, 2021] disproved this conjecture by showing that there are infinitely many pairs of graphs \(H\) and \(G\) with \(H\supset G\), \(V(H)=V(G)\) and \(|E(H)|=|E(G)|+1\) such that \(\mu(H)<\mu(G)\). They also conjectured that for every positive integer \(k\), there exists a pair of graphs \(G\) and \(H\) with \(H\supset G\), \(V(H)=V(G)\) and \(|E(H)|=|E(G)|+k\) such that \(\mu(H)<\mu(G)\). Furthermore, they proposed that \(\mu(K_{m}+nK_{1})<\mu(K_{m,n})\) provided \(n\gg m\). In this note, we confirm these two conjectures. _Keywords:_ Mean subtree order; Subtree ## 1 Introduction Graphs in this paper are simple unless otherwise specified. Let \(G\) be a graph with vertex set \(V(G)\) and edge set \(E(G)\). The _order_ of \(G\), denoted by \(|G|\), is the number of vertices in \(G\), that is, \(|G|=|V(G)|\). The _complement_ of \(G\), denoted by \(\overline{G}\), is the graph on the same vertex set as \(G\) such that two distinct vertices of \(\overline{G}\) are adjacent if and only if they are not adjacent in \(G\). For an edge subset \(F\subseteq E(\overline{G})\), denote by \(G+F\) the graph obtained from \(G\) by adding the edges of \(F\). For a vertex subset \(U\subseteq V(G)\), denote by \(G-U\) the graph obtained from \(G\) by deleting the vertices of \(U\) and all edges incident with them. For any two graphs \(G_{1},G_{2}\) with \(V(G_{1})\cap V(G_{2})=\emptyset\), denote by \(G_{1}+G_{2}\) the graph obtained from \(G_{1},G_{2}\) by adding an edge between any two vertices \(v_{1}\in V(G_{1})\) and \(v_{2}\in V(G_{2})\). A tree is a graph in which every pair of distinct vertices is connected by exactly one path. A subtree of a graph \(G\) is a subgraph of \(G\) that is a tree. By convention, the empty graph is not regarded as a subtree of any graph. The _mean subtree order_ of \(G\), denoted \(\mu(G)\), is the average order of a subtree of \(G\). Jamison [5, 6] initiated the study of the mean subtree order in the 1980s, considering only the case that \(G\) is a tree. In [5], he proved that \(\mu(T)\geq\frac{n+2}{3}\) for any tree \(T\) of order \(n\), with this minimum achieved if and only if \(T\) is a path; and \(\mu(T)\) could be very close to its order \(n\). Jamison's work on the mean order of subtrees of a tree has received considerable attention [4, 8, 9, 10, 11]. At the 2019 Spring Section AMS meeting in Auburn, Jamison presented a survey that provided an overview of the current state of open questions concerning the mean subtree order of a tree, some of which have been resolved [1, 7]. Recently, Chin, Gordon, MacPhee, and Vincent [3] initiated the study of subtrees of graphs in general. They believed that the parameter \(\mu\) is monotonic with respect to the inclusion relationship of subgraphs. More specifically, they [3, Conjecture 7.4] conjectured that for any simple connected graph \(G\), adding any edge to \(G\) will increase the mean subtree order. Clearly, the truth of this conjecture implies that \(\mu(K_{n})\) is the maximum among all connected simple graphs of order \(n\), but it's unknown if \(\mu(K_{n})\) is the maximum. 
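As a quick sanity check of the definitions above, the following brute-force computation (a small illustrative script assuming the networkx library, feasible only for very small graphs since the enumeration grows exponentially) evaluates \(\mu(G)\) by listing every subtree, and reproduces Jamison's value \(\mu(P_{n})=(n+2)/3\) for a path.

```python
# Brute-force mean subtree order: single vertices plus every connected acyclic edge subset.
from itertools import combinations
import networkx as nx

def mean_subtree_order(G):
    orders = [1] * G.number_of_nodes()            # single-vertex subtrees
    edges = list(G.edges())
    for k in range(1, G.number_of_nodes()):       # a subtree on >1 vertices has at most n-1 edges
        for subset in combinations(edges, k):
            H = nx.Graph(list(subset))
            if nx.is_tree(H):                     # connected and acyclic
                orders.append(H.number_of_nodes())
    return sum(orders) / len(orders)

P6 = nx.path_graph(6)                             # the path on 6 vertices
print(mean_subtree_order(P6), (6 + 2) / 3)        # both equal 8/3
```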
Cameron and Mol [2] constructed some counterexamples to this conjecture by a computer search. Moreover, they found that the graph depicted in Figure 1 is the smallest counterexample to this conjecture and there are infinitely many graphs \(G\) with \(xy\in E(\overline{G})\) such that \(\mu(G+xy)<\mu(G)\). In their paper, Cameron and Mol [2] initially focused on the case of adding a single edge, but they also made the following conjecture regarding adding several edges. **Conjecture 1.1**.: _For every positive integer \(k\), there are two connected graphs \(G\) and \(H\) with \(G\subset H\), \(V(G)=V(H)\) and \(|E(H)\backslash E(G)|=k\) such that \(\mu(H)<\mu(G)\)._ We will confirm Conjecture 1.1 by proving the following theorem, which will be presented in Section 2. **Theorem 1.2**.: _For every positive integer \(k\), there exist infinitely many pairs of connected graphs \(G\) and \(H\) with \(G\subset H\), \(V(G)=V(H)\) and \(|E(H)\backslash E(G)|=k\) such that \(\mu(H)<\mu(G)\)._ Figure 1: Adding the edge between \(a\) and \(b\) decreases the mean subtree order In the same paper, Cameron and Mol [2] also proposed the following conjecture. **Conjecture 1.3**.: _Let \(m,n\) be two positive integers. If \(n\gg m\), then we have \(\mu(K_{m}+nK_{1})<\mu(K_{m,n})\)._ We can derive Conjecture 1.1 from Conjecture 1.3, the proof of which is presented in Section 3, by observing that when \(m=2k\), the binomial coefficient \(\binom{m}{2}\) is divisible by \(k\). With \(2k-1\) steps, we add \(k\) edges in each step, and eventually the mean subtree order decreases, so it must have decreased in some intermediate step. ## 2 Theorem 1.2 Let \(G\) be a graph of order \(n\), and let \(\mathcal{T}_{G}\) be the family of subtrees of \(G\). By definition, we have \(\mu(G)=(\sum_{T\in\mathcal{T}_{G}}|T|)/|\mathcal{T}_{G}|\). The _density_ of \(G\) is defined by \(\sigma(G)=\mu(G)/n\). More generally, for any subfamily \(\mathcal{T}\subseteq\mathcal{T}_{G}\), we define \(\mu(\mathcal{T})=(\sum_{T\in\mathcal{T}}|T|)/|\mathcal{T}|\) and \(\sigma(\mathcal{T})=\mu(\mathcal{T})/n\). Clearly, \(1\leq\mu(G)\leq n\) and \(0<\sigma(G)\leq 1\). ### The Construction Fix a positive integer \(k\). For some integer \(m\), let \(\{s_{n}\}_{n\geq m}\) be a sequence of non-negative integers satisfying: (1) \(2s_{n}\leq n-k-1\) for all \(n\geq m\); (2) \(s_{n}=o(n)\), i.e., \(\lim_{n\to\infty}s_{n}/n=0\); and (3) \(2^{s_{n}}\geq n^{2}\) for all \(n\geq m\). Notice that many such sequences exist. Take, for instance, the sequence \(\{\lceil 2\log_{2}(n)\rceil\}_{n\geq m}\), as in [2], where \(m\) is the least positive integer such that \(m-2\lceil 2\log_{2}(m)\rceil\geq k+1\). In the remainder of this paper, we fix \(P\) for a path \(v_{1}v_{2}\cdots v_{n-2s_{n}}\) of order \(n-2s_{n}\). Clearly, \(|P|\geq k+1\). Furthermore, let \(P^{\star}:=P-\{v_{1},\ldots,v_{k-1}\}=v_{k}\cdots v_{n-2s_{n}}\). Let \(G_{n}\) be the graph obtained from the path \(P\) by joining \(s_{n}\) leaves to each of the two endpoints \(v_{1}\) and \(w:=v_{n-2s_{n}}\) of \(P\) (see Figure 2). Let \(G_{n,k}:=G_{n}+\{v_{1}w,v_{2}w,\ldots,v_{k}w\}\), that is, \(G_{n,k}\) is the graph obtained from \(G_{n}\) by adding \(k\) new edges \(e_{1}:=v_{1}w,e_{2}:=v_{2}w,\ldots,e_{k}:=v_{k}w\) (see Figure 3). Figure 2: \(G_{n}\) Let \(\mathcal{T}_{n,k}\) be the family of subtrees of \(G_{n,k}\) containing the vertex set \(\{v_{1},v_{k},w\}\) but not containing the path \(P^{*}=v_{k}\cdots w\). 
It is worth noting that \(\mathcal{T}_{n,1}\) is the family of subtrees of \(G_{n,1}\) containing edge \(v_{1}w\). Note that the graphs \(G_{n}\) and \(G_{n,1}\) defined above are actually the graphs \(T_{n}\) and \(G_{n}\) constructed by Cameron and Mol in [2], respectively. From the proof of Theorem 3.1 in [2], we obtain the following two results regarding the density of \(G_{n},G_{n,1},\mathcal{T}_{n,1}\). **Lemma 2.1**.: \(\lim\limits_{n\to\infty}\sigma(G_{n})=1\)_._ **Lemma 2.2**.: \(\lim\limits_{n\to\infty}\sigma(G_{n,1})=\lim\limits_{n\to\infty}\sigma( \mathcal{T}_{n,1})=\frac{2}{3}\)_._ The following two technical results concerning the density of \(\mathcal{T}_{n,k}\) are crucial in the proof of Theorem 1.2. The proofs of these results will be presented in Subsubsection 2.1.1 and Subsubsection 2.1.2, respectively. **Lemma 2.3**.: _For any fixed positive integer \(k\), \(\lim\limits_{n\to\infty}\sigma(\mathcal{T}_{n,k})=\lim\limits_{n\to\infty} \sigma(\mathcal{T}_{n-k+1,1})\)._ **Lemma 2.4**.: _For any fixed positive integer \(k\), \(\lim\limits_{n\to\infty}\sigma(\mathcal{T}_{n,k})=\lim\limits_{n\to\infty} \sigma(G_{n,k})\)._ The combination of Lemma 2.2, Lemma 2.3 and Lemma 2.4 immediately yields the following result. **Corollary 2.5**.: _For any fixed positive integer \(k\), \(\lim\limits_{n\to\infty}\sigma(G_{n,k})=\frac{2}{3}\)._ Combining Lemma 2.1 and Corollary 2.5, we have that \(\lim\limits_{n\to\infty}\sigma(G_{n,k})=\frac{2}{3}<1=\lim\limits_{n\to\infty} \sigma(G_{n})\) for any fixed positive integer \(k\). By definition, we gain that \(\sigma(G_{n,k})=\mu(G_{n,k})/|G_{n,k}|\) and \(\sigma(G_{n})=\mu(G_{n})/|G_{n}|\). Since \(|G_{n,k}|=|G_{n}|\), it follows that \(\mu(G_{n,k})<\mu(G_{n})\) for \(n\) sufficiently large, which in turn gives Theorem 1.2. The following result presented in [2, page 408, line -2] will be used in our proof. **Lemma 2.6**.: \(|\mathcal{T}_{n,1}|=2^{2s_{n}}\cdot\binom{n-2s_{n}}{2}\)_._ Figure 3: \(G_{n,k}\) #### 2.1.1 Proof of Lemma 2.3 Let \(H\) be the subgraph of \(G_{n,k}\) induced by vertex set \(\{v_{1},\ldots,v_{k},w\}\) (see Figure 4). Furthermore, set \(n_{1}=n-k+1\), and let \(G_{n_{1}}^{+}\) be the graph obtained from \(G_{n,k}\) by contracting vertex set \(\{v_{1},\ldots,v_{k}\}\) into vertex \(v_{1}\) and removing any resulting loops and multiple edges (see Figure 5). Clearly, \(G_{n_{1}}^{+}\) is isomorphic to \(G_{n_{1},1}\). Let \(T\in\mathcal{T}_{n,k}\), that is, \(T\) is a subtree of \(G_{n,k}\) containing the vertex set \(\{v_{1},v_{k},w\}\) but not containing the path \(P^{*}=v_{k}\cdots w\). Let \(T_{1}\) be the subgraph of \(H\) induced by \(E(H)\cap E(T)\). Since \(T\) does not contain the path \(P^{*}\), we have that \(T_{1}\) is connected, and so it is a subtree of \(H\). Let \(T_{2}\) be the graph obtained from \(T\) by contracting vertex set \(\{v_{1},\ldots,v_{k}\}\) into the vertex \(v_{1}\) and removing any resulting loops and multiple edges. Since \(T_{1}\) is connected and contains vertex set \(\{v_{1},v_{k},w\}\), it follows that \(T_{2}\) is a subtree of \(G_{n_{1}}^{+}\) containing edge \(v_{1}w\). So, each \(T\in\mathcal{T}_{n,k}\) corresponds to a unique pair \((T_{1},T_{2})\) of trees, where \(T_{1}\) is a subtree of \(H\) containing vertex set \(\{v_{1},v_{k},w\}\), and \(T_{2}\in\mathcal{T}_{n_{1},1}\). We also notice that \(|T|=|T_{1}|+|T_{2}|-2\), where the \(-2\) arises due to the fact that \(T_{1}\) and \(T_{2}\) share exactly two vertices \(v_{1}\) and \(w\). 
Let \(\mathcal{T}_{H}^{\prime}\subseteq\mathcal{T}_{H}\) be the family of subtrees of \(H\) containing vertex set \(\{v_{1},v_{k},w\}\). By the corresponding relationship above, we have \(|\mathcal{T}_{n,k}|=|\mathcal{T}_{H}^{\prime}|\cdot|\mathcal{T}_{n_{1},1}|\). Hence, we obtain that Figure 4: \(H\) Figure 5: \(G_{n_{1}}^{+}\) \[\mu(\mathcal{T}_{n,k}) = \frac{\sum\limits_{T\in\mathcal{T}_{n,k}}|T|}{|\mathcal{T}_{n,k}|} =\frac{\sum\limits_{T_{1}\in\mathcal{T}_{H}^{\prime}}\sum\limits_{T_{2}\in \mathcal{T}_{n_{1},1}}(|T_{1}|+|T_{2}|-2)}{|\mathcal{T}_{H}^{\prime}|\cdot| \mathcal{T}_{n_{1},1}|}\] \[= \frac{|\mathcal{T}_{H}^{\prime}|\cdot\sum\limits_{T_{2}\in \mathcal{T}_{n_{1},1}}|T_{2}|+|\mathcal{T}_{n_{1},1}|\cdot\sum\limits_{T_{1} \in\mathcal{T}_{H}^{\prime}}|T_{1}|-2|\mathcal{T}_{n_{1},1}|\cdot|\mathcal{T}_ {H}^{\prime}|}{|\mathcal{T}_{H}^{\prime}|\cdot|\mathcal{T}_{n_{1},1}|}\] \[= \mu(\mathcal{T}_{n_{1},1})+\mu(\mathcal{T}_{H}^{\prime})-2.\] Dividing through by \(n\), we further gain that \[\sigma(\mathcal{T}_{n,k})=\frac{n_{1}}{n}\cdot\sigma(T_{n_{1},1})+\frac{k+1}{ n}\cdot\sigma(\mathcal{T}_{H}^{\prime})-\frac{2}{n}.\] Since \(\sigma(\mathcal{T}_{H}^{\prime})\) is always bounded by \(1\), it follows that \(\lim\limits_{n\to\infty}\frac{k+1}{n}\cdot\sigma(\mathcal{T}_{H}^{\prime})=0\). Combining this with \(\lim\limits_{n\to\infty}\frac{n_{1}}{n}=1\) and \(\lim\limits_{n\to\infty}\frac{2}{n}=0\), we get \(\lim\limits_{n\to\infty}\sigma(\mathcal{T}_{n,k})=\lim\limits_{n\to\infty} \sigma(\mathcal{T}_{n_{1},1})=\frac{2}{3}\) (by Lemma 2.2), which completes the proof of Lemma 2.3. #### 2.1.2 Proof of Lemma 2.4 Let \(\overline{\mathcal{T}}_{n,k}:=\mathcal{T}_{G_{n,k}}\setminus\mathcal{T}_{n,k}\). If \(\lim\limits_{n\to\infty}|\overline{\mathcal{T}}_{n,k}|/|\mathcal{T}_{n,k}|=0\), then \(\lim\limits_{n\to\infty}\frac{|\overline{\mathcal{T}}_{n,k}|}{|\mathcal{T}_{n,k }|+|\overline{\mathcal{T}}_{n,k}|}=0\) because \(\frac{|\overline{\mathcal{T}}_{n,k}|}{|\mathcal{T}_{n,k}|+|\overline{ \mathcal{T}}_{n,k}|}\leq|\overline{\mathcal{T}}_{n,k}|/|\mathcal{T}_{n,k}|\), and so \(\lim\limits_{n\to\infty}\frac{|\mathcal{T}_{n,k}|}{|\mathcal{T}_{n,k}|+| \overline{\mathcal{T}}_{n,k}|}=1\). Hence, \[\lim\limits_{n\to\infty}\sigma(G_{n,k}) = \lim\limits_{n\to\infty}\frac{\mu(G_{n,k})}{n}=\lim\limits_{n\to \infty}\frac{1}{n}\cdot\left(\frac{\sum\limits_{T\in\mathcal{T}_{n,k}}|T|}{| \mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}+\frac{\sum\limits_{T\in \overline{\mathcal{T}}_{n,k}}|T|}{|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}\right)\] \[= \lim\limits_{n\to\infty}\left(\sigma(\mathcal{T}_{n,k})\cdot\frac{ |\mathcal{T}_{n,k}|}{|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}+ \sigma(\overline{\mathcal{T}}_{n,k})\cdot\frac{|\overline{\mathcal{T}}_{n,k}|} {|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}\right)=\lim\limits_{n\to \infty}\sigma(\mathcal{T}_{n,k}).\] Thus, to complete the proof, it suffices to show that \(\lim\limits_{n\to\infty}|\overline{\mathcal{T}}_{n,k}|/|\mathcal{T}_{n,k}|=0\). We now define the following two subfamilies of \(\mathcal{T}_{G_{n,k}}\). * \(\mathcal{B}_{1}=\{T\in\mathcal{T}_{G_{n,k}}\ :\ v_{1}\notin V(T)\ \text{or}\ w\notin V(T)\}\); and * \(\mathcal{B}_{2}=\{T\in\mathcal{T}_{G_{n,k}}\ :\ T\cap P^{*}\ \text{is a path, and}\ T\ \text{contains}\ w\}\). Recall that \(\mathcal{T}_{n,k}\) is the family of subtrees of \(G_{n,k}\) containing vertex set \(\{v_{1},v_{k},w\}\) and not containing the path \(P^{*}=v_{k}\cdots w\). 
For any \(T\in\overline{\mathcal{T}}_{n,k}\), by definition, we have the following scenarios: \(v_{1}\notin V(T)\), and so \(T\in\mathcal{B}_{1}\) in this case; \(w\notin V(T)\), and so \(T\in\mathcal{B}_{1}\) in this case; \(v_{k}\notin V(T)\) and \(w\in V(T)\), then \(T\cap P^{*}\) is a path, and so \(T\in\mathcal{B}_{2}\) in this case; \(P^{*}\subseteq T\), and so \(T\in\mathcal{B}_{2}\) in this case. Consequently, \(\overline{\mathcal{T}}_{n,k}\subseteq\mathcal{B}_{1}\cup\mathcal{B}_{2}\), which in turn gives that \[|\overline{\mathcal{T}}_{n,k}|\leq|\mathcal{B}_{1}|+|\mathcal{B}_{2}|. \tag{1}\] Let \(S_{v_{1}}\) denote the star centered at \(v_{1}\) with the \(s_{n}\) leaves attached to it and \(S_{w}\) denote the star centered at \(w\) with the \(s_{n}\) leaves attached to it. Then \(G_{n,k}\) is the union of four subgraphs \(S_{v_{1}}\), \(S_{w}\), \(H\), and \(P^{*}\). * Considering the subtrees of \(S_{v_{1}}\) with at least two vertices and the subtrees of \(S_{v_{1}}\) with a single vertex, we get \(|\mathcal{T}_{S_{v_{1}}}|=(2^{s_{n}}-1)+(s_{n}+1)=2^{s_{n}}+s_{n}=2^{s_{n}}+o(2 ^{s_{n}})\). * Considering the subtrees of \(S_{w}\) with at least two vertices and the subtrees of \(S_{w}\) with a single vertex, we get \(|\mathcal{T}_{S_{w}}|=(2^{s_{n}}-1)+(s_{n}+1)=2^{s_{n}}+s_{n}=2^{s_{n}}+o(2^{s _{n}})\). * Considering the subpaths of \(P^{*}\) with at least two vertices and the subpaths of \(P^{*}\) with a single vertex, we get \(|\mathcal{T}_{P^{*}}|={|P^{*}|\choose 2}+|P^{*}|={|P^{*}|+1\choose 2}={n-2s_{n}-k+2 \choose 2}\leq\frac{n^{2}}{2}\). * The number of subpaths of \(P^{*}\) containing \(w\) is bounded above by \(|P^{*}|=n-2s_{n}-k+1\leq n\). Since \(s_{n}=o(n)\), we have the following two inequalities \[|\mathcal{B}_{1}| \leq (s_{n}+|\mathcal{T}_{H}|\cdot|\mathcal{T}_{P^{*}}|\cdot|\mathcal{ T}_{S_{w}}|)+(s_{n}+|\mathcal{T}_{H}|\cdot|\mathcal{T}_{P^{*}}|\cdot|\mathcal{ T}_{S_{v_{1}}}|)\] \[\leq 2\left[s_{n}+|\mathcal{T}_{H}|\cdot(2^{s_{n}}+o(2^{s_{n}})) \cdot\frac{n^{2}}{2}\right]=|\mathcal{T}_{H}|\cdot\left(2^{s_{n}}\cdot n^{2}+o (2^{s_{n}}\cdot n^{2})\right)\] \[|\mathcal{B}_{2}| \leq |\mathcal{T}_{S_{v_{1}}}|\cdot|\mathcal{T}_{S_{w}}|\cdot|P^{*}| \cdot|\mathcal{T}_{H}|=\left(2^{2s_{n}}\cdot n+o(2^{2s_{n}}\cdot n)\right) \cdot|\mathcal{T}_{H}|.\] Recall that \(n_{1}=n-k+1\). Applying Lemma 2.6, we have \[|\mathcal{T}_{n,k}| = |\mathcal{T}_{H}^{\prime}|\cdot|\mathcal{T}_{n_{1},1}|=|\mathcal{ T}_{H}^{\prime}|\cdot 2^{2s_{n}}{n_{1}-2s_{n}\choose 2}=|\mathcal{T}_{H}^{\prime}| \cdot 2^{2s_{n}}\cdot\left(\frac{n^{2}}{2}-o(n^{2})\right).\] Recall that \(2^{s_{n}}\geq n^{2}\). Since \(|\mathcal{T}_{H}|\) is bounded by a function of \(k\) because \(|H|=k+1\), we have the following two inequalities. 
\[\lim_{n\to\infty}\frac{|\mathcal{B}_{1}|}{|\mathcal{T}_{n,k}|} = \lim_{n\to\infty}\frac{|\mathcal{T}_{H}|\cdot 2^{s_{n}}\cdot n^{2}}{| \mathcal{T}_{H}^{\prime}|\cdot 2^{2s_{n}}\cdot\frac{n^{2}}{2}}=\lim_{n\to\infty} \frac{2|\mathcal{T}_{H}|}{|\mathcal{T}_{H}^{\prime}|\cdot 2^{s_{n}}}=0\] and \[\lim_{n\to\infty}\frac{|\mathcal{B}_{2}|}{|\mathcal{T}_{n,k}|} = \lim_{n\to\infty}\frac{2^{2s_{n}}\cdot n\cdot|\mathcal{T}_{H}|}{ |\mathcal{T}_{H}^{\prime}|\cdot 2^{2s_{n}}\cdot\frac{n^{2}}{2}}=\lim_{n\to\infty} \frac{2\cdot|\mathcal{T}_{H}|}{|\mathcal{T}_{H}^{\prime}|\cdot n}=0.\] Hence, we conclude that \[\lim_{n\to\infty}\frac{|\mathcal{B}_{1}|+|\mathcal{B}_{2}|}{| \mathcal{T}_{n,k}|}=0\] Combining this with (1), we have that \(\lim_{n\to\infty}|\overline{\mathcal{T}}_{n,k}|/|\mathcal{T}_{n,k}|=0\), which completes the proof of Lemma 2.4. ### An Alternative Construction The graphs we constructed in order to prove Theorem 1.2, and the sets of \(k\) edges that were added to them, are certainly not the only examples that could be used to prove Theorem 1.2. For example, the \(k\)-edge set \(\{v_{1}w,v_{2}w,\ldots,v_{k}w\}\) can be replaced by the \(k\)-edge set \(\{v_{1}v_{n-2s_{n}},v_{2}v_{n-2s_{n}-1},\)\(\ldots,v_{k}v_{n-2s_{n}-k+1}\}\). Fix a positive integer \(k\) and let \(n\) be an integer much larger than \(k\). We follow the notation given in Section 2. Recall that \(G_{n}\) is obtained from a path \(P:=v_{1}v_{2}\cdots v_{n-2s_{n}}\) by attaching two stars centered at \(v_{1}\) and \(v_{n-2s_{n}}\), and \(\lim\limits_{n\to\infty}\sigma(G_{n})=1\). Let \(E_{k}:=\{v_{i_{1}}v_{j_{1}},v_{i_{2}}v_{j_{2}},\ldots,v_{i_{k}}v_{j_{k}}\}\) be a set of \(k\) edges in \(\overline{G_{n}}\) such that \(1\leq i_{1}<j_{1}\leq i_{2}<j_{2}\leq\cdots\leq i_{k}<j_{k}\leq n-2s_{n}\). Let \(H_{n,k}=G_{n}+E_{k}\). For convenience, we assume that \(j_{\ell}-i_{\ell}\) have the same value, say \(p\), for \(\ell\in\{1,\ldots,k\}\). A simple calculation shows that for each path \(Q\) of order \(q\), we have \(\mu(Q)=(q+2)/3\) (See Jamison [5]), and so \(\lim\limits_{q\to\infty}\sigma(Q)=1/3\). For any non-empty subset \(F\subseteq E_{k}\), we define \(\mathcal{T}_{F}=\{T\in\mathcal{T}_{H_{n,k}}:E(T)\cap E_{k}=F\}\). For any edge \(v_{i_{\ell}}v_{j_{\ell}}\in F\), let \(e_{\ell}=v_{i_{\ell}}v_{j_{\ell}}\) and \(P_{\ell}=v_{i_{\ell}}v_{i_{\ell}+1}\cdots v_{j_{\ell}}\). Note that every tree \(T\in\mathcal{T}_{F}\) is a union of a subtree of \(H_{n,k}-\cup_{e_{\ell}\in F}(V(P_{\ell})\backslash\{v_{i_{\ell}},v_{j_{\ell}}\})\) containing \(F\) and \(\cup_{e_{\ell}\in F}(E(P_{\ell})-E(P_{\ell}^{*}))\) for some path \(P_{\ell}^{*}\subseteq P_{\ell}\) containing at least one edge. Since \(|E(P_{\ell})|=p\), the line graph of \(P_{\ell}\) is a path of order \(p\). Consequently, the mean of \(|E(P_{\ell}^{*})|\) over subpaths of \(P_{\ell}\) is \((p+2)/3\). Hence, the mean of \(|E(P_{\ell})-E(P_{\ell}^{*})|\) over all subpaths \(P_{\ell}^{*}\) of \(P_{\ell}\) is \(p-(p+2)/3=2(p-1)/3\) for each \(e_{\ell}\in F\). Let \(s=|F|\). Since every subtree \(T\in\mathcal{T}_{F}\) has at most \(n-s(p-1)\) vertices outside \(\cup_{e_{\ell}\in F}(P_{\ell}-v_{i_{\ell}}-v_{j_{\ell}})\), we get the following inequality. \[\mu(\mathcal{T}_{F})\leq n-s(p-1)+s\cdot\frac{2(p-1)}{3}\leq n-\frac{s(p-1)}{3}.\] By taking \(p\) as a linear value of \(n\), say \(p=\alpha n\) (\(\alpha<\frac{1}{k}\)), we get \(\sigma(\mathcal{T}_{F})\leq 1-s\alpha/3+s/3n<\sigma(G_{n})\) since we assume that \(n\) is much larger than \(k\). 
Since \(\mathcal{T}_{H_{n,k}}=\bigcup_{F\subseteq E_{k}}\mathcal{T}_{F}\), we have \(\sigma(H_{n,k})<\sigma(G_{n})\), and so \(\mu(H_{n,k})<\mu(G_{n})\). **Remark 1**.: _The above construction gives an example where we can delete \(k\) edges in order in such a way that the mean subtree order increases in every step._ ## 3 Proof of Conjecture 1.3 To simplify notation, we let \(G:=K_{m}+nK_{1}\), where \(V(G)=V(K_{m,n})\). Denote by \(A\) and \(B\) the two color classes of \(K_{m,n}\) with \(|A|=m\) and \(|B|=n\), respectively. For each tree \(T\subseteq G\), we have \(E(T)\cap E(K_{m})=\emptyset\) or \(E(T)\cap E(K_{m})\neq\emptyset\). This implies that the family of subtrees of \(G\) consists of the subtrees of \(K_{m,n}\) and the subtrees sharing at least one edge with \(K_{m}\). For each tree \(T\subseteq G\), let \(A(T)=V(T)\cap A\) and \(B(T)=V(T)\cap B\). Then, \(|T|=|A(T)|+|B(T)|\). Furthermore, let \(B_{2}(T)\) and \(B_{\geq 2}(T)\) be the sets of vertices \(v\in B(T)\) such that \(d_{T}(v)=2\) and \(d_{T}(v)\geq 2\), respectively. Clearly, \(B_{2}(T)\subseteq B_{\geq 2}(T)\subseteq B(T)\). We define a subtree \(T\in\mathcal{T}_{G}\) to be a _b-stem_ if \(B_{\geq 2}(T)=B(T)\), which means that \(d_{T}(v)\geq 2\) for any \(v\in B(T)\). Let \(T\) be a b-stem and assume that \(T\) contains \(f\) edges in \(K_{m}\). Counting the number of edges in \(T\), we obtain \(|E(T)|=f+\sum_{v\in B(T)}d_{T}(v)\). Since \(T\) is a tree, we have \(|E(T)|=|T|-1=|A(T)|+|B(T)|-1\). Therefore, we gain \[|B(T)|=|A(T)|-1-\left(f+\sum_{v\in B(T)}(d_{T}(v)-2)\right). \tag{2}\] Since \(T\) is a b-stem, we have \(\sum_{v\in B(T)}(d_{T}(v)-2)\geq 0\), which implies that \(|B(T)|\leq|A(T)|-1\leq m-1\). Thus, \(|T|=2|A(T)|-\left(1+f+\sum_{v\in B(T)}(d_{T}(v)-2)\right)\leq 2|A(T)|-1\). It follows that a b-stem \(T\in\mathcal{T}_{G}\) is the _max b-stem_, i.e., the b-stem with the maximum order among all b-stems in \(\mathcal{T}_{G}\), if and only if \(A(T)=A\), \(E(T)\cap E(K_{m})=\emptyset\), and \(B_{2}(T)=B_{\geq 2}(T)\). This is equivalent to saying that \(T\) is a max b-stem if and only if \(|A(T)|=m\) and \(|B(T)|=m-1\). The b-stem of a tree \(T\subset G\) is the subgraph induced by \(A(T)\cup B_{\geq 2}(T)\), and it is a subtree in \(\mathcal{T}_{G}\). It is worth noting that the b-stem of every subtree \(T\subset G\) exists, except for the case when \(T\) is a tree with only one vertex belonging to \(B\). Conversely, given a b-stem \(T_{0}\), a tree \(T\subset G\) contains \(T_{0}\) as its b-stem if and only if \(T_{0}\subseteq T\), \(A(T)=A(T_{0})\), and \(B(T)\backslash B(T_{0})\) is a set of vertices with degree 1 in \(T\). Equivalently, \(T\) can be obtained from \(T_{0}\) by adding vertices in \(B(T)\backslash B(T_{0})\) as leaves. So, there are exactly \((|A(T_{0})|+1)^{n-|B(T_{0})|}\) trees containing \(T_{0}\) as their b-stem. For two non-negative integers \(a,b\), where \(a\geq b+1\geq 1\), let \(\mathcal{T}_{G}(a,b)\) (resp. \(\mathcal{T}_{K_{m,n}}(a,b)\)) be the family of subtrees in \(\mathcal{T}_{G}\) (resp. \(\mathcal{T}_{K_{m,n}}\)) whose b-stems \(T_{0}\) satisfy \(|A(T_{0})|=a\) and \(|B(T_{0})|=b\). For any \(A_{0}\subseteq A\) and \(B_{0}\subseteq B\), let \(f_{G}(A_{0},B_{0})\) (resp. \(f_{K_{m,n}}(A_{0},B_{0})\)) denote the number of b-stems \(T_{0}\) spanned by \(A_{0}\cup B_{0}\); that is, \(A(T_{0})=A_{0}\) and \(B_{\geq 2}(T_{0})=B_{0}\). 
Clearly, \(f_{G}(A_{0},B_{0})\) and \(f_{K_{m,n}}(A_{0},B_{0})\) depend only on \(|A_{0}|\) and \(|B_{0}|\), so we can denote them by \(f_{G}(|A_{0}|,|B_{0}|)\) and \(f_{K_{m,n}}(|A_{0}|,|B_{0}|)\), respectively. By counting, we have \(|\mathcal{T}_{G}(a,b)|={m\choose a}\cdot{n\choose b}\cdot f_{G}(a,b)\cdot(a+ 1)^{n-b}\) and \(|\mathcal{T}_{K_{m,n}}(a,b)|={m\choose a}\cdot{n\choose b}\cdot f_{K_{m,n}}( a,b)\cdot(a+1)^{n-b}\), due to the fact that there are \({m\choose a}\) ways to pick an \(a\)-set in \(A\) and \({n\choose b}\) ways to pick a \(b\)-set in \(B\). Since \(a\leq m\) and \(b\leq m-1\), there exist positive numbers \(c_{1}\) and \(c_{2}\) that depend only on \(m\), such that \[c_{1}n^{b}(a+1)^{n-b}\leq|\mathcal{T}_{G}(a,b)|\leq c_{2}n^{b}(a+1)^{n-b} \tag{3}\] Note that if \((a,b)\neq(m,m-1)\), then we have \(b\leq m-2\). Applying inequality (3), we get \(|\cup_{(a,b)\neq(m,m-1)}\mathcal{T}_{G}(a,b)|\leq c_{3}|\mathcal{T}_{G}(m,m-1) |/n\) for some constant \(c_{3}>0\) depending only on \(m\). Given a b-stem \(T_{0}\) with \(|A(T_{0})|=a\) and \(|B(T_{0})|=b\), let \(T\) be a tree chosen uniformly at random from \(\mathcal{T}_{G}\) (resp. \(\mathcal{T}_{K_{m,n}}\)) that contains \(T_{0}\) as its b-stem. Then, the probability of a vertex \(v\in B\backslash B(T_{0})\) in \(T\) is \(\frac{a}{a+1}\). This shows that the mean order of trees containing \(T_{0}\) as their b-stem is \((n-b)\frac{a}{a+1}+a+b\), denoted by \(\mu(a,b)\). Note that \(\sum_{T\in\mathcal{T}_{G}(a,b)}|T|=\mu(a,b)\cdot|\mathcal{T}_{G}(a,b)|\) and \(\sum_{T\in\mathcal{T}_{K_{m,n}}(a,b)}|T|=\mu(a,b)\cdot|\mathcal{T}_{K_{m,n}}( a,b)|\). Assume that \(T_{0}\) has \(f\) edges in \(K_{m}\), and set \(c=\sum_{v\in B(T_{0})}(d_{T_{0}}(v)-2)\). Using (2), we have \(b=a-(1+f+c)\). Hence, \(\mu(a,b)=\frac{(n+2+a)\cdot a}{a+1}-\frac{1+f+c}{a+1}\), which reaches its maximum value when \(a=m\) and \(f=c=0\), i.e., when \(T_{0}\) is a max b-stem. We then have: \[\mu(G) = \frac{\mu(m,m-1)|\mathcal{T}_{G}(m,m-1)|+\sum_{(a,b)\neq(m,m-1)} \mu(a,b)|\mathcal{T}_{G}(a,b)|+n}{|\mathcal{T}_{G}(m,m-1)|+\sum_{(a,b)\neq(m,m -1)}|\mathcal{T}_{G}(a,b)|+n},\] \[\mu(K_{m,n}) = \frac{\mu(m,m-1)|\mathcal{T}_{K_{m,n}}(m,m-1)|+\sum_{(a,b)\neq(m, m-1)}\mu(a,b)|\mathcal{T}_{K_{m,n}}(a,b)|+n}{|\mathcal{T}_{K_{m,n}}(m,m-1)|+ \sum_{(a,b)\neq(m,m-1)}|\mathcal{T}_{K_{m,n}}(a,b)|+n},\] where \(n\) denotes the number of subtrees with a single vertex in \(B\). Note that \(|\mathcal{T}_{G}(a,b)|\geq|\mathcal{T}_{K_{m,n}}(a,b)|\), with equality holding if and only if \(a=b-1\), and so in particular when \((a,b)=(m,m-1).\) We have derived before that \(0<\mu(a,b)<\mu(m,m-1)\) when \((a,b)\neq(m,m-1).\) Using the inequality \(|\cup_{(a,b)\neq(m,m-1)}\mathcal{T}_{G}(a,b)|\leq c_{3}|\mathcal{T}_{G}(m,m-1) |/n\), we conclude that \(\mu(G)>\frac{n}{n+c_{3}}\mu(m,m-1)>\max_{(a,b)\neq(m,m-1)}\mu(a,b)\) for \(n\) sufficiently large (for fixed \(m\)). Since \(\mu(K_{m,n})\) is the average of the same terms, as well as some additional terms of the form \(\mu(a,b)\), which are smaller than \(\mu(G)\), we conclude that \(\mu(G)<\mu(K_{m,n})\). This completes the proof. ## Acknowledgments We would like to express our sincere gratitude to the anonymous referees for their valuable comments and suggestions that improved this manuscript.
2304.12505
Theory of Posterior Concentration for Generalized Bayesian Additive Regression Trees
Bayesian Additive Regression Trees (BART) are a powerful semiparametric ensemble learning technique for modeling nonlinear regression functions. Although initially BART was proposed for predicting only continuous and binary response variables, over the years multiple extensions have emerged that are suitable for estimating a wider class of response variables (e.g. categorical and count data) in a multitude of application areas. In this paper we describe a Generalized framework for Bayesian trees and their additive ensembles where the response variable comes from an exponential family distribution and hence encompasses a majority of these variants of BART. We derive sufficient conditions on the response distribution, under which the posterior concentrates at a minimax rate, up to a logarithmic factor. In this regard our results provide theoretical justification for the empirical success of BART and its variants.
Enakshi Saha
2023-04-25T00:52:48Z
http://arxiv.org/abs/2304.12505v1
# Theory of Posterior Concentration for ###### Abstract Bayesian Additive Regression Trees (BART) are a powerful semiparametric ensemble learning technique for modeling nonlinear regression functions. Although initially BART was proposed for predicting only continuous and binary response variables, over the years multiple extensions have emerged that are suitable for estimating a wider class of response variables (e.g. categorical and count data) in a multitude of application areas. In this paper we describe a Generalized framework for Bayesian trees and their additive ensembles where the response variable comes from an exponential family distribution and hence encompasses a majority of these variants of BART. We derive sufficient conditions on the response distribution, under which the posterior concentrates at a minimax rate, up to a logarithmic factor. In this regard our results provide theoretical justification for the empirical success of BART and its variants. Bayesian additive regression trees, BART, Posterior concentration, Minimax rate, Exponential family, Generalized linear models ## 1 Introduction Additive ensemble of Bayesian trees [1, 2], more popularly known as Bayesian additive regression trees (BART) [3] is a flexible semiparametric tool that has been extremely successful in a multitude of high dimensional classification and regression tasks. Aided by efficient software implementations, (BART R package of [4], bartMachine R package of [5], parallel BART of [6] and XBART of [7]), BART has thrived in a wide range of application areas, including causal inference [8, 9, 10], interaction detection [11], survival analysis [12], time series analysis [13, 14] and variable selection [5, 15, 16, 17], to name a few. Even though BART was initially proposed for predicting univariate continuous and binary response variables, due to its flexibility and impressive performance, multiple extensions have emerged over the subsequent years, that are suitable for both univariate and multivariate prediction problems where the response variable is of a wider variety (e.g. categorical and count data [18], heteroscedastic responses [19]) and / or the target regression surface is of a constrained nature (e.g. monotone BART [20], varying coefficient BART [14], BART with targeted smoothing [21] etc.). Despite a long history of empirical success, theoretical studies on Bayesian trees and forests is a relatively new area of research. Recently emerging results along this line are geared towards providing a theoretical perspective on why these models have been so successful in a wide range of classification and regression problems. Among the initial developments, [22] and [23] demonstrated that the posterior concentration rate of BART equals to the minimax rate up to a logarithmic factor for various tree priors. Built on these findings, [24] derived a semiparametric Bernstein von-Mises theorem for the BART estimator. Extensions of BART, adapted to various special function types have also been studied from a theoretical perspective: [25] studied a version of BART suitable for smooth function estimation; [26] conducted a multiscale analysis of BART and [27] derived posterior concentration results for anisotropic functions. In this paper we study the posterior concentration rates of a generalized version of BART, thereby supplementing this newly emerging area of research. We formulate a Generalized BART (G-BART) model that extends the existing theoretical developments in several directions. 
Firstly while existing results focus on Gaussian response variables, we allow the response to come from an exponential family distribution. Hence G-BART can be regarded as semiparametric extensions of the widely popular 'Generalized Linear Models' (GLM) [28]. Many prominent Bayesian CART and BART models used in practice [2, 3, 18], including the traditional BART model [3], can be viewed as a special case of this generalized extension. Therefore theoretical properties of these conventional adaptations of BART can be studied as direct corollaries of analogous properties for the G-BART model. Secondly, existing results [23, 22, 25] build upon the assumption that the underlying regression function is Holder continuous. However given the efficacy of BART models in a multitude of prediction problems with varying degrees of complexity, the assumption of Holder continuity seems too restrictive. In this paper we demonstrate that similar posterior optimality results can be obtained for non-smooth functions as well, such as step functions and monotone functions, thus extending the theoretical findings on BART beyond the assumption of Holder continuity. Finally, the BART model [3] approximate the regression functions through step functions and assume that these step heights come from a Gaussian distribution. All subsequent theoretical and empirical developments have adopted this specification. In the G-BART setup we assume that the distribution of these step heights belong to a broader family of distributions that include both the Gaussian distribution and also some thicker tailed distributions like Laplace. We demonstrate that the BART model maintains a near-minimax posterior concentration rate, if the step heights come from any of the distributions belonging to this broader family, thus providing a wide range of distributional choices without sacrificing fast posterior concentration. The theory also shows how important modelling choices such as link functions can impact performance of the posterior and hence can serve as a guide for empirical implementations as well. This paper is organized as follows. In Section 2 we describe the generalized BART model with the associated priors. Section 3 discusses the notion of posterior concentration, followed by the main theoretical results on G-BART in Section 4. Broader implications of these results are described in Section 5. Finally, Section 6 concludes with a discussion. Proofs of the main theoretical results are provided in the supplementary material. ### Our contributions To summarize our previous discussion, we now briefly highlight our key contributions. Response distribution:We assume that the response variable comes from an exponential family distribution and derive sufficient conditions on the response density under which the posterior concentration rate of the BART model adapted to this particular response type would be almost equal to the minimax rate. This extends the existing theoretical results on BART for Gaussian regression. Step Size distribution:Instead of assigning a Gaussian distribution on the step-heights associated to the BART model, we impose sufficient conditions on the cumulative distribution function that guarantee a near-optimal posterior concentration rate. The resulting family of distributions encompasses the Gaussian distribution along with several thicker tailed distributions like Laplace, thus widening modeling choices for empirical applications. 
Types of functions:The objective of BART models is to estimate unknown functions that characterize the relationship between the response and the covariates. All existing results on BART assume this function to be Holder continuous. We extend these results to the situations where the underlying function to be estimated is either a monotone function or a step function supported on an axes-paralleled partition. The results on step functions are particularly important because posterior concentration rates for more general class of functions can be built upon these, aided by the "simple function approximation theorem" [29]. Empirical implications:As we will see in Section 5, specific model choices such as the choice of link functions can influence the posterior concentration rate of the G-BART model. The results discussed in this paper can provide useful insights into selecting link functions that provide faster concentration rates of the posterior, possibly leading to better empirical performance. ### Notations: For any two real numbers \(a\) and \(b\), \(a\lor b\) will denote the maximum of \(a\) and \(b\). The notations \(\gtrsim\) and \(\lesssim\) will stand for "greater than or equal to up to a constant" and "less than or equal to up to a constant", respectively. The symbol \(P_{f}\) will abbreviate \(\int fdP\) and \(\mathbb{P}_{f}^{(n)}=\prod_{i=1}^{n}\mathbb{P}_{f}^{i}\) will denote the \(n\)-fold product measure of the \(n\) independent observations, where the \(i\)-th observation comes from the distribution \(P_{f}^{i}\). Let \(h(f,g)=\left(\int(\sqrt{f}-\sqrt{g})^{2}d\mu\right)^{1/2}\) and \(K(f,g)=\int f\log(f/g)d\mu\) denote the Hellinger distance and the Kullback-Leibler divergence, respectively between any two non-negative densities \(f\) and \(g\) with respect to a measure \(\mu\). We define another discrepancy measure \(V(f,g)=\int f\left(\log(f/g)\right)^{2}d\mu\). Finally, for any set of real vectors \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\in\mathbb{R}^{q}\) of size \(n\), define the average discrepancy measures \(H_{n}(f,g)=\frac{1}{n}\sum_{i=1}^{n}H\left(f(\mathbf{X}_{i}),g(\mathbf{X}_{i})\right)\), \(K_{n}(f,g)=\frac{1}{n}\sum_{i=1}^{n}K\left(f(\mathbf{X}_{i}),g(\mathbf{X}_{i})\right)\) and \(V_{n}(f,g)=\frac{1}{n}\sum_{i=1}^{n}V\left(f(\mathbf{X}_{i}),g(\mathbf{X}_{i})\right)\), where \(f(\theta)\) and \(g(\theta)\) denote the densities \(f\) and \(g\) with respect to parameter \(\theta\). Also, for any \(L_{p}\) norm \(\left\lVert\cdot\right\rVert_{p}\), define the average norm \(\left\lVert f-g\right\rVert_{p,n}=\frac{1}{n}\sum_{i=1}^{n}\left\lVert f-g \right\rVert_{p}\). ## 2 The Generalized BART Prior The BART method of [3] is a prominent example of Bayesian ensemble learning, where individual shallow trees are entwined together into a forest, that is capable of estimating a wide variety of nonlinear functions with exceptional accuracy, while simultaneously accounting for different orders of interactions among the covariates. Building upon BART, we describe a generalized model, where the response variable is assumed to come from an exponential family distribution. For continuous Gaussian response variables, this generalized BART model reduces to the original BART prior of [3]. The data setup under consideration consists of \(\mathbf{Y}_{i}=(y_{i1},\ldots,y_{ip})^{\prime}\in\mathbb{R}^{p}\), a set of \(p\)-dimensional outputs, and \(\mathbf{X}_{i}=(x_{i1},\ldots,x_{iq})^{\prime}\in[0,1]^{q}\), a set of \(q\) dimensional inputs for \(1\leq i\leq n\). 
We assume \(\mathbf{Y}\) follows some distribution in the exponential family with density of the following form: \[P_{f_{0}}(\mathbf{Y}\mid\mathbf{X})=h(\mathbf{Y})g\left[f_{0}(\mathbf{X})\right]\exp\left[ \eta\left(f_{0}(\mathbf{X})\right)^{T}T(\mathbf{Y})\right], \tag{1}\] where \(h:\mathbb{R}^{p}\rightarrow\mathbb{R}\), \(g:\mathbb{R}\rightarrow\mathbb{R}\), \(\eta:\mathbb{R}^{p}\rightarrow\mathbb{R}^{J}\), \(T:\mathbb{R}^{p}\rightarrow\mathbb{R}^{J}\) for some integer \(J\) and \(f_{0}:\mathbb{R}^{q}\rightarrow\mathbb{R}^{D}\), for some integer \(D\), are all real valued functions. Among these functions, \(h\), \(g\), \(\eta\) and \(T\) are usually _known_ depending on the nature of the response \(\mathbf{Y}\). The function \(f_{0}\), connecting the input \(\mathbf{X}\) with the output \(\mathbf{Y}\), is the only unknown function and estimating this function is the primary objective of the G-BART estimator. We assume that \(f_{0}\) is an unconstrained function, i.e. the range of \(f_{0}\) is the entire space \(\mathbb{R}^{D}\) for some integer \(D\). A suitable link function \(\Psi(\cdot)\) is used to transform \(f_{0}\) to the natural parameter of the distribution of \(\mathbf{Y}\), which is often constrained. For example, for the binary classification problem, \(\mathbf{Y}\sim Bernoulli\left(p(\mathbf{X})\right)\). Here the natural parameter \(p(\mathbf{X})\in(0,1)\) is restricted and hence we can use \(\Psi(z)=\frac{1}{1+\exp(-z)}\), the logistic function (or a probit function, as in [3]) to map the unconstrained function \(f_{0}(\mathbf{X})\) to the natural parameter \(p(\mathbf{X})\). There are usually several different choices for the link function. As we will see in Section 5, the BART estimator might have different posterior concentration rates depending on which link function is used to transform the function \(f_{0}\) to the natural parameter of the response distribution. The univariate regression and the two-class classification problem considered in the original BART paper [3] and many of its important extensions, such as the multi-class classification and the log-linear BART [18] for categorical and count responses can be formulated as special cases of (1). The specific forms of the functions \(h,g,\eta\) and \(T\) for continuous regression and multi-class classification are given in Table 1. \begin{table} \begin{tabular}{c c c} \hline \hline Response (\(\mathbf{Y}\)) & Continuous & Categorical \\ \hline Dist.(\(\mathbf{Y}\)) & \(\mathcal{N}\left(f_{0}(\mathbf{X}),\sigma^{2}\right)\) & \(\mathcal{M}\left(\Phi(f_{0}(\mathbf{X}))\right)\) \\ \(h(\mathbf{Y})\) & \(1/(\sqrt{2\pi}\sigma)\) & 1 \\ \(g\left(f_{0}(\mathbf{X})\right)\) & \(\exp\left(-f_{0}(\mathbf{X})^{2}/\sigma^{2}\right)\) & 1 \\ \(\eta\left(f_{0}(\mathbf{X})\right)\) & \(\left(f_{0}(\mathbf{X}),1\right)\) & \(f_{0}(\mathbf{X})\) \\ \(T(\mathbf{Y})\) & \(\left(2Y/\sigma^{2},-Y^{2}/\sigma^{2}\right)\) & (\(\left\{\mathbbm{I}\{Y=i\}\right\}_{i=1}^{p}\))\({}^{\prime}\) \\ \(f_{0}(\mathbf{X})\) & \(\mathbb{R}^{q}\rightarrow\mathbb{R}\) & \(\mathbb{R}^{q}\rightarrow\mathbb{R}^{p-1}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Univariate Regression (column 2) and Multi-class Classification (column 3), as special cases of the Generalized BART model. \(\Phi\) denotes the \(Softmax\) function and \(\mathcal{M}(\cdot)\) denotes the \(Multinomial(1;\cdot)\) distribution. 
\(\left(\left\{\mathbbm{I}\{Y=i\}\right\}_{i=1}^{p}\right)^{\prime}\) denotes the row vector where the \(i\)-th coordinate equals to one if \(\mathbf{Y}\) belongs to class \(i\) and zero otherwise. Next a regression tree is used to reconstruct the unknown function \(f_{0}:\mathbb{R}^{q}\rightarrow\mathbb{R}^{D}\) via a mapping \(f_{\mathcal{T},\mathbf{\beta}}:[0,1]^{q}\rightarrow\mathbb{R}^{D}\) so that \(f_{\mathcal{T},\mathbf{\beta}}(\mathbf{X})\approx f_{0}(\mathbf{X})\) for \(\mathbf{X}\notin\{\mathbf{X}_{i}\}_{i=1}^{n}\). Each such mapping is essentially a step function of the form \[f_{\mathcal{T},\mathbf{\beta}}(\mathbf{X})=\sum_{k=1}^{K}\beta_{k}\mathbb{I}(\mathbf{X}\in \Omega_{k}) \tag{2}\] supported on a tree-shaped partition \(\mathcal{T}=\{\Omega_{k}\}_{k=1}^{K}\) and specified by a vector of step heights \(\mathbf{\beta}=(\beta_{1},\ldots,\beta_{K})^{\prime}\). The vector \(\beta_{k}\in\mathbb{R}^{p}\) represents the value of the expected response inside the \(k\)-th cell of the partition \(\Omega_{k}\). Bayesian additive trees consist of an ensemble of multiple shallow trees, each of which is intended to be a weak learner, geared towards addressing a slightly different aspect of the prediction problem. These trees are then woven into an _additive_ forest mapping of the form \[f_{\mathcal{E},\mathbf{B}}(\mathbf{x})=\sum_{t=1}^{T}f_{\mathcal{T}_{t},\mathbf{\beta}_{t }}(\mathbf{x}), \tag{3}\] where each \(f_{\mathcal{T}_{t},\mathbf{\beta}_{t}}(\mathbf{x})\) is of the form (2), \(\mathcal{E}=\{\mathcal{T}_{1},\ldots,\mathcal{T}_{T}\}\) is an ensemble of \(T\) trees and \(\mathbf{B}=\{\mathbf{\beta}_{1},\ldots,\mathbf{\beta}_{T}\}^{\prime}\) is a collection of jump sizes corresponding to the \(T\) trees. Since each individual member of the approximating space is a step function of the form (3), supported on a Bayesian additive forest, the prior distribution should include three components: (i) a prior \(\pi(T)\) on the number of trees \(T\) in the ensemble, (ii) a prior on individual tree partitions \(\pi(\mathcal{T})\) and their collaboration within the ensemble and (iii) given a single tree partition \(\mathcal{T}\), a prior \(\pi(\mathbf{\beta}\mid\mathcal{T})\) has to be imposed on the individual step heights \(\mathbf{\beta}\). In this paper we follow the recommendation by [3] and assume the number of trees \(T\) to be fixed at a large value (e.g. \(T=200\) for regression and \(T=50\) for classification). This is equivalent to assigning a degenerate prior distribution on \(T\), where all probability mass is concentrated on a single positive integer. Alternatively, one can also assign a prior with higher dispersion, as in [23] and [25] and replicate the steps of the proofs provided in the appendix with minor modifications. Given the total number of trees in the ensemble, individual trees are assumed to be independent and identically distributed with some distribution \(\pi(\mathcal{T})\). This reduces the prior on the ensemble to be of the form \[\pi(\mathcal{E},\mathbf{B})=\prod_{t=1}^{T}\pi(\mathcal{T}_{t})\pi(\mathbf{\beta}_{t }\mid\mathcal{T}_{t}), \tag{4}\] where \(\pi(\mathcal{T}_{t})\) is the prior probability of a partition \(\mathcal{T}_{t}\), while \(\pi(\mathbf{\beta}_{t}\mid\mathcal{T}_{t})\) is the prior distribution over the jump sizes. The specific forms of the priors \(\pi(\mathcal{T})\) and \(\pi(\mathbf{\beta}\mid\mathcal{T})\) are described below. 
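Before turning to the priors, the following minimal sketch (our own illustration, not the authors' code; the cell representation and toy numbers are purely hypothetical) makes the step-function notation in (2) and the additive-forest mapping in (3) concrete by evaluating a small ensemble of axis-aligned step functions. The scalar step heights correspond to the case \(D=1\); for vector-valued \(f_{0}\) the \(\beta_{k}\) would simply be arrays.

```python
import numpy as np

def in_cell(x, cell):
    """Check whether point x lies in an axis-aligned cell.
    `cell` is a list of (low, high) intervals, one per covariate."""
    return all(lo <= xi < hi for xi, (lo, hi) in zip(x, cell))

def tree_step_function(x, partition, beta):
    """Evaluate f_{T,beta}(x) = sum_k beta_k * I(x in Omega_k), cf. (2)."""
    for cell, beta_k in zip(partition, beta):
        if in_cell(x, cell):
            return beta_k
    raise ValueError("x is not covered by the partition")

def forest(x, ensemble):
    """Evaluate the additive mapping f_{E,B}(x) = sum_t f_{T_t,beta_t}(x), cf. (3)."""
    return sum(tree_step_function(x, part, beta) for part, beta in ensemble)

# Toy ensemble on [0,1]^2: each tree is (partition, step heights).
tree1 = ([[(0.0, 0.5), (0.0, 1.0)], [(0.5, 1.0), (0.0, 1.0)]], [-1.0, 1.0])
tree2 = ([[(0.0, 1.0), (0.0, 0.5)], [(0.0, 1.0), (0.5, 1.0)]], [0.3, -0.3])
print(forest(np.array([0.7, 0.2]), [tree1, tree2]))  # 1.0 + 0.3 = 1.3
```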
### Prior on partitions We consider two distinct prior distributions on the partitions \(\pi(\mathcal{T})\) proposed by [1] and [2] respectively. The posterior concentration results discussed in Section 4 are applicable to both these priors. [1] specifies the prior over trees implicitly as a tree generating stochastic process, described as follows: 1. Start with a single leaf (a root node) encompassing the entire covariate space. 2. Split a terminal node, say \(\Omega\), with a probability \[p_{split}(\Omega)\propto\alpha^{d(\Omega)}\text{ for some }0<\alpha<1/2.\] (5) where \(d(\Omega)\) is the depth of the node \(\Omega\) in the tree architecture. This choice, motivated by [22], is slightly different from the original prior of [1].1 Footnote 1: The reason behind this modification is that the original BART prior of [3] does not decay at a fast enough rate. However since we examine only sufficient (but not necessary) conditions for optimal posterior concentration, our results do not guarantee that the original prior is inherently worse than the modified prior. In fact, empirical results indicate otherwise. The original BART prior will be examined in future work. 3. If the node \(\Omega\) splits, assign a splitting rule and create left and right children nodes. The splitting rule consists of picking a split variable \(j\) uniformly from available directions \(\{1,\ldots,q\}\) and picking a split point \(c\) uniformly from available data values \(x_{1j},\ldots,x_{nj}\). A description of the prior proposed by [2] is given in Section A.1 in the supplementary material. ### Prior on step heights We impose a broad class of priors on the step heights that incorporate the corresponding component of the classical BART model as a special case. Given a tree partition \(\mathcal{T}_{t}\) with \(K_{t}\) steps, [3] considers identically distributed independent Gaussian jumps with mean \(0\) and variance \(\sigma^{2}\). In the G-BART set-up we assume that the \(j\)-th step height of the \(t\)-th tree, \(\beta_{tj}\stackrel{{ i.i.d}}{{\sim}}F_{\beta}\), where \(F_{\beta}\) is any general distribution with the following property: for some constants \(C_{1},C_{2},C_{3}\) such that \(C_{1}>0\), \(0<C_{2}\leq 2\) and \(C_{3}>0\), \[F_{\beta}(\left\lVert\beta\right\rVert_{\infty}\leq t)\gtrsim\left(e^{-C_{1}t^{C_{2}}}\,t\right)^{p}\quad\text{for }0<t\leq 1 \tag{6}\] and \[F_{\beta}(\left\lVert\beta\right\rVert_{\infty}\geq t)\lesssim e^{-C_{3}t} \quad\text{for }t\geq 1 \tag{7}\] where \(\left\lVert\cdot\right\rVert_{\infty}\) represents the \(L_{\infty}\) norm and \(F_{\beta}(\left\lVert\beta\right\rVert_{\infty}\geq t)\) denotes the tail probability of the distribution on the step heights \(\beta\in\mathbb{R}^{p}\). Both the multivariate Gaussian and the multivariate Laplace distribution come from this family of distributions and so do any sub-Gaussian distributions. A proof of these statements is provided in the appendix. We will see in Section 4.1 and Section 4.3 that these conditions are _sufficient_ to guarantee that the G-BART estimator has a near-optimal posterior concentration rate. However we should note that the conditions (6)-(7), although _sufficient_, are not _necessary_ conditions and distributional assumptions on the step sizes that do not satisfy these conditions might still guarantee a near-optimal posterior concentration rate. For such an example, please refer to the 'classification with Dirichlet step-heights' in Section 5. 
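As a rough illustration of the two prior components just described, the snippet below (our own toy sketch under the stated form of (5), not the implementation used in the paper) grows a single tree partition from the branching process with split probability \(\alpha^{d(\Omega)}\) and attaches i.i.d. step heights from either a Gaussian or a Laplace distribution, the two examples noted above as satisfying (6)-(7).

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_tree(X, rows, depth, alpha=0.45):
    """Grow one tree partition from the branching process of Section 2.1:
    a node at depth d splits with probability alpha**d (cf. (5)); the split
    variable and split point are drawn uniformly from the observed data."""
    if len(rows) < 2 or rng.random() > alpha ** depth:
        return {"leaf": True, "rows": rows}
    j = rng.integers(X.shape[1])            # split direction
    c = rng.choice(X[rows, j])              # split point among observed values
    left = [i for i in rows if X[i, j] <= c]
    right = [i for i in rows if X[i, j] > c]
    if not left or not right:               # degenerate split: stop growing
        return {"leaf": True, "rows": rows}
    return {"leaf": False, "var": j, "cut": c,
            "left": grow_tree(X, left, depth + 1, alpha),
            "right": grow_tree(X, right, depth + 1, alpha)}

def count_leaves(node):
    return 1 if node["leaf"] else count_leaves(node["left"]) + count_leaves(node["right"])

def draw_heights(n_leaves, dist="gaussian", scale=1.0):
    """I.i.d. step heights; the Gaussian and Laplace choices both satisfy (6)-(7)."""
    if dist == "gaussian":
        return rng.normal(0.0, scale, size=n_leaves)
    return rng.laplace(0.0, scale, size=n_leaves)

X = rng.uniform(size=(100, 3))
tree = grow_tree(X, rows=list(range(100)), depth=0)
beta = draw_heights(count_leaves(tree), dist="laplace")
```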
## 3 Posterior Concentration Posterior concentration statements are a prominent artifact in Bayesian nonparametrics, where the primary motivation is to examine the quality of a Bayesian procedure, by studying the learning rate of its posterior, i.e. the rate at which the posterior distribution, centralizes around the truth as the sample size \(n\to\infty\). In empirical settings, posterior concentration results have often influenced the proposal and fine-tuning of priors. Oftentimes seemingly unremarkable priors give rise to capricious outcomes, specially in the infinite-dimensional parameter spaces, such as the one considered here ([30], [31]) and designing well-behaved priors turn out to be of utmost importance, thus further reinstating the importance of posterior concentration statements. The Bayesian approach proceeds by imposing a prior measure \(\Pi(\cdot)\) on \(\mathcal{F}\), the set of all estimators of \(f_{0}\). For the G-BART models this corresponds to the set of all step functions supported on an additive ensemble of Bayesian trees. Given observed data \(\boldsymbol{Y}^{(n)}=(Y_{1},\ldots,Y_{n})^{\prime}\), the inference about \(f_{0}\) is solely dependent on the posterior distribution \[\Pi(A\mid\boldsymbol{Y}^{(n)})=\frac{\int_{A}\prod_{i=1}^{n}\Pi_{f}(Y_{i}\mid \boldsymbol{X}_{i})\mathrm{d}\,\Pi(f)}{\int\prod_{i=1}^{n}\Pi_{f}(Y_{i}\mid \boldsymbol{X}_{i})\mathrm{d}\,\Pi(f)}\quad\forall A\in\mathcal{B}\] where \(\mathcal{B}\) is a \(\sigma\)-field on \(\mathcal{F}\) and where \(\Pi_{f}(Y_{i}\mid\boldsymbol{X}_{i})\) is the conditional likelihood function for the output \(Y_{i}\), given the covariates \(\boldsymbol{X}_{i}\), under the parameterization \(f\). Ideally under a suitable prior, the posterior should put most of its probability mass around a small neighborhood of the true function and as the sample size increases, the diameter of this neighborhood should go to zero at a fast pace. Formally speaking, for a given sample size \(n\), if we examine an \(\varepsilon_{n}\)-neighborhood of the true function \(\mathcal{A}_{\varepsilon_{n}}\), for some \(\varepsilon_{n}\to 0\) and \(n\varepsilon_{n}^{2}\to\infty\), we should expect \[\Pi(\mathcal{A}_{\varepsilon_{n}}^{c}\mid\boldsymbol{Y}^{(n)})\to 0\quad \text{in}\,\mathbb{P}_{f_{0}}^{(n)}\text{-probability as }n\to\infty, \tag{8}\] where \(\mathcal{A}_{\varepsilon_{n}}^{c}\) denotes the complement of the neighborhood \(\mathcal{A}_{\varepsilon_{n}}\). In the context of G-BART, given observed data \(\boldsymbol{Y}^{(n)}=(\boldsymbol{Y}_{1},\ldots,\boldsymbol{Y}_{n})^{\prime}\), we are interested in evaluating whether the posterior concentrates around the true likelihood \(\mathbb{P}_{f_{0}}^{(n)}=\prod_{i=1}^{n}P_{f_{0}}^{i}\) at a near-minimax rate, where \(P_{f_{0}}^{i}=P_{f_{0}}(\boldsymbol{Y}_{i}\!\mid\!\boldsymbol{X}_{i})\) is of the form (1), for \(i=1,\ldots,n\). Following the suggestions of [32], we look at the smallest \(H_{n}\)-neighborhoods around \(\mathbb{P}_{f_{0}}^{(n)}\) that contain the bulk of the posterior probability. Specifically, for a diameter \(\varepsilon>0\) define \[\mathcal{A}_{\varepsilon}=\{f\in\mathcal{F}:H_{n}(P_{f},P_{f_{0}})\leq \varepsilon\} \tag{9}\] Theorem 4 of [32] demonstrates that the statement (8) can be proved by verifying three sufficient conditions. 
The first condition, henceforth referred to as the "entropy condition" specifies that \[\sup_{\varepsilon>\varepsilon_{n}}\log N\left(\tfrac{\varepsilon}{30};\mathbb{ F}_{n}\cap\mathcal{A}_{\varepsilon};H_{n}\right)\lesssim n\,\varepsilon_{n}^{2},\] (C1) where \(N(\varepsilon;\Omega;d)\) denotes the \(\varepsilon\)-covering number of a set \(\Omega\) for a semimetric \(d\), i.e. the minimal number of \(d\)-balls of radius \(\varepsilon\) needed to cover the set \(\Omega\) and \(\left\{\mathbb{F}_{n}\right\}_{n\geq 1}\) denotes an increasing sequence of approximating sieves. The sequence of sieves used in this paper is described in the appendix. The second condition requires that the prior puts enough mass around the true likelihood \(\mathbb{P}_{f_{0}}^{(n)}\), meaning that for a given sample size \(n\in\mathbb{N}\setminus\left\{0\right\}\) and for some \(d>2\), \[\Pi(f\in\mathcal{F}:K_{n}(f,f_{0})\lor V_{n}(f,f_{0})\leq\varepsilon_{n}^{2}) \gtrsim e^{-d\,n\,\varepsilon_{n}^{2}},\] (C2) where \(K_{n}\) and \(V_{n}\) are the Kullback-Leibler divergence and the variation, averaged over the observed data points. The final condition, referred to as the "prior decay rate condition" stipulates that the sequence of sieves \(\mathbb{F}_{n}\uparrow\mathcal{F}\) captures the entire parameter space with increasing accuracy, in the sense that the complementary space \(\mathcal{F}\backslash\mathbb{F}_{n}\) has negligible prior probability mass for large values of \(n\). \[\Pi(\mathcal{F}\backslash\mathbb{F}_{n})=o(e^{-(d+2)\,n\,\varepsilon_{n}^{2}})\] (C3) The results of type (8) quantify not only the typical distance between a point estimator (posterior mean/median) and the truth, but also the typical spread of the posterior around the truth and hence are stronger than 'posterior consistency' statements. These results are usually the first step towards further uncertainty quantification statements such as semiparametric Bernstein-von Mises theorem [33]. ## 4 Main Results In this section we describe our main theoretical findings, which describe the posterior concentration rates of the generalized Bayesian trees and their additive ensembles (G-BART), when the true function \(f_{0}\) connecting the response \(\boldsymbol{Y}\) with the covariates \(\boldsymbol{X}\), is either (a) a step function (Theorem 4.1), or (b) a monotone function (Theorem 4.3), or (c) a \(\nu\)-Holder continuous function with \(0<\nu\leq 1\) (Theorem 4.4). We make two important assumptions: the first assumption (subsequently referred to as Assumption 1), given below restricts the distribution of the response variable \(\left\{\boldsymbol{Y}_{1},\ldots,\boldsymbol{Y}_{n}\right\}\in\mathbb{R}^{p}\) to a specific class of exponential family distributions while the second assumption (subsequently referred to as Assumption 2) concerns the spread of the covariates \(\left\{\boldsymbol{X}_{1},\ldots,\boldsymbol{X}_{n}\right\}\in\mathbb{R}^{q}\). 
Assumption 1: Let \(\boldsymbol{Y}_{1},\ldots,\boldsymbol{Y}_{n}\sim P_{f}\), where \(P_{f}\) denotes a probability density function of the form (1), such that \(\eta(z)=z\) and there exist strictly increasing positive sequences \(\{C_{g}^{n}\}_{n\geq 1}\) and \(\{C_{\beta}^{n}\}_{n\geq 1}\), such that \[\left|\frac{\nabla g(\boldsymbol{\beta})}{g(\boldsymbol{\beta})}\right|\leq C _{g}^{n}\,\mathbf{1}_{p},\quad\forall\boldsymbol{\beta}\in B_{n}=\left\{ \boldsymbol{\beta}:\left\|\boldsymbol{\beta}\right\|_{\infty}\leq C_{\beta}^ {n}\right\}, \tag{10}\] where \(\mathbf{1}_{p}=(1,\ldots,1)\in\mathbb{R}^{p}\) denotes a \(p\)-dimensional vector of ones and \(\nabla g\) denotes the vector of partial derivatives. We assume \(C_{g}^{n}\lor C_{\beta}^{n}\lesssim n^{M}\) for some \(M>0\). The significance is that the function \(g(\cdot)\) should not change too rapidly; the larger the sample size, the larger the rate of change that is allowed. The above assumption is satisfied by most distributions commonly used in the regression and classification settings, as will be demonstrated in Section 5. Assumption 2: For a k-d tree partition, \(\widehat{\mathcal{T}}=\{\widehat{\Omega_{k}}\}\), with \(K=2^{ps}\)-many leaves, the dataset \(\left\{\boldsymbol{X}_{1},\ldots,\boldsymbol{X}_{n}\right\}\) satisfies the following condition: for any nonnegative integer \(s\), there exists some large enough constant \(M>0\) such that \[\max_{1\leq k\leq K}\text{diam}(\widehat{\Omega_{k}})<M\sum_{k=1}^{K}\mu( \Omega_{k})\text{diam}(\widehat{\Omega_{k}}), \tag{11}\] where \(\mu(\Omega_{k})=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}\left\{\boldsymbol{X}_{i} \in\Omega_{k}\right\}\) denotes the proportion of observations in the cell \(\Omega_{k}\) and \(\text{diam}(\widehat{\Omega_{k}})=\max_{\boldsymbol{x},\boldsymbol{y}\in \Omega_{k}}\left\|\boldsymbol{x}-\boldsymbol{y}\right\|_{2}\) denotes the spread of the cell \(\Omega_{k}\) with respect to the \(L_{2}\)-norm. ### Results on Step-Functions Let us suppose \(f_{0}\) is a step function supported on an axes-paralleled partition \(\left\{\Omega_{k}\right\}_{k=1}^{K_{0}}\). For any such step function \(f_{0}\), we define the _complexity of \(f_{0}\)_ as the smallest \(K\) such that there exists a partition \(\left\{\Omega_{k}\right\}_{k=1}^{K}\) with \(K\) cells, for which the step function \(f(x)=\sum_{k=1}^{K}\beta_{k}\mathbb{I}\{x\in\Omega_{k}\}\) can approximate \(f_{0}\) without any error, for some step heights \((\beta_{1},\ldots,\beta_{K})\in\mathbb{R}^{K}\). This complexity number, denoted by \(K_{f_{0}}\), depends on the true number of steps \(K_{0}\), the diameter of the intervals \(\left\{\Omega_{k}\right\}_{k=1}^{K_{0}}\), and the number of covariates \(q\). The actual minimax rate for approximating such piecewise-constant functions \(f_{0}\) with \(K_{0}>2\) pieces is \(n^{-1/2}\sqrt{K_{0}\log\left(n/K_{0}\right)}\) [34]. The following theorem shows that the posterior concentration rate of G-BART is almost equal to the minimax rate, except that \(K_{0}\) gets replaced by \(K_{f_{0}}\). The discrepancy is an unavoidable consequence of the fact that the true number of steps \(K_{0}\) is unknown. Had this information been available, the G-BART estimator would have attained the exact minimax rate. 
**Theorem 4.1**.: _If we assume that the distribution of the step-sizes satisfies (6) and (7), then under Assumptions 1 and 2 with \(q\lesssim\sqrt{\log n}\), the generalized BART estimator satisfies the following property: If \(f_{0}\) is a step-function, supported on an axes-paralleled partition, with complexity \(K_{f_{0}}\lesssim\sqrt{n}\) and \(\left\|f_{0}\right\|_{\infty}\lesssim\sqrt{\log n}\), then with \(\varepsilon_{n}=n^{-1/2}\sqrt{K_{f_{0}}\log^{2\gamma}\left(n/K_{f_{0}}\right)}\) and \(\gamma>1/2\),_ \[\Pi\left(f\in\mathcal{F}:H_{n}(\mathbb{P}_{f},\mathbb{P}_{f_{0}})>\varepsilon _{n}\mid\boldsymbol{Y}^{(n)}\right)\to 0,\] _in \(\mathbb{P}_{f_{0}}^{(n)}\)-probability, as \(n,q\to\infty\)._ Proof.: Proof is given in the appendix. ### Results on Monotone Functions An important implication of Theorem 4.1 is that posterior concentration results on step functions can potentially build the foundation for similar results on broader class of functions, aided by the "simple function approximation theorem" [29], which states that for any measurable function \(f\) on \(\mathcal{E}\subseteq\mathbb{R}^{q}\), there exists a sequence of step functions \(\{f_{k}\}\) which converges point-wise to \(f\) almost everywhere [29]. As a corollary to this theorem, we can derive the following result on the set of all monotone functions. A function \(f_{0}:\mathbb{R}^{q}\to\mathbb{R}\) is defined as monotone increasing (or decreasing) if \(f_{0}(\boldsymbol{x}_{1})\geq f_{0}(\boldsymbol{x}_{2})\) (or, \(f_{0}(\boldsymbol{x}_{1})\leq f_{0}(\boldsymbol{x}_{2})\)) for all \(\boldsymbol{x}_{1},\boldsymbol{x}_{2}\) such that every coordinate of \(\boldsymbol{x}_{1}\) is greater than or equal to the corresponding coordinate of \(\boldsymbol{x}_{2}\). **Lemma 4.2**.: _Any_ **monotone** _bounded function \(f_{0}\) can be approximated with arbitrary precision \(\varepsilon\), by a step function supported on a \(k\)-\(d\) tree partition with number of leaves \(K_{f_{0}}(\varepsilon)\geq\left\lceil 1/\varepsilon\right\rceil\). We define \(K_{f_{0}}(\varepsilon)\) to be the complexity of the monotone function \(f_{0}\) with respect to \(\varepsilon>0\)._ The complexity \(K_{f_{0}}(\varepsilon)\) also depends on the dimension of the domain \(q\) as well as on the magnitude of the true function \(\left\|f_{0}\right\|_{\infty}\). This paves the way for deriving the posterior concentration rate of G-BART when the true function \(f_{0}(\cdot)\) connecting the covariates \(\boldsymbol{X}\) with a univariate response \(\boldsymbol{Y}\) is a monotone function. The minimax rate of estimation for such densities is \(n^{-1/(2+q)}\)[35]. The following theorem states that the posterior concentration rate of G-BART equals to this optimum rate up to a logarithmic function, provided that the magnitude of the true function \(f_{0}\) is not "too large". 
**Theorem 4.3**.: _If we assume that the distribution of the step-sizes satisfies (6) and (7), then under Assumptions 1 and 2 with \(q\lesssim\sqrt{\log n}\), the generalized BART estimator satisfies the following property: If the true function \(f_{0}:\mathbb{R}^{q}\to\mathbb{R}\) is monotonic on every coordinate, with \(\left\|f_{0}\right\|_{\infty}\lesssim\sqrt{\log n}\), then with \(\varepsilon_{n}=n^{-1/(2+q)}\sqrt{\log n}\),_ \[\Pi\left(f\in\mathcal{F}:H_{n}(\mathbb{P}_{f},\mathbb{P}_{f_{0}})>\varepsilon _{n}\mid\boldsymbol{Y}^{(n)}\right)\to 0,\] _in \(\mathbb{P}_{f_{0}}^{(n)}\)-probability, as \(n,q\to\infty\)._ Proof.: The first step of the proof involves finding an approximating step-function \(\widehat{f}_{0}\) by Lemma 4.2, such that \(\left\|f_{0}-\widehat{f}_{0}\right\|_{2,n}<\varepsilon_{n}/2\). The rest of the proof follows by retracing the steps as in the proof of Theorem 4.4 given in the supplementary material. The above result demonstrates that the Generalized BART model adapts to monotonic patterns in the true function \(f_{0}\), without any additional prior assumptions. ### Results on Holder Continuous Functions This section describes the posterior concentration results on G-BART when the true function \(f_{0}\) connecting \(\boldsymbol{X}\) with \(\boldsymbol{Y}\) is a \(\nu\)-Holder continuous function with \(0<\nu\leq 1\). [23] and [22] proved that the posterior concentration rates of the BART model (under the priors of [2] and [3] respectively) are equal to \(n^{-\nu/(2\nu+q)}\), the minimax rate of estimation for such functions [36], except for a logarithmic factor. These results can be derived as direct corollaries of the following theorem for G-BART, when \(\boldsymbol{Y}\) is a univariate continuous response and the step-sizes are assumed to follow a Gaussian distribution. **Theorem 4.4**.: _If we assume that the distribution of the step-sizes satisfies (6) and (7), then under Assumptions 1 and 2 with \(q\lesssim\sqrt{\log n}\), the generalized BART estimator satisfies the following property: If \(f_{0}\) is a \(\nu\)-Holder continuous function with \(0<\nu\leq 1\), where \(\|f_{0}\|_{\infty}\lesssim\sqrt{\log n}\), then with \(\varepsilon_{n}=n^{-\nu/(2\nu+q)}\sqrt{\log n}\),_ \[\Pi\left(f\in\mathcal{F}:H_{n}(\mathbb{P}_{f},\mathbb{P}_{f_{0}})>\varepsilon _{n}\mid\boldsymbol{Y}^{(n)}\right)\to 0,\] _in \(\mathbb{P}_{f_{0}}^{(n)}\)-probability, as \(n,q\to\infty\)._ Proof.: Proof is given in the appendix. Remark: Interestingly, the posterior concentration rates derived in Theorems 4.1-4.4 do not depend on the number of trees \(T\) in the generalized BART ensemble. In other words the concentration rate is equally valid for a single tree (i.e. \(T=1\)), as well as for tree ensembles (i.e. \(T>1\)), when the true regression function \(f_{0}\) is \(\nu\)-Holder continuous with \(0<\nu\leq 1\). However as has been seen in multiple empirical applications [3], Bayesian forests consisting of multiple trees provide superior out-of-sample predictive performance compared to a single tree, the reason being that multiple weak tree learners, when woven together into a forest, can accommodate a wider class of partitions than a single tree. This phenomenon can be reinforced by theoretical results, such as Theorem 6.1 of [23]. 
When the true function \(f_{0}\) is of the form \(f_{0}=\sum_{t=1}^{T_{0}}f_{0}^{t}\), where \(f_{0}^{t}\) is a \(\nu_{t}\)-Holder continuous function with \(0\leq\nu_{t}\leq 1\), a forest with multiple trees has a posterior concentration rate equal to \(\varepsilon_{n}^{2}=\sum_{t=1}^{T_{0}}n^{-2\nu_{t}/(2\nu_{t}+q)}\log n\), provided \(T_{0}\lesssim n\), whereas single regression trees fail to recognize the additive nature of the true function and attain a slower concentration rate. A similar result is presented in Theorem 4 of [25], under a kernel-smoothed version of the BART prior. Although the BART prior considered by [23] is fundamentally different from the classical BART prior [3] considered here, their result on additive functions can be replicated in the present set up as well, provided we allow the number of trees \(T\) in the BART ensemble to be stochastic. In particular, we might assume that \(\pi(T)\propto e^{-C_{T}T}\), for \(T\in\mathbb{N}\setminus\{0\}\), with \(C_{T}>\log 2\), thus enabling the number of trees in the forest to adapt to the unknown \(T_{0}\), as \(n,q\to\infty\). ## 5 Implications The primary significance of Theorems 4.1, 4.3 and 4.4 is that these results provide a frequentist theoretical justification for the superior empirical performance of generalized Bayesian trees and forests, establishing that the posterior concentrates around the truth at a near-optimal learning rate. As demonstrated below, we can show that the original BART model [3], along with some of its commonly used variants (such as BART for multi-class classification and regression on count data), have near-optimal posterior concentration rates, as direct corollaries of Theorems 4.1 - 4.4. Another important consequence of these results (see Section A.5 of the supplementary material) is that the posterior distribution on the number of leaves in a generalized Bayesian tree does not exceed the optimal number of splits by more than a constant multiple, and hence the trees are resilient to overfitting. Below we demonstrate the breadth of applicability of Theorems 4.1, 4.3 and 4.4 in proving analogous theoretical results for a wide range of commonly used BART models. Continuous Regression: For a (multivariate) continuous regression, assume that the response \(\boldsymbol{Y}\mid\boldsymbol{X}\sim\mathcal{N}_{p}(\boldsymbol{\mu}( \boldsymbol{X}),\Sigma)\), for some positive definite \(\Sigma\). The function \(g(f_{0}(\boldsymbol{X}))=g(\boldsymbol{\mu})=e^{-\boldsymbol{\mu}^{T}\Sigma^ {-1}\boldsymbol{\mu}/2}\) satisfies (10) with \(B_{n}=[-n,n]^{p}\) and \(C_{g}^{n}=n\lambda(\Sigma)\), where \(\lambda(\Sigma)\) denotes the maximum eigenvalue of \(\Sigma\). Hence from Theorems 4.1, 4.3 and 4.4, we can conclude that for continuous regression, the G-BART estimator has a near-minimax posterior concentration rate, provided that the true function \(f_{0}\) connecting the input \(\boldsymbol{X}\) with the output \(\boldsymbol{Y}\) is either a step function, a monotone function or a \(\nu\)-Holder continuous function with \(0<\nu\leq 1\). Classification with Gaussian Step Heights: For a \(p\)-class classification the response \(\boldsymbol{Y}\) can be written as a \(p\)-dimensional binary vector that has \(1\) at the \(l\)-th coordinate if \(\boldsymbol{Y}\) belongs to category \(l\in\{1,\ldots,p\}\) and \(0\) elsewhere. 
We can assume \(\boldsymbol{Y}\mid\boldsymbol{X}\sim\text{Multinomial}\left(1;\boldsymbol{\pi}(\boldsymbol{X})\right)\) for some \(\boldsymbol{\pi}:\mathbb{R}^{q}\to(0,1)^{p}\) such that \(\boldsymbol{\pi}^{T}\mathbf{1}_{p}=1\). The unrestricted function \(f_{0}(\boldsymbol{X})\) can be transformed to the natural parameter \(\pi(\boldsymbol{X})\) by a logistic (softmax) or an inverse-probit link function [3] denoted by \(\Psi(\cdot)\), so that \(\pi(\boldsymbol{X})=\Psi(f_{0}(\boldsymbol{X}))\). In either case, the function \(g(f_{0}(\boldsymbol{X}))=1\) trivially satisfies condition (10). Hence from Theorem 4.1 and Theorem 4.4, we can conclude that the BART model for multi-class classification has a near-minimax posterior concentration rate. Classification with Dirichlet Step-Heights: For the same multi-class classification problem with \(p\) classes described above, an alternative prior specification is recommended by [2]. The parameters \(\mathbf{\pi}(\mathbf{X})\) can be approximated by multivariate step functions of the form \(f_{\mathcal{T},P}(\mathbf{x})=\sum_{k=1}^{K}P_{k}\mathbb{I}(\mathbf{x}\in\Omega_{k})\) on a tree-partition \(\{\Omega_{k}\}_{k=1}^{K}\). [2] assumes that \(P_{k}=(P_{k1},\ldots,P_{kp})\)\(\stackrel{{ i.i.d}}{{\sim}}\) Dirichlet\((\alpha_{1},\ldots,\alpha_{p})\), where \(\alpha_{l}>0,\quad\forall l\in\{1,\ldots,p\}\). For example, in a binary classification (\(p=2\)) problem, we can assign prior \(P_{k}\stackrel{{ i.i.d}}{{\sim}}\) Beta\((2,2)\) on the step-heights. The prior Beta\((2,2)\) violates condition (6). But we can show that this estimator has a near-optimal posterior concentration rate, even if we cannot conclude this from the results discussed in Section 4. A proof is given in the supplementary material. This demonstrates that the assumptions we make in Section 4 are merely _sufficient_ but not _necessary_ conditions for proving that the generalized Bayesian tree estimator has a near-minimax posterior concentration rate. Count Regression: For a count response variable, \(\mathbf{Y}\sim Poisson\left[\lambda(\mathbf{X})\right]\) with \(\lambda(\mathbf{X})>0\). There are several choices for the link function \(\Psi(\cdot)\) to map the unconstrained function \(f_{0}(\mathbf{X})\) to the constrained parameter \(\lambda(\mathbf{X})\). The posterior concentration rate of the Generalized Bayesian tree estimator might differ depending on which link function is used. For example, if we use \(\Psi(z)=\log\left(1+\exp(z)\right)\), the softplus link function, then \(g(f_{0}(\mathbf{X}))=1/\left(1+\exp\left(f_{0}(\mathbf{X})\right)\right)\) trivially satisfies condition (10) and we can conclude that the generalized tree estimator has a near-minimax concentration rate from Theorems 4.1, 4.3 and 4.4. In contrast, if we use \(\Psi(z)=\exp(z)\) as the link function, then \(g(f_{0}(\mathbf{X}))=\exp\left(-\exp(f_{0}(\mathbf{X}))\right)\) does not satisfy the condition (10), when the true function \(f_{0}\) is a \(\nu\)-Holder continuous function. Therefore we cannot apply Theorem 4.4 anymore to imply that the generalized tree estimator has a near-optimal rate of posterior concentration. When \(f_{0}\) is a step function with complexity \(K_{f_{0}}\), the condition (10) is satisfied with \(B_{n}=\left[-K_{f_{0}}\log n,K_{f_{0}}\log n\right]\) and \(C_{g}^{n}=n^{K_{f_{0}}}\). The posterior concentration rate becomes \(\varepsilon_{n}=n^{-\frac{1-\alpha}{2}}\sqrt{K_{f_{0}}\log^{2\eta}(n/K_{f_{0 }})}\) under the assumption \(K_{f_{0}}\lesssim n^{\alpha}\) for some \(0<\alpha<1\). 
This is slower than the near-optimal concentration rate \(n^{-\frac{1}{2}}\sqrt{K_{f_{0}}\log^{2\eta}(n/K_{f_{0}})}\), if we use \(\Psi(z)=\log\left(1+\exp(z)\right)\), the softplus link function, instead. This demonstrates the need for choosing suitable link functions in empirical applications. ## 6 Discussion In this paper we have examined a general framework for Bayesian Additive Regression Tree Models that encapsulates various conventional BART models adapted to a wide range of regression and classification tasks. We demonstrated that these models have a near-minimax posterior concentration rate for a wide range of functions, thus corroborating the empirical success of BART and its variants, from a theoretical perspective. These results also build the foundation for uncertainty quantification statements for a wide variety of BART models, opening up an interesting avenue for future research. Among empirical implications, we have established the need for careful modeling choices such as selecting appropriate link functions. The theoretical results also substantiate the scope of a wider variety of distributions on approximating step-heights, that can prove advantageous for applications where the response distribution has a thicker tail. These theoretical findings also provide strong motivation for exploring novel application areas for flexible BART-like models.
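As a closing numeric illustration of the link-function point made in Section 5 (a toy check of ours, not part of the paper), the snippet below compares the quantity \(|\nabla g/g|\) from Assumption 1 for the softplus and exponential links in the Poisson case: for the softplus link it stays below 1 everywhere, whereas for the exponential link it grows like \(e^{f_{0}}\) and can only be controlled when \(f_{0}\) itself is bounded.

```python
import numpy as np

# Poisson regression written in the exponential-family form (1):
#   softplus link  Psi(z) = log(1 + e^z)  =>  g(z) = 1 / (1 + e^z)
#   exponential link  Psi(z) = e^z        =>  g(z) = exp(-e^z)
# Assumption 1 asks |d/dz log g(z)| to stay bounded on B_n.

def grad_log_g_softplus(z):
    return np.exp(z) / (1.0 + np.exp(z))   # bounded by 1 for every z

def grad_log_g_exp(z):
    return np.exp(z)                        # unbounded as z grows

grid = np.linspace(-5.0, 5.0, 11)
print("softplus link, max |g'/g|:", grad_log_g_softplus(grid).max())  # ~0.99
print("exp link,      max |g'/g|:", grad_log_g_exp(grid).max())       # ~148
```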
2305.13532
InProC: Industry and Product/Service Code Classification
Determining industry and product/service codes for a company is an important real-world task and is typically very expensive as it involves manual curation of data about the companies. Building an AI agent that can predict these codes automatically can significantly help reduce costs, and eliminate human biases and errors. However, unavailability of labeled datasets as well as the need for high precision results within the financial domain makes this a challenging problem. In this work, we propose a hierarchical multi-class industry code classifier with a targeted multi-label product/service code classifier leveraging advances in unsupervised representation learning techniques. We demonstrate how a high quality industry and product/service code classification system can be built using extremely limited labeled dataset. We evaluate our approach on a dataset of more than 20,000 companies and achieved a classification accuracy of more than 92\%. Additionally, we also compared our approach with a dataset of 350 manually labeled product/service codes provided by Subject Matter Experts (SMEs) and obtained an accuracy of more than 96\% resulting in real-life adoption within the financial domain.
Simerjot Kaur, Andrea Stefanucci, Sameena Shah
2023-05-22T23:09:28Z
http://arxiv.org/abs/2305.13532v1
# InProC: Industry and Product/Service Code Classification ###### Abstract. Determining industry and product/service codes for a company is an important real-world task and is typically very expensive as it involves manual curation of data about the companies. Building an AI agent that can predict these codes automatically can significantly help reduce costs, and eliminate human biases and errors. However, unavailability of labeled datasets as well as the need for high precision results within the financial domain makes this a challenging problem. In this work, we propose a hierarchical multi-class industry code classifier with a targeted multi-label product/service code classifier leveraging advances in unsupervised representation learning techniques. We demonstrate how a high quality industry and product/service code classification system can be built using extremely limited labeled dataset. We evaluate our approach on a dataset of more than 20,000 companies and achieved a classification accuracy of more than 92%. Additionally, we also compared our approach with a dataset of 350 manually labeled product/service codes provided by Subject Matter Experts (SMEs) and obtained an accuracy of more than 96% resulting in real-life adoption within the financial domain. small datasets, representation learning, classification 
and have low precision. One of the standard taxonomy datasets for industry code classification is the North American Industry Classification System (NAICS) (Bang et al., 2017) codes but due to its lack of adaptation to changing economy, NAICS codes have not gained real-life adoption by most financial organizations. Hence, in case of financial applications, unavailability of labeled dataset and direct financial consequences makes it a hard problem to solve and poses a serious challenge to adoption within the financial domain. In this work, we propose a novel algorithm that leverages recent advances in representation learning to build a hierarchical multi-class classifier along with a targeted multi-label classifier which predicts which industries each company belongs to and which products and services the company builds within each industry by only using company's description as input. Formally, Fig 1 above summarizes our problem. _Given a limited sample-set of companies A, B...N, together with company descriptions, can we correctly predict which custom-defined industry codes each company belongs to and what products and services within each industry the company is involved in?_ Our proposed approach consists of first constructing a labeled dataset for training a multi-class industry classifier by leveraging human subject matter experts (SMEs) and unsupervised techniques. For this work, a pre-defined set of custom named industries and products and services within each industry were provided by the SMEs 1. The SMEs also provided brief one-to-two line description for each of the industry as well as the product and services taxonomy. We then build a high dimensional vector representation for each company using its description as available input data and predict top 3 industries in which the company might belong to. Finally, we construct a targeted multi-class products and services classifier by first building a high dimensional vector representation for each company as well as predefined products and services taxonomy within the predicted industries using their descriptions as inputs and generate similarity scores between them which are then used to identify top 2 products and services that the company might be involved in. Footnote 1: To protect proprietary data, the names of pre-defined set of custom industries and products/services have not been disclosed. The rest of the paper is outlined as follows. 
Section 2 describes related work and highlights the unique challenges that come about in building an industry and product/service code classifier, section 3 describes the details of how we constructed the dataset, and section 4 describes our proposed hierarchical architecture and implementation in more detail. In section 5 we review the results of our approach and also demonstrate how the human-targeted SME verification helped in the adoption of our model in the real-world financial domain. Finally, section 6 concludes our work and lays out the foundation for future work in this area. ## 2. Related Work There have been numerous works that focus on solving hierarchical text classification by leveraging recent advances in deep learning and supervised techniques, such as (Han et al., 2017; Chen et al., 2018) and (Bang et al., 2017); (Bang et al., 2017) provides a comprehensive survey of these techniques. While many of these techniques have achieved remarkable results, they require huge amounts of labeled hierarchical data which is very hard to obtain in the financial domain. As mentioned in Section 1, one of the available taxonomy datasets for industry code classification is the NAICS (Bang et al., 2017) codes, a six-digit coding system containing 1057 codes which comprise 20 sectors, 99 sub-sectors, and 311 industry groups. However, this dataset is very focused on the North American economy and has not been adaptive enough to capture the changing economy and hence has not been adopted by most financial organizations. This makes the problem very hard to solve as every financial organization defines its own custom industry and products and services codes. Some groups have tried to solve the text classification problem by using unsupervised techniques like hierarchical clustering and topic modeling such as (Bang et al., 2017; Chen et al., 2018; Chen et al., 2018). However, these models are hard to get adopted in the real-world financial domain as they do not perform well on large scale data and the low precision results can have direct financial consequences as explained in Section 1. Additionally, there has also been some work in the literature on automated industry classification (Bang et al., 2017; Chen et al., 2018; Chen et al., 2018); almost all of these approaches rely on self-reporting, manual entry, and possibly naive algorithms. (Chen et al., 2018) and (Bang et al., 2017) explored the possibility of using NAICS codes for building a supervised learning algorithm and leveraging advancements in deep learning. (Chen et al., 2018) tried to use clustering algorithms to capture similarities across companies and identify industry codes. These methodologies are not scalable and are difficult to adopt and deploy in the financial domain since companies these days belong to multiple industries and the products and services are interdisciplinary in nature. Moreover, there has been no literature on hierarchically classifying companies into the products and services they are involved in. In this work, we have tried to leverage advances in deep learning and unsupervised learning to build a hierarchical multi-class multi-label classifier which uses the company's descriptions as input and predicts the top 3 industries that the company might belong to and the top 2 products/services within each industry that the company might be involved in. ## 3. Data This section describes the details of our dataset. 
Our model has been developed and tested using licensed data from Pitchbook (Pitchbook, 2017). The dataset contains details on companies such as a brief description on what the company does as well as pitchbook-defined industry verticals. Moreover, we were provided with \(\sim\)40 custom defined industry codes and \(\sim\)400 products and services codes by the SMEs into which the companies had to be classified. The following sections detail how we constructed the dataset to build an industry code classifier as well as the products and services classifier. ### Dataset Construction for Industry Code Classification To classify the companies with high accuracy and precision into the industries they belong to, we first construct a labeled dataset with input as company descriptions and output as the industry codes. In order to construct this dataset, we leveraged the pitchbook industry codes. The pitchbook industry codes are defined in three levels: (a) Primary Industry Sector: A broad category that contains industry groups and codes, (b) Primary Industry Group: A sub-category that provides more specific classification, (c) Primary Industry Code: The primary industry the company operates in. In order to obtain the mapping between pitchbook descriptions and the SMEs' custom defined industry codes, we leveraged our SMEs to create a mapping between pitchbook's three level codes and the SMEs' industry codes. Through this mapping we were able to generate a labeled dataset containing pitchbook descriptions as inputs and the SMEs' industry codes as output. However, through this mapping we were only able to generate the labeled dataset for 75% of the industry codes. For the remaining 25%, we used an unsupervised approach to obtain the labeled dataset. Fig 2 outlines the methodology used to construct the dataset for the remaining industry codes. Since the industry codes in themselves do not contain much contextual meaning, the SMEs provided us with brief one-to-two line descriptions of all the custom defined industry codes. In order to enable us to easily perform comparisons and estimate how close/far the companies and the industry codes are, we then used a pre-trained large language model to encode the company descriptions and industry code descriptions into a high dimensional distributed vector representation. The key guiding principle behind the high dimensional vector representation is that similar sentences/words, or phrases within a similar context, should map close to each other in a high-dimensional space, and sentences/words or phrases that are very different should be far away from each other. We then estimate the similarity scores between the two vector representations using cosine similarity, \(cos(comp,ind\_code)\). The industry code which has the highest similarity score and is above a given threshold, \(thresh\) (hyperparameter), is then assigned as the label to the corresponding company description. Please note that these comparisons are performed only for the remaining 25% of the industry codes. \[cos(comp,ind\_code)=\frac{comp\_vec\cdot ind\_vec}{||comp\_vec||\,||ind\_vec||} \tag{1}\] ### Dataset Construction for Products and Services Code Classification Since we use an unsupervised approach for this classification, we only used the company descriptions from the pitchbook dataset. As the product and service codes in themselves do not contain much contextual meaning, the SMEs provided us with one-to-two line descriptions of all the custom defined product and service codes. 
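A minimal sketch of the embedding-and-cosine-similarity step used in Section 3.1 (and again in Section 4.2) could look as follows. This is an illustration under assumptions: the paper does not release its code, mean pooling of `roberta-base` hidden states is only one plausible way to obtain the sentence vectors, and the industry names, descriptions, and the 0.5 value standing in for the \(thresh\) hyperparameter are hypothetical.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
enc = AutoModel.from_pretrained("roberta-base")

def embed(texts):
    """Mean-pooled last-hidden-state vectors (one plausible sentence encoder)."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state        # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)       # ignore padding tokens
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))   # Equation (1)

# Hypothetical SME-provided industry code descriptions.
industry_desc = {
    "Industry A": "one-to-two line description of industry A provided by the SMEs",
    "Industry B": "one-to-two line description of industry B provided by the SMEs",
}
ind_vecs = dict(zip(industry_desc, embed(list(industry_desc.values()))))

def label_company(description, thresh=0.5):
    """Assign the closest industry code if its similarity clears the threshold."""
    comp_vec = embed([description])[0]
    scores = {name: cosine(comp_vec, vec) for name, vec in ind_vecs.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= thresh else None
```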
## 4. Proposed Approach This section describes our algorithm used to classify the companies into the industry codes to which they belong, in Subsection 4.1, and subsequently how it predicts the product and service codes in which the companies are involved, in Subsection 4.2. ### Industry Code Classifier Figure 3 describes our proposed approach to solve the industry code classification problem, i.e. _'What are the industry codes to which Company i belongs?'_. Our proposed approach consists of two sequential steps. In the first step, we pass the company descriptions through a pre-trained large language model, specifically the Roberta-base model (Rajaj et al., 2017), and obtain high dimensional distributed vector representations for each company. In the second step, we use these representations as input features to train a multilayer perceptron with a classification layer using the labeled dataset constructed in Section 3.1. This trained classification model is then used to predict industry codes for new companies. Moreover, since the Roberta-base model is pre-trained over generic Wikipedia data, this makes our trained classifier generalizable to predict industry codes for company descriptions obtained from any data source. ### Products and Services Code Classifier Figure 4 describes our proposed approach to solve the product and service code classification problem, i.e. _'What are the product and service codes within a particular industry that Company i is involved in?'_. This is the second targeted classification layer wherein we try to predict, in an unsupervised manner, which product and service codes the company is involved in within each industry code predicted in Subsection 4.1 (\(\sim\)8-15 product/service codes within each industry). In this approach, we obtained the products and services code descriptions within each industry code from the SMEs, as described in Section 3.2. In order to enable us to easily perform comparisons and estimate how close/far the companies and the product and service codes are, we then used a pre-trained large language model, specifically the Roberta-base model, to encode the company descriptions and product and service code descriptions into a high dimensional distributed vector representation. We then estimate the similarity scores between the two vector representations using cosine similarity, Equation 1. The product and service codes which have the top 2 similarity scores are then assigned as the predicted product and service codes to the corresponding company. Figure 2. Construction of Labeled Data for the remaining 25% of the industry codes for Industry Code Classification Figure 3. Industry Codes Classification Methodology Figure 4. Product/Service Codes Classification Methodology The _Algorithm 1_ describes our overall industry and product/service codes prediction algorithm in a step-by-step manner. 
```
1: Input 1: Companies, say \(comp\) where \(comp\in[1,2,3,\ldots,m]\), their descriptions, say \(desc\) where \(desc\in[1,2,3,\ldots,m]\), and their labels, say \(l\) where \(l\in[1,2,\ldots,m]\), where \(m\) is the number of companies in the dataset
2: Input 2: Products and services taxonomy, say \(ps\_code\) where \(ps\_code\in[A,B,\ldots,N]\), and its descriptions, say \(ps\_code\_desc\) where \(ps\_code\_desc\in[A,B,\ldots,N]\), where \(N\) is the number of products and services codes belonging to each industry code (\(N\) varies for each industry code)
3: (1) Industry Codes Classifier:
4: Split the dataset into a training set, say \(comp\_tr,desc\_tr,l\_tr\in[1,2,3,\ldots,k]\), and a test set, say \(comp\_ts,desc\_ts,l\_ts\in[1,2,3,\ldots,k]\)
5: for \(comp\_tr,desc\_tr=[1,2,\ldots,k]\) do
6:   Obtain \(tr\_v=\) vector rep. for \(desc\_tr\)
7: end for
8: Train classification model, say \(ind\_code\_mo\), using \(tr\_v\) as inputs and \(l\_tr\) as actual labels
9: for \(comp\_ts,desc\_ts=[1,2,\ldots,k]\) do
10:   Obtain \(ts\_v=\) vector rep. for \(desc\_ts\)
11:   Use \(ind\_code\_mo\) and predict top 3 industry codes for \(comp\_ts\), say \(comp\_Inc\in[X,Y,Z]\)
12: end for
13: (2) Products and Services Codes Classifier:
14: for \(comp\_ts,desc\_ts=[1,2,\ldots,k]\) do
15:   for \(comp\_Inc=[X,Y,Z]\) do
16:     for \(ps\_code=[A,B,\ldots,N]\) do
17:       Obtain \(ps\_v=\) vector rep. for \(ps\_code\) description
18:       Obtain \(ts\_v=\) vector rep. for \(desc\_ts\)
19:       Calculate cosine similarity, \(sim(comp,ps\_code)=\frac{ts\_v\cdot ps\_v}{||ts\_v||\,||ps\_v||}\)
20:     end for
21:     Obtain the top 2 \(prod\_codes\) for which \(sim(comp,ps\_code)\) is the highest
22:   end for
23: end for
```
**Algorithm 1** Industry & Product/Service Codes Prediction Algorithm ## 5. Experiments and Results To evaluate the performance of our industry code classifier, we first split the labeled dataset constructed in Section 3.1 into an 80% training and 20% test set. Moreover, as discussed in Sections 1 and 2, since the companies are interdisciplinary in nature, we generated the confusion matrix and observed that the majority of the companies cover the span of at least 3 industry codes. Fig 5 shows the confusion matrix for a sample of 10 industry codes2 with red circles depicting the span size for \(Industry\ Code\). Finally, Table 1 shows the top-3-accuracy score of the industry code classifier on this labeled test set. Footnote 2: Due to proprietary reasons, the industry codes have been masked. Further, in order to test the accuracy of the product and services classifier as well as for real-life adoption, we were provided a small set of 350 companies which were manually labeled by Subject Matter Experts (SMEs), containing the ground truth for both industry and product and service codes. Table 1 shows the top-3-accuracy score of the industry classifier and the top-2-accuracy score of the product and services classifier on this SME-labeled dataset. ## 6. Conclusion and Future Work Our work proposes a novel hierarchical classification algorithm that leverages recent advances in representation learning to predict high precision custom-defined industry codes as well as products and services codes for a particular company by only using the company's description as input. We also highlight how a high quality labeled industry code dataset can be constructed through mapping and unsupervised techniques, and demonstrate how a targeted unsupervised multi-class products and services classifier can yield high precision predictions resulting in real-life adoption and deployment.
The work opens numerous avenues to further build upon. For instance, we could further enhance the algorithm to automatically generate custom industry codes and product and services codes based on company descriptions and then using public information extraction techniques to extract code definitions hence eliminating the need for predefining these codes by a subject matter expert. \begin{table} \begin{tabular}{l|c|c|c} \hline & Industry Code & Products and & \# of Samples \\ & Classifier & Services Classifier & \\ \hline Constructed & 92.38\% & - & 20,000 \\ Test Set & & & \\ SME-Labeled & 94.87\% & 96.85\% & 350 \\ Dataset & & & \\ \hline \end{tabular} \end{table} Table 1. Results of Industry Code Classifier and Product and Service Code Classifier Figure 5. Confusion Matrix for a sample of 10 industry codes
2307.05529
Keystroke Dynamics for User Identification
In previous research, keystroke dynamics has shown promise for user authentication, based on both fixed-text and free-text data. In this research, we consider the more challenging multiclass user identification problem, based on free-text data. We experiment with a complex image-like feature that has previously been used to achieve state-of-the-art authentication results over free-text data. Using this image-like feature and multiclass Convolutional Neural Networks, we are able to obtain a classification (i.e., identification) accuracy of 0.78 over a set of 148 users. However, we find that a Random Forest classifier trained on a slightly modified version of this same feature yields an accuracy of 0.93.
Atharva Sharma, Martin Jureček, Mark Stamp
2023-07-07T23:12:16Z
http://arxiv.org/abs/2307.05529v1
# Keystroke Dynamics for User Identification ###### Abstract In previous research, keystroke dynamics has shown promise for user authentication, based on both fixed-text and free-text data. In this research, we consider the more challenging multiclass user identification problem, based on free-text data. We experiment with a complex image-like feature that has previously been used to achieve state-of-the-art authentication results over free-text data. Using this image-like feature and multiclass Convolutional Neural Networks, we are able to obtain a classification (i.e., identification) accuracy of 0.78 over a set of 148 users. However, we find that a Random Forest classifier trained on a slightly modified version of this same feature yields an accuracy of 0.93. ## 1 Introduction Authentication and intrusion detection are crucial aspects of online security. Conventional authentication methods, such as passwords, have limitations, and biometric systems may require additional hardware or be unsuitable for specific user groups. Recent research highlights the need for accessible and inclusive authentication systems for all users, including elderly [14, 24] and disabled individuals [26]. Keystroke dynamics are a promising means for improved user authentication and identification. By analyzing keystroke patterns, a user can be identified based on their distinctive typing style, regardless of age or physical ability. Furthermore, keystroke dynamics can aid in detecting an intruder who has gained unauthorized access to a system, making it a potentially useful tool for intrusion detection. Compared to traditional authentication methods such as passwords, keystroke dynamics offer several benefits. First, keystroke dynamics are challenging to break since people tend to have distinctive typing patterns that may be difficult to replicate or guess. In contrast, passwords can be compromised through data breaches or guessed through trial-and-error. Second, keystroke dynamics can provide a more robust and reliable two-factor authentication approach--if an unauthorized user obtains a valid user's login credentials, they may still be detected and denied access, due to their failure to mimic the expected typing characteristics. Also, keystroke dynamics can offer continuous authentication, enabling passive, ongoing user-identity verification throughout a session, adding an extra layer of intrusion detection. Overall, keystroke dynamics may enable improvements in authentication, identification, and intrusion detection. For the research presented in this paper, we use the so-called Buffalo free-text keystroke dataset to study keystroke dynamics. This dataset was collected by researchers at SUNY Buffalo and has been widely used in research in this field [27]. Here, free-text means that subjects do not type the same thing, which is in contrast to fixed-text data, where all subjects type the same text. While fixed text is used to study problems related to improved authentication (typically, via passwords), free text is useful for studying the intrusion detection problem which, in the context of free text, is sometimes referred to as continuous authentication. Both free-text and fixed-text data can be used to study the user identification problem. Note that in this context, for the identification problem we are trying to determine specifically who is typing, and there may be a very large number of possible typists.
In contrast, for the authentication problem, the typist claims to be a specific user, and we only need to determine whether the person typing is the claimed user or not. Consequently, the authentication problem can be viewed as a 1-to-1 comparison, whereas the identification problem is a many-to-one comparison, and hence the identification problem is inherently more challenging. In this paper, we consider the user identification problem, based on the Buffalo free-text dataset. Free-text and fixed-text datasets have their advantages and drawbacks. Free-text datasets, collected while users type naturally without constraints, offer a more realistic representation of user behavior, providing a more transparent experience for users [18]. On the other hand, fixed-text datasets, collected under controlled conditions where participants type specific words, phrases, or sentences, enable more controlled experiments and easier comparison by eliminating variations in text input [13]. Due to the practicality and user experience aspects, we have chosen to work with free-text data in this study. Note that of the various permutations involving free-text or fixed-text for authentication or identification, user identification based on free-text data is the most challenging case. Note also that in this context, identification is synonymous with classification. Inspired by successful authentication results in prior studies, we first consider a feature engineering approach that originated in [15], where elementary features are transformed into a multi-channel image-like transition matrix which is referred to as a Keystroke Dynamics Image (KDI). Within this matrix, rows and columns denote keyboard keys, while the depth signifies distinct feature categories. We conduct multi-class classification experiments on the 148 users in our dataset, employing a Convolutional Neural Network (CNN) model trained on the KDI features with cutout regularization. To assess the effect of keystroke sequence lengths on our model, we experiment with multiple sequence lengths. The CNN model results yield a respectable accuracy of 0.78. We then experiment with classic learning techniques using a flattened version of the KDIs as our feature vectors. We find that a Random Forest model trained on these features yields much improved results, with an accuracy of 0.93 on this inherently challenging user identification problem. To the best of the authors' knowledge, this is the strongest result to date for the user identification problem, based on the popular Buffalo free-text dataset. The remainder of this paper is organized as follows: In Section 2, we delve into background topics such as the learning techniques utilized and the dataset considered in our study. This section also includes a review of related prior research. Section 3 details the features we employ and, specifically, discusses our feature engineering strategy for preparing input data for our classification models. In Section 4, we elaborate on the model architectures considered in this paper and discuss hyperparameter tuning. Section 5 encompasses our experiments and provides an analysis of the results. Lastly, Section 6 offers a conclusion and suggests potential avenues for future research. ## 2 Background Authentication is a fundamental aspect of security systems [3], and keystroke dynamics has emerged as a promising method for verifying user identity.
Unlike traditional authentication methods, keystroke dynamics has the potential to detect intruders even after they have gained access to the system, making it a valuable tool for preventing security breaches. However, the effectiveness of keystroke dynamics-based systems depends on the ability to accurately classify users, based on their typing characteristics. The more challenging problem of user identification based on keystroke dynamics is also of interest, particularly in the context of an intrusion detection system (IDS). For such a scenario, the use of free text data may be advantageous, as compared to fixed text [1]. Free text data is more representative of how users type on a regular basis and is not constrained by a pre-determined text input. This may result in more accurate and reliable outcomes. Moreover, free text datasets are adaptable to passive monitoring within an IDS. Another advantage of keystroke dynamics-based systems is that they can benefit users of all ages and those with disabilities, provided that they type to use a system [23]. Therefore, this approach can provide a more inclusive and accessible method that does not discriminate based on age or physical ability. In summary, keystroke dynamics-based systems may offer a reliable and effective means of user authentication and identification, provided that we can accurately and efficiently distinguish between users. In this research, we show that even for the inherently challenging identification problem, it is possible to obtain strong results. ### Related Work Keystroke dynamics is a behavioral biometric that has been extensively studied for user authentication and less so for identification. In 1980, Gaines, et al. [8] analyzed digraph latencies to examine the distinctiveness of typing patterns and found that specific digraphs could distinguish right-handed touch typists from one another with 92% accuracy over a limited number of users. Following this, in 1990, Bleha, et al. [6] proposed a real-time pattern recognition based approach to classify users. The online verification system they developed had a false rejection rate (FRR) of 8.1% in rejecting valid users and 2.8% false acceptance rate (FAR). This work laid the foundation for much of the subsequent research in this field. Recently, machine learning techniques have been widely applied in keystroke dynamics. Classic machine learning algorithms, such as \(k\)-Nearest Neighbors (\(k\)-NN) and Support Vector Machines (SVM), have yielded promising results in user authentication tasks. However, these methods often rely on handcrafted features, which may be less robust and less generalizable to diverse user groups and typing scenarios. An SVM-based method by Giot, et al. [9], requires only five captures for initial enrollment, while Gingrich, et al. [11] utilize a \(k\)-NN approach, resulting in further improvements in efficiency. These approaches offer robust and generalizable methods with high accuracy and efficiency compared to previous work that utilized traditional statistical-based classification algorithms. Clustering techniques have been employed in the context of keystroke dynamics to group similar users or typing patterns, and to identify potential outliers. Revett, et al. [21] have demonstrated that \(K\)-Means clustering can yield useful results, achieving an authentication accuracy of 96.20%. In addition, clustering techniques can also be used as data analysis tool. For example, Robinson, et al. 
[22] use hierarchical clustering to evaluate the effect of hold times on the homogeneity of valid user timing vectors. This use of hierarchical clustering helped to establish the relative homogeneity of valid user timing vectors and improve the accuracy of the subsequent experiment. Clustering can also be applied to keystroke dynamics for the purpose of detecting account sharing. Hwang, et al. [12], show that a user's keystroke patterns form distinctive clusters in Euclidean space, and the number of shared accounts can then be estimated from the number of clusters. The optimal number of clusters is estimated using a Bayesian model-selection framework, and the results show a 2% false alarm rate, a 2% miss rate, and a 93% accuracy. Clustering methods such as Expectation Conditional Maximization (ECM) have also been combined with other approaches, including Extreme Learning Machines (ELM), to improve accuracy and stability. ELM is a single hidden layer feedforward network that is extremely fast to train and achieves good generalization performance for some problems. Sriram, et al. [20] used a clustering-based, semi-supervised ECM-ELM approach to achieve an accuracy of 87% with the popular CMU Keystroke Dataset. Deep learning techniques for analyzing keystroke dynamics have shown promise in recent studies, with CNN being employed to achieve notable results. A novel approach by Liu and Guan [16] involves converting keystroke data into image-like features, which allows for the mining of spatial information and results in an accuracy of 96.8%, with an FAR of 0.04%. In contrast, Piugie, et al. [19] concentrate on using deep learning for passphrase-based user authentication, and surpass the performance of state-of-the-art methods in terms of the Equal Error Rate (EER). As researchers explore various deep learning model architectures for keystroke dynamics authentication, recent studies have investigated the application of recurrent neural networks. For example, an architecture based on a hybrid CNN and Gated Recurrent Unit (GRU) is proposed and analyzed by Lu, et al. [28], while Mhenni, et al. [17] examine the use of Long Short-Term Memory (LSTM) and Bidirectional Long Short-Term Memory (BiLSTM) architectures. Both papers illustrate the potential of deep learning models in this domain. In particular, Mhenni, et al., show that BiLSTM outperforms LSTM, achieving an accuracy of 86% and 71% for the GREYC-2009 and WEBGREYC databases, respectively; in comparison, their LSTM model has an accuracy of 68% and 53% over these same datasets. The research presented in this paper is motivated by previous work involving image-like data structures for keystroke data [15, 4]. These image-like representations can leverage the powerful capabilities of CNNs, which are known for their success in image classification tasks. In this context, the work of Li, et al. [15] is particularly relevant, as it introduced a unique Keystroke Dynamics Image (KDI) that led to improved state-of-the-art results, as compared to previous work. We consider this same KDI image-like feature in our multiclass experiments. In summary, the related work in the field of keystroke dynamics spans a wide range of techniques and methodologies, including classic machine learning, deep learning, feature engineering, threshold-based techniques, clustering, and various ensembles. Building upon this rich body of research, the current study aims to advance the state-of-the-art, particularly within the relatively neglected area of user identification.
### Dataset For our experiments, we use a free-text keystroke dataset collected by researchers at SUNY Buffalo [27], which is referred to as the Buffalo Keystroke Dataset in the literature, or more simply, the Buffalo dataset. This dataset is a collection of free-text keystroke dynamics data obtained from 148 research participants. The participants were asked to complete two typing tasks in a laboratory setting over the course of three separate sessions. The first task involved transcribing Steve Jobs' Commencement Speech, split into three parts, while the second task included free-text responses to questions. To ensure generalizability, there was a 28-day interval between each session. Out of the 148 participants, 75 completed the typing test with the same keyboard across all three sessions, while the remaining 73 participants used a different keyboard in each session. The dataset contains the timestamp of key presses (key-down) and key releases (key-up), organized in a tabular format with three columns: the first column indicates the key, the second column denotes whether the event is a key-press or key-release, and the third column records the timestamp of the event. The dataset includes information about the gender of each participant, and on average, each participant has a total of over 17,000 keystrokes across their three sessions. The Buffalo Keystroke Dataset has been widely used in the research literature. ### 2.3 Machine Learning and Deep Learning Algorithms Despite the rapid growth of neural networks, classic machine learning algorithms remain competitive in the field of keystroke dynamics. Such algorithms are based on statistical and mathematical techniques, and have been used with success for many years in various fields. Among deep learning techniques, we consider Convolutional Neural Networks. #### Support Vector Machines Support Vector Machine (SVM) [10] is a powerful supervised machine learning technique whose theoretical foundation is solidly rooted in computational and mathematical principles. SVM is designed to identify a hyperplane in an \(N\)-dimensional space that can accurately separate labeled data points into their respective classes. The algorithm aims to maximize the minimum distance, or "margin," between the hyperplane and the data. SVM is generally recognized for its practical effectiveness, as it can efficiently handle large and complex datasets. It has been used in a wide range of fields, including image classification, text classification, and bioinformatics. #### Random Forest Random Forest classifiers consist of ensembles of decision trees. During training, a Random Forest uses a divide and conquer strategy by sampling small subsets of the data and features, with a simple decision tree constructed for each such subset. The Random Forest classification is based on the predictions of its component decision trees, usually using a simple voting strategy [5]. Important hyperparameters in a Random Forest include the number of estimators (i.e., decision trees) and the maximum features (maximum number of features to sample in any one decision tree), among others. #### 2.3.3 Convolutional Neural Network CNNs [2] are a specialized type of neural network that utilize convolution kernels to deal with local information, often from image-like data. Unlike traditional neural networks, CNNs share weights at different locations, resulting in more efficient and shift-invariant models with fewer parameters.
Their multi-layer convolutional architecture enables them to extract information at different resolutions in computer vision tasks, making them ideal for image processing. CNNs can analyze images and extract important features, such as edges, shapes, and textures, in a highly effective manner. Additionally, the use of convolution kernels in CNNs enables the network to learn spatial features, such as orientation and scale, which is especially useful in image recognition tasks. CNNs have proven to be highly effective in a surprisingly wide variety of applications, including object recognition, face recognition, and image classification. CNNs have also been successfully applied to non-image data, such as audio and text. Dropout regularization [25] is widely used to prevent overfitting in feedforward neural networks. However, this approach is less effective in convolutional layers due to their shared information and lower parameter count. To overcome this limitation, Cutout regularization is used [7]. As the name suggests, Cutout regularization consists of blocking out parts of the input image at various stages in the training process. This forces the model to focus on areas of the image that might otherwise be ignored during training, resulting in a more robust model. Cutouts also improve a model's ability to generalize and perform well with limited training data. Overall, Cutout is a versatile and effective technique for image analysis that can enhance the performance of CNNs. ## 3 Feature Engineering As mentioned above, we use the Buffalo Keystroke Dataset, a free-text dataset with limited information. Feature engineering is critical to our analysis, as we will be exploring image-like features that are derived from the features existing in the dataset. These features capture the timing information of individual keystrokes and their relationships to other keystrokes, allowing us to build a detailed sequence of keystrokes for each user. By carefully engineering these features, we hope to gain additional insight into how keystroke dynamics can be successfully used as a biometric for user identification and authentication. ### Keystroke Features Keystroke dynamics datasets often provide two kinds of features, namely, time-based information and pressure-based information. Both types of features can provide valuable insights into typing behavior, but pressure-based features are not available on many modern keyboards. Therefore, our research will focus solely on time-based information. The data in the Buffalo Keystroke Dataset can be understood by examining the following five time-based features, which are depicted in Figure 1. * Duration: The time that the user holds a key in the down position * Down-down time (DD-time): The time between the press of a key and the press of the subsequent key * Up-down time (UD-time): The time between the release of a key and the press of the subsequent key * Up-up time (UU-time): The time between the release of a key and the release of the subsequent key * Down-up time (DU-time): The time between the press of a key and the release of the subsequent key Note that for any two consecutive keystroke events, denoted Key 1 and Key 2, six features can be extracted: duration of Key 1, duration of Key 2, DD-time, UD-time, UU-time, and DU-time. By carefully analyzing these features, we hope to gain insights into the unique patterns of typing behavior exhibited by individual users and determine how these patterns can be used for user identification. 
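Since the dataset stores only raw key-press and key-release events, the five timing features can be derived with a few lines of bookkeeping. The sketch below is illustrative rather than the authors' code; the (key, event, timestamp) row layout matches the dataset description in Section 2.2, but the event label strings and the function name are our own assumptions.

```python
# Illustrative sketch of deriving the five timing features from rows of
# (key, event, timestamp); event labels "KeyDown"/"KeyUp" are assumed names.
def pair_features(events):
    """Yield (key1, key2, duration1, DD, UD, UU, DU) for consecutive keystrokes."""
    # Collapse raw events into (key, down_time, up_time) keystrokes.
    strokes, pending = [], {}
    for key, ev, t in events:
        if ev == "KeyDown":
            pending[key] = t
        elif ev == "KeyUp" and key in pending:
            strokes.append((key, pending.pop(key), t))

    for (k1, d1, u1), (k2, d2, u2) in zip(strokes, strokes[1:]):
        yield (
            k1, k2,
            u1 - d1,   # duration of Key 1 (hold time)
            d2 - d1,   # down-down time (DD)
            d2 - u1,   # up-down time (UD)
            u2 - u1,   # up-up time (UU)
            u2 - d1,   # down-up time (DU)
        )
```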
### Keystroke Sequence A keystroke sequence refers to the entire series of keystrokes entered by a user. To better analyze these sequences, they are often divided into smaller subsequences. Each subsequence can be viewed as an independent keystroke sequence from the same user. In our research, we will experiment with different lengths of keystroke subsequences, which we treat as a hyperparameter of the system. A longer keystroke sequence can provide more information, but it can be more resource-intensive to process, and also delays the analysis until the sequence has been collected. Shorter subsequences may not capture enough information and thereby result in decreased accuracy.
Figure 1: Time based features
Therefore, we will carefully select the length of keystroke subsequences to ensure that they are optimized for accuracy while being mindful of resource usage. Additionally, the length of the subsequences will impact the creation of image-like features that we will be using in our analysis. By selecting the optimal length of keystroke subsequences, we aim to improve the accuracy and practicality of our results. ### Keystroke Data Image In the previous section, we discussed dividing the keystroke sequence into multiple subsequences. As discussed in Section 3.1, there are six types of timing features. Therefore, for a subsequence of \(N\) keystrokes, we can determine \(6(N-1)\) features from consecutive pairs of keystrokes. Repeated pairs are averaged and treated as a single pair. For instance, a subsequence of length 50 would yield at most \(6\cdot 49=294\) features. We consider each keystroke subsequence as an independent input sequence for the corresponding user. In this section, we discuss a feature engineering structure originally developed in [15], that enables us to effectively organize these features. The features UD-time, DD-time, DU-time, and UU-time are determined by consecutive keystroke events. We organize these four features into a transition matrix with four channels, inspired by the structure of RGB images, which have a depth of three (R, G, and B channels). Each row and column in our four-channel \(n\times n\) feature matrix corresponds to a key on the keyboard, with each channel representing one kind of feature. We organize these into transition matrices as shown in Figure 2, which we refer to as the KDI feature. For example, in the first channel of the matrix, the value at row \(i\) and column \(j\) refers to the UD-time between any key presses of \(i\) followed by \(j\) within the current observation window. We add the final feature, duration, as a diagonal matrix to the transition matrix, creating a fifth channel. If a key or key-pair is pressed more than once, we use the average duration for that key or key-pair. In this fifth channel, only diagonal locations have values because the duration feature is relevant for one key at a time. We can use this transition matrix as an image input for machine learning models. To avoid sparsity in the transition matrix, we only consider time-based features for the following 42 most common keystrokes. 1. The 26 English characters (A-Z) 2. The 10 Arabic numerals (0-9) 3. The following 6 meta keys: space, back, left-shift, right-shift, tab, and capital Thus, the shape of the transition matrix is \(42\times 42\times 5\), with the five channels as described above. In order to prevent overfitting in our CNN, we make use of cutout regularization, as discussed above. The dark blocks in Figure 2 represent cutouts.
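The construction just described can be made concrete with a short sketch. This is an illustrative reconstruction under our own naming assumptions (key labels, channel ordering, helper names), not the code of [15]; it reuses the per-pair timing tuples from the sketch in Section 3.1 and averages repeated key pairs as described above.

```python
# Minimal sketch of assembling the 42 x 42 x 5 KDI: channels 0-3 hold mean
# UD, DD, DU and UU times per ordered key pair (channel order assumed), and
# channel 4 holds mean hold durations on the diagonal.
import numpy as np

KEYS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] \
     + [str(d) for d in range(10)] \
     + ["space", "back", "left-shift", "right-shift", "tab", "capital"]
IDX = {k: i for i, k in enumerate(KEYS)}   # the 42 tracked keys

def build_kdi(pairs):
    """pairs: iterable of (key1, key2, dur1, DD, UD, UU, DU) tuples."""
    sums = np.zeros((5, 42, 42))
    counts = np.zeros((5, 42, 42))
    for k1, k2, dur1, dd, ud, uu, du in pairs:
        if k1 not in IDX or k2 not in IDX:
            continue                       # ignore keys outside the 42 tracked ones
        i, j = IDX[k1], IDX[k2]
        for ch, val in enumerate((ud, dd, du, uu)):
            sums[ch, i, j] += val
            counts[ch, i, j] += 1
        sums[4, i, i] += dur1              # duration channel, diagonal only
        counts[4, i, i] += 1
    # Average repeated key pairs; cells never observed stay zero.
    return np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
```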
## 4 Model Architectures In this section, we outline the learning architectures used in our experiments. We also discuss hyperparameter tuning for each of our models. ### Multiclass CNN The architecture shown in Figure 3 is a sequential CNN-based neural network. The input shape of the model is \((5,42,42)\), indicating that the input is a 3-D array with a depth of 5 and a width and height of 42. The model includes five convolutional layers, each followed by a batch normalization layer and a max pooling layer. The first convolutional layer has 32 filters of size \((5,5)\), while the subsequent convolutional layers have 64, 128, 256, and 256 filters of size \((3,3)\), respectively. All convolutional layers use the Rectified Linear Unit (ReLU) activation function. The max pooling layers have a pool size of \((2,2)\) and a stride of 2, ensuring that the output size of each layer remains the same.
Figure 2: Image-like features from keystrokes
After the five convolutional layers, the model includes a flatten layer, followed by two fully connected layers. The first fully connected layer has 128 units with the ReLU activation function. The final output layer has 148 units with the softmax activation function. The model is compiled using categorical crossentropy as the loss function, the adam optimizer, and we use accuracy as the evaluation metric. To identify the best combination of hyperparameters, we employed a grid search over reasonable values of various parameters. The hyperparameters tested for our CNN are given in Table 1, where the selected values are in boldface. We use these hyperparameters in all CNN models discussed in Section 5, below. Note that our best model uses the reduceLROnPlateau (from Keras) callback to dynamically reduce the learning rate when the model was unable to improve during training. Also, we utilize the earlyStopping (also from Keras) callback to halt training if the model showed signs of overfitting. Our experimental results indicated that the model tended to overfit after the 20th epoch, which confirmed the results of our grid search. ### SVM Classifier We consider the classic SVM learning technique. As discussed above, for our SVM classifier we use the flattened KDI, based on a one-hot encoding to key events. These features are then standardized to have zero mean and unit variance. \begin{table} \begin{tabular}{c|c} \hline \hline Parameter & Values \\ \hline Number of epochs & 10, **20**, 30, 40 \\ Learning rate & 0.1, **0.01**, 0.001, 0.0001 \\ Optimizer & **Adam**, SGD, SGD with Momentum \\ Learning schedule & StepLR, **reduceLROnPlateau** \\ \hline \hline \end{tabular} \end{table} Table 1: Hyperparameter tuning for multiclass CNN
Figure 3: Architecture of sequential CNN
The SVM classifier used for multiclass classification is a one-vs-one (OVO) classifier, which trains a separate SVM for each pair of classes. Since this is costly to train, we restrict our attention to the 48 users that are most difficult to classify using our CNN. This requires that we train a total of \(\binom{48}{2}=1128\) SVM classifiers. To train the SVM classifier, the dataset is split into training and testing sets using stratified random sampling, with an 80-20 split, that is, 80% of the data is used for training and 20% for testing. To further reduce the training time we use the following hyperparameters: \(C=1\), kernel = rbf, and \(\gamma=\text{scale}\). ### Random Forest Classifier We also train and test Random Forest classifiers.
As with our SVM model, for our Random Forest classifier we use the flattened KDI, based on a one-hot encoding to key events. The Random Forest hyperparameters tested for this model are listed in Table 2, with the values selected in boldface. As with our SVM classifier, we initially restrict our Random Forest to the 48 most challenging to identify users. However, given the strong results that we obtain, and since there is no significant efficiency issue when training on a larger dataset, we also train and test this Random Forest on the entire set of 148 users. ## 5 Experiments and Results In this section, we first discuss our experimental design. Then we present our experimental results, and provide some discussion of these results. ### Experiment Strategy We train three multiclass CNN classifiers over all 148 users, based on the KDI data structure and keystroke subsequence lengths of 50, 75 and 100. Once we establish our best model, we generate the confusion matrix, and sort based on the diagonal (i.e., true positive) elements. This enables us to split the users into three categories, namely, those that are easiest to classify, those that are of moderate difficulty to classify, and those that are the most difficult to classify. \begin{table} \begin{tabular}{c|c} \hline \hline Parameter & Values \\ \hline n\_estimators & 100, 500, **1000** \\ max\_features & **auto**, sqrt \\ min\_samples\_split & 2, **5** \\ min\_samples\_leaf & 1, **2** \\ \hline \hline \end{tabular} \end{table} Table 2: Hyperparameter tuning for Random Forest We then apply classic machine learning techniques to difficult-to-classify users, based on a flattened KDI, that is, we convert the \(5\times 42\times 42\) KDI into a feature vector of length \(5\cdot 42\cdot 42=8820\). The performance of the classic techniques on these challenging cases leads us to further analyze the best of the models over the entire dataset. ### Metrics We use accuracy as the primary means of measuring the quality of our results. We also present confusion matrices to better visualize the distribution of correct and incorrect predictions across all classes, and to distinguish users, based on the difficulty of correct classification. The accuracy of a binary classifier is simply the number of correct classifications divided by the total number of classifications, that is, \[\text{Accuracy}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+ \text{FN}},\] where TP is the number of true positive samples (samples correctly classified as positive), TN is the number of true negative samples (samples correctly classified as negative), FP is the number of false positive samples (samples incorrectly classified as positive), and FN is the number of false negative samples (samples incorrectly classified as negative). In a multiclass classification problem, accuracy is the proportion of correctly classified samples to the total number of samples in the dataset. We can calculate the accuracy for multiclass classifier as \[\text{Accuracy}=\frac{\sum_{i=1}^{n}\text{TP}_{i}}{M},\] where \(\text{TP}_{i}\) represents the number of samples of class \(i\) that are correctly classified and \(M\) is the total number of samples in the dataset. In a confusion matrix, each row and column corresponds to a class in the dataset. We follow the convention that the rows represent the actual classes of the samples, and the columns represent the predicted classes. 
To determine the accuracy for class \(i\), we simply divide the \(i^{\text{th}}\) diagonal element by the sum of the elements in row \(i\). As noted above, the overall accuracy is the sum of all diagonal elements, divided by the sum of all elements in the matrix. ### Multiclass CNN Experiments As discussed above, we determined our CNN hyperparameters via a grid search over the number of epochs, learning rate, optimizer, and learning schedule callbacks. We found that training for 20 epochs, with a learning rate of 0.01, along with adam and reduceLROnPlateau as optimizer and callback, respectively, yielded the best results. We also experimented with different architectures of the model itself, and settled on a model with five convolutional layers, where each convolutional layer is followed by batch normalization and max pooling layers, as illustrated in Figure 3, above. After determining the hyperparameters and model architecture, we experimented with the keystroke sequence length for the KDIs. The model with keystroke length 50 shows signs of overfitting, as can be seen from Figure 4(a). On the other hand, models which were trained on keystrokes with length 75 and 100 are more robust against overfitting, as can be seen in Figures 4(b) and (c), respectively--for both of these cases, the validation loss is continuously dropping and both validation and training accuracies are steadily climbing. The comparative analysis of training, testing and validation accuracies for keystrokes of length 50, 75 and 100 respectively is shown in Table 3. Since the model trained on keystroke sequence length 100 gave us the best test and validation accuracies, we will extract the confusion matrix from this model for additional experiments in the next section.
Figure 4: Training of models
#### 5.3.1 CNN Confusion Matrix As established in Section 5.3, the model trained on keystroke length 100 provides the best results. For this keystroke length 100 model, a bar graph of the accuracy for each user is given in Figure 5, where we have sorted by accuracy. Next, we use the bar graph in Figure 5 to partition the users into three subsets, based on the accuracy attained when identifying them using a CNN trained on the KDI features. We use the "slope" of the bar graph to identify these subsets. A slight "elbow" in the slope occurs after about a fifth of the users, and another is towards the last fourth of the users. Based on these observations, we establish two accuracy thresholds for authenticating users. Those users who are classified with 0.90 or greater accuracy we consider relatively easy-to-authenticate; those who are classified at accuracies below 0.75 are deemed the difficult-to-classify subset; and all of those in between these two thresholds are referred to as moderate-to-classify. The number of users in each of these subsets can be found in Figure 6. We further analyze the difficult-to-classify subset in the next section. \begin{table} \begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{Length} & \multicolumn{3}{c}{Accuracy} \\ \cline{2-4} & Train & Test & Validation \\ \hline 50 & 0.9074 & 0.6700 & 0.5800 \\ 75 & 0.9500 & 0.7400 & 0.7300 \\ 100 & 0.9700 & 0.7900 & 0.7800 \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy as a function of keystroke sequence length
Figure 5: Bar graph for 148 users in dataset
### Experiments on Difficult-to-Identify Users In the previous section, we categorized 48 of the users as difficult to identify using a CNN trained on KDI features.
Here, we consider additional experiments on this subset of users, to see if we can improve on the poor results for these users. Specifically, we apply Support Vector Machines, Decision Trees, and Random Forest, based on the flattened KDI features. For the 48 users that comprise the difficult-to-identify subset, we achieve the accuracies listed in Table 4. The confusion matrix for the best of these experiments, namely, the Random Forest model, appears in Figure 7. In Figure 8, we provide a bar graph of per-user accuracies, sorted in descending order. These graphs serve to reinforce the strong results that we have obtained classifying the most challenging users. ### Random Forest Model for all Users Our surprisingly strong results on the difficult-to-classify users lead us to test the Random Forest model trained on the flattened KDI features over the entire dataset of 148 users. We find that the accuracy in this case is 0.93. Figure 9 shows the confusion matrix for our Random Forest model for all 148 users. This outcome signifies a substantial improvement over the original multiclass CNN that was trained on the 5-channel KDI, as the CNN model only achieved an accuracy of 0.78. Figure 10 displays the sorted per-user accuracy bar graph for our Random Forest model. \begin{table} \begin{tabular}{c|c} \hline \hline Model & Test accuracy \\ \hline SVM & 0.50 \\ Decision Tree & 0.88 \\ Random Forest & 0.92 \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy of models for difficult-to-identify users
Figure 6: Number of users in each cluster
Figure 7: Random Forest confusion matrix for difficult-to-identify users
Figure 8: Random Forest results for difficult-to-identify users
Figure 11 shows the number of users for each classification decile for our Random Forest model. We find that only a small fraction of users, specifically 8, now have an accuracy of 0.75 or lower. In addition, 27 users have classification accuracies in the range of 0.75 to 0.90, while the vast majority of users, 109 to be precise, are classified with an accuracy of at least 0.90. These results underscore the success of the Random Forest model trained on the flattened KDI features. As far as the authors are aware, this Random Forest model yields the best identification results yet achieved for the 148 users in the Buffalo dataset.
Figure 9: Random Forest confusion matrix for all 148 users
Figure 10: Random Forest results for all 148 users
## 6 Conclusion In this research, we expanded on previous work on user identification based on keystroke dynamics. We used an image-like data structure and obtained good results for the identification problem, based on a Convolutional Neural Network. While using these keystroke image-like features with binary CNN classifiers is not new, one innovative aspect of the research in this paper lies in the application of a multiclass CNN for identifying users, which enabled us to categorize users into easy-, moderate-, and difficult-to-identify subsets. This approach enabled us to focus extra attention on the users that are most difficult to identify. When experimenting with the most challenging cases, we discovered that a Random Forest trained on a flattened KDI feature yielded surprisingly strong results. Even more surprising, testing this same model over the entire dataset yielded much better results than our multiclass CNN model. Future research could explore incorporating additional features, such as digraph and trigraph latencies, or even other biometric data, to improve model performance.
Also, the Buffalo keystroke dataset that we employed for our experiments was created using mechanical keyboards. It would be valuable to obtain keystroke data from mobile devices and apply a similar analysis to that data. The dynamics of touch-based interactions likely differ substantially from those of traditional mechanical keyboard input. Another potential area of future research is the development of an efficient strategy for adding new users to an existing keystroke dynamics-based authentication or identification system. Currently, incorporating new users into a multiclass model generally requires retraining the entire model, which may not be practical in real-world scenarios. To overcome this issue, we could explore methods for determining the cluster that a user's keystroke patterns most closely match. It might then be possible to achieve good results in a two-stage process, where users are first assigned to a cluster, and subsequently distinguished from the other users in their cluster. By assigning a user to the nearest cluster, we could avoid the problem of retraining a multiclass model for all users. This approach could facilitate the seamless integration of new users into the system, while maintaining the efficiency and accuracy of the identification or authentication process. In summary, we have achieved strong results for the challenging problem of user identification based on keystroke dynamics, using classic machine learning models. Our results indicate that practical, real-world identification based on keystroke dynamics may be feasible.
Figure 11: Random Forest accuracy ranges for all 148 users
2304.04367
Angle-dependent pair production in the polarized two-photon Breit-Wheeler process
The advent of laser-driven high-intensity $\gamma$-photon beams has opened up new opportunities for designing advanced photon-photon colliders. Such colliders have the potential to produce a large yield of linear Breit-Wheeler (LBW) pairs in a single shot, which offers a unique platform for studying the polarized LBW process. In our recent work [Phys. Rev. D 105, L071902(2022)], we investigated the polarization characteristics of LBW pair production in CP $\gamma$-photon collisions. To fully clarify the polarization effects involving both CP and LP $\gamma$-photons, here we further investigate the LBW process using the polarized cross section with explicit azimuthal-angle dependence due to the base rotation of photon polarization vectors. We accomplished this by defining a new spin basis for positrons and electrons, which enables us to decouple the transverse and longitudinal spin components of $e^\pm$. By means of analytical calculations and Monte Carlo simulations, we find that the linear polarization of photon can induce the highly angle-dependent pair yield and polarization distributions. The comprehensive knowledge of the polarized LBW process will also open up avenues for investigating the higher-order photon-photon scattering, the laser-driven quantum electrodynamic plasmas and the high-energy astrophysics.
Qian Zhao, Yan-Xi Wu, Mamutjan Ababekri, Zhong-Peng Li, Liang Tang, Jian-Xing Li
2023-04-10T03:38:12Z
http://arxiv.org/abs/2304.04367v1
# Angle-dependent pair production in the polarized two-photon Breit-Wheeler process ###### Abstract The advent of laser-driven high-intensity \(\gamma\)-photon beams has opened up new opportunities for designing advanced photon-photon colliders. Such colliders have the potential to produce a large yield of linear Breit-Wheeler (LBW) pairs in a single shot, which offers a unique platform for studying the polarized LBW process. In our recent work [Phys. Rev. D 105, L071902(2022)], we investigated the polarization characteristics of LBW pair production in CP \(\gamma\)-photon collisions. To fully clarify the polarization effects involving both CP and LP \(\gamma\)-photons, here we further investigate the LBW process using the polarized cross section with explicit azimuthal-angle dependence due to the base rotation of photon polarization vectors. We accomplished this by defining a new spin basis for positrons and electrons, which enables us to decouple the transverse and longitudinal spin components of \(e^{+}\). By means of analytical calculations and Monte Carlo simulations, we find that the linear polarization of photon can induce the highly angle-dependent pair yield and polarization distributions. The comprehensive knowledge of the polarized LBW process will also open up avenues for investigating the higher-order photon-photon scattering, the laser-driven quantum electrodynamic plasmas and the high-energy astrophysics. ## I Introduction The quantum electrodynamics (QED) theory predicts the interaction between photons, wherein the collision of two real \(\gamma\) photons can produce the \(e^{+}e^{-}\) pair, known as the linear Breit-Wheeler (LBW) process. The LBW process is the second-order process in QED [1], and its validation in terrestrial laboratories requires an ultrahigh-brilliance \(\gamma\)-ray beam with a peak cross-section of \(\sim 1.6\times 10^{-29}\) m\({}^{2}\)[2]. To date, the only verified interaction of real photons in experiments is multiphoton Breit-Wheeler pair production [3]. However, the production of LBW pairs via virtual photons has been demonstrated in high-energy collider experiments by utilizing the photons of a highly Lorentz-contracted Coulomb field [4]. The LBW process is a fundamental ingredient in high-energy astrophysics, playing a crucial role in the production of pair plasma in \(\gamma\)-ray bursts [5; 6; 7] and black hole activity [8; 9]. Furthermore, the LBW process can dominate the laser-driven QED plasmas, as demonstrated by numerical simulations [10; 11]. In recent years, several \(\gamma\gamma\) colliders have been proposed in the platform of laser-plasma interactions to produce LBW pairs [12; 13; 14; 15; 16; 17; 18; 19; 20]. Despite the calculation of the total LBW cross section \(\sigma_{\gamma\gamma}\) for increasing the luminosity of \(\gamma\gamma\) colliders, the photon polarization and \(e^{+}\) spins, which are the fundamental quantum nature of the photons and leptons [21], entail the exclusive characteristics of the LBW process [22; 23; 24; 25; 26; 2]. When utilizing two linearly polarized (LP) photons, \(\sigma_{\gamma\gamma}\) can be expressed in terms of mutual-parallel or mutual-perpendicular polarization vectors, and the non-polarized cross section can be obtained via a simple average over the two LP cross sections [22; 26]. 
The LBW process has been demonstrated in high-energy colliders by means of the equivalent photon approximation of ultra-relativistic heavy-ions, which presents the first measurement of the unique \(\cos 4\varphi\) modulation of the distinct differential cross section with two LP photons, where \(\varphi\) is the angle between the lepton pair transverse momentum and the individual lepton transverse momentum [27; 28]. Fundamentally, the angular modulations, both in polar and azimuthal directions, originate from the total helicity of the \(\Lambda_{\gamma}=0,\pm 2\) two-photon system, which must be transferred to the orbital angular momentum of the \(e^{+}e^{-}\) pair [2; 4]. Therefore, the LBW differential cross section can be extended to the fundamental level of helicity amplitudes to more deeply investigate the effects of photon polarization [29]. Moreover, to clarify the effects of arbitrary photon polarization, the polarization states of photons and pairs can be described by the corresponding density matrices, leading to the completely polarized LBW cross section in terms of Stokes parameters, which is applicable to perform the Monte Carlo (MC) simulation incorporating the beam effects [2; 30]. The collision of quasi-energetic circularly polarized (CP) \(\gamma\)-photon beams can produce distinct energy-angle spectra. When the colliding photons have the same right-hand or left-hand helicities, the collision produces the quadrupole angular spectrum imprinted by the longitudinal polarization. Conversely, the opposite helicities lead to the dipole one imprinted by the transverse polarization [2]. Although the effects of independent linear polarization or circular polarization have been elucidated, their coupling effects are still elusive when the colliding photons carry both partial circular and linear polarization. In this paper, we undertake an investigation of the full angle dependence in the polarized LBW process using both analytical cross-section and MC numerical simulation. By
2306.05600
Comment on the feasibility of carbon burning in Betelgeuse: a response to "The evolutionary stage of Betelgeuse inferred from its pulsation periods," arXiv:2306.00287
The recent pre-print by Saio et al. 2023 argues that the supergiant Betelgeuse is already undergoing carbon burning, based on the assumption that all of its light variations are caused by radial pulsations. However, the angular diameter measurements of the star are in conflict with the stellar radius required by their models, as we show in this note. We discuss the feasibility that the Great Dimming was caused by constructive mode interference using long-term brightness measurements and comment on differences in modeling frameworks adopted in Saio et al. 2023 vs Joyce et al. 2020.
László Molnár, Meridith Joyce, Shing-Chi Leung
2023-06-09T12:44:19Z
http://arxiv.org/abs/2306.05600v1
Comment on the feasibility of carbon burning in Betelgeuse: a response to "The evolutionary stage of Betelgeuse inferred from its pulsation periods", arXiv:2306.00287 ###### Abstract The recent pre-print1 by Saio et al. (2023) argues that the supergiant Betelgeuse is already undergoing carbon burning, based on the assumption that all of its light variations are caused by radial pulsations. However, the angular diameter measurements of the star are in conflict with the stellar radius required by their models, as we show in this note. We discuss the feasibility that the Great Dimming was caused by constructive mode interference using long-term brightness measurements and comment on differences in modeling frameworks adopted in Saio et al. (2023) vs Joyce et al. (2020). Footnote 1: version v1 published to arxiv on June 3rd, 2023. Laszlo Molnar, Meridith Joyce, Shing-Chi Leung ## 1 Introduction A recent pre-print by Saio et al. (2023) postulates that Betelgeuse is already in the carbon-burning phase and is therefore close to a supernova explosion. Crucially, they set the constraint that the Long Secondary Period (LSP) in Betelgeuse's light variations is in fact the fundamental radial mode (FM). LSPs are observed in many red giants and supergiants and are generally connected to external processes like dust and/or binarity (see, e.g., Wood & Nicholls, 2009; Soszynski et al., 2021), although internal processes like pulsation (Derekas et al., 2006; Saio et al., 2015) have also been proposed. The light variation of Betelgeuse is complex and comprises multiple signals. Usually the longest periodicity, at 2200 days, is considered to be an LSP with an unknown origin, whereas the 380-430 d period is identified as the fundamental radial _p_-mode. Further signals, such as the 185 d periodicity identified by Joyce et al. (2020) and discussed in Dupree et al. (2022) and MacLeod et al. (2023), are identified as overtones. Saio et al. (2023) instead propose that all signals are _p_-mode pulsations, and therefore the 2200 d period is the fundamental mode. This requires a much larger stellar radius--on the order of 1400 \(R_{\odot}\)--than those inferred by other studies, between 600-1000 \(R_{\odot}\)(Dolan et al., 2016). ## 2 Comments on observational constraints The diameter of Betelgeuse has been measured in several wavelengths. However, measurements do not necessarily detect the visible-light photosphere of the star directly. They are affected by factors like limb darkening, spots, molecular layers and circumstellar dust. Therefore the raw numbers cannot be used as a constraint for the stellar radius. Haubois et al. (2009) found a diameter of 44.3-46.74 mas in the _H_-band depending on modeling assumptions. Montarges et al. (2014) found a limb-darkened diameter of 42.28\(\pm\)0.43 mas, using continuum _K_-band observations that are the least affected by molecular layers. Other studies using near-infrared observations also place the photospheric diameter to between 42-45 mas (Dolan et al., 2016, and references therein). The picture gets more complicated in the mid-infrared. As Cannon et al. (2023) explains, the apparent stellar diameter peaks at around 11 \(\mu m\), where the contribution of hot circumstellar dust is the greatest. They specifically state that given the complexities in the mid-infrared emission of the star, "values for the disk diameter from \(\lambda>8.75\,\mu m\), should not be taken at face value."
We conclude therefore that the angular diameter of the visible-light photosphere of Betelgeuse has been constrained, by multiple authors, to be below 45 mas. This limits the physical radius to \(R<1100\,R_{\odot}\) even when using the farthest distance value of 222 pc (Harper et al., 2017), and gives greater validity to sub-\(1000\,R_{\odot}\) values based on most other distance measurements and inferences, some of which are summarized in Figure 10 of Joyce et al. (2020). These are in direct conflict with the radius required for a 2200-d pulsation period. In Fig. 1 we update the light curve published in Joyce et al. (2020) with new measurements. We can see the slow undulation caused by the LSP, but it is clear that the dimming far exceeded the extrema of its usual brightness variations. The LSP has always been present, and it was pronounced in the 1930s to 1960s. The long-term light curve therefore does not support the hypothesis that the 2200-d variation has been increasing recently and that the cause of the Great Dimming was constructive mode interference. That would cause a more even variation between cycles, as the synthesized light curves of Saio et al. (2023) themselves suggest. Instead, a sudden dimming can be naturally explained by the condensation of a dense dust cloud in our line of sight (Montarges et al., 2021). ## 3 Comments on Modeling Saio et al. (2023) attempt to explain the cause of the Great Dimming using a fully linear theoretical framework. Their models, like all of those computed in Joyce et al. (2020), do not capture behavior beyond the outer physical boundary of the star, yet many have argued that dust-gas interactions are necessary considerations. The need to invoke constructive interference of modes only exists if the cause of the Great Dimming was intrinsic, yet there is little ambiguity that the Great Dimming was caused by a dust cloud (Montarges et al., 2021). The right-hand panel of Figure 12 in Joyce et al. (2020) shows a synthetic period-radius relationship based on extracted cycle lengths from 1D hydrodynamic simulations (MESA). This relation does indeed associate an FM period of 2200 d to a much larger radius: roughly \(1600R_{\odot}\). Figure 1: Visual (grey) and photometric (blue: \(V\)-band; red: SMEI) light curves of Betelgeuse. Updated from Fig. 1 in Joyce et al. (2020). Likewise, the linear, adiabatic seismic calculations computed with GYRE in Joyce et al. (2020) are shown in terms of O1/FM ratio in their Figure 9. Based on this plot, Saio et al. (2023)'s O1/FM ratio of 0.19 would imply a present-day mass of more than \(24M_{\odot}\). While there are structural differences between helium- and carbon-burning models, these differences are interior, whereas the pressure modes probe the convective envelope. Key differences in the modeling formalisms are as follows: * Saio et al. (2023) calculate the perturbation of luminosity using (Eq. 2) \(\delta L/L=4\delta T_{\rm eff}/T_{\rm eff}+2\delta R/R\), which only applies in the condition that the photosphere is fixed, which is not the case for the extended and dilute atmosphere. * The non-adiabatic calculations in Saio et al. (2023) do not capture the non-linear density and temperature dependence of opacity, thus neglecting the complication of kappa mechanism and its mode excitation. * They use the non-adiabatic oscillation which linearly extends the adiabatic radial oscillation and includes heat transport perturbation from convection, whereas the Joyce et al. 
(2020) models employ fully consistent hydrodynamics. * Joyce et al. (2020) used hydrodynamics to model initial amplitude growth. Saio et al. (2023) assumed a mode amplitude growth profile. Nonetheless, it is our stance that the main source of tension between our earlier work and Saio et al. (2023) stems from their treating the LSP as a radial pulsation and applying inadequate observational constraints to justify that assumption. _Acknowledgements:_ This research was supported by the 'SeismoLab' KKP-137523 Elvonal grant of the NKFIH. M.J. acknowledges funding from the European Union's Horizon 2020 research and innovation programme. Twitter discussions with Dr. Miguel Montarges, Dr. Thayne Currie and Prof. Chris Lintott are gratefully acknowledged. _Facilities:_ AAVSO
2310.09904
Structural and physical properties of the chiral antiferromagnet CeRhC$_2$
We report a study of the structural, magnetic, transport, and thermodynamic properties of polycrystalline samples of CeRhC$_2$. CeRhC$_2$ crystallizes in a tetragonal structure with space group $P4_1$ and it orders antiferromagnetically below $T_\textrm{N1} \approx$ 1.8 K. Powder neutron diffraction measurements reveal a chiral magnetic structure with a single propagation vector $Q_m = (1/2,1/2,0.228(5))$, indicating an antiferromagnetic arrangement of Ce magnetic moments in the $ab$-plane and incommensurate order along the $c$-axis with a root-mean-square ordered moment of $m_\textrm{ord}$= 0.68 $\mu_\textrm{B}$/Ce. Applying a magnetic field suppresses the N\'{e}el temperature $T_\textrm{N1}$ to zero near $\mu_0H_\textrm{c1}\sim$0.75 T. A second antiferromagnetic phase ($T_\textrm{N2}$), however, becomes apparent in electrical resistivity, Hall and heat capacity measurements in fields above 0.5 T and extrapolates to zero temperature at $\mu_0H_\textrm{c2}\sim$ 1 T. Electrical resistivity measurements reveal that LaRhC$_2$ is a semiconductor with a bandgap of $E_\textrm{g}\sim24$ meV; whereas, resistivity and Hall measurements indicate that CeRhC$_2$ is a semimetal with a low carrier concentration of $n\sim10^{20}$ cm$^{-3}$. With applied hydrostatic pressure, the zero-field antiferromagnetic transition of CeRhC$_2$ is slightly enhanced and CeRhC$_2$ becomes notably more metallic up to 1.36 GPa. The trend toward metallicity is in line with density-functional calculations that indicate that both LaRhC$_2$ and CeRhC$_2$ are semimetals, but the band overlap is larger for CeRhC$_2$, which has a smaller unit cell volume that its La counterpart. This suggests that the bandgap closes due to a lattice contraction when replacing La with Ce in RRhC$_2$ (R = rare-earth), in agreement with experimental results.
Yu Liu, M. O. Ajeesh, A. O. Scheie, C. R. dela Cruz, P. F. S. Rosa, S. M. Thomas, J. D. Thompson, F. Ronning, E. D. Bauer
2023-10-15T18:10:52Z
http://arxiv.org/abs/2310.09904v1
# Structural and physical properties of the chiral antiferromagnet CeRhC\({}_{2}\) ###### Abstract We report a study of the structural, magnetic, transport, and thermodynamic properties of polycrystalline samples of CeRhC\({}_{2}\). CeRhC\({}_{2}\) crystallizes in a tetragonal structure with space group \(P4_{1}\) and it orders antiferromagnetically below \(T_{\rm N1}\approx 1.8\) K. Powder neutron diffraction measurements reveal a chiral magnetic structure with a single propagation vector \(Q_{m}=(1/2,1/2,0.228(5))\), indicating an antiferromagnetic arrangement of Ce magnetic moments in the \(ab\)-plane and incommensurate order along the \(c\)-axis with a root-mean-square ordered moment of \(m_{\rm ord}\)\(=0.68\)\(\mu_{\rm B}\)/Ce. Applying a magnetic field suppresses the N\(\acute{e}\)el temperature \(T_{\rm N1}\) to zero near \(\mu_{0}\)\(H_{\rm c1}\)\(\sim 0.75\) T. A second antiferromagnetic phase (\(T_{\rm N2}\)), however, becomes apparent in electrical resistivity, Hall and heat capacity measurements in fields above 0.5 T and extrapolates to zero temperature at \(\mu_{0}\)\(H_{\rm c2}\)\(\sim 1\) T. Electrical resistivity measurements reveal that LaRhC\({}_{2}\) is a semiconductor with a bandgap of \(E_{\rm g}\)\(\sim\) 24 meV; whereas, resistivity and Hall measurements indicate that CeRhC\({}_{2}\) is a semimetal with a low carrier concentration of \(n\sim 10^{20}\) cm\({}^{-3}\). With applied hydrostatic pressure, the zero-field antiferromagnetic transition of CeRhC\({}_{2}\) is slightly enhanced and CeRhC\({}_{2}\) becomes notably more metallic up to 1.36 GPa. The trend toward metallicity is in line with density-functional calculations that indicate that both LaRhC\({}_{2}\) and CeRhC\({}_{2}\) are semimetals, but the band overlap is larger for CeRhC\({}_{2}\), which has a smaller unit cell volume that its La counterpart. This suggests that the bandgap closes due to a lattice contraction when replacing La with Ce in RRhC\({}_{2}\) (R = rare-earth), in agreement with experimental results. ## I Introduction Symmetry\(-\) or the lack of it\(-\) plays an important role in dictating the properties of materials [1]. Chiral materials with crystal structures that do not have inversion, mirror, or roto-inversion symmetries often show exotic quantum phenomena [2]. For example, the chiral material CoSi, which crystallizes in the cubic \(B20\) structure (space group \(P2_{1}3\)), hosts unconventional chiral fermions with a large topological charge that is connected by giant Fermi arcs [3; 4]. Likewise, isostructural MnSi exhibits an electrical magnetochiral effect (EMC), i.e., the non-reciprocal transport of conduction electrons depending on the inner product of current and magnetic field [5]. Moreover, broken inversion symmetry in chiral magnetic materials allows for the Dzyaloshinskii-Moriya (DM) interaction [6; 7], which tends to induce canting of the spins away from a collinear arrangement. The competition between the DM and other magnetic interactions that favor collinear spins may lead to modulated spin structures, such as chiral helimagnetic order, conical structures, chiral soliton lattices [8; 9; 10], and chiral magnetic skyrmion lattices [11; 12; 13]. To realize these emergent quantum states and to understand the underlying physics that dictates their properties, exploring new materials that might host these states is highly desired. 
The ternary rare-earth carbides RTC\({}_{2}\) (R = rare-earth metal, T = transition metal) are interesting because they exhibit a diversity of inversion and/or time-reversal symmetry breaking phenomena [14; 15; 16; 17; 18; 19; 20], and are potential candidates for the above quantum states. The RNiC\({}_{2}\) (R=rare earth) compounds crystallize in the CeNiC\({}_{2}\)-type orthorhombic structure with the \(Amm2\) space group that lacks inversion symmetry. RNiC\({}_{2}\) materials have been widely studied and exhibit a variety of magnetic ground states [21; 22; 23], superconductivity (SC) [24; 25; 26; 27] that includes the possibility of spin-triplet superconductivity in CeNiC\({}_{2}\) under pressure [28], and a complex interplay between these phases and charge density waves (CDW) [29; 30; 31; 32; 33; 34; 35]. CeNiC\({}_{2}\) is an example of the complex interplay of degrees of freedom in the RNiC\({}_{2}\) series. CeNiC\({}_{2}\) undergoes three successive magnetic phase transitions, from a paramagnetic (PM) to an incommensurate antiferromagnetic (AFM) phase at \(\sim\) 20 K, a commensurate AFM phase below 10 K and finally to a ferromagnetic/fermagnetic state below \(\sim\) 2 K [36; 37; 38]. RCoC\({}_{2}\) compounds adopt the same CeNiC\({}_{2}\)-type structure for heavy rare-earth members (R = Gd-Lu) and exhibit ordering temperatures comparable to RNiC\({}_{2}\) with the same rare-earth metal [39; 40]. However, compounds with light rare-earth (R = La-Nd) members adopt the monoclinic CeCoC\({}_{2}\)-type structure with the \(C_{1}c_{1}\) space group, which is a distorted modification of the CeNiC\({}_{2}\)-type structure involving tilting of the C\({}_{2}\) pairs [23]. CeCoC\({}_{2}\) is a Kondo-lattice material with a characteristic Kondo temperature \(T_{\rm K}\)\(\sim\) 30 K [41]. Replacing Ni or Co with larger Rh produces RRhC\({}_{2}\), which crystallizes in a tetragonal CeRhC\({}_{2}\)-type structure for R = La, Ce with the chiral \(P4_{1}\) space group, but adopts the CeNiC\({}_{2}\) structure-type for R = Pr-Sm [15]. CeRhC\({}_{2}\) forms in an interesting chiral structure with Ce atoms forming a helix along the \(c\)-axis. Aside from structural details, the physical properties of CeRhC\({}_{2}\) have not been investigated in detail, especially at low temperatures [42]. In this paper, we report a detailed study on polycrystalline samples of CeRhC\({}_{2}\), along with its nonmagnetic analog LaRhC\({}_{2}\), by dc magnetic susceptibility, isothermal magnetization, heat capacity, longitudinal and Hall resistivity, and powder neutron diffraction measurements. ## II Experimental details Polycrystalline samples of LaRhC\({}_{2}\) and CeRhC\({}_{2}\) were prepared by arc melting high purity La or Ce, Rh, and C mixtures with a total mass \(\sim\) 1 g on a water-cooled copper hearth under an argon atmosphere with a Zr getter. A stoichiometric ratio of La:Rh:C=1:1:2 and a ratio of Ce:Rh:C = 1:0.9:2 or 1:1:2.4 produced the highest quality samples of LaRhC\({}_{2}\) and CeRhC\({}_{2}\), respectively, with the least amount of impurities. The samples were wrapped in tantalum foil, sealed in an evacuated silica tube, and then annealed for two weeks at 1000 \({}^{\circ}\)C for LaRhC\({}_{2}\). An annealing temperature of 1100 \({}^{\circ}\)C yielded higher quality CeRhC\({}_{2}\), and small single crystals (\(\sim\)50 microns) were obtained when annealed at 1200 \({}^{\circ}\)C. 
Powder X-ray diffraction (XRD) patterns were collected for two hours for each using a Malvern Panalytical Empyrean diffractometer in the Bragg-Brentano geometry using Cu K-\(\alpha\) radiation. Single crystal XRD of CeRhC\({}_{2}\) was collected at room temperature on a \(\sim\) 10-\(\mu\)m single crystal by a Bruker D8 Venture single-crystal X-ray diffractometer equipped with Mo radiation at room temperature. Magnetization measurements were performed in a Quantum Design Magnetic Property Measurement System (MPMS) from 1.8 to 350 K and magnetic fields up to 6.5 T. Specific heat measurements were performed using a Quantum Design Physical Property Measurement System (PPMS) from 0.35 to 20 K and magnetic fields up to 9 T that utilizes a quasi-adiabatic thermal relaxation method. Longitudinal resistivity and Hall resistivity measurements on a parallelopiped cut from an arc-melted button were carried in a PPMS using a standard four-probe configuration with an ac resistance bridge (Lake Shore, model 372). If the voltage contacts were misaligned, it was possible to determine the Hall resistivity by taking the difference of transverse resistivity measured at positive and negative fields, i.e., \(\rho_{\text{xy}}=(\rho_{\text{H}+}-\rho_{\text{H}}\).)/2, to eliminate the longitudinal resistivity contribution. Resistivity measurements under hydrostatic pressure were carried out using a double-layered piston-cylinder-type pressure cell with Daphne 7373 oil as the pressure-transmitting medium. The pressure inside the sample space was determined at low temperatures by the shift of the superconducting transition temperature of a piece of lead. Neutron diffraction experiments were carried out at the HB2A powder diffractometer [43] at Oak Ridge National Laboratory\({}^{\prime}\)s HFIR. A 6 g powder sample of CeRhC\({}_{2}\) was mounted in an aluminum can with a copper lid, pressurized under 10 bar helium gas, and placed in a \({}^{3}\)He refrigerator. The diffraction pattern with neutron wavelength \(\lambda=2.41\) A was collected at 4 K and at 0.3 K for 16 hours at each temperature. Density functional theory (DFT) calculations were performed using the Perdew, Burke, and Ernzerhof exchange correlation functional [44] and included spin-orbit coupling without relativistic local orbitals through a second variational method using the WIEN2K code [45]. For CeRhC\({}_{2}\), the Ce f-electron was treated as an open core (localized) state. ## III Results and discussion The crystal structure of LaRhC\({}_{2}\) and CeRhC\({}_{2}\), first determined by Tsokol\({}^{\prime}\)_et al._[15], is comprised of trigonal prisms of rare-earth atoms that are filled by Rh atoms and C\({}_{2}\) pairs [16]. Compared to the CeNiC\({}_{2}\)-type structure with two-dimensional (2D) C\({}_{2}\) sheets, LaRhC\({}_{2}\) and CeRhC\({}_{2}\) crystallize in a chiral structure with a more three-dimensional (3D) network resulting from tilting half of the C\({}_{2}\) pairs [47]. Figure 1(a) depicts the chiral structure of LaRhC\({}_{2}\) and CeRhC\({}_{2}\) from both side and top views with a clear fourfold screw axis along the \(c\) direction. Figure 1(b) shows the powder XRD patterns of LaRhC\({}_{2}\) and CeRhC\({}_{2}\); peaks can be well indexed in the \(P4_{1}\) space group for the majority phase but a small amount of impurities (\(\leq 7\%\) total for LaRhC\({}_{2}\) and \(\leq 3\%\) total for CeRhC\({}_{2}\)) also could be identified. Bragg peaks of impurities along with their volume percentages are included in Fig. 1(b). 
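As a minimal sketch (ours, not the authors' analysis code) of the field antisymmetrization described in the experimental details above, the Hall signal can be isolated by combining transverse-resistivity sweeps taken at positive and negative fields; the array names below are hypothetical.

```python
import numpy as np

def hall_resistivity(rho_trans_pos_H, rho_trans_neg_H):
    """Antisymmetrize: rho_xy = (rho(+H) - rho(-H)) / 2, removing the
    field-symmetric longitudinal pick-up from misaligned voltage contacts."""
    return 0.5 * (np.asarray(rho_trans_pos_H) - np.asarray(rho_trans_neg_H))

# Toy check: a Hall signal odd in field plus a contact-misalignment leak even in field
H = np.linspace(0.0, 9.0, 10)                      # field in tesla
true_rho_xy = -2.0e-9 * H                          # electron-like (negative) Hall response
leak = 5.0e-9 * (1.0 + 0.01 * H**2)                # longitudinal contamination, even in H
print(np.allclose(hall_resistivity(true_rho_xy + leak, -true_rho_xy + leak),
                  true_rho_xy))                    # True
```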
Rietveld refinement determined lat Figure 1: (Color online). (a) Crystal structure from both side and top views and (b) powder x-ray diffraction (XRD) patterns of LaRhC\({}_{2}\) and CeRhC\({}_{2}\). The peak marked with \(*\) represents an unknown impurity phase. tice parameters are \(a\) = 3.970(1) A and \(c\) = 15.331(1) A for LaRhC\({}_{2}\). The unit cell decreases to \(a\) = 3.932(1) A and \(c\) = 15.305(1) A for CeRhC\({}_{2}\), indicating a volume contraction \(\sim\) 2.1% relative to the La analog. Powder XRD parameters match well with \(a\) = 3.93 A and \(c\) = 15.29 A obtained from single crystal XRD of CeRhC\({}_{2}\) and are close to the published lattice parameters [16]. Figure 2(a) shows the temperature dependence of the dc magnetic susceptibility \(\chi\)(T)=\(M/\mu_{0}H\) measured in a field \(\mu_{0}H\) = 0.1 T. A peak at \(T_{\rm N}\)\(\approx\) 2.1 K is observed, in line with a previous report [42], reflecting an AFM transition. The temperature-dependent inverse susceptibility 1/\(\chi\)(T) is plotted in the inset of Fig. 2(a). A linear fit from 200 to 350 K to a Curie-Weiss law, \(\chi=C/(T-\theta)\), where \(C\) is the Curie constant and \(\theta\) is the paramagnetic Curie-Weiss temperature, yields \(\theta\) = -121 K, but this does not reflect the sign of the magnetic exchange because of crystal field effects. The derived effective moment \(\mu_{\rm eff}\) = 2.60 \(\mu_{\rm B}\)/Ce is close to the Ce\({}^{3+}\) free-ion moment of 2.54 \(\mu_{\rm B}\). Below about 150 K, \(\chi\)(T) deviates from Curie-Weiss behavior, most likely due to the effect of the crystalline electric field (CEF) splitting of the \(J=5/2\) manifold. A crude estimate of the energy \(\Delta_{1}\) of the first excited CEF doublet is estimated by fitting \(\chi\)(T) to \(\chi\) = \(p_{0}\chi_{1}+p_{\Delta_{1}}\chi_{2}\), where \(p_{n}\) is the Boltzmann population factor and \(\chi_{\rm n}\) is the Curie-Weiss susceptibility of the ground and excited multiplet. This function reproduces the temperature dependence of \(\chi\)(T) of CeRhC\({}_{2}\) (not shown) with \(\Delta_{1}\) = 27 meV and high- (low-) temperature effective moment of \(\mu_{\rm eff}\) = 2.4 \(\mu_{\rm B}\) (1.7 \(\mu_{\rm B}\)). Therefore, the ground state doublet is well separated from the first excited CEF doublet. Figure 2(b) displays the isothermal magnetization \(M\)(H) measured at \(T\) = 1.8 K along with its first derivative \(dM/dH\) in the inset. Above about 1 T, \(M\)(H) starts to saturate and reaches \(\sim\) 0.75 \(\mu_{\rm B}\)/Ce at 6.5 T. As discussed later, this high-field value of \(M\) is close to the value of the ordered moment determined by neutron diffraction. \(M\)(H) does not follow a simple Brillouin function, which is reflected in a maximum in \(dM/d\mu_{0}H\) near 0.4 T. On the basis of other results discussed below, this maximum is attributed to a field-induced transition to a different magnetic state at \(T_{\rm N2}\). Finally, a weak anomaly occurs in \(dM/dH\) near 3.75 T as well as in \(\chi\)(T) and \(C_{\rm p}/T\) [Fig. 2(c)] near \(T\)\(\sim\) 30 K. These features are due to a small amount (\(<\) 2%) of magnetic CeC\({}_{2}\) impurity [48; 49], which orders antiferromagnetically at \(T_{\rm N}\) = 30 K and was also present in CeNiC\({}_{2}\)[37]. The magnetic entropy associated with the magnetic ordering of CeC\({}_{2}\) near \(\sim\) 30 K is about 72 mJ/mol K, suggesting \(\sim\) 0.5 at.% of CeC\({}_{2}\) in the CeRhC\({}_{2}\) sample. Figure 2: (Color online). 
(a) Temperature-dependent magnetic susceptibility \(\chi\)(T) of CeRhC\({}_{2}\) measured in a magnetic field of \(\mu_{0}H\) = 0.1 T in zero-field-cooled (ZFC) mode. The inset of (a) shows the temperature-dependent inverse magnetic susceptibility 1/\(\chi\)(T) and a linear fit from 200 to 350 K. (b) Field dependence of magnetization \(M\)(\(\mu_{0}\)H) of CeRhC\({}_{2}\) at \(T\) = 1.8 K. The inset plots the field derivative of \(M\)(\(\mu_{0}\)H) and reveals a second magnetic transition at \(T_{\rm N2}\). A weak feature near 3.75 T is due to a second phase as discussed in the text. (c) Temperature dependence of specific heat \(C_{\rm p}/T\) of LaRhC\({}_{2}\) and CeRhC\({}_{2}\) in zero field. The inset of (c) shows the 4\(f\) contribution to the entropy \(S_{4f}\)(T)/Rln2 of CeRhC\({}_{2}\). (d) Temperature dependence of electronic specific heat \(\Delta C_{\rm p}/T\) in various magnetic fields. Figure 3: (Color online). (a) Powder neutron diffraction pattern of CeRhC\({}_{2}\) at 0.3 and 4 K, with the temperature-subtracted (0.3 K - 4 K) data shown in green below. Several (magnetic) Bragg peaks appear at low temperatures. (b) The temperature scan of the largest magnetic Bragg intensity at \(Q\) = 1.13 Å\({}^{-1}\), showing the onset of intensity is the same as the bulk anomalies. The specific heat divided by temperature, \(C_{\rm p}/T\) [Fig. 2(c)], shows a pronounced peak at \(T_{\rm N1}=1.8\) K, which is absent in the nonmagnetic, isostructural compound LaRhC\({}_{2}\). The magnetic entropy \(S_{4f}/\)Rln2 [Fig. 2(c) inset] is determined by integrating the \(4f\) contribution to the specific heat \(S_{4f}=\int{(\Delta C_{\rm p}/T)dT}\), where \(\Delta C_{\rm p}\) is the \(4f\) contribution obtained by subtracting the specific heat of LaRhC\({}_{2}\) from the measured specific heat of CeRhC\({}_{2}\). Only about 33%Rh2 of entropy is released below \(T_{\rm N1}\). While the Kondo effect could cause a significant reduction in the entropy released below \(T_{\rm N1}\), the Kondo effect appears to be weak in CeRhC\({}_{2}\), given that the \(C_{\rm p}/T\) as \(T\to 0\) in the AFM phase is small (\(<10\) mJ/mol\({}^{-1}\)K\({}^{-2}\) extrapolated from 0.35 K), and the carrier concentration is low (see below). This suggests that magnetic frustration may be responsible for the small amount of entropy released below \(T_{\rm N1}\). Figure 2(d) presents the temperature dependence of magnetic contribution \(\Delta C_{\rm p}/T\) of CeRhC\({}_{2}\) in various applied magnetic fields. The symmetric-like shape of \(\Delta C_{\rm p}/T\) at the AFM transition in zero field may indicate a first-order transition. To check for this possibility, a heat pulse applied just below the transition temperature to raise the temperature through the AFM transition revealed a small anomaly that may be consistent with latent heat from a first-order transition [50]; however, no hysteresis was observed. We speculate that the transition may be weakly first-order, but further measurements are required to reach a firm conclusion regarding the nature of the transition. For \(H<1\) T [Fig. 2(d) inset], \(T_{\rm N1}\) is gradually suppressed but a second anomaly, labelled \(T_{\rm N2}\), emerges in a field of 0.5 T. Both \(T_{\rm N1}\) and \(T_{\rm N2}\) move to lower temperatures with increasing field and become undetectable at fields of 1 T and greater. 
Instead of well-defined transitions, a broad maximum develops in \(\Delta C_{\rm p}/T\) around 1.6 K at \(\mu_{0}H=1\) T, and it continues to shift to higher temperature and broaden with increasing field. To shed light on the magnetic structure below \(T_{\rm N1}=1.8\) K in zero field, powder neutron diffraction data were collected at 0.3 and 4 K. As displayed in Fig. 3(a), new Bragg peaks appear below \(T_{\rm N1}\), which is illustrated by the difference curve of 0.3 and 4 K data. The intensity of the strongest magnetic Bragg peak versus temperature [Fig. 3(b)] shows an onset of \(\sim 2\) K, consistent with the AFM transition at \(T_{\rm N1}\) observed in bulk magnetic susceptibility and specific heat measurements. We therefore associate these temperature-dependent peaks with the onset of magnetic long-range order. The magnetic Bragg peaks can all be indexed by a single propagation vector \(Q_{m}=(1/2,1/2,0.228(5))\), indicating commensurate antiferromagnetic order in the \(ab\)-plane, and incommensurate magnetic order along the \(c\)-axis. To examine the magnetic structure in more detail, a Rietveld refinement was performed using the FULPROF software package [51], first to the 4 K nuclear scattering pattern to determine the normalization and resolution parameters, and then to fit the temperature-subtracted data as shown in Fig. 4. First, a magnetic refinement was carried out using an irreducible representation decomposition based on the derived propagation vector via the BasIrpes program in FULLPROF. This approach yielded four possible irreducible representations, only one of which (\(\Gamma 1\)) fit the observed diffraction pattern with a reduced \(\chi^{2}\approx 1\). However, given the sharpness of the heat capacity peak at \(T_{\rm N1}\), it is possi Figure 5: (Color online). Temperature dependence of electrical resistivity \(\rho\)(T) of (a) LaRhC\({}_{2}\) and (b) CeRhC\({}_{2}\) (two samples from the same batch are labeled as S1 and S2) in zero field. The inset in (a) shows an activated behavior, ln\(\rho\) versus \(1/T\), along with a linear fit between 100-300 K. (c,d) Electrical resistivity \(\rho\)(T) for CeRhC\({}_{2}\) (S1) at low temperatures for several magnetic fields. Figure 4: (Color online). Rietveld refinement of the nuclear (a) and magnetic (b) structure from powder neutron diffraction measurements of CeRhC\({}_{2}\). Panel (a) shows several peaks which have anomalous intensity, which could possibly be due to additional impurity phases. The temperature-subtracted data between 4 and 0.3 K in (b) was fit by the \(\Gamma 1\) irrep magnetic structure (see text), shown in panel (c). ble that the magnetic transition is first-order, in which case the magnetism need not be of a single irrep. In that case, there are a variety of statistically indistinguishable magnetic structures which fit the data, including spiral structures and uniaxial moment-modulated sinusoidal order (see Figs. 10 and 11 in APPENDIX A). Although the precise nature of the \(c\)-axis modulation can not be uniquely determined from the neutron powder diffraction data, the magnetic ground state clearly has: (i) commensurate AFM order in the \(ab\)-plane, (ii) (mostly) coplanar moments (a slight out-of-plane canting is possible), (iii) incommensurate magnetism along the \(c\)-axis, and (iv) a root-mean-square (RMS) ordered moment of \(m_{\rm ord}\)=0.682(7) \(\mu_{\rm B}\), close to the value obtained in the magnetization measurement [Fig. 2(b)]. 
These observations are true of all refined magnetic structures. Despite the ambiguity in the nature of the \(c\)-axis modulation, the CeRhC\({}_{2}\) magnetically ordered state at zero field is chiral, as the possible magnetic structures all lack inversion and mirror symmetry. Having delineated the salient features of the complex magnetic structure of CeRhC\({}_{2}\), we now turn to an investigation of electrical transport properties. Figure 5(a) displays the temperature dependence of the electrical resistivity \(\rho\)(T) of LaRhC\({}_{2}\), which exhibits typical semiconducting behavior. The resistivity may be fit to an activated behavior, given by \(\rho(T)=\rho_{0}\)exp(\(E_{\rm g}/k_{\rm B}T\)), from 100 to 300 K [Fig. 5(a) inset], and the fit yields a small gap of \(E_{\rm g}\sim 24\) meV, consistent with a previous result of \(\approx\) 18 - 32 meV [16]. In contrast, samples of polycrystalline CeRhC\({}_{2}\) from the same batch show metallic behavior with similar features [Fig. 5(b)]; a broad hump between 100 and 150 K, a minimum around \(T_{\rm min}\sim\) 40 K, and a cusp at \(T_{\rm N1}\) = 1.9 K. It should be noted that the difference of absolute values of \(\rho\)(T) may be related to the relative amount of metallic impurities as stronger metallic behavior was observed (not shown) in samples with larger amounts of metallic impurities (e.g., CeC\({}_{2}\)[52], CeRh\({}_{2}\)[53], CeRh\({}_{3}\)C\({}_{0.7}\)[54]). With increasing magnetic field, \(T_{\rm N1}\) shifts to lower temperatures [Fig. 5(c)]. A second anomaly \(T_{\rm N2}\) emerges just above \(T_{\rm N1}\sim\) 1.25 K in a magnetic field of 0.5 T. The \(T_{\rm N1}\) transition disappears and an upturn appears at \(T_{\rm N2}\approx\) 1.3 K in a magnetic field of 0.7 T [Fig. 5(d)], in agreement with specific heat measurements [Fig. 2(d) inset]. At low temperatures, a large negative magnetoresistance is observed, reaching about -48% for higher fields (\(>\) 1 T) at 2 K. The magnetoresistance becomes positive for temperatures above \(\sim\) 20 K. In the absence of evidence for a substantial Kondo effect in CeRhC\({}_{2}\), we would expect, and indeed find, a weak, positive response of magnetic order to applied pressure. Figure 6 shows the evolution of \(\rho\)(T) at low temperature with increasing hydrostatic pressures up to Figure 6: (Color online). Temperature dependence of electrical resistivity \(\rho\)(T) for CeRhC\({}_{2}\) (S2) under various pressures up to 1.36 GPa.(No offset applied to the data.) Figure 7: (Color online). (a) The field dependence of Hall resistivity \(\rho_{\rm xy}\)(\(\mu_{\rm 0}\)H) at various temperatures for CeRhC\({}_{2}\). The inset shows a magnification of the low field data. (b) The derived electron concentration \(n\) by a linear fit to the \(\rho_{\rm xy}\)(\(\mu_{\rm 0}\)H) data from 3 to 9 T. 1.36 GPa. At 0.02 GPa, the data resemble those at ambient pressure [Fig. 5(b)], exhibiting an AFM transition at \(T_{\rm N1}\approx 1.79\) K as determined from the peak value of \(d\rho/dT\) (see APPENDIX B). With increasing pressure, the transition broadens and \(T_{\rm N1}\) increases linearly with a slope of 0.043 K/GPa, reaching \(T_{\rm N1}\sim 1.85\) K at 1.36 GPa. As shown in Fig. 12 in APPENDIX B, a weak shoulder in \(d\rho/dT\) emerges just above the \(T_{\rm N1}\) at 1.08 and 1.36 GPa, pointing to a pressure-induced transition in CeRhC\({}_{2}\). Possibly, this second transition is related or identical to the field-induced transition \(T_{\rm N2}\), but this remains to be established. 
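For concreteness, here is a minimal sketch (on synthetic data, not the authors' fitting script) of the activated fit quoted earlier for semiconducting LaRhC\({}_{2}\), \(\rho(T)=\rho_{0}\exp(E_{\rm g}/k_{\rm B}T)\), extracted as a straight line in \(\ln\rho\) versus \(1/T\) over the 100-300 K window.

```python
import numpy as np

k_B = 8.617333e-5   # Boltzmann constant in eV/K

def activation_gap(T, rho):
    """Fit ln(rho) = ln(rho0) + E_g/(k_B*T); return the gap E_g in eV and rho0."""
    slope, intercept = np.polyfit(1.0 / T, np.log(rho), 1)
    return slope * k_B, np.exp(intercept)

# Synthetic resistivity with a 24 meV gap over the 100-300 K fitting window
T = np.linspace(100.0, 300.0, 50)                  # kelvin
rho = 1.0e-3 * np.exp(0.024 / (k_B * T))           # ohm cm
E_g, rho0 = activation_gap(T, rho)
print(f"E_g = {E_g * 1e3:.1f} meV")                # recovers ~24 meV
```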
Nevertheless, what is rather remarkable about the data plotted in Fig. 6 is the overall decrease in the magnitude of \(\rho\)(T) with applied pressure, decreasing by about 0.5 m\(\Omega\) cm at 1.36 GPa from its ambient-pressure value. This trend with modest applied pressure follows that in going from semiconducting LaRhC\({}_{2}\) to poorly metallic CeRhC\({}_{2}\) as the cell volume decreases by \(\sim 2\%\). To determine the nature of transport carriers in CeRhC\({}_{2}\), the field dependence of Hall resistivity \(\rho_{\rm xy}(\mu_{0}\)H) was measured at various temperatures. \(\rho_{\rm xy}\) is almost linear with \(\mu_{0}H\) at high temperatures, as shown in Fig. 7(a). The Hall coefficient (\(R_{\rm H}=\rho_{\rm xy}/\mu_{0}H\)) is negative, indicating dominant electron-like carriers in CeRhC\({}_{2}\). With decreasing temperature, \(\rho_{\rm xy}\) becomes nonlinear at low fields (\(<1\) T), as magnified in the inset of Fig. 7(a), pointing to the likely multi-band nature of CeRhC\({}_{2}\). In the ordered state below 2 K, two sharp slope changes are clearly observed with a local minimum and maximum corresponding to \(T_{\rm N1}\) and \(T_{\rm N2}\), respectively. Assuming a simple single-band picture, a crude estimate of the carrier concentration \(n\) from \(R_{\rm H}=1/ne\), where \(e\) is the electron charge, is obtained from a linear fit to \(\rho_{\rm xy}\) between 3 and 9 T. The so-derived carrier concentration as a function of temperature is plotted in Fig. 7(b), where \(n\) decreases with decreasing temperature, reaches a minimum at \(T=40\) K and shows a rather weak temperature dependence at lower temperatures. In spite of the over-simplification in estimating \(n\), it seems likely that low values of carrier concentration are consistent with the semimetallic resistivity for CeRhC\({}_{2}\), which also features a local minimum around 40 K [Fig. 5(b)]. Transport properties of LaRhC\({}_{2}\) and CeRhC\({}_{2}\) are consistent with DFT band structure calculations of both compounds (Fig. 8). Due to the lack of inversion symmetry and the presence of spin-orbit coupling, all bands are singly degenerate except when they cross at time-reversal invariant momenta as well as the \(k_{z}=\pi/c\) plane. Both compounds show a weak overlap between conduction and valence bands, which indicates compensated semimetallic behavior, with the band overlap notably stronger for CeRhC\({}_{2}\) than for LaRhC\({}_{2}\). Given that DFT calculations are known to overestimate bandwidths, it is anticipated that a reduction of the bandwidth would first open a small gap in LaRhC\({}_{2}\), consistent with the experimental observation that LaRhC\({}_{2}\) is semiconducting, while CeRhC\({}_{2}\) is a semimetal. Such corrections could be accomplished by including a modified Becke-Johnson (BJ) potential [55], but are outside the scope of the present manuscript. We summarize results of bulk properties measurements on CeRhC\({}_{2}\) in the temperature-magnetic field phase diagram given in Fig. 9. The solid and open symbols represent the transition into the AFM1 phase at \(T_{\rm N1}\) and the transition into the AFM2 phase at \(T_{\rm N2}\), respectively. Though neutron diffraction measurements reveal a chiral spin structure for AFM1, the nature of \(T_{\rm N2}\) remains unknown. Muon-spin rotation or neutron diffraction measurements in an applied field would provide use Figure 8: (Color online). Band structure calculated with spin-orbit coupling (SOC) for LaRhC\({}_{2}\) and CeRhC\({}_{2}\). 
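A short sketch (ours, with synthetic numbers) of the single-band estimate used above: the Hall coefficient is taken from a linear fit of \(\rho_{\rm xy}\) between 3 and 9 T and converted to a carrier density via \(n=1/(|R_{\rm H}|e)\).

```python
import numpy as np

e = 1.602176634e-19   # elementary charge in coulombs

def carrier_density(mu0H, rho_xy):
    """Single-band estimate: R_H = slope of rho_xy(B) in m^3/C, n = 1/(|R_H|*e) in m^-3."""
    R_H = np.polyfit(mu0H, rho_xy, 1)[0]
    return 1.0 / (abs(R_H) * e), R_H

# Synthetic high-field data for electron-like carriers at n = 1e20 cm^-3 (= 1e26 m^-3)
B = np.linspace(3.0, 9.0, 13)                      # tesla
rho_xy = (-1.0 / (1.0e26 * e)) * B                 # ohm m, negative (electron-like) slope
n, R_H = carrier_density(B, rho_xy)
print(f"n = {n / 1e6:.2e} cm^-3, electron-like: {R_H < 0}")
```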
Figure 9: (Color online). The temperature-magnetic field (\(T-\mu_{0}H\)) phase diagram constructed from bulk properties for CeRhC\({}_{2}\). The PM and AFM represent paramagnetic and antiferromagnetic phases, respectively. The lines are guides to the eye. ful information about the nature of the field-induced AFM2 phase. The growth and study of single crystalline CeRhC\({}_{2}\) will be useful for providing insight into the competition of magnetic interactions that leads to the stabilization of the chiral AFM1 phase and magnetic frustration in this interesting material. ## Conclusions Our study of magnetic, transport, and thermodynamic properties of polycrystalline samples of chiral CeRhC\({}_{2}\) show that it orders in a chiral magnetic structure in zero magnetic field and that another AFM phase appears at fields above \(\sim 0.5\) T. The nonmagnetic analog LaRhC\({}_{2}\) is semiconducting with a small bandgap that closes when replacing La with Ce. CeRhC\({}_{2}\) exhibits semi-metallic behavior with a low carrier concentration. Pressure appears to increase the band overlap in CeRhC\({}_{2}\) causing it to become more metallic up to 1.36 GPa, and reveals a possible second phase transition above \(\sim 1\) GPa. In future studies, it will be particularly interesting to search for exotic quantum states that arise from the chiral crystal and magnetic structures of CeRhC\({}_{2}\). ## Acknowledgments Work at Los Alamos National Laboratory was performed under the auspices of the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Science and Engineering under project "Quantum Fluctuations in Narrow-Band Systems". Y.L. and M.O.A. acknowledge a Director's Postdoctoral Fellowship through the Laboratory Directed Research and Development program. The diffraction experiment conducted at the High Flux Isotope Reactor of Oak Ridge National Laboratory was sponsored by the Scientific User Facilities Division, Office of Basic Energy Sciences, U.S. Department of Energy.
2304.14174
Analogue Hawking radiation as a tunneling in a two-level $\mathcal{PT}$-symmetric system
In the light of a general scenario of a two-level non-Hermitian $\mathcal{PT}$-symmetric Hamiltonian we apply the tetrad-based method to analyze the possibility of analogue Hawking radiation. It is done by making use of the conventional null-geodesic approach wherein the associated Hawking radiation is described as a quantum tunneling process across a classically forbidden barrier which the event horizon imposes. An interesting aspect of our result is that our estimate for the tunneling probability is independent of the non-Hermitian parameter that defines the guiding Hamiltonian.
Bijan Bagchi, Rahul Ghosh, Sauvik Sen
2023-04-27T13:23:32Z
http://arxiv.org/abs/2304.14174v2
# Analogue Hawking radiation in a two-level Weyl semimetal system ###### Abstract In the light of a general scheme of non-Hermitian \(\mathcal{PT}\)-symmetric Hamiltonian we apply the tetrad-based method to probe the idea of Weyl semimetal black hole analogy. We evaluate the tunneling probability by making use of the conventional null-geodesic approach wherein the associated Hawking radiation is described as a quantum tunneling process across a classically forbidden potential barrier which the event horizon imposes. Our estimate for the tunneling probability is independent of the non-Hermitian parameter that appears in the guiding Hamiltonian. + Footnote †: E-mails: [email protected], [email protected], [email protected] Keywords: Weyl semimetals, \(\mathcal{PT}\)-symmetry, tunneling probability, exceptional points ## I Introduction The physics of black holes has continually attracted interest after Bekenstein-Hawking's pioneering works in the 1970s in trying to interpret them as thermodynamical objects which release radiation outside their event horizon [1, 2, 3, 4] (see, for a review of literature, [5]). The idea of Hawking radiation exploits the concept of creation of pair production next to the event horizon (out of the vacuum) with one of the particles running away to the infinite space from the boundary while the other with negative energy getting sucked into the black hole resulting in the decrease in its mass until the whole black hole disappears in a cloud of radiation. A natural question has been asked as to whether viable information could be gathered at temperatures near the scale of Planckian mass when the quantum gravitational effects become substantial [6]. However, because of the near impossibility to observe Hawking radiation in a real black hole [7], seeking black-hole analogues has become an interesting alternative because these could reveal properties akin to gravitational black holes, emission of Hawking-like radiation being a specific one. In this regard, construction of experimental set-ups has been undertaken to identify quantum fluctuations that might emerge [8]. Also, very recently, observation of stimulated Hawking radiation was reported which occurs in a regime of extreme nonlinear fiber optics [9]. In condensed matter physics too, simulation of event horizon and seeking analogue radiation in a Weyl semimetal has gained considerable attention while trying to focus on the interplay of different topological invariants [10, 11]. Black hole similarities emerge with the event horizon separating two topologically different types of the Weyl materials [12, 13]. The experimental discovery of a Weyl semimetal [14, 15], which is a new state of matter with Weyl fermions acting as emergent quasiparticles, has provided a lot of impetus in this direction. A Weyl semimetal has a Weyl point around the Fermi energy at which two bands cross, and its electronic structure reveals rich topology [16, 17] that points to classification of phases according to topological insulators [18, 19, 20, 21, 22, 23, 24, 11] and topological semimetals [26, 27, 28, 29, 30, 31]. Furthermore, the underlying unique topological properties could be explained naturally in a non-Hermitian setup where exceptional points arise when symmetry breaking takes place rather than in a Hermitian background. 
In this paper our primary focus is to look at a general structure of a non-Hermitian two-level system and to map it in terms of a set of tetrad fields to illustrate the possibility of artificial Hawking radiation in a coordinate setting. We also give an estimate of the tunneling probability by employing the standard classical approach of WKB approximation. The organisation of this paper is as follows. ## II. Background of Weyl semimetals A typical \(2\times 2\) Hermitian Hamiltonian (with Fermi velocity taken to be unity) describing a Weyl semimetal in the linear Weyl node \(\vec{k}=(k_{x},k_{y},k_{z})\), can be expanded in the form [32] \[H=d_{0}(\vec{k})+\sum_{i=x,y,z}d_{i}(\vec{k})\sigma_{i} \tag{2.1}\] where \(d_{i}\)'s are suitable coefficients, and \(\sigma_{i}\)'s are Pauli matrices. H which characterizes conduction and valence bands supports a pair of eigenvalues whose separation is removed when the coefficients vanish simultaneously. This is at once obvious from the energy dispersion \[E_{\pm}=d_{0}\pm\sqrt{d_{x}^{2}+d_{y}^{2}+d_{z}^{2}} \tag{2.2}\] We see that degeneracy requires tuning all three \(d_{i}\)'s to become zero together i.e. \(d_{x}=d_{y}=d_{z}=0\), which is not symmetry protected [33, 34, 26]. In Weyl semimetals, in which the conduction and valence bands energy coincide over a certain region of the Brillouin zone, a linear crossing of two bands takes place at the forming of nondegenerate Dirac cones corresponding to the conduction and valance bands. Extension of topological phases from the Hermitian region to the non-Hermitian sector has been sought in a variety of papers [35] and the presence of gains and losses investigated [36, 37]. A point was made a few years ago about the question of whether real black holes can emit Hawking radiation and whether non-trivial information can be gathered about Planckian physics [6]. Very recently, De Beule et al [38] made an explicit analysis of the existence of artificial event horizon in Weyl semimetal heterostructures. In their work, the electronic analogs of stimulated Hawking emission was studied and physical observables were addressed. Sabsovich et al [39] examined black and white hole analogs in Weyl semimetals subjected to inhomogeneous nodal tilts and explored experimentally viable consequences. Specifically, the presence of a microscopic lattice was shown to affect Weyl Hamiltonians and general relativity analogy. Analogy was also drawn in some papers between the low-energy Hamiltonian of tilted Weyl nodes and black hole metric [40, 41, 11]. In a somewhat similar context, the possibility of the emission of Hawking radiation was analysed [42, 25]. Further, imitation of black hole Hawking radiation was found in a purely classical-mechanical system by employing a coupled double chain model admitting frequency dispersion [43]. The tied up issue of the tunneling probability across the event horizon points was explored in the framework of a two-level non-Hermitian topologically insulated Weyl-type Hamiltonian which contained titing in one of the directions [44]. A similar two-dimensional structure of a non-Hermitian dissipative Hamiltonian also featured in an elaborate investigation of analogue Schwarzschild black holes emitting Hawking radiation [45]. 
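As a small numerical check of the two-band dispersion above (not part of the original paper), diagonalizing \(H=d_{0}\mathbb{1}+\sum_{i}d_{i}\sigma_{i}\) at a single momentum point reproduces \(E_{\pm}=d_{0}\pm\sqrt{d_{x}^{2}+d_{y}^{2}+d_{z}^{2}}\) and shows that the two bands touch only when all three \(d_{i}\) vanish simultaneously.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
         np.array([[0, -1j], [1j, 0]]),                    # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]       # sigma_z

def weyl_bands(d0, d):
    """Eigenvalues of H = d0*1 + d.sigma at one momentum point (Hermitian for real d)."""
    H = d0 * np.eye(2, dtype=complex) + sum(di * s for di, s in zip(d, sigma))
    return np.sort(np.linalg.eigvalsh(H))

d0, d = 0.1, np.array([0.3, -0.2, 0.5])
print(np.allclose(weyl_bands(d0, d),
                  [d0 - np.linalg.norm(d), d0 + np.linalg.norm(d)]))   # True
print(weyl_bands(d0, np.zeros(3)))   # degenerate pair (d0, d0): the band-touching point
```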
We notice that if a non-Hermitian term \(i\vec{\tau}\cdot\vec{\sigma}\), with \(\vec{\tau}=(\tau_{x},\tau_{y},\tau_{z})\), is added in (2) and \(d_{0}\) is made complex [46], the eigenvalues turn out to be \[E_{\pm}(\vec{k})=d_{0}\pm\sqrt{\Delta} \tag{2.3}\] where \(\Delta=d_{\rho}^{2}+d_{z}^{2}-\tau^{2}+2i(\tau_{x}d_{\rho}\cos\phi-\tau_{y}d_ {\rho}\sin\phi+\tau_{z}d_{z})\). In getting to the form (3) we adopted cylindrical coordinates \((d_{\rho},\phi,d_{z})\) such that \(d_{\rho}^{2}=d_{x}^{2}+d_{y}^{2}\), and parametrized \(d_{x}=d_{\rho}\cos\phi,d_{y}=d_{\rho}\sin\phi\). A consequence is that the eigenvalues become equal when the following two relations are satisfied \[d_{\rho}^{2}+d_{z}^{2}=\tau^{2}\quad\text{and}\quad\tau_{x}d_{\rho}\cos\phi- \tau_{y}d_{\rho}\sin\phi+\tau_{z}d_{z}=0 \tag{2.4}\] The eigenvectors for the eigenvalues \(E_{\pm}\) are separately \[\left[\frac{(\tau_{z}-id_{z})-\sqrt{\Delta}}{(\tau_{x}+i\tau_{y})-(d_{y}+id_ {x})},1\right]^{T} \tag{2.5}\] \[\left[\frac{(\tau_{z}-id_{z})+\sqrt{\Delta}}{(\tau_{x}+i\tau_{y})-(d_{y}+id_{x})}, 1\right]^{T} \tag{2.6}\] while we can define for the right and left eigenvectors the relationships \[H|\psi^{R}_{\pm}\rangle=E_{\pm}|\psi^{R}_{\pm}\rangle \tag{2.7}\] \[\langle\psi^{L}_{\pm}|H=E^{*}_{\pm}\langle\psi^{L}_{\pm}|=(H^{ \dagger}|\psi^{L}_{\pm})\rangle^{\dagger} \tag{2.8}\] where \[|\psi^{R}_{\pm}\rangle=(E_{\pm}-d_{0}+d_{z},\quad d_{x}+id_{y})^ {T} \tag{2.9}\] \[\langle\psi^{L}_{\pm}|=(E_{\pm}-d_{0}+d_{z},\quad d_{x}-id_{y})^ {T} \tag{2.10}\] We can work out the inner product in the following simple steps \[\langle\psi^{L}_{\pm}|\psi^{R}_{\pm}\rangle=2(d_{x}^{2}+d_{y}^{2 }+d_{z}^{2})+2d_{z}(E_{\pm}-d_{0})\] \[\quad=2(E_{\pm}^{2}+d_{0}^{2}-2E_{\pm}d_{0})+2d_{z}(E_{\pm}-d_{0})\] \[\quad=2(E_{\pm}-d_{0})(E_{\pm}-d_{0}+d_{z})\] \[\quad=2\det[H-d_{0}I]\pm 2d_{z}\sqrt{\det[H-d_{0}I]} \tag{2.11}\] Normalising the eigenstates such as \[|\psi^{R}_{\pm}\rangle=\frac{1}{\sqrt{2(E_{\pm}-d_{0})(E_{\pm}-d_ {0}+d_{z})}}\] \[\quad(E_{\pm}-d_{0}+d_{z},\quad d_{x}+id_{y})^{T}\] \[\langle\psi^{L}_{\pm}|=\frac{1}{\sqrt{2(E_{\pm}-d_{0})(E_{\pm}-d_ {0}+d_{z})}}\] \[\quad(E_{\pm}-d_{0}+d_{z},\quad d_{x}-id_{y})\] we can project the biorthogonality relation \[\langle\psi^{L}_{\alpha}|\psi^{R}_{\beta}\rangle=\delta_{\alpha\beta}\quad \alpha,\beta=\pm \tag{2.12}\] At the exceptional point [47, 48, 49, 50] when the Hamiltonian ceases to be diagonalizable, the eigenvalues merge to \(d_{0}\) and the eigenvectors coalesce to the common form \(\left[\frac{(\tau_{z}-id_{z})}{(\tau_{x}+i\tau_{y})-(d_{y}+id_{x})},1\right]^ {T}\). More precisely, as Cerjan et al noted [46], the result for the amplitude can be cast as \(2(E_{\pm}-d_{0})(E_{\pm}-d_{0}+d_{z}+i\tau_{z})\) which vanishes on substitution of the conditions (4). It implies self-orthogonality of the eigenstates, a criterion for an exceptional point. ### A two-level PT-symmetric model #### ii.a.1 The Hamiltonian \(\mathcal{PT}\)-symmetric models, where \(\mathcal{P}\) and \(\mathcal{T}\) stand respectively for space and time reflection, constitute a subclass of non-Hermitian configurations. The conjecture of \(\mathcal{PT}\)-symmetry was made by Bender and Boettcher [51] to point out that if is unbroken then the underlying system can support real bound-state eigenvalues whereas when it is broken the eigenvectors of the Hamiltonian cease to be simultaneously that of the \(\mathcal{PT}\) operator. 
As a result complex conjugate eigenvalues appear in conjugate pairs indicating the happening of spectral bifurcation. The possibility of \(\mathcal{PT}\)-symmetry residing in mechanical systems has been well explored from different perspectives [52] and is a rapidly growing field. Their position was soon found out to be intermediate between open and closed systems. While the role of non-Hermiticity in understanding stable phases has been pursued in the literature for last several years [53, 54], the character of \({\cal PT}\)-symmetry for stable nodal points concerning gapped and gapless semimetals [26, 55], where the invariants are constituted by Bloch bands, is a somewhat recent realisation. Indeed due to such a symmetry prevailing, one finds stable nodal points to exist in lesser dimensions [56]. Consider the following arrangement of (1) which was recently proposed by Longhi and Feng [57] to enquire into the spectral phase transition as the system transited to exhibiting complex eigenvalues from the real ones \[d_{0}=G(k),\quad d_{1}=R(k)\cos\psi,\] \[d_{2}=R(k)\sin\psi(k),\quad d_{3}=i\lambda W(k) \tag{13}\] where \(\hat{\cal H}\neq\hat{\cal H}^{\dagger}\), and k is a real parameter. The complex \(\hat{\cal H}(k)\) is equipped with a set of real, nonzero functions \(G(k),R(k),W(k),\psi(k)\), that contains the coupling parameter \(\lambda\in\Re^{+}\). The Hermitian counterpart of \(\hat{\cal H}(k)\) corresponds to \(k=0\). Enforcing \({\cal PT}\)-symmetry requires the diagonal elements of the matrix generated by (13) to be complex conjugate of each other and similarly for the off-diagonal elements too. \(\hat{\cal H}(k)\) is easily seen to commute with the joint operator \(PT\) i.e. \([\hat{H},PT]=0\) with the \({\cal P}\) operator represented by \(\sigma_{x}\), and \(T\) standing for the usual complex conjugation operation. The effect of the non-Hermitian term was analyzed for a slowly and periodically smooth cycled system as dictated the complex Berry phase. The eigenvalues of \(\hat{H}\) are easily seen to satisfy the relation \[{\cal E}_{\pm}(k)=G(k)\pm W(k)\sqrt{\lambda_{c}^{2}(k)-\lambda^{2}},\quad \lambda_{c}(k)\equiv\frac{R(k)}{W(k)} \tag{14}\] Introducing \(\tan\theta=\frac{R(k)}{i\lambda W(k)}\) where \(\theta=\theta(k)\), one can classify the accompanying right eigenvectors to be \[\left[\cos(\frac{\theta}{2}),\,\sin(\frac{\theta}{2})e^{-i\phi}\right]^{T},\, \left[\sin(\frac{\theta}{2}),\,-\cos(\frac{\theta}{2})e^{-i\phi}\right]^{T} \tag{15}\] while their left partners are \[\left[\cos^{*}(\frac{\theta}{2}),\,\sin^{*}(\frac{\theta}{2})e^{-i\phi}\right] ^{T},\,\left[\sin^{*}(\frac{\theta}{2}),\,-\cos^{*}(\frac{\theta}{2})e^{-i \phi}\right]^{T} \tag{16}\] These obey the biorthogonal conditions \(\langle\phi_{n}(k)|\psi_{m}(k)\rangle=\delta_{n,m}\), with \(n,m=+,-\). The expression (14) clearly shows that the eigenvalues stay real when the inequality \(\lambda<\lambda_{c}\) holds. This corresponds to the situation when \({\cal PT}\) is unbroken. However, when opposite is the case, i.e. \(\lambda>\lambda_{c}\), a broken \({\cal PT}\) phase is encountered. At the critical value \(\lambda=\lambda_{c}\), the exceptional points appear when both the eigenvalues \({\cal E}_{+}\) and \({\cal E}_{-}\) coincide to become \(G(k)\) and the associated eigenvectors coalesce to form a single entity. In other words, at the exceptional points we have \(R=\pm\lambda W\) which in turn gets converted to the condition \(r=\pm isec\theta\). 
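The real-to-complex spectral transition described above is easy to verify numerically; the following sketch (ours) builds the matrix of Eq. (13) directly and evaluates its eigenvalues below, at, and above the critical coupling \(\lambda_{c}=R/W\).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pt_eigs(G, R, W, psi, lam):
    """Eigenvalues of H = G*1 + R*cos(psi)*sx + R*sin(psi)*sy + i*lam*W*sz."""
    H = G * np.eye(2) + R * np.cos(psi) * sx + R * np.sin(psi) * sy + 1j * lam * W * sz
    return np.linalg.eigvals(H)

G, R, W, psi = 0.2, 1.0, 0.8, 0.4
lam_c = R / W
for lam in (0.5 * lam_c, lam_c, 1.5 * lam_c):
    print(round(lam, 3), np.round(pt_eigs(G, R, W, psi, lam), 4))
# below lam_c: two real eigenvalues G +/- W*sqrt(lam_c**2 - lam**2)
# at    lam_c: both eigenvalues coalesce at G (the exceptional point)
# above lam_c: a complex-conjugate pair G +/- i*W*sqrt(lam**2 - lam_c**2)
```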
It is worthwhile to mention that a simpler form of \({\cal H}\), proposed by Bender et al [58] a few years ago, discusses the type \(\hat{H}=r\cos\psi\,\mathbb{1}+s\,\sigma_{x}+ir\sin\psi\,\sigma_{z}\), i.e. a two-level matrix with diagonal entries \(re^{\pm i\psi}\) and a real off-diagonal coupling \(s\). To connect our two-level Hamiltonian with a curved-spacetime description we employ the tetrad (vielbein) formalism, in which the spacetime metric is expressed as \[g^{\mu\nu}=e^{\mu}{}_{\alpha}e^{\nu}{}_{\beta}\eta^{\alpha\beta} \tag{2.18}\] where \(\eta^{\alpha\beta}=diag(-1,1,1,1)\) is the Minkowski metric of flat spacetime. To proceed with (12), let us first write the vielbeins \(e_{a}{}^{\mu}\) in terms of four functions \(f_{1},f_{2},g_{1},g_{2}\), and then seek to relate them with the known functions at hand, namely \(G(k),R(k),W(k),\psi(k)\). A particular choice of the nonzero components could be chosen in the manner [59] \[\begin{gathered} e_{0}{}^{0}=f_{1},\quad e_{1}{}^{0}=f_{2},\quad e_{0}{}^{1}=g_{2},\quad e_{1}{}^{1}=g_{1},\\ e_{2}{}^{2}=\frac{1}{r},\quad e_{3}{}^{3}=\frac{1}{r\sin\theta}\end{gathered} \tag{2.19}\] with their inverses \[\begin{gathered} e^{0}{}_{0}=\frac{g_{1}}{D},\quad e^{0}{}_{1}=-\frac{f_{2}}{D},\quad e^{1}{}_{0}=-\frac{g_{2}}{D},\\ e^{1}{}_{1}=\frac{f_{1}}{D},\quad e^{2}{}_{2}=r,\quad e^{3}{}_{3}=r\sin\theta\end{gathered} \tag{2.20}\] where \(D=f_{1}g_{1}-f_{2}g_{2}\). In a particular gauge, the above set of vielbeins yields the line-element [59] \[ds^{2}=\frac{g_{1}^{2}-g_{2}^{2}}{f_{1}g_{1}^{2}}dt^{2}+\frac{2g_{2}}{f_{1}g_{1}^{2}}dtdr-\frac{1}{g_{1}^{2}}dr^{2}-r^{2}d\Omega^{2} \tag{2.21}\] The specific case of \[f_{1}=1,\quad f_{2}=0,\quad g_{1}=1,\quad g_{2}=-\sqrt{\frac{2\mathcal{M}}{r}} \tag{2.22}\] corresponds to the Schwarzschild black hole metric in Painleve-Gullstrand coordinates [60, 61] \[ds^{2}=\left(1-\frac{2\mathcal{M}}{r}\right)dt^{2}-2\sqrt{\frac{2\mathcal{M}}{r}}drdt-dr^{2}-r^{2}d\Omega^{2} \tag{2.23}\] with the evident identifications of the coordinate x with r, and taking t to represent the Painleve time. The metric (18) is stationary (i.e. invariant under translation of t) but not static (i.e. not invariant under time-reversal). With the help of (15), the general form of the Hamiltonian emerges in the manner \[\begin{gathered}\hat{H}=\frac{g_{1}h_{0}-g_{2}h_{1}}{D}\,\mathbb{1}+\frac{-f_{2}h_{0}+f_{1}h_{1}}{D}\sigma_{x}\\ +rh_{2}\sigma_{y}+r\sin\theta h_{3}\sigma_{z}\end{gathered} \tag{2.24}\] and comparing with (8), we can easily derive the coordinate-space correspondence \[\begin{gathered} G(k)\rightarrow\frac{g_{1}h_{0}-g_{2}h_{1}}{D},\quad R(k)\cos\psi\rightarrow\frac{-f_{2}h_{0}+f_{1}h_{1}}{D},\\ R(k)\sin\psi\to r,\quad\lambda W(k)\to r\sin\theta\end{gathered} \tag{2.25}\] where we set \(h_{0}=h_{1}=h_{2}=1\) and \(h_{3}=i\). Using the values in (17), we have the mapping to the \((r,\theta)\) variables \[\begin{gathered} G(k)\to 1+\sqrt{\frac{2\mathcal{M}}{r}},\quad R\cos\psi\to 1,\\ R\sin\psi\to r,\quad\lambda W\to r\sin\theta\end{gathered} \tag{2.26}\] implying the correspondences \(R\rightarrow\sqrt{r^{2}+1}\) and \(\psi\rightarrow\tan^{-1}(r)\). As a result, the Hamiltonian assumes the form \[\hat{H}=\left(1+\sqrt{\frac{2\mathcal{M}}{r}}\right)\mathbb{1}+\sigma_{x}+r\sigma_{y}+ir\sin\theta\sigma_{z} \tag{2.27}\] ## III Tunneling estimate We estimate the probability transmission amplitude of the Hawking radiation by making use of the correspondence set up in (20). 
First of all, the energy eigenvalues read \[\mathcal{E}_{\pm}=\left(1+\sqrt{\frac{2\mathcal{M}}{r}}\right)\pm\sqrt{1+r^{2}\cos^{2}\theta} \tag{3.1}\] on using (9). The exceptional points correspond to \(r=\pm i\sec\theta\) located on the imaginary axis. In what follows we will adhere to the positive sign. We then have \[d\mathcal{E}=\frac{dM}{\sqrt{2Mr}} \tag{3.2}\] Before we calculate the tunneling probability, let us note that when the particle escapes from the black hole with an energy \(\omega\), the mass of the black hole decreases from \(\mathcal{M}\) to \(\mathcal{M}-\omega\). Indeed, as the pair production takes place around the event horizon, the positive-energy particle, when it breaks free [62], has to transit the separating region defined between \(r_{in}\), which is the radius of the black hole before the emission of the particle, and \(r_{out}\), which is the radius of the black hole after the emission of the particle, acting as a possible barrier wall. This is possible if the particle can tunnel through such a barrier. In effect, a classically inaccessible zone is set up for a particle whose energy lies below such a barrier. The tunneling problem has been widely considered, but we follow the elegant procedure of Parikh and Wilczek [62], who pointed out that since the action \(\zeta\) in the tunneling region is imaginary, the probability of transmission can be straightforwardly calculated by making use of the semiclassical WKB approximation. Their approach is to consider an s-wave particle going outwards from \(r_{in}\) to \(r_{out}\), enabling us to cast \(\zeta\) in the form \[\text{Im}\zeta=\text{Im}\int_{r_{in}}^{r_{out}}p_{r}dr=\text{Im}\int_{r_{in}}^{r_{out}}\int_{0}^{p_{r}}dp_{r}dr \tag{3.3}\] where the relationship \(dp_{r}=\frac{dH}{\dot{r}}\) is imposed according to the classical Hamilton equations. Noting that \(H\) assumes the respective values \(M\) and \(M-\omega\) at the endpoints \(p_{r}=0\) and \(p_{r}=p_{r}\), \(\text{Im}\zeta\) can be transformed to \[\text{Im}\zeta=\text{Im}\int_{r_{in}}^{r_{out}}\int_{M}^{M-\omega}\frac{dM}{\dot{r}}dr=-\text{Im}\int_{r_{in}}^{r_{out}}\int_{0}^{\omega}\frac{d\omega}{\dot{r}}dr \tag{3.4}\] At this point, we recognize from (18) that horizons can be found from the radial null geodesics \(ds^{2}=0\) along with putting \(d\Omega=0\). The resulting differential equation \[\dot{r}^{2}+2\sqrt{\frac{2\mathcal{M}}{r}}\dot{r}-\left(1-\frac{2M}{r}\right)=0 \tag{3.5}\] admits the following acceptable solution \[\dot{r}=1-\sqrt{\frac{2\mathcal{M}}{r}} \tag{3.6}\] Substituting in (25) results in \[\text{Im}\zeta=-\text{Im}\int_{2M}^{2(M-\omega)}\int_{0}^{\omega}\frac{d\omega\,dr}{1-\sqrt{\frac{2(M-\omega)}{r}}} \tag{3.7}\] The above can be reduced to an integrable form after using the residue theorem of complex analysis
## IV Summary Non-Hermitian Hamiltonians play a central role in diverse physical problems. In this paper we studied a generally non-Hermitian Hamiltonian of a two-level \(\mathcal{PT}\)-symmetric system which depends on a real parameter k and, from the coordinate-space perspective, exhibits a period of \(2\pi\). Adopting the tetrad formalism, a correspondence is established between such a Hamiltonian and one constructed in terms of vielbeins, thereby enabling us to connect to the metric of a curved spacetime. By suitably writing the tetrad components in terms of four unknown functions, and making a specific choice of them such that the Schwarzschild metric could be set up in Painleve-Gullstrand coordinates, we could put the metric in one-to-one correspondence with our chosen form of the Hamiltonian. We computed the probability transmission amplitude of Hawking radiation by treating it as a tunneling process within the semiclassical WKB approximation. Our result turned out to be independent of the non-Hermitian parameter \(\lambda\), so that the nature of the phase transitions that the system supports does not influence the tunneling probability estimate. ## V Acknowledgment Two of us (RG, SS) thank Shiv Nadar IOE for the grant of research fellowships. ## VI Data availability statement All data supporting the findings of this study are included in the article.
2304.03408
Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
We analyze the dynamics of finite width effects in wide but finite feature learning neural networks. Starting from a dynamical mean field theory description of infinite width deep neural network kernel and prediction dynamics, we provide a characterization of the $O(1/\sqrt{\text{width}})$ fluctuations of the DMFT order parameters over random initializations of the network weights. Our results, while perturbative in width, unlike prior analyses, are non-perturbative in the strength of feature learning. In the lazy limit of network training, all kernels are random but static in time and the prediction variance has a universal form. However, in the rich, feature learning regime, the fluctuations of the kernels and predictions are dynamically coupled with a variance that can be computed self-consistently. In two layer networks, we show how feature learning can dynamically reduce the variance of the final tangent kernel and final network predictions. We also show how initialization variance can slow down online learning in wide but finite networks. In deeper networks, kernel variance can dramatically accumulate through subsequent layers at large feature learning strengths, but feature learning continues to improve the signal-to-noise ratio of the feature kernels. In discrete time, we demonstrate that large learning rate phenomena such as edge of stability effects can be well captured by infinite width dynamics and that initialization variance can decrease dynamically. For CNNs trained on CIFAR-10, we empirically find significant corrections to both the bias and variance of network dynamics due to finite width.
Blake Bordelon, Cengiz Pehlevan
2023-04-06T23:11:49Z
http://arxiv.org/abs/2304.03408v3
# Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks ###### Abstract We analyze the dynamics of finite width effects in wide but finite feature learning neural networks. Starting from a dynamical mean field theory description of infinite width deep neural network kernel and prediction dynamics, we provide a characterization of the \(\mathcal{O}(1/\sqrt{\text{width}})\) fluctuations of the DMFT order parameters over random initializations of the network weights. Our results, while perturbative in width, unlike prior analyses, are non-perturbative in the strength of feature learning. In the lazy limit of network training, all kernels are random but static in time and the prediction variance has a universal form. However, in the rich, feature learning regime, the fluctuations of the kernels and predictions are dynamically coupled with a variance that can be computed self-consistently. In two layer networks, we show how feature learning can dynamically reduce the variance of the final tangent kernel and final network predictions. We also show how initialization variance can slow down online learning in wide but finite networks. In deeper networks, kernel variance can dramatically accumulate through subsequent layers at large feature learning strengths, but feature learning continues to improve the signal-to-noise ratio of the feature kernels. In discrete time, we demonstrate that large learning rate phenomena such as edge of stability effects can be well captured by infinite width dynamics and that initialization variance can decrease dynamically. For CNNs trained on CIFAR-10, we empirically find significant corrections to both the bias and variance of network dynamics due to finite width. ## 1 Introduction Learning dynamics of deep neural networks are challenging to analyze and understand theoretically, but recent progress has been made by studying the idealization of infinite-width networks. Two types of infinite-width limits have been especially fruitful. First, the kernel or lazy infinite-width limit, which arises in the standard or neural tangent kernel (NTK) parameterization, gives prediction dynamics which correspond to a linear model [1, 2, 3, 4, 5]. This limit is theoretically tractable but fails to capture adaptation of internal features in the neural network, which are thought to be crucial to the success of deep learning in practice. Alternatively, the mean field or \(\mu\)-parameterization allows feature learning at infinite width [6, 7, 8, 9]. With a set of well-defined infinite-width limits, prior theoretical works have analyzed finite networks in the NTK parameterization perturbatively, revealing that finite width both enhances the amount of feature evolution (which is still small in this limit) but also introduces variance in the kernels and the predictions over random initializations [10, 11, 12, 13, 14, 15]. Because of these competing effects, in some situations wider networks are better, and in others wider networks perform worse [16]. In this paper, we analyze finite-width network learning dynamics in the mean field parameterization. In this parameterization, wide networks are empirically observed to outperform narrow networks [7, 17, 18]. Our results and framework provide a methodology for reasoning about detrimental finite-size effects in such feature-learning neural networks. We show that observable averages involving kernels and predictions obey a well-defined power series in inverse width even in rich training regimes. 
We generally observe that the leading finite-size corrections to both the bias and variance components of the square loss are increased for narrower networks, and diminish performance. Further, we show that richer networks are closer to their corresponding infinite-width mean field limit. For simple tasks and architectures the leading \(\mathcal{O}(1/\text{width})\) corrections to the error can be descriptive, while for large sample size or more realistic tasks, higher order corrections appear to become relevant. Concretely, our contributions are listed below: 1. Starting from a dynamical mean field theory (DMFT) description of infinite-width nonlinear deep neural network training dynamics, we provide a complete recipe for computing fluctuation dynamics of DMFT order parameters over random network initializations during training. These include the variance of the training and test predictions and the \(\mathcal{O}(1/\text{width})\) variance of feature and gradient kernels throughout training. 2. We first solve these equations for the lazy limit, where no feature learning occurs, recovering a simple differential equation which describes how prediction variance evolves during learning. 3. We solve for variance in the rich feature learning regime in two-layer networks and deep linear networks. We show richer nonlinear dynamics improve the signal-to-noise ratio (SNR) of kernels and predictions, leading to closer agreement with infinite-width mean field behavior. 4. We analyze in a two-layer model why larger training set sizes in the overparameterized regime enhance finite-width effects and how richer training can reduce this effect. 5. We show that large learning rate effects such as edge-of-stability [19, 20, 21] dynamics can be well captured by infinite width theory, with finite size variance accurately predicted by our theory. 6. We test our predictions in Convolutional Neural Networks (CNNs) trained on CIFAR-10 [22]. We observe that wider networks and richly trained networks have lower logit variance as predicted. However, the timescale of training dynamics is significantly altered by finite width even after ensembling. We argue that this is due to a detrimental correction to the mean dynamical NTK. ### Related Works Infinite-width networks at initialization converge to a Gaussian process with a covariance kernel that is computed with a layerwise recursion [23, 24, 25, 26, 13]. In the large but finite width limit, these kernels do not concentrate at each layer, but rather propagate finite-size corrections forward through the network [27, 28, 29, 30, 14]. During gradient-based training with the NTK parameterization, a hierarchy of differential equations have been utilized to compute small feature learning corrections to the kernel through training [10, 11, 12, 13]. However the higher order tensors required to compute the theory are initialization dependent, and the theory breaks down for sufficiently rich feature learning dynamics. Various works on Bayesian deep networks have also considered fluctuations and perturbations in the kernels at finite width during inference [31, 32]. Other relevant work in this domain are [33, 34, 35, 36, 37, 38, 39]. An alternative to standard/NTK parameterization is the mean field or \(\mu P\)-limit where features evolve even at infinite width [6, 7, 8, 9, 40, 41, 42]. Recent studies on two-layer mean field networks trained online with Gaussian data have revealed that finite networks have larger sensitivity to SGD noise [43, 44]. 
Here, we examine how finite-width neural networks are sensitive to initialization noise. Prior work has studied how the weight space distribution and predictions converge to mean field dynamics with a dynamical error \(\mathcal{O}(1/\sqrt{\text{width}})\) [40, 45]; however, in the deep case this requires a probability distribution over couplings between adjacent layers. Our analysis, by contrast, focuses on a function and kernel space picture which decouples interactions between layers at infinite width. A starting point for our present analysis of finite-width effects was a previous set of studies [9, 46] which identified the DMFT action corresponding to randomly initialized deep NNs, which generates the distribution over kernel and network prediction dynamics. These prior works discuss the possibility of using a finite-size perturbation series but crucially failed to recognize the role of the network prediction fluctuations on the kernel fluctuations, which are necessary to close the self-consistent equations in the rich regime. Using the mean field action to calculate a perturbation expansion around DMFT is a long celebrated technique to obtain finite size corrections in physics [47, 48, 49, 50] and has been utilized for random untrained recurrent networks [51, 52], and more recently to calculate variance of feature kernels \(\Phi^{\ell}\) at initialization \(t=0\) in deep MLPs or RNNs [53]. We extend these prior studies to the dynamics of training and to probe how feature learning alters finite size corrections. ## 2 Problem Setup We consider wide neural networks where the number of neurons (or channels for a CNN) \(N\) in each layer is large. For a multi-layer perceptron (MLP), the network is defined as a map from input \(\mathbf{x}_{\mu}\in\mathbb{R}^{D}\) to hidden activations \(\mathbf{h}_{\mu}^{\ell}\in\mathbb{R}^{N}\) in layers \(\ell\in\{1,...,L\}\) and finally output \(f_{\mu}\) \[f_{\mu}=\frac{1}{\gamma N}\mathbf{w}^{L}\cdot\phi(\mathbf{h}_{\mu}^{L})\,\quad\mathbf{h}_{\mu}^{\ell+1}=\frac{1}{\sqrt{N}}\mathbf{W}^{\ell}\phi(\mathbf{h}_{\mu}^{\ell})\,\quad\mathbf{h}_{\mu}^{1}=\frac{1}{\sqrt{D}}\mathbf{W}^{0}\mathbf{x}_{\mu}, \tag{1}\] where \(\gamma\) is a scale factor that controls feature learning strength, with large \(\gamma\) leading to rich feature learning dynamics, while the limit of small \(\gamma\to 0\) (or, more generally, \(\gamma\) scaling as \(N^{-\alpha}\) for \(\alpha>0\) as \(N\to\infty\); NTK parameterization corresponds to \(\alpha=\frac{1}{2}\)) gives lazy learning where no features are learned [4, 7, 9]. The parameters \(\mathbf{\theta}=\{\mathbf{W}^{0},\mathbf{W}^{1},...,\mathbf{w}^{L}\}\) are optimized with gradient descent or gradient flow \(\frac{d}{dt}\mathbf{\theta}=-N\gamma^{2}\nabla_{\mathbf{\theta}}\mathcal{L}\), where \(\mathcal{L}=\mathbb{E}_{\mathbf{x}_{\mu}\in\mathcal{D}}\,\ell\left(f(\mathbf{x}_{\mu},\mathbf{\theta}),y_{\mu}\right)\) is a loss computed over the dataset \(\mathcal{D}=\{(\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2}),\ldots(\mathbf{x}_{P},y_{P})\}\). This parameterization and learning rate scaling ensures that \(\frac{d}{dt}f_{\mu}\sim\mathcal{O}_{N,\gamma}(1)\) and \(\frac{d}{dt}\mathbf{h}_{\mu}^{\ell}=\mathcal{O}_{N,\gamma}(\gamma)\) at initialization. This is equivalent to maximal update parameterization (\(\mu\)P) [8], which can be easily extended to other architectures including neural networks with trainable bias parameters as well as convolutional, recurrent, and attention layers [8, 9].
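To make the scalings in Eq. (1) concrete, here is a minimal numpy sketch of the mean field/\(\mu\)P forward pass (the width, depth, input dimension, tanh activation, and \(\gamma=1\) below are illustrative choices of ours, not values taken from the paper); note the \(1/\sqrt{D}\), \(1/\sqrt{N}\), and \(1/(\gamma N)\) factors that distinguish this parameterization from the standard NTK one.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, L, gamma = 10, 512, 2, 1.0          # illustrative sizes, not from the paper
phi = np.tanh

# Gaussian initialization theta ~ N(0, I), as assumed in the DMFT setup
W = [rng.standard_normal((N, D))] + [rng.standard_normal((N, N)) for _ in range(L - 1)]
w_out = rng.standard_normal(N)

def forward(x):
    """Forward pass of Eq. (1): h^1 = W^0 x / sqrt(D), h^{l+1} = W^l phi(h^l) / sqrt(N),
    and f = w^L . phi(h^L) / (gamma N)."""
    h = W[0] @ x / np.sqrt(D)
    for l in range(1, L):
        h = W[l] @ phi(h) / np.sqrt(N)
    return w_out @ phi(h) / (gamma * N)

x = rng.standard_normal(D)
print(forward(x))   # at initialization the output is O(1/sqrt(N)) for fixed gamma
```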
## 3 Review of Dynamical Mean Field Theory The infinite-width training dynamics of feature learning neural networks was described by a DMFT in [9, 46]. We first review the DMFT's key concepts, before extending it to get insight into finite-widths. To arrive at the DMFT, one first notes that the training dynamics of such networks can be rewritten in terms of a collection of dynamical variables (or _order parameters_) \(\mathbf{q}=\mathrm{Vec}\{f_{\mu}(t),\Phi_{\mu\nu}^{\ell}(t,s),G_{\mu\nu}^{\ell}(t,s ),...\}\)[9], which include feature and gradient kernels [9, 54] \[\Phi_{\mu\nu}^{\ell}(t,s)=\frac{1}{N}\phi(\mathbf{h}_{\mu}^{\ell}(t))\cdot\phi(\bm {h}_{\nu}^{\ell}(s))\,\quad G_{\mu\nu}^{\ell}(t,s)=\frac{1}{N}\mathbf{g}_{\mu}^{\ell}(t) \cdot\mathbf{g}_{\nu}^{\ell}(s), \tag{2}\] where \(\mathbf{g}_{\mu}^{\ell}(t)=\gamma N\frac{\partial f_{\mu}(t)}{\partial\mathbf{h}_{\mu }^{\ell}(t)}\) are the back-propagated gradient signals. Further, for width-\(N\) networks the distribution of these dynamical variables across weight initializations (from a Gaussian distribution \(\mathbf{\theta}\sim\mathcal{N}(0,\mathbf{I})\)) is given by \(p(\mathbf{q})\propto\exp{(NS(\mathbf{q}))}\), where the action \(S(\mathbf{q})\) contains interactions between neuron activations and the kernels at each layer [9]. The DMFT introduced in [9] arises in the \(N\to\infty\) limit when \(p(\mathbf{q})\) is strongly peaked around the saddle point \(\mathbf{q}_{\infty}\) where \(\frac{\partial S}{\partial\mathbf{q}}|_{\mathbf{q}_{\infty}}=0\). Analysis of the saddle point equations reveal that the training dynamics of the neural network can be alternatively described by a stochastic process. A key feature of this process is that it describes the training time evolution of the distribution of neuron pre-activations in each layer (informally the histogram of the elements of \(\mathbf{h}_{\mu}^{\ell}(t)\)) where each neuron's pre-activation behaves as an i.i.d. draw from this _single-site_ stochastic process. We denote these random processes by \(h_{\mu}^{\ell}(t)\). Kernels in (2) are now computed as _averages_ over these infinite-width single site processes \(\Phi_{\mu\nu}^{\ell}(t,s)=\left\langle\phi(h_{\mu}^{\ell}(t))\phi(h_{\nu}^{ \ell}(s))\right\rangle\), \(G_{\mu\nu}^{\ell}(t,s)=\left\langle g_{\mu}^{\ell}(t)g_{\nu}^{\ell}(s)\right\rangle\), where averages arise from the \(N\to\infty\) limit of the dot products in (2). DMFT also provides a set of self-consistent equations that describe the complete statistics of these random processes, which depend on the kernels, as well as other quantities. ## 4 Dynamical Fluctuations Around Mean Field Theory We are interested in going beyond the infinite-width limit to study more realistic finite-width networks. In this regime, the order parameters \(\mathbf{q}\) fluctuate in a \(\mathcal{O}(1/\sqrt{N})\) neighborhood of \(\mathbf{q}_{\infty}\)[55, 51, 53, 46]. Statistics of these fluctuations can be calculated from a general cumulant expansion (see App. C) [55, 56, 51]. We will focus on the leading-order corrections to the infinite-width limit in this expansion. 
**Proposition 1**_The finite-width \(N\) average of observable \(O(\mathbf{q})\) across initializations, which we denote by \(\left\langle O(\mathbf{q})\right\rangle_{N}\), admits an expansion of the form whose leading terms are_ \[\left\langle O(\mathbf{q})\right\rangle_{N}=\frac{\int d\mathbf{q}\exp{(NS[\mathbf{q}])}O( \mathbf{q})}{\int d\mathbf{q}\exp{(NS[\mathbf{q}])}}=\left\langle O(\mathbf{q})\right\rangle _{\infty}+N\left[\left\langle V(\mathbf{q})O(\mathbf{q})\right\rangle_{\infty}- \left\langle V(\mathbf{q})\right\rangle_{\infty}\left\langle O(\mathbf{q})\right\rangle_ {\infty}\right]+..., \tag{3}\] _where \(\left\langle\right\rangle_{\infty}\) denotes an average over the Gaussian distribution \(\mathbf{q}\sim\mathcal{N}\left(\mathbf{q}_{\infty},-\frac{1}{N}\left(\nabla_{\mathbf{q}}^{2} S[\mathbf{q}_{\infty}]\right)^{-1}\right)\) and the function \(V(\mathbf{q})\equiv S(\mathbf{q})-S(\mathbf{q}_{\infty})-\frac{1}{2}(\mathbf{q}-\mathbf{q}_{\infty}) ^{\top}\nabla_{\mathbf{q}}^{2}S(\mathbf{q}_{\infty})(\mathbf{q}-\mathbf{q}_{\infty})\) contains cubic and higher terms in the Taylor expansion of \(S\) around \(\mathbf{q}_{\infty}\). The terms shown include all the leading and sub-leading terms in the series in powers of \(1/N\). The terms in ellipses are at least \(\mathcal{O}(N^{-1})\) suppressed compared to the terms provided._ The proof of this statement is given in App. C. The central object to characterize finite size effects is the unperturbed covariance (the _propagator_): \(\mathbf{\Sigma}=-\left[\nabla^{2}S(\mathbf{q}_{\infty})\right]^{-1}\). This object can be shown to capture leading order fluctuation statistics \(\left\langle\left(\mathbf{q}-\mathbf{q}_{\infty}\right)\left(\mathbf{q}-\mathbf{q}_{\infty} \right)^{\top}\right\rangle_{N}=\frac{1}{N}\mathbf{\Sigma}+\mathcal{O}(N^{-2})\) (App. C.1), which can be used to reason about, for example, expected square error over random initializations. Correction terms at finite width may give a possible explanation of the superior performance of wide networks at fixed \(\gamma\)[7, 17, 18]. To calculate such corrections, in App. D, we provide a complete description of Hessian \(\nabla_{\mathbf{q}}^{2}S(\mathbf{q})\) and its inverse (the propagator) for a depth-\(L\) network. This description constitutes one of our main results. The resulting expressions are lengthy and are left to App. D. Here, we discuss them at a high level. Conceptually there are two primary ingredients for obtaining the full propagator: * Hessian sub-blocks \(\kappa\) which describe the _uncoupled variances_ of the kernels, such as \[\kappa_{\mu\nu\alpha\beta}^{\Phi^{\ell}}(t,s,t^{\prime},s^{\prime})\equiv \left\langle\phi(h_{\mu}^{\ell}(t))\phi(h_{\nu}^{\ell}(s))\phi(h_{\alpha}^{ \ell}(t^{\prime}))\phi(h_{\beta}^{\ell}(s^{\prime}))\right\rangle-\Phi_{\mu \nu}^{\ell}(t,s)\Phi_{\alpha\beta}^{\ell}(t^{\prime},s^{\prime})\] (4) Similar terms also appear in other studies on finite width Bayesian inference [13, 31, 32] and in studies on kernel variance at initialization [27, 14, 29, 53]. 
* Blocks which capture the _sensitivity_ of field averages to perturbations of order parameters, such as \[D_{\mu\nu\alpha\beta}^{\Phi^{\ell}\Phi^{\ell-1}}(t,s,t^{\prime},s^{\prime})\equiv\frac{\partial\left\langle\phi(h_{\mu}^{\ell}(t))\phi(h_{\nu}^{\ell}(s))\right\rangle}{\partial\Phi_{\alpha\beta}^{\ell-1}\left(t^{\prime},s^{\prime}\right)}\,\quad D_{\mu\nu\alpha}^{G^{\ell}\Delta}(t,s,t^{\prime})\equiv\frac{\partial\left\langle g_{\mu}^{\ell}(t)g_{\nu}^{\ell}(s)\right\rangle}{\partial\Delta_{\alpha}(t^{\prime})}, \tag{5}\] where \(\Delta_{\mu}(t)=-\frac{\partial\ell(f_{\mu},y_{\mu})}{\partial f_{\mu}}|_{f_{\mu}(t)}\) are the error signals for each data point. Abstractly, we can consider the uncoupled variances \(\mathbf{\kappa}\) as "sources" of finite-width noise for each order parameter and the \(\mathbf{D}\) blocks as summarizing a directed causal graph which captures how this noise propagates in the network (through layers and network predictions). In Figure 1, we illustrate this graph, showing directed lines that represent causal influences of order parameters on fields and vice versa. For instance, if \(\Phi^{\ell}\) were perturbed, \(D^{\Phi^{\ell+1},\Phi^{\ell}}\) would quantify the resulting perturbation to \(\Phi^{\ell+1}\) through the fields \(h^{\ell+1}\). In App. D, we calculate the \(\mathbf{\kappa}\) and \(\mathbf{D}\) tensors, and show how to use them to calculate the propagator. As an example of our results: **Proposition 2**: _Partition \(\mathbf{q}\) into primal \(\mathbf{q}_{1}=\text{Vec}\{f_{\mu}(t),\Phi_{\mu\nu}^{\ell}(t,s)...\}\) and conjugate variables \(\mathbf{q}_{2}=\text{Vec}\{\hat{f}_{\mu}(t),\hat{\Phi}_{\mu\nu}^{\ell}(t,s)...\}\). Let \(\mathbf{\kappa}=\frac{\partial^{2}}{\partial\mathbf{q}_{2}\partial\mathbf{q}_{2}^{\top}}S[\mathbf{q}_{1},\mathbf{q}_{2}]\) and \(\mathbf{D}=\frac{\partial^{2}}{\partial\mathbf{q}_{2}\partial\mathbf{q}_{1}^{\top}}S[\mathbf{q}_{1},\mathbf{q}_{2}]\); then the propagator for \(\mathbf{q}_{1}\) has the form \(\mathbf{\Sigma}_{\mathbf{q}_{1}}=\mathbf{D}^{-1}\mathbf{\kappa}\left[\mathbf{D}^{-1}\right]^{\top}\) (App. D). The variables \(\mathbf{q}_{1}\) are related to network observables, while the conjugates \(\mathbf{q}_{2}\) arise as Lagrange multipliers in the DMFT calculation. From the propagator \(\mathbf{\Sigma}_{\mathbf{q}_{1}}\) we can read off the variance of network observables such as \(N\text{Var}(f_{\mu})\sim\Sigma_{f_{\mu}}\)._ The necessary order parameters for calculating the fluctuations are obtained by solving the DMFT using numerical methods introduced in [9]. We provide pseudocode for this procedure in App. E. We proceed to solve the equations defining \(\mathbf{\Sigma}\) in special cases which are illuminating and numerically feasible, including lazy training, two-layer networks, and deep linear NNs. Figure 1: The directed causal graph between DMFT order parameters (blue) and fields (green) defines the \(D\) tensors of our theory. Each arrow represents a causal dependence. \(K\) denotes the NTK. ## 5 Lazy Training Limit To gain some initial intuition about why kernel fluctuations alter learning dynamics, we first analyze the static kernel limit \(\gamma\to 0\) where features are frozen. To prevent divergence of the network in this limit, we use a background subtracted function \(\tilde{f}(\mathbf{x},\mathbf{\theta})=f(\mathbf{x},\mathbf{\theta})-f(\mathbf{x},\mathbf{\theta}_{0})\) which is identically zero at initialization [4].
For mean square error, the \(N\rightarrow\infty\) and \(\gamma\to 0\) limit is governed by \(\frac{\partial\tilde{f}(\mathbf{x})}{\partial t}=\mathbb{E}_{\mathbf{x}^{\prime}\sim\mathcal{D}}\Delta(\mathbf{x}^{\prime})K(\mathbf{x},\mathbf{x}^{\prime})\) with \(\Delta(\mathbf{x})=y(\mathbf{x})-\tilde{f}(\mathbf{x})\) (for MSE), where \(K\) is the static NTK. The finite-\(N\) initial covariance of the NTK has been analyzed in prior works [27, 13, 14], which reveal a dependence on depth and nonlinearity. Since the NTK is static in the \(\gamma\to 0\) limit, it has constant initialization variance through training. Further, all sensitivity blocks of the Hessian involving the kernels and the prediction errors \(\mathbf{\Delta}\) (such as \(D^{\Phi^{\ell},\Delta}\)) vanish. We represent the covariance of the NTK as \(\kappa(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4})=N\text{Cov}(K(\mathbf{x}_{1},\mathbf{x}_{2}),K(\mathbf{x}_{3},\mathbf{x}_{4}))\). To identify the dynamics of the error \(\mathbf{\Delta}\) covariance, we consider the eigendecomposition of the infinite-width NTK \(K_{\infty}(\mathbf{x},\mathbf{x}^{\prime})=\sum_{k}\lambda_{k}\psi_{k}(\mathbf{x})\psi_{k}(\mathbf{x}^{\prime})\) with respect to the training distribution \(\mathcal{D}\), and decompose \(\kappa\) in this basis \[\kappa_{k\ell mn}=\left\langle\kappa(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4})\psi_{k}(\mathbf{x}_{1})\psi_{\ell}(\mathbf{x}_{2})\psi_{n}(\mathbf{x}_{3})\psi_{m}(\mathbf{x}_{4})\right\rangle_{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4}\sim\mathcal{D}}, \tag{6}\] where averages are computed over the training distribution \(\mathcal{D}\). **Proposition 3**: _For MSE loss, the prediction error covariance \(\mathbf{\Sigma}^{\Delta}(t,s)=N\text{Cov}_{0}(\mathbf{\Delta}(t),\mathbf{\Delta}(s))\) satisfies a differential equation (App. G)_ \[\left(\frac{\partial}{\partial t}+\lambda_{k}\right)\left(\frac{\partial}{\partial s}+\lambda_{\ell}\right)\Sigma_{k\ell}^{\Delta}(t,s)=\sum_{nm}\kappa_{km\ell n}\Delta_{m}^{\infty}(t)\Delta_{n}^{\infty}(s), \tag{7}\] _where \(\Delta_{k}^{\infty}(t)\equiv\exp\left(-\lambda_{k}t\right)\left\langle\psi_{k}(\mathbf{x})y(\mathbf{x})\right\rangle_{\mathbf{x}}\) are the errors at infinite width for eigenmode \(k\)._ An example verifying these dynamics is provided in App. Fig. A.1. In the case where the target is an eigenfunction \(y=\psi_{k^{*}}\), the covariance has the form \(\Sigma_{k\ell}^{\Delta}(t,s)=\kappa_{k\ell k^{*}k^{*}}\frac{\exp(-\lambda_{k^{*}}(t+s))}{(\lambda_{k}-\lambda_{k^{*}})(\lambda_{\ell}-\lambda_{k^{*}})}\). If the kernel is rank one with eigenvalue \(\lambda\), then the dynamics have the simple form \(\Sigma^{\Delta}(t,s)=\kappa y^{2}\ t\ s\ e^{-\lambda(t+s)}\). We note that similar terms appear in the prediction dynamics obtained by truncating the Neural Tangent Hierarchy [10, 11]; however, those dynamics concerned small feature learning corrections rather than initialization variance (App. G.1). Corrections to the mean \(\left\langle\Delta\right\rangle\) are analyzed in App. G.2. We find that the variance and mean correction dynamics involve non-trivial coupling across eigendirections with a mixture of exponentials with timescales \(\{\lambda_{k}^{-1}\}\). ## 6 Rich Regime in Two-Layer Networks In this section, we analyze how feature learning alters the variance through training. We show a denoising effect where the signal to noise ratios of the order parameters improve with feature learning.
### Kernel and Error Coupled Fluctuations on Single Training Example In the rich regime, the kernel evolves over time but inherits fluctuations from the training errors \(\mathbf{\Delta}\). To gain insight, we first study a simplified setting where the data distribution is a single training example \(\mathbf{x}\) and single test point \(\mathbf{x_{\star}}\) in a two layer network. We will track \(\Delta(t)=y-f(\mathbf{x},t)\) and the test prediction \(f_{\star}(t)=f(\mathbf{x_{\star}},t)\). To identify the dynamics of these predictions we need the NTK \(K(t)\) on the train point, as well as the train-test NTK \(K_{\star}(t)\). In this case, all order parameters can be viewed as scalar functions of a single time index (unlike the deep network case, see App. D). **Proposition 4**: _Computing the Hessian of the DMFT action and inverting (App. H), we obtain the following covariance for \(\mathbf{q}_{1}=\text{Vec}\{\Delta(t),f_{\star}(t),K(t),K_{\star}(t)\}_{t\in\mathbb{R }_{+}}\)_ \[\mathbf{\Sigma}_{\mathbf{q}_{1}}=\begin{bmatrix}\mathbf{I}+\mathbf{\Theta}_{K}&0&\mathbf{ \Theta}_{\Delta}&0\\ -\mathbf{\Theta}_{K_{\star}}&\mathbf{I}&0&-\mathbf{\Theta}_{\Delta}\\ -\mathbf{D}&0&\mathbf{I}&0\\ -\mathbf{D}_{\star}&0&0&\mathbf{I}\end{bmatrix}^{-1}\begin{bmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&\mathbf{\kappa}&\mathbf{\kappa}_{\star\star}^{\top}\\ 0&0&\mathbf{\kappa}_{\star}&\mathbf{\kappa}_{\star\star}\end{bmatrix}\begin{bmatrix} \mathbf{I}+\mathbf{\Theta}_{K}&0&\mathbf{\Theta}_{\Delta}&0\\ -\mathbf{\Theta}_{K_{\star}}&\mathbf{I}&0&-\mathbf{\Theta}_{\Delta}\\ -\mathbf{D}&0&\mathbf{I}&0\\ -\mathbf{D}_{\star}&0&0&\mathbf{I}\end{bmatrix}^{-1\top}, \tag{8}\] _where \([\mathbf{\Theta}_{K}](t,s)=\Theta(t-s)K(s)\), \([\mathbf{\Theta}_{\Delta}](t,s)=\Theta(t-s)\Delta(s)\) are Heaviside step functions and \(D(t,s)=\left\langle\frac{\partial}{\partial\Delta(s)}(\phi(h(t))^{2}+g(t)^{2})\right\rangle\) and \(D_{\star}(t,s)=\left\langle\frac{\partial}{\partial\Delta(s)}(\phi(h(t))\phi(h_ {\star}(t))+g(t)g_{\star}(t))\right\rangle\) quantify sensitivity of the kernel to perturbations in the error signal \(\Delta(s)\). Lastly \(\kappa,\kappa_{\star},\kappa_{\star\star}\) correspond to uncoupled variances (and covariance) for \(\{K(t),K_{\star}(t)\}\)._ Figure 2: An ensemble of \(E=1000\) two layer \(N=256\) tanh networks trained on a single training point. Dashed black lines are DMFT predictions. (a) The square deviation from the infinite width DMFT scales as \(\mathcal{O}(1/N)\) for all order parameters. (b) The ensemble average NTK \(\langle K(t)\rangle\) (solid colors) and (c) ensemble average test point predictions \(f_{\star}(t)\) for a point with \(\frac{\mathbf{x}\cdot\mathbf{x}_{\star}}{D}=0.5\) closely follow the infinite width predictions (dashed black). (d) The variance (estimated over the ensemble) of the train error \(\Delta(t)=y-f(t)\) initially increases and then decreases as the training point is fit. (e) The variance of \(f_{\star}\) increases with time but decreases with \(\gamma\). (f) The variance of the NTK during feature learning experiences a transient increase before decreasing to a lower value. In Fig. 2, we plot the resulting theory (diagonal blocks of \(\mathbf{\Sigma}_{\mathbf{q}_{1}}\) from Equation 8) for two layer neural networks. As predicted by theory, all average squared deviations from the infinite width DMFT scale as \(\mathcal{O}(N^{-1})\). 
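As a rough empirical illustration of this \(\mathcal{O}(1/N)\) scaling, the sketch below (our own minimal example, not the paper's experimental code) estimates the initialization variance of the empirical NTK of a two-layer network on a single normalized input, using the kernel definitions of Eq. (2); the widths, ensemble size, and tanh activation are illustrative choices, and only the scaling with \(N\) is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
D, ensembles = 25, 2000                      # illustrative choices
phi = np.tanh

def dphi(z):
    return 1.0 - np.tanh(z) ** 2

x = rng.standard_normal(D)
x *= np.sqrt(D) / np.linalg.norm(x)          # normalize so that x.x / D = 1

def empirical_ntk(N):
    """Empirical NTK of a two-layer net at initialization: K = Phi^1 + G^1 (x.x/D),
    with Phi^1 and G^1 as in Eq. (2) and g = w * phi'(h)."""
    W = rng.standard_normal((N, D))
    w = rng.standard_normal(N)
    h = W @ x / np.sqrt(D)
    g = w * dphi(h)
    Phi1 = phi(h) @ phi(h) / N
    G1 = g @ g / N
    return Phi1 + G1 * (x @ x / D)

for N in [64, 256, 1024]:
    Ks = np.array([empirical_ntk(N) for _ in range(ensembles)])
    print(f"N={N:5d}   N * Var(K) = {N * Ks.var():.3f}")   # roughly N-independent
```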
Similarly, the average kernels \(\left\langle K\right\rangle\) and test predictions \(\left\langle f_{\star}\right\rangle\) change by a larger amount for larger \(\gamma\) (equation (64)). The experimental variances also match the theory quite accurately. The variance of the train error \(\Delta(t)\) peaks earlier and at a lower value for richer training, but all variances go to zero at late time as the model approaches the interpolation condition \(\Delta=0\). As \(\gamma\to 0\) the curve approaches \(N\) Var\((\Delta(t))\sim\kappa\ y^{2}\ t^{2}\ e^{-2t}\), where \(\kappa\) is the initial NTK variance (see Section 5). While the train prediction variance goes to zero, the test point prediction does not, with richer networks reaching a lower asymptotic variance. We suspect this dynamical effect could explain lower variance observed in feature learning networks compared to lazy networks [7, 18]. In Fig. A.2, we show that the reduction in variance is not due to a reduction in the uncoupled variance \(\kappa(t,s)\), which increases in \(\gamma\). Rather the reduction in variance is driven by the coupling of perturbations across time. ### Offline Training with Multiple Samples In this section we go beyond the single sample equations of the prior section and explore training with multiple \(P\) examples. In this case, we have training errors \(\{\Delta_{\mu}(t)\}_{\mu=1}^{P}\) and multiple kernel entries \(K_{\mu\nu}(t)\) (App. D). Each of the errors \(\Delta_{\mu}(t)\) receives a \(\mathcal{O}(N^{-1/2})\) fluctuation, the training error \(\sum_{\mu}\left\langle\Delta_{\mu}^{2}\right\rangle\) has an additional variance on the order of \(\mathcal{O}(\frac{P}{N})\). In the case of two-layer linear networks trained on whitened data (\(\frac{1}{D}\mathbf{x}_{\mu}\cdot\mathbf{x}_{\nu}=\delta_{\mu\nu}\)), the equations for the propagator simplify and one can separately solve for the variance of \(\mathbf{\Delta}(t)\in\mathbb{R}^{P}\) along signal direction \(\mathbf{y}\in\mathbb{R}^{P}\) and along each of the \(P-1\) orthogonal directions (App. I). At infinite width, the task-orthogonal component \(\mathbf{\Delta}_{\perp}\) vanishes and only the signal dimension \(\Delta_{y}(t)\) evolves in time with differential equation [9, 46] \[\frac{d}{dt}\Delta_{y}(t)=2\sqrt{1+\gamma^{2}(y-\Delta_{y}(t))^{2}}\ \Delta_{y}(t)\,\ \mathbf{\Delta}_{\perp}(t)=0. \tag{9}\] However, at finite width, both the \(\Delta_{y}(t)\) and the \(P-1\) orthogonal variables \(\mathbf{\Delta}_{\perp}\) inherit initialization variance, which we represent as \(\Sigma_{\Delta_{y}}(t,s)\) and \(\Sigma_{\perp}(t,s)\). In Fig. 3 (a)-(b) we show this approximate solution \(\left\langle|\mathbf{\Delta}(t)|^{2}\right\rangle\sim\Delta_{y}(t)^{2}+\frac{2}{N }\Delta_{y}^{1}(t)\Delta_{y}(t)+\frac{1}{N}\Sigma_{\Delta_{y}}(t,t)+\frac{(P- 1)}{N}\Sigma_{\perp}(t,t)+\mathcal{O}(N^{-2})\) across varying \(\gamma\) and varying \(P\) (see Appendix I for \(\Sigma_{\Delta_{y}}\) and \(\Sigma_{\perp}\) formulas). We see that variance of train point predictions \(f_{\mu}(t)\) increases with the total number of points despite the signal of the target vector \(\sum_{\mu}y_{\mu}^{2}\) being fixed. In this model, the bias correction \(\frac{2}{N}\Delta_{y}^{1}(t)\Delta_{y}(t)\) is always \(\mathcal{O}(1/N)\) but the variance correction is \(\mathcal{O}(P/N)\). The fluctuations along the \(P-1\) orthogonal directions begin to dominate the variance at large \(P\). Fig. 
3 (b) shows that as \(P\) increases, the leading order approximation breaks down as higher order terms become relevant. Figure 3: Large input dimension or multiple samples amplify finite size effects in a simple two layer model. Black dashed lines are theory. (a) The variance of offline learning with \(P\) training examples in a two layer linear network. (b) The leading perturbative approximation to the train error breaks down when the sample size \(P\) becomes comparable to \(N\). (c)-(d) Richer training reduces variance. ## 7 Deep Networks In networks deeper than two layers, the DMFT propagator has complicated dependence on non-diagonal (in time) entries of the feature kernels (see App. D). This leads to Hessian blocks with four time and four sample indices such as \(D_{\mu\nu\alpha\beta}^{\Phi^{\ell}}(t,s,t^{\prime},s^{\prime})=\frac{\partial}{\partial\Phi^{\ell-1}_{\alpha\beta}(t^{\prime},s^{\prime})}\left\langle\phi(h_{\mu}^{\ell}(t))\phi(h_{\nu}^{\ell}(s))\right\rangle\), rendering any numerical calculation challenging. However, in deep linear networks trained on whitened data, we can exploit the symmetry in sample space and the Gaussianity of preactivation features to exactly compute derivatives without Monte Carlo sampling, as we discuss in App. K. An example set of results for a depth 4 network is provided in Fig. 4. The feature kernels \(H^{\ell}\) accumulate finite size variance layer by layer along the forward pass and the gradient kernels \(G^{\ell}\) accumulate variance on the backward pass. The SNR of the kernels \(\frac{\langle H\rangle^{2}}{N\text{Var}(H)}\) improves with feature learning, suggesting that richer networks will be better modeled by their mean field limits. Examples of the off-diagonal correlations obtained from the propagator are provided in App. Fig. A.5. Figure 4: Depth 4 linear network with single training point. Black dashed lines are theory. (a) The variance of the training error along the task relevant subspace. We see that unlike the two layer model, more feature learning can lead to larger peaks in the finite size variance. (b) The variance of the NTK in the task relevant subspace. When properly normalized against the square of the mean \(\left\langle K(t)\right\rangle^{2}\), the final NTK variance decreases with feature learning. (c) The gap in feature kernel variance across different layers of the network is amplified by feature learning strength \(\gamma\). ## 8 Variance can be Small Near Edge of Stability In this section, we move beyond the gradient flow formalism and ask what large step sizes do to finite size effects. Recent studies have identified that networks trained at large learning rates can be qualitatively different from networks in the gradient flow regime, including the catapult [57] and edge of stability (EOS) phenomena [19, 20, 21]. In these settings, the kernel undergoes an initial scale growth before exhibiting either a recovery or a clipping effect. Here, we explore whether these dynamics are highly sensitive to initialization variance or if finite networks are well captured by mean field theory. Following [57], we consider two layer networks trained on a single example with \(|\mathbf{x}|^{2}=D\) and \(y=1\). We use learning rate \(\eta\) and feature learning strength \(\gamma\). The infinite width mean field equations for the prediction \(f_{t}\) and the kernel \(K_{t}\) are (App.
L) \[f_{t+1}=f_{t}+\eta K_{t}\Delta_{t}+\eta^{2}\gamma^{2}f_{t}\Delta_{t}^{2}\,\ K_{t+1}=K_{t}+4\eta\gamma^{2}f_{t}\Delta_{t}+\eta^{2}\gamma^{2}\Delta_{t}^{ 2}K_{t}. \tag{10}\] For small \(\eta\), the equations are well approximated by the gradient flow limit and for small \(\gamma\) corresponds to a discrete time linear model. For large \(\eta\gamma>1\), the kernel \(K\) progressively sharpens (increases in scale) until it reaches \(2/\eta\) and then oscillates around this value. It may be expected that near the EOS, the large oscillations in the kernels and predictions could lead to amplified finite size effects, however, we show in Fig. 5 that the leading order propagator elements decrease even after reaching the EOS threshold, indicating _reduced_ disagreement between finite and infinite width dynamics. ## 9 Finite Width Alters Bias, Training Rate, and Variance in Realistic Tasks To analyze the effect of finite width on neural network dynamics during realistic learning tasks, we studied a vanilla depth-6 ReLU CNN trained on CIFAR-10 (experimental details in App. B, F.2) In Fig. 6, we train an ensemble of \(E=8\) independently initialized CNNs of each width \(N\). Wider networks not only have better performance for a single model (solid), but also have lower bias (dashed), measured with ensemble averaging of the logits. Because of faster convergence of wide networks, we observe wider networks have higher variance, but if we plot variance at fixed ensembled training accuracy, wider networks have consistently lower variance (Fig. 6(d)). We next seek an explanation for why wider networks after ensembling trains at a faster _rate_. Theoretically, this can be rationalized by a finite-width alteration to the ensemble averaged NTK, which governs the convergence timescale of the ensembled predictions (App. F.1). Our analysis in App. F.1 suggests that the rate of convergence receives a finite size correction with leading correction \(\mathcal{O}(N^{-1})\) F.2. To test this hypothesis, we fit the ensemble training loss curve to exponential function \(\mathcal{L}\approx C\exp{(-R_{N}t)}\) where \(C\) is a constant. We plot the fit \(R_{N}\) as a function of \(N^{-1}\) result in Fig. 6(e). For large \(N\), we see the leading behavior is linear in \(N^{-1}\), but begins to deviate at small \(N\) as a quadratic function of \(N^{-1}\), suggesting that second order effects become relevant around \(N\lesssim 100\) on CIFAR-10. In App. Fig. A.6, we train a smaller subset of CIFAR-10 where we find that \(R_{N}\) is well approximated by a \(\mathcal{O}(N^{-1})\) correction, consistent with the idea that higher sample size drives the dynamics out of the leading order picture. We also analyze the effect of \(\gamma\) on variance in this task. In App. Fig. A.7, we train \(N=64\) models with varying \(\gamma\). Increased \(\gamma\) reduces variance of the logits and alters the representation (measured with kernel-task alignment), the training and test accuracy are roughly insensitive to the richness \(\gamma\) in the range we considered. Figure 5: Edge of stability effects do not imply deviations from infinite width behavior. Black dashed lines are theory. (a) The loss dynamics for width \(N=500\) networks (solid colors) compared to the infinite width DMFT (dashed black). (b) The average kernel over an ensemble of several NNs. For small \(\gamma\), the kernel reaches its asymptote before hitting the edge of stability. For large \(\gamma\), the kernel increases and then oscillates around \(2/\eta\). 
(c)-(d) Remarkably, variance due to finite size can reduce during training on both sides of the edge of stability (for \(\gamma\) smaller and larger than the critical value \(\sim 1/\eta\)), suggesting that infinite width theory can be predictive of wide but finite networks even in the large learning rate regime. Figure 6: Depth 6 CNN trained on CIFAR-10 for different widths \(N\) with richness \(\gamma=0.2\), \(E=8\) ensembles. (a)-(b) For this range of widths, we find that smaller networks perform worse in train and test error, not only in terms of the single models (solid) but also in terms of bias (dashed). The delayed training of ensembled finite width models indicates that the correction to the mean order parameters (App. F) is non-negligible. (c) Alignment of the average kernel to test labels is also not conserved across width. (d) The ratio of the test MSE for a single model to the ensembled logit MSE. (e) The fitted rate \(R_{N}\) of training width \(N\) models as a function of \(N^{-1}\). We rescale the time axis by \(R_{N}\) to allow for a fair comparison of prediction variance for networks at comparable performance levels. (f) In rescaled time, ensembled network training losses (dashed) are coincident. ## 10 Discussion We studied the leading order fluctuations of kernels and predictions in mean field neural networks. Feature learning dynamics can reduce undesirable finite size variance, making finite networks' order parameters closer to the infinite width limit. In several toy models, we revealed some interesting connections between feature learning, depth, sample size, and large learning rates and the variance of various DMFT order parameters. Lastly, in realistic tasks, we illustrated that bias corrections can be significant, as rates of learning can be modified by width. Though our full set of equations for the leading finite size fluctuations is quite general in terms of network architecture and data structure, the leading term involving only \(\mathbf{\Sigma}\) does not capture the complete finite size distribution defined in Eq. (3), especially as the sample size becomes comparable to the width. Future work could explore in greater detail the higher order contributions from averages involving powers of \(V(\mathbf{q})\) by examining cubic and higher derivatives of \(S\) in Eq. (3). It could also be worth examining in future work how finite size impacts other biologically plausible learning rules, where the effective NTK can have asymmetric (over sample index) fluctuations [46]. Further, even though we expect our perturbative expressions to give a precise asymptotic description of finite networks in mean field/\(\mu\)P, the resulting expressions are not realistically computable in deep networks trained on large dataset size \(P\) for long times \(T\), since the number of Hessian entries scales as \(\mathcal{O}(T^{4}P^{4})\) and a matrix of this size must be stored in memory and inverted in the general case. Future work could explore solvable special cases or high dimensional limits where the analysis may simplify. ### Acknowledgements This work was supported by NSF Award DMS-2134157. BB thanks Alex Atanasov, Jacob Zavatone-Veth, Boris Hanin and Greg Yang for helpful discussions and for their comments on this manuscript. BB also acknowledges Jeremy Cohen for discussions about large step size dynamical effects and their relationship to network width.
2307.02246
S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning
Few-shot class-incremental learning (FSCIL) aims to learn progressively about new classes with very few labeled samples, without forgetting the knowledge of already learnt classes. FSCIL suffers from two major challenges: (i) over-fitting on the new classes due to limited amount of data, (ii) catastrophically forgetting about the old classes due to unavailability of data from these classes in the incremental stages. In this work, we propose a self-supervised stochastic classifier (S3C) to counter both these challenges in FSCIL. The stochasticity of the classifier weights (or class prototypes) not only mitigates the adverse effect of absence of large number of samples of the new classes, but also the absence of samples from previously learnt classes during the incremental steps. This is complemented by the self-supervision component, which helps to learn features from the base classes which generalize well to unseen classes that are encountered in future, thus reducing catastrophic forgetting. Extensive evaluation on three benchmark datasets using multiple evaluation metrics show the effectiveness of the proposed framework. We also experiment on two additional realistic scenarios of FSCIL, namely where the number of annotated data available for each of the new classes can be different, and also where the number of base classes is much lesser, and show that the proposed S3C performs significantly better than the state-of-the-art for all these challenging scenarios.
Jayateja Kalla, Soma Biswas
2023-07-05T12:41:46Z
http://arxiv.org/abs/2307.02246v1
# S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning ###### Abstract Few-shot class-incremental learning (FSCIL) aims to learn progressively about new classes with very few labeled samples, without forgetting the knowledge of already learnt classes. FSCIL suffers from two major challenges: (i) _over-fitting_ on the new classes due to limited amount of data, (ii) _catastrophically forgetting_ about the old classes due to unavailability of data from these classes in the incremental stages. In this work, we propose a self-supervised stochastic classifier (S3C)1 to counter both these challenges in FSCIL. The stochasticity of the classifier weights (or class prototypes) not only mitigates the adverse effect of absence of large number of samples of the new classes, but also the absence of samples from previously learnt classes during the incremental steps. This is complemented by the self-supervision component, which helps to learn features from the base classes which generalize well to unseen classes that are encountered in future, thus reducing catastrophic forgetting. Extensive evaluation on three benchmark datasets using multiple evaluation metrics show the effectiveness of the proposed framework. We also experiment on two additional realistic scenarios of FSCIL, namely where the number of annotated data available for each of the new classes can be different, and also where the number of base classes is much lesser, and show that the proposed S3C performs significantly better than the state-of-the-art for all these challenging scenarios. Footnote 1: code: [https://github.com/JAYATEJAK/S3C](https://github.com/JAYATEJAK/S3C) Keywords:few-shot class-incremental learning, stochastic classifiers, self-supervised learning ## 1 Introduction In recent years, Deep Neural Networks (DNN) have shown significant performance improvement on various computer vision applications [19, 27, 29]. Usually, the DNN models require enormous amount of annotated data from all the classes of interest to be available for training. In real-world, since data from different classes may become available at different instants of time, we want the model to learn about the new classes incrementally without forgetting about the old classes, which is precisely the task addressed in Class-Incremental Learning (CIL). CIL approaches are very useful and practical, not only because it is computationally expensive and time-consuming to retrain the model from scratch, but also data from the previous classes may not be available due to storage and privacy issues. Since collecting large number of annotated data from all the new classes is also very difficult, recently, the more challenging but realistic few-shot class-incremental learning (FSCIL) is gaining increasing attention, where the new classes have few labeled samples per class [35]. In FSCIL, a model is first learnt using a set of base classes with large number of labeled examples per class. At each incremental step (task), the model has access to a few labeled samples of the new classes and a single prototype for each of the previously learnt classes. The goal is to learn a unified classifier to recognize the old as well as the new classes, without having access to any task labels. This helps the model to quickly learn about the new classes without requiring to collect and annotate large amounts of data for the new classes. 
FSCIL faces two major challenges, namely overfitting due to limited samples for the new classes, and catastrophic forgetting of the already learnt classes due to absence of old classes data at the incremental steps. In this work, we propose a novel framework, S3C (**S**elf-**S**upervised **S**tochastic **C**lassifier) to simultaneously address both these challenges in the FSCIL setting. Unlike the standard classifiers, stochastic classifiers (SC) are represented by weight distributions, i.e. a mean and variance vector [24]. Thus, each classifier weight sampled from this distribution is expected to correctly classify the input samples. We show for the first time, that SC learnt for both the base and new classes can significantly reduce the over-fitting problem on the new classes for FSCIL task. It can also arrest the catastrophic forgetting of the previously learnt classes to a certain extent. As is common in most FSCIL approaches [44, 25], we propose to freeze the feature extractor and learn only the SC at each incremental step. In order to compute features from the base classes which generalize to unseen classes, inspired by recent works [22, 47], we use self-supervision along with SC giving our final S3C framework. As expected, this helps to significantly mitigate the effect of catastrophic forgetting, while at the same time retaining the advantage on the new classes. To this end, our contributions are as follows: 1. We propose a novel framework, termed S3C (Self-Supervised Stochastic Classifier) to address the FSCIL task. 2. We show that stochastic classifiers can help to significantly reduce over-fitting on the new classes with limited amount of data for FSCIL. 3. We also show that self-supervision with stochastic classifier can be used to better retain the information of the base classes, without hindering the enhanced performance of the stochastic classifiers for the new classes. 4. We set the new state-of-the-art for three benchmark datasets, namely CIFAR100 [18], CUB200 [37] and miniImageNet [44]. 5. We also propose and evaluate on two additional, realistic FSCIL settings, namely FSCIL-im (FSCIL-imbalanced) - where the new classes may have different number of samples/class and (ii) FSCIL-lb (FSCIL-less base) - where there are less number of base classes, which further justifies the effectiveness of the proposed S3C framework. ## 2 Related Works Here, we provide some pointers to the related work in literature. **Class-Incremental Learning (CIL):** The goal of CIL is to learn new classes progressively without any task information. Due to plenty of annotated new class data, mitigating catastrophic forgetting is a challenging problem. LwF [23] proposed to use knowledge distillation [15] to alleviate catastrophic forgetting. iCaRL [31] showed that nearest classifier mean (NCM) using old class exemplars can generate robust classifiers for CIL. EEIL [7] used knowledge distillation to remember old classes and cross-entropy to learn new classes in an end-to-end training. UCIR [16] proposed cosine-based classifiers and used feature-space distillation and inter-class separation margin loss to mitigate catastrophic forgetting. Several state-of-art-works [38, 45, 2, 12, 3] proposed different techniques to address the class imbalance problem in CIL like rescaling scores or balanced finetuning of classifiers, etc. Some of the recent works [47, 41, 46] have focused on non-exemplar based methods, with no access to exemplars from the old classes. 
**Few-Shot Class-Incremental Learning (FSCIL):** Recently, there has been a significant focus on the more realistic and challenging FSCIL task, where very few samples per class are available for training at each incremental task. Tao _et al._[35] proposed this protocol and used neural network gas architecture to preserve the feature topologies of the base and new classes. Mazumder _et al._[25] proposed to identify unimportant parameters in the model based on their magnitudes and learn only these parameters during the incremental tasks. The works proposed in [9, 8, 1, 10, 48, 21, 32] focus on learning robust manifolds by regularizing feature space representations. The works in [11, 34, 44] used graph-based networks for old classes' knowledge retention. Recently, CEC [44] proposed a meta-learning strategy and achieved state-of-art results for the FSCIL setting. **Self-Supervised Learning (SSL):** SSL uses predefined pretext tasks to learn features from unlabeled data. Different pretext tasks have been proposed like image rotations [17], image colourization [20], clustering [6], and solving jigsaw puzzles from image patch permutations [28]. These features can notably improve the performance of downstream tasks like few-shot learning [13], semi-supervised learning [43], to improve the model robustness [14], class imbalance [40], etc. Recently, Lee _et al._[22] used SSL to improve the performance for supervised classification, by augmenting the original labels using the input transformations. In this work, we show that SSL [22] can be used very effectively for the FSCIL task. **Stochastic Neural Networks:** Traditional neural networks cannot model uncertainty well due to their deterministic nature [5]. Stochastic neural networks [26] give robust representations in the form of distributions. Subedar _et al._[33] proposed uncertainty aware variational layers for activity recognition. Recently, it has been used for person re-identification [42] and unsupervised domain adaptation [24] tasks. ## 3 Problem Definition and Notations Here, we explain the FSCIL task, which consists of a base task and several incremental stages, and also the notations used in the rest of the paper. In the base task, the goal is to learn a classifier using large number of labeled samples from several base classes. At each incremental step, using a few labeled samples per new class and a single class prototype of the old (previously learnt) classes, the model needs to be updated such that it can classify both the old and the new classes. Let \(\mathcal{D}^{(0)}\) denote the base task which contains large number of annotated data from classes \(\mathcal{C}^{(0)}\). Let the incremental task data be denoted as \(\{\mathcal{D}^{(1)},..,\mathcal{D}^{(t)},..,\mathcal{D}^{(\mathcal{T})}\}\), and the corresponding label spaces be denoted as \(\mathcal{C}^{(t)}\), where \(t=1,\ldots,\mathcal{T}\). Thus, the model will learn a total of \(\mathcal{T}\) tasks incrementally and there is no overlap in the label space between the different tasks, i.e. \(C^{(t)}\cap C^{(s)}=\phi\); (\(t\neq s\)). Once the model has learned on the data \(\mathcal{D}^{(t)}\), it has to perform well on all the classes seen so far i.e \(\{C^{(0)}\cup C^{(1)}\cup\cdots\cup C^{(t)}\}\). ## 4 Proposed Method Here, we describe the proposed S3C framework for the FSCIL task. In many of the initial FSCIL approaches [35, 25, 8], the main focus was to develop novel techniques for the incremental step to prevent catastrophic forgetting and overfitting. 
Recently, CEC [44] showed that the base network training has a profound effect on the performance of the incremental tasks. Using appropriate modifications while learning the base classifier can significantly enhance not only the base class accuracies, but also the performance for the incrementally added classes. Even without any fine-tuning during the incremental steps, CEC reports the state-of-the-art results for FSCIL. In the proposed S3C framework, we combine the advantages of both these techniques and propose to not only improve the base classifier training, but also update all the classifiers during the incremental steps. First, we describe the two main modules of S3C, namely the Stochastic Classifier and Self-Supervision, and then discuss how to integrate them. **Stochastic Classifier:** One of the major challenges in FSCIL is the small number of annotated samples available per class at each incremental step. This may result in overfitting on the few examples and learning classification boundaries which do not generalize well on the test data. Now, we discuss how stochastic classifiers can be used to mitigate this problem. In this work, we use cosine similarity between the features and the classifier weights to compute the class score for that particular feature. For a given input image \(\mathbf{x}\) from class \(C_{i}\), let us denote its feature vector as \(f_{\theta}(\mathbf{x})\), where the parameters of the feature extractor \(f\) are denoted by \(\theta\). Let the classifier weights corresponding to class \(C_{i}\) be denoted as \(\phi_{i}\). Then the cosine similarity of the feature with this classifier weight can be computed as \(\langle\overline{\phi_{i}},\overline{f_{\theta}(\mathbf{x})}\rangle\), where \(\overline{u}=u/||u||_{2}\) denotes the \(l_{2}\) normalized vector. Fig. 1(a) shows the normalized feature vector and classifier weights for two classes, \(C_{i}\) and \(C_{j}\). The green shaded area denotes the region where \(f_{\theta}(\mathbf{x})\) will be correctly classified to class \(C_{i}\), and \(m_{ij}\) is the classification boundary between the two classifiers (considering only the upper sector between \(\phi_{i}\) and \(\phi_{j}\)). Now, instead of a single classifier, let us learn two different classifiers for each class (e.g. \(\phi_{i}^{1}\) and \(\phi_{i}^{2}\) for class \(C_{i}\)). In Fig. 1 (b), \(\{m_{ij}^{11},m_{ij}^{12},m_{ij}^{21},m_{ij}^{22}\}\) are the four classification boundaries for the four combinations of classifiers. To ensure that the input data is correctly classified using all the classifiers, the feature embedding \(f_{\theta}(\mathbf{x})\) has to move closer to the classifier of its correct class, thus making the samples of a class better clustered and further from samples of other classes. But it is difficult to choose (and compute) how many classifiers should be used. By using a stochastic classifier (Fig. 1 (c)), we can ensure that we have infinitely many such classifiers around the mean classifier. Using a stochastic classifier \(\psi=\{\mu,\sigma\}\) at the classification head resembles the use of multiple classifiers, where \(\mu\) and \(\sigma\) denote the mean and variance of the classifier \(\psi\). For a given input image \(\mathbf{x}\), the output score of the stochastic classifier is proportional to \(\langle\widehat{\mu},\overline{f_{\theta}(\mathbf{x})}\rangle\) (\(\hat{\mu}=\mu+\mathcal{N}(0,1)\odot\sigma\)), where the classifier is sampled from the distribution.
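To make this scoring rule concrete, here is a minimal PyTorch-style sketch of a stochastic cosine classifier (our own illustration of the sampling step \(\hat{\mu}=\mu+\mathcal{N}(0,1)\odot\sigma\) and the cosine score; the softplus parameterization of \(\sigma\), the evaluation-time use of the mean classifier, and all sizes are our assumptions, not details taken from the paper).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticCosineClassifier(nn.Module):
    """Cosine classifier whose weights are sampled as mu + eps * sigma on each forward pass."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.mu = nn.Parameter(0.01 * torch.randn(num_classes, feat_dim))
        # sigma is kept positive through a softplus of an unconstrained parameter (our choice)
        self.rho = nn.Parameter(torch.full((num_classes, feat_dim), -3.0))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        sigma = F.softplus(self.rho)
        if self.training:
            w = self.mu + torch.randn_like(sigma) * sigma  # hat(mu) = mu + N(0,1) * sigma
        else:
            w = self.mu          # assumption: use the mean classifier at evaluation time
        w = F.normalize(w, dim=1)                          # normalized classifier weights
        f = F.normalize(features, dim=1)                   # normalized features f_theta(x)
        return f @ w.t()                                   # cosine score per class

# usage sketch: scores = StochasticCosineClassifier(64, 60)(torch.randn(8, 64))
```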
This use of sampled classifiers is similar to feature augmentation, which is also commonly used [47]. There are two main advantages of using a stochastic classifier instead of feature augmentations: (1) Instead of using a fixed variance for the features (which has to be manually calculated), the means and variances used in the proposed framework are automatically learnt in an end-to-end manner. (2) The means and variances learnt using the base classes also help to initialize the corresponding parameters for the new classes in a semantically meaningful manner, as explained later. **Self-supervision:** At the incremental stages, due to the presence of only a few examples from the new classes, most of the FSCIL approaches either fix the feature extractor after learning the base classes [44, 9] or fine-tune it with a very small learning rate [35, 25, 8], so that it does not change significantly. This reduces catastrophic forgetting as well as overfitting. Figure 1: Classification boundary between two classes for (a) a standard classifier, (b) two classifiers per class, and (c) a stochastic classifier. The margin in (c) results in more discriminative classification boundaries. In our work, we fix the feature extractor after learning the base classes and only fine-tune the classifiers. To make the base feature extractor generalize well to unseen data, we propose to use self-supervision for the base classifier training as well as during the incremental learning stages. Since self-supervised training does not use class labels, more generic features can be learnt, which can generalize well to unseen classes. SSL has been used successfully for several tasks [43, 14, 40, 13, 47], including the standard class-incremental setting [47]. Here, we use the recently proposed SSL approach [22], where image augmentations are used to generate artificial labels, which are used to train the classification layer. For a given input image \(\mathbf{x}\), let the augmented versions be denoted as \(\mathbf{\widetilde{x}}_{r}=t_{r}(\mathbf{x})\), where \(\{t_{r}\}_{r=1}^{M}\) denotes pre-defined transformations. In this work, we use images rotated by \(\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}\), i.e., \(M=4\), as the augmented images. We show that the feature extractor learnt using self-supervision performs very well in the incremental stages. First, we describe the integrated S3C loss which is used in the training process. **Construction of S3C loss:** At task \(t\), \(C_{i}^{(s)}\) denotes the \(i^{th}\) class in task \(s\in\{0,1,..,t\}\). Then its corresponding stochastic classifier is denoted as \(\psi_{i}^{(s)}\) with mean \(\mu_{i}^{(s)}\) and variance \(\sigma_{i}^{(s)}\). To integrate the stochastic classifiers with self-supervision, for each class, we create four classifier heads corresponding to each of the four rotations as in [22]. In this work, we want to jointly predict the class and its rotation \(r\in\{0^{\circ},90^{\circ},180^{\circ},270^{\circ}\}\), thus we denote the final classifiers as \(\psi_{i,r}^{(s)}\), with individual means \((\mu_{i,r}^{(s)})\), but with the same class-wise variance \((\sigma_{i}^{(s)})\). Since the same data is present in different rotations, we enforce that the classifiers for the same class share the same variances, which reduces the number of parameters to be computed.
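As an illustration of how the rotation-based self-supervision of [22] could be wired into a training loop, here is a small sketch; the helper and its flat joint-label indexing of class and rotation are our own assumptions, not necessarily the authors' implementation.

```python
import torch

def make_rotation_batch(images: torch.Tensor, labels: torch.Tensor):
    """Create the 4 rotated copies of each image and the joint (class, rotation) labels.

    images: (B, C, H, W), labels: (B,) class indices.
    Returns images of shape (4B, C, H, W) and joint labels of shape (4B,),
    where joint_label = class_index * 4 + rotation_index (an illustrative indexing choice).
    """
    rotated, joint_labels = [], []
    for r in range(4):  # 0, 90, 180, 270 degrees
        rotated.append(torch.rot90(images, k=r, dims=(2, 3)))
        joint_labels.append(labels * 4 + r)
    return torch.cat(rotated, dim=0), torch.cat(joint_labels, dim=0)
```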
Thus, the joint softmax output of a given sample \(\mathbf{x}\) for class \(C_{i}^{(s)}\) at the \(r^{th}\) rotation is given by \[\rho_{ir}^{(s)}(\mathbf{x};\theta,\psi^{(0:t)})=\frac{\exp(\eta\ \langle\ \overline{\hat{\mu}_{ir}^{(s)}},\overline{f_{\theta}(\mathbf{x})}\ \rangle)}{\sum\limits_{j=0}^{t}\ \sum\limits_{k=1}^{|C^{(j)}|}\sum\limits_{l=1}^{M}\exp(\eta\ \langle\ \overline{\hat{\mu}_{kl}^{(j)}},\overline{f_{\theta}(\mathbf{x})}\ \rangle)} \tag{1}\] where \(\hat{\mu}_{ir}^{(j)}=\mu_{ir}^{(j)}+\mathcal{N}(0,1)\odot\sigma_{i}^{(j)}\) represents the weight sampled from the stochastic classifier \(\psi_{ir}^{(j)}\), and \(\eta\) is a scaling factor used to control the peakiness of the softmax distribution. Finally, the S3C training objective for a training sample \(\mathbf{x}\) with label \(y\) from task \(s\) can be written as \[\mathcal{L}_{S3C}(\mathbf{x},y;\theta,\psi^{(0:t)})=-\frac{1}{M}\sum\limits_{r=1}^{M}\log(\rho_{yr}^{(s)}(\mathbf{\widetilde{x}}_{r};\theta,\psi^{(0:t)})) \tag{2}\] This implies that the input image is transformed using the chosen image transformations (4 rotations in this work) and the loss is combined for that input. Note that the first transformation corresponding to \(0^{\circ}\) is the identity transformation (i.e., the original data itself). We now describe the base and incremental stage training of the S3C framework (Fig. 2). ### Base Network Training of S3C In the FSCIL setting, we assume that we have access to several base classes with a sufficient number of annotated samples for base training. Given the data from the base classes \(\mathcal{C}^{(0)}\), we use a base network (ResNet20 for CIFAR100 and ResNet18 for CUB200 and miniImageNet) along with a Graph Attention Network, inspired by [44, 36]. We train the base network, i.e., the feature extractor with parameters \(\theta\) and the stochastic classifiers corresponding to the base classes (\(\psi^{(0)}\)), with the S3C objective \(\mathcal{L}_{base}=\mathcal{L}_{S3C}(\mathbf{x},y;\theta,\psi^{(0)})\), where the base training data is given by \(\{\mathbf{x},y\}\in\mathcal{D}^{(0)}\). The proposed objective improves the performance of the base classes, in addition to that of the new classes that will be encountered in the incremental stages, as we will observe in the experimental evaluation. ### Preparing for the incremental step After the base classifier training, the training data of the base classes may not be available any longer. This may be due to limited storage capacity, privacy issues, etc. After the first incremental step, we want the unified classifier to perform well on the base as well as on the new classes. For this, to mitigate catastrophic forgetting of the base classes, their class prototypes are stored, as is the common practice [35, 44]. These stored class prototypes can be treated as class representatives of the base classes and thus can be used for updating the network at the incremental step. The class prototypes are computed by averaging the training features given by the feature extractor (\(f_{\theta}(\cdot)\)) for each class. This is done not only at the end of the base training, but after each incremental step as well, i.e., after incremental step \(t\), we store the class prototypes of all the classes that the model has encountered till step \(t\). The class prototype set \(\mathbf{P}^{(t)}\) contains the class prototypes of the classes encountered in task \(t\). Figure 2: Illustration of the proposed S3C framework. Left: base network training; right: training at each incremental step. The class prototype \(P_{i}^{(t)}\) after task
\(t\) for the \(i^{th}\) class is calculated as \[P_{i}^{(t)}=\frac{1}{N_{i}^{(t)}}\sum_{n=1}^{N^{(t)}}\mathbb{I}_{(y_{n}=i)}\ f_{\theta}(\mathbf{x}_{n}) \tag{3}\] where \(N^{(t)}\) is the number of samples in the dataset \(\mathcal{D}^{(t)}\), \(N_{i}^{(t)}\) is the number of samples in the \(i^{th}\) class of task \(t\), and \(\{\mathbf{x}_{n},y_{n}\}_{n=1}^{N^{(t)}}\in\mathcal{D}^{(t)}\). The indicator variable \(\mathbb{I}_{(y_{n}=i)}\) is 1 if the sample belongs to the \(i^{th}\) class (i.e., \(y_{n}=i\)). Thus, the class prototype set is updated at the end of each task. ### Incremental Step Here, we discuss the training process involved in each incremental step. As in [44, 9], we propose to freeze the already learnt feature extractor, since the self-supervision has ensured that it will generalize well to previously unseen classes. This also helps in mitigating the catastrophic forgetting and over-fitting problems. In our work, we propose to update the classifiers of the previous as well as the new classes with the stored class prototypes and the few examples of the new classes. This helps the model better adapt to the new set of classes. Now, we discuss how to initialize the stochastic classifiers for the new classes. _Initialization of the Stochastic Classifiers of the new classes:_ For the new classes, we need to initialize the stochastic classifiers before fine-tuning. The means are initialized with the centroid of the features for that class (calculated using the previous model). We initialize the variances of the new classes using that of the most semantically similar class from the base set. Semantic similarity is computed using GloVe embeddings [30] of the base and new class names. _Fine-tuning the classifiers:_ With this initialization, we fine-tune the classifiers of the new as well as the previous classes using the few labeled examples of the new classes and the stored class prototypes of the previous classes. Let \(q\in\mathbf{P}^{(0:t-1)}\) be a prototype from any old class; then the joint softmax output of the stochastic classifier for the \(i^{th}\) class and \(r^{th}\) rotation of task \(s\) is \[\zeta_{ir}^{(s)}(q;\psi^{(0:t)})=\frac{\exp(\eta\ \langle\ \overline{\hat{\mu}_{ir}^{(s)}},\overline{q}\ \rangle)}{\sum\limits_{j=0}^{t}\sum\limits_{k=1}^{|C^{(j)}|}\sum\limits_{l=1}^{M}\exp(\eta\ \langle\ \overline{\hat{\mu}_{kl}^{(j)}},\overline{q}\ \rangle)} \tag{4}\] For a fair comparison with the state-of-the-art approaches, we only store a single class prototype per class, corresponding to the original images (i.e., \(0^{\circ}\) rotation). Thus, for the previous classes, only the parameters of the stochastic classifier corresponding to the \(0^{\circ}\) rotation are updated. To mitigate catastrophic forgetting, we use a cross-entropy loss based on the class prototypes as \[\mathcal{L}_{proto}(q,\tilde{y};\psi^{(0:t)})=-\log(\zeta_{\tilde{y}r}^{(s)}(q;\psi^{(0:t)})) \tag{5}\] where \(\tilde{y}\) is the class label of the prototype in task \(s\), and \(r\) is fixed to the \(0^{\circ}\) rotation since the prototypes are computed from unrotated images. For the new classes, very few labeled samples per class are available. Since the few examples cannot cover the entire distribution, generalization to new classes is quite challenging. As discussed before, we propose to use stochastic classifiers, which mitigate the problem of overfitting and generalize well to the new classes even with few examples. To this end, we calculate a loss as in equation (2) on the new task data using the stochastic classifiers.
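As a concrete illustration of Eq. (3), the stored class prototypes could be computed with a routine along the following lines; this is a sketch with our own function and variable names, assuming a frozen feature extractor \(f_{\theta}\) that returns one feature vector per image.

```python
import torch

def compute_class_prototypes(feature_extractor, dataloader, device="cuda"):
    """Average the frozen-backbone features per class to obtain prototypes (cf. Eq. (3)).

    Illustrative sketch only; `feature_extractor` is assumed to return (B, D) features,
    and `dataloader` is assumed to yield (images, labels) pairs.
    """
    feats_sum, counts = {}, {}
    feature_extractor.eval()
    with torch.no_grad():
        for images, labels in dataloader:
            feats = feature_extractor(images.to(device))
            for f, y in zip(feats, labels.tolist()):
                feats_sum[y] = feats_sum.get(y, torch.zeros_like(f)) + f
                counts[y] = counts.get(y, 0) + 1
    return {y: feats_sum[y] / counts[y] for y in feats_sum}  # prototype P_i per class
```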
Finally, the total loss at each incremental task is given by \[\mathcal{L}_{inc}^{(t)}=\lambda_{1}\cdot\mathcal{L}_{proto}(q,\tilde{y};\psi^{(0:t)})+\lambda_{2}\cdot\mathcal{L}_{S3C}(\mathbf{x},y;\theta,\psi^{(0:t)}) \tag{6}\] where \(\{\mathbf{x},y\}\in\mathcal{D}^{(t)}\) and \(t>0\). Here, \(\lambda_{1}\) and \(\lambda_{2}\) are hyper-parameters to balance the performance between old and new classes. At the end of task \(t\), we have the learnt classifiers for all the classes seen so far, namely \(\psi^{(0)},\ldots,\psi^{(t)}\). ## 5 Testing Phase At inference time, the test image \(\mathbf{x}\) can belong to any of the classes seen so far. To utilize the learnt classifiers effectively, we generate transformed versions of \(\mathbf{x}\) and aggregate all the corresponding scores. Thus, the aggregate score for the \(i^{th}\) class in task \(s\) is computed as \(z_{i}^{(s)}=\frac{1}{M}\sum_{r=1}^{M}\eta\ \langle\ \overline{\mu_{ir}^{(s)}},\overline{f_{\theta}(\mathbf{\tilde{x}}_{r})}\rangle\). Then the aggregated probability used for predicting the class is given by \[P_{agg}(i,s\mid\mathbf{x},\theta,\psi^{(0:t)})=\frac{\exp{(z_{i}^{(s)})}}{\sum\limits_{j=0}^{t}\sum\limits_{k=1}^{|C^{(j)}|}\exp{(z_{k}^{(j)})}} \tag{7}\] Thus, the final prediction for the test sample \(\mathbf{x}\) is \[\hat{i},\hat{s}=\operatorname*{arg\,max}_{i,s}\ P_{agg}(i,s\mid\mathbf{x}) \tag{8}\] which implies that the input \(\mathbf{x}\) belongs to the \(\hat{i}^{th}\) class of task \(\hat{s}\). This aggregation scheme improves the model performance significantly. ## 6 Experimental Evaluation Here, we describe the extensive experiments performed to evaluate the effectiveness of the proposed S3C framework. Starting with a brief introduction of the datasets, we discuss the performance of the proposed framework on three standard benchmark datasets. In addition, we also discuss its effectiveness in two realistic and challenging scenarios, where (i) the data may be imbalanced at each incremental step and (ii) fewer classes may be available during base training. We also describe the ablation study to understand the usefulness of each module. **Datasets Used:** To evaluate the effectiveness of the proposed S3C framework, we perform experiments on three benchmark datasets, namely CIFAR100 [18], miniImageNet [19] and CUB200 [37]. **CIFAR100**[18] contains \(32\times 32\) RGB images from 100 classes, where each class contains 500 training and 100 testing images. We follow the same FSCIL dataset splits as in [44], where the base task is trained with 60 classes and the remaining 40 classes are trained in eight incremental tasks in a 5-_way_\(5\)-_shot_ setting. Thus, there are a total of 9 training sessions (i.e., base + 8 incremental). **MiniImageNet**[19] is a subset of the ImageNet dataset and contains 100 classes with images of size \(84\times 84\). Each class has 600 images, 500 for training and 100 for testing. We follow the same task splits as in [44], where 60 classes are used for base task training and the remaining 40 classes are learned incrementally in 8 tasks. Each task contains 5 classes with 5 images per class. **CUB200**[37] is a fine-grained bird dataset with 200 classes. It contains a total of 6000 images for training and 6000 images for testing. All the images are resized to \(256\times 256\) and then cropped to \(224\times 224\) for training. We use the same data splits proposed in [44], where there are 100 classes in the base task, and each of the 10 incremental tasks is learned in a \(10\)-_way_\(5\)-_shot_ manner.
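Before turning to the experimental details, the following is a minimal sketch of the test-time score aggregation of Section 5 (Eqs. (7)-(8)); the classifier layout (mean weights stored as one head per rotation) and all names are our own assumptions, purely for illustration.

```python
import torch
import torch.nn.functional as F

def aggregate_prediction(f_theta, mu, x, eta=16.0):
    """Test-time prediction (cf. Eqs. (7)-(8)): average cosine scores over the 4 rotations.

    Assumed shapes: mu is (num_classes_so_far, 4, D) mean classifier weights
    (one head per rotation); x is a single image of shape (1, C, H, W).
    Returns a flat class index over all classes seen so far.
    """
    scores = 0.0
    for r in range(4):
        x_r = torch.rot90(x, k=r, dims=(2, 3))
        feat = F.normalize(f_theta(x_r), dim=-1)       # (1, D)
        w = F.normalize(mu[:, r, :], dim=-1)           # (num_classes, D)
        scores = scores + eta * (feat @ w.t()) / 4.0   # accumulate z_i over rotations
    probs = scores.softmax(dim=-1)                     # Eq. (7)
    return probs.argmax(dim=-1)                        # Eq. (8)
```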
**Implementation details:** For a fair comparison, we use the same backbone architecture as the previous FSCIL methods [44]. We use ResNet20 for CIFAR100 and ResNet18 for miniImageNet and CUB200, as in [44]. Inspired by CEC [44], we used the same GAT layer at the feature extractor output for better feature representations. We trained the base network for 200 epochs with a learning rate of 0.1 and reduced it to 0.01 and 0.001 after 120 and 160 epochs for the CIFAR100 and miniImageNet datasets. For CUB200, the initial learning rate was 0.03 and was decreased to 0.003 and 0.0003 after 40 and 60 epochs. We freeze the backbone network and fine-tune the stochastic classifiers for 100 epochs with a learning rate of 0.01 for CIFAR100 and miniImageNet and 0.003 for CUB200 at each incremental step. The base network was trained with a batch size of 128, and for the newer tasks, we used all the few-shot samples in a mini-batch for incremental learning. All the experiments are run on a single NVIDIA RTX A5000 GPU using PyTorch. We set \(\eta=16\), \(\lambda_{1}=5\) and \(\lambda_{2}=1\) for all our experiments. **Evaluation protocol:** We evaluate the proposed framework using the following three evaluation metrics, as is standard in the FSCIL literature: (1) First, at the end of each task, we report the **Top1 accuracy**[35, 25, 44, 8] of all the classes seen so far, which is the most commonly used metric; (2) To be practically useful, the model needs to perform well on all the tasks seen so far (i.e., have a good performance balance between the previous and new tasks). To better capture this performance balance, inspired by [39], recent FSCIL works [4, 9] propose to use the **Harmonic Mean** (HM) of the performance of the previous and new classes at the end of each incremental task. If \(t\) denotes the task id, \(t\in\{0,1,...,\mathcal{T}\}\), let \(Acc_{n}^{t}\) denote the model accuracy on test data of task \(n\) after learning task \(t\), where \(n\in\{0,1,2,...,t\}\). Then, at the end of task \(t\), to analyze the contribution of base and novel classes in the final accuracy, the harmonic mean is calculated between \(Acc_{0}^{t}\) and \(Acc_{1:t}^{t}\). (3) Inspired by CEC, we also report the **performance dropping rate** (\(PD=Acc_{0}^{0}-Acc_{0:\mathcal{T}}^{\mathcal{T}}\)), which measures the absolute difference between the initial model accuracy after task 0 and the model accuracy at the end of all tasks \(\mathcal{T}\). Here, we report the performance of the S3C framework for the standard FSCIL setting on all three benchmark datasets. Note that all the compared approaches use the same backbone architecture, i.e., ResNet20 for CIFAR100 and ResNet18 for the miniImageNet and CUB200 datasets. As mentioned earlier, most of the FSCIL approaches, such as TOPIC [35], Ft-CNN [35], EEIL [7], iCaRL [31], and UCIR [16], adopted the standard base classifier training as it is and proposed different techniques in the incremental stage. Thus, they have the same base task accuracy, as can be observed from the results. The current state-of-the-art in FSCIL, CEC [44], showed that using the same backbone along with appropriate modifications for learning the base classifier can significantly enhance not only the base class accuracies, but also the performance on the incrementally added classes. We combine the advantages of both these techniques, i.e., making the base classifier better (using the same backbone), and at the same time, effectively fine-tuning the stochastic classifiers in S3C. Thus, the base accuracy of CEC and the proposed S3C is better than that of the other approaches.
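For reference, the harmonic mean and performance dropping rate described above reduce to a couple of lines of code; the snippet below is a sketch with hypothetical example numbers, not values taken from the paper's tables.

```python
def harmonic_mean(acc_base: float, acc_new: float) -> float:
    """Harmonic mean between base-class and new-class accuracies at the end of a task."""
    if acc_base + acc_new == 0:
        return 0.0
    return 2.0 * acc_base * acc_new / (acc_base + acc_new)

def performance_dropping_rate(acc_after_task0: float, acc_after_last_task: float) -> float:
    """PD = Acc_0^0 - Acc_{0:T}^T, i.e., the absolute accuracy drop over all tasks."""
    return acc_after_task0 - acc_after_last_task

# Example with hypothetical numbers: harmonic_mean(78.0, 32.0) is approximately 45.4
```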
\begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{8}{c}{Harmonic Mean (\%) \(\uparrow\)} \\ \cline{3-10} & & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline \hline \multirow{2}{*}{CIFAR100} & CEC [44] & 41.57 & 38.75 & 32.36 & 31.53 & 32.55 & 32.40 & 32.25 & 31.27 \\ & **S3C (Ours)** & **61.60** & **54.57** & **48.94** & **47.60** & **47.00** & **46.75** & **45.96** & **45.22** \\ \hline \multirow{2}{*}{miniImageNet} & CEC [44] & 31.68 & 30.86 & 29.52 & 29.01 & 26.75 & 24.46 & 26.14 & 26.24 \\ & **S3C (Ours)** & **35.30** & **38.18** & **40.62** & **38.86** & **35.02** & **34.49** & **36.06** & **36.20** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of S3C with the state-of-the-art CEC in terms of Harmonic Mean on the CIFAR100 and miniImageNet datasets. On both datasets, S3C outperforms CEC by a considerable margin. Figure 3: Comparison of S3C with the state-of-the-art approaches on the CIFAR100 and miniImageNet datasets using the backbone given in the caption. ### Results on the standard FSCIL protocol Here, we report the results on the three benchmark datasets. Fig. 3 compares the proposed S3C framework with the state-of-the-art approaches in terms of top1 accuracy on CIFAR100. We observe that the modifications while learning the base classifier improve the performance of both CEC and S3C significantly. At the end of all tasks, S3C achieves a top1 accuracy of 53.96% compared to 49.14% obtained by the state-of-the-art CEC (a relative improvement of 4.82%). The performance numbers of all the compared approaches are taken directly from [44]. Table 1 shows the HM of S3C at the end of each incremental task. We observe that S3C obtains a relative improvement of 13.95% compared to CEC in terms of HM. This shows the effectiveness of S3C in achieving a better balance between the base and new class performance. Fig. 4 shows the t-SNE plot for the new classes after task 1, where we observe that the new classes in S3C are relatively well clustered compared to CEC. In terms of PD, S3C is close to CEC (higher by 0.7%), but it outperforms CEC in terms of the other two metrics, namely top1 accuracy and HM. From Fig. 3 (right), we observe that S3C achieves 52.14% top1 accuracy on miniImageNet, with a relative improvement of 4.51% over the second best of 47.63% obtained by CEC. In terms of HM (Table 1), S3C achieves a 9.96% relative improvement over CEC. The performance dropping rate (PD) of CEC is slightly lower (by 0.35%) than that of S3C. We observe from Table 2 and Table 3 that S3C outperforms CEC by 6.67% and 11.72%, respectively, in terms of top1 accuracy and HM on the CUB200 dataset.
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{11}{c}{Accuracy in each session (\%) \(\uparrow\)} & \multirow{2}{*}{PD \(\downarrow\)} & \multirow{2}{*}{Improv. (PD)} \\ \cline{2-12} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & & \\ \hline \hline FT-CNN [35] & 68.68 & 43.7 & 25.05 & 17.72 & 18.08 & 16.95 & 15.1 & 10.6 & 8.93 & 8.93 & 8.47 & 60.21 & **+39.83** \\ iCaRL [31] & 68.68 & 52.65 & 48.61 & 44.16 & 36.62 & 29.52 & 27.83 & 26.26 & 24.01 & 23.89 & 21.16 & 47.52 & **+26.69** \\ EEIL [7] & 68.68 & 53.63 & 47.91 & 44.2 & 36.3 & 27.46 & 25.93 & 24.7 & 23.95 & 24.13 & 22.11 & 46.57 & **+25.74** \\ UCIR [16] & 68.68 & 57.12 & 44.21 & 28.78 & 26.71 & 25.66 & 24.62 & 21.52 & 20.12 & 20.06 & 19.87 & 48.81 & **+27.98** \\ TOPIC [35] & 68.68 & 62.79 & 54.81 & 49.99 & 45.25 & 41.4 & 38.35 & 35.36 & 32.22 & 28.31 & 26.28 & 42.40 & **+21.97** \\ CEC [44] & 75.85 & 71.94 & 68.50 & 63.50 & 62.43 & 58.27 & 57.73 & 55.81 & 54.83 & 53.52 & 52.28 & 23.57 & **+2.74** \\ \hline **S3C (Ours)** & **80.62** & **77.55** & **73.19** & **68.54** & **68.05** & **64.33** & **63.58** & **62.07** & **60.61** & **59.79** & **58.95** & **20.83** & \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of S3C with other approaches on the CUB200 dataset. All the compared results are taken directly from [44]. The last two columns report the performance dropping rate (PD) and the improvement of S3C over each method in terms of PD. Figure 4: t-SNE plot (of test samples) of 5 new classes after task 1 on the CIFAR100 dataset. For this dataset, the proposed S3C has the lowest performance dropping (PD) rate compared to all the other approaches. ### Analysis and Ablation Here, we perform additional experiments and ablation studies on the CIFAR100 dataset to evaluate the effectiveness of the proposed S3C framework. **Experiments on More Realistic and Challenging Scenarios:** First, we show the effectiveness of S3C in two realistic scenarios: (i) where there is class imbalance at each incremental task; and (ii) where the number of base classes is smaller. 1. FSCIL-im (imbalance in new classes): The standard FSCIL setting assumes that an equal number of images per new class is available at each incremental task. For example, 5 images for each of the 5 new classes are available at each incremental task in a 5-way 5-shot setting. In the real world, the number of samples per class can vary, since for some classes it is easier to collect data than for others. Obviously, one can collect more samples for the minority classes, or select a subset from the majority classes. But it is more practical if the algorithm can work satisfactorily without this constraint. To create the data imbalance, at each incremental step, we consider the number of training samples for the 5 new classes as \(\{5,4,3,2,1\}\). The few samples, along with the imbalance, make this setting very challenging. Fig. 5 (left) shows the top1 accuracy and HM of S3C and CEC for this scenario without any modification of the algorithms. We observe that S3C performs very well on both metrics, thus showing its effectiveness in handling imbalanced new class data.
\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{10}{c}{Harmonic Mean (\%) \(\uparrow\)} \\ \cline{2-11} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \hline CEC [44] & 57.63 & 52.83 & 45.08 & 45.97 & 44.44 & 45.63 & 45.10 & 43.76 & 45.77 & 44.69 \\ **S3C (Ours)** & **76.29** & **65.12** & **57.30** & **60.63** & **56.59** & **57.79** & **56.73** & **55.43** & **55.48** & **56.41** \\ \hline \hline \end{tabular} \end{table} Table 3: Harmonic mean comparison of S3C with CEC on the CUB200 dataset. Figure 5: Comparison of S3C and CEC in terms of top1 accuracy and harmonic mean for two challenging scenarios, namely (a) FSCIL-im and (b) FSCIL-lb. 2. FSCIL-lb (fewer base classes): The standard FSCIL setting assumes that the number of base classes is quite high, with many annotated samples per class. Here, we analyze the performance of S3C when the number of base classes is lower. A similar setting has been explored in [31] for CIL. The advantage of having a smaller number of base classes is that the base learner becomes ready for incremental learning quickly (with fewer classes requiring many annotated samples), and the remaining classes can be learnt incrementally with a small number of labeled samples per class. For the CIFAR100 experiments conducted so far, there were 60 base and 40 new classes. For this experiment, we use only 40 base classes and keep the incremental tasks unchanged. From Fig. 5 (right), we observe that S3C obtains a relative improvement of 5.29% in top1 accuracy (18.83% in HM) over CEC. This shows that S3C can start learning incrementally at an early stage of data collection, which makes it more suited for real-world scenarios. **Ablation studies:** Table 4 shows the effect of self-supervision and the type of classifier on the CIFAR100 base task accuracy. The top1 accuracy and HM after all the incremental stages are also reported. We observe that both modules help in improving the performance on the base and incremental classes. Though the top1 accuracies of the linear and stochastic classifiers are close after the incremental stages, there is a significant improvement in HM with the stochastic classifier. This implies that both modules help in achieving very good performance on the new classes, in addition to retaining the performance on the base classes, thus achieving a good performance balance between the two. ## 7 Conclusions In this paper, we proposed a novel S3C framework, which seamlessly integrates self-supervision with stochastic classifiers for the FSCIL task. We show that this framework not only reduces overfitting on the few labeled samples of the new classes, but also mitigates catastrophic forgetting of the previously learnt classes. Extensive experiments on three benchmark datasets, namely CIFAR100, CUB200 and miniImageNet, along with additional analysis, show that the proposed S3C significantly outperforms the state-of-the-art approaches. **Acknowledgements:** This work is partly supported through a research grant from SERB, Department of Science and Technology, Govt. of India and Google Research, India.
\begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{self-supervision} & \multirow{2}{*}{classifier} & After task 0 & After task \(\mathcal{T}\) & After task \(\mathcal{T}\) \\ & & base task accuracy & top1 accuracy & harmonic mean \\ \hline ✗ & linear & 74.70 & 48.98 & 26.76 \\ ✓ & linear & 76.14 & 53.55 & 41.80 \\ ✓ & stochastic & 78.03 & 53.96 & 45.22 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation Study: We observe that both self-supervision and stochastic classifiers help to improve the performance significantly.
2306.16132
High-quality Unknown Object Instance Segmentation via Quadruple Boundary Error Refinement
Accurate and efficient segmentation of unknown objects in unstructured environments is essential for robotic manipulation. Unknown Object Instance Segmentation (UOIS), which aims to identify all objects in unknown categories and backgrounds, has become a key capability for various robotic tasks. However, current methods struggle with over-segmentation and under-segmentation, leading to failures in manipulation tasks such as grasping. To address these challenges, we propose QuBER (Quadruple Boundary Error Refinement), a novel error-informed refinement approach for high-quality UOIS. QuBER first estimates quadruple boundary errors-true positive, true negative, false positive, and false negative pixels-at the instance boundaries of the initial segmentation. It then refines the segmentation using an error-guided fusion mechanism, effectively correcting both fine-grained and instance-level segmentation errors. Extensive evaluations on three public benchmarks demonstrate that QuBER outperforms state-of-the-art methods and consistently improves various UOIS techniques while maintaining a fast inference time of less than 0.1 seconds. Additionally, we demonstrate that QuBER improves the success rate of grasping target objects in cluttered environments. Code and supplementary materials are available at https://sites.google.com/view/uois-quber.
Seunghyeok Back, Sangbeom Lee, Kangmin Kim, Joosoon Lee, Sungho Shin, Jemo Maeng, Kyoobin Lee
2023-06-28T12:01:51Z
http://arxiv.org/abs/2306.16132v3
# INSTA-BEEER: Explicit Error Estimation and Refinement ###### Abstract Efficient and accurate segmentation of unseen objects is crucial for robotic manipulation. However, it remains challenging due to over- or under-segmentation. Although existing refinement methods can enhance the segmentation quality, they fix only minor boundary errors or are not sufficiently fast. In this work, we propose INSTAnce Boundary Explicit Error Estimation and Refinement (INSTA-BEEER), a novel refinement model that allows for adding and deleting instances and sharpening boundaries. Leveraging an error-estimation-then-refinement scheme, the model first estimates the pixel-wise boundary explicit errors: true positive, true negative, false positive, and false negative pixels of the instance boundary in the initial segmentation. It then refines the initial segmentation using these error estimates as guidance. Experiments show that the proposed model significantly enhances segmentation, achieving state-of-the-art performance. Furthermore, with a fast runtime (less than 0.1 s), the model consistently improves performance across various initial segmentation methods, making it highly suitable for practical robotic applications. object detection, segmentation and categorization, RGB-D perception, deep learning for visual perception. ## I Introduction Unseen Object Instance Segmentation (UOIS) [1, 2, 3], the task of segmenting untrained objects in cluttered scenes, is essential for robotic manipulation such as object pushing [4, 5] and grasping [6, 7, 8]. Recently, category-agnostic instance segmentation networks [1, 2, 3, 8, 9, 10, 11, 12], which learn the concept of objectness from large-scale synthetic datasets, have demonstrated state-of-the-art performance. However, they often struggle with over- or under-segmentation issues [2, 8, 9, 13, 14], leading to potential errors in robotic manipulation tasks. Several studies [15, 16, 17, 18, 19, 20] have attempted to refine boundaries in initial segmentation; however, their inability to add or remove instance masks hinders effective correction of over- and under-segmentation. Although a recent refinement method [21] offers broader capabilities, such as adding or splitting instances, its high computational load limits its practical robotic application. We propose INSTAnce Boundary Explicit Error Estimation and Refinement (INSTA-BEEER), a novel approach to refine initial UOIS with well-balanced accuracy and speed. Under an error-estimation-then-refinement scheme, INSTA-BEEER predicts pixel-wise explicit errors: True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) on the instance boundaries of initial segmentation. It then generates refined segmentation by leveraging these predicted errors, employing an error estimates fusion module. The experimental results indicate that the proposed method achieves state-of-the-art segmentation accuracy and efficiency across various initial segmentation and datasets. * We propose INSTA-BEEER, refining segmentation by adding or deleting instances and sharpening boundaries, using an error-estimation-then-refinement scheme. * We introduce an explicit boundary error estimation, predicting pixel-wise TP, TN, FP, and FN errors of instance boundaries, for in-depth error analysis. * We present error estimates fusion to explicitly incorporate predicted errors into the refinement. * Our model achieves state-of-the-art accuracy and runtime across various initial segmentation and datasets. Fig. 
1: INSTA-BEEER estimates pixel-wise boundary explicit errors (\(\blacksquare\) TP, \(\square\) TN, \(\blacksquare\) FP, \(\blacksquare\) FN in (c)) from the initial segmentation of unseen objects and leverages these error estimates to refine the segmentation. ## II Related Work ### _Unseen Object Instance Segmentation_ UOIS [1, 2, 3] aims to detect all arbitrary object instances in an image. It serves various robotic applications, including object pushing [4, 5], grasping [6, 7, 8], and re-arrangement [22, 23, 24, 25]. Various segmentation methods, such as graph-based segmentation [26], surface patches with SVM [27], and ambiguity graph [28] have been proposed, and recent advancements in deep learning have led to the introduction of category-agnostic instance segmentation networks trained on large-scale synthetic data [1, 2, 3, 8, 9, 10, 13, 29]. These networks learn object-ness, distinguishing foreground object instances from the background from RGB-D images, thereby enabling the segmentation of unseen objects such as tabletop scenarios [27, 30]. Nevertheless, accurately segmenting objects in cluttered environments remains a challenge, particularly in the cases of objects overlap [2, 8, 9, 13, 14]. The proposed method complements these methods, providing more accurate segmentation through error estimation and refinement of existing UOIS methods. ### _Refining Segmentation_ Accurate object segmentation is crucial in robotics for precise object grasping [6, 7], and manipulation [8, 25]. However, existing segmentation methods often struggle with capturing object boundaries accurately and suffer from over- or under-segmentation issues. To address these challenges, conditional random fields [15, 16] methods have been proposed; however, it is difficult to handle semantic information and large error regions [18]. Recent methods, such as CascadePSP [19] and Segfix [18], enhance the initial segmentation using global and local refinement or offset estimation. However, they focus on refining the boundary details and do not have instance manipulation capabilities, making it challenging to overcome over- and under-segmentation. To tackle this issue, RICE [21] uses a graph neural network to score and refine initial segmentations by sampling from perturbations (adding, deleting, splitting, or merging instances) and selecting the best-scored segmentation. Despite achieving state-of-the-art UOIS performance, its application is limited by a long inference time of up to 10-15 s per frame. Zhang et al. [31] proposed a test-time adaptation method to reduce the sim2real gap of UOIS methods; however, it requires approximately 20 seconds for new scenarios. In contrast, our approach facilitates fast refinement, processing frames in 0.1 s with comparable performance. Another relevant work is the panoptic refinement network [32], which refines initial panoptic segmentation based on the Panoptic-DeepLab [33] architecture. Our method also extends the Panoptic-DeepLab. However, we introduce an error estimation and refinement scheme through the fusion of error estimates, which leads to better segmentation. ### _Error Detection for Segmentation and Refinement_ Estimating errors in deep segmentation models is essential for ensuring safe and reliable systems [34]. Error detection methods have been proposed, such as using maximum softmax probability [35] or Monte Carlo Dropout [36]. However, they often fall short when applied to segmentation [37, 38]. 
Several approaches have been proposed to address such a challenge, including anomaly segmentation methods utilizing uncertainty and re-synthesis [39], and energy-biased abstention learning [40]. In addition, the integration of a failure detection network with the main segmentation network has been explored to identify pixel-wise failures [38, 41, 42]. Instead of focusing on error detection, estimated errors can be used to refine the segmentation. Xie et al. [43] used separate networks for segmentation, evaluation, refinement, and verification for accurate semantic segmentation. However, this approach requires separate training steps for each network, which increases its computational complexity. Kuhn et al. [44] introduced an error-reversing autoencoder to correct the initial segmentation errors; however, it was limited to semantic segmentation and did not address instance grouping. Our method differs from existing approaches in that it estimates and refines the segmentation errors using a single-shot network in an end-to-end manner. Moreover, we have introduced the novel concept of detecting explicit errors (TP, TN, FP, and FN) in boundaries instead of binary errors (true and false) in masks, thereby enhancing overall performance. ## III Problem Statement The problem is formalized using the following definitions: 1. **UOIS**: The objective of UOIS is to identify the set of all object instance masks \(M\) in an RGB-D image \(I\). An object is categorized as unseen if it is not included in the training dataset. 2. **Initial segmentation model (\(\mathcal{F}_{in}\))**: This model, denoted as \(\mathcal{F}_{in}:I\to M_{in}\), could be any UOIS model that functions as a mapping from the input image \(I\) to a set of initial segmentation masks, denoted by \(M_{in}\). 3. **Segmentation refinement model (\(\mathcal{F}_{re}\))**: This model, represented by \(\mathcal{F}_{re}:(I,M_{in})\to M_{re}\), refines the initial segmentation masks. It takes the RGB-D image \(I\) and the initial segmentation masks \(M_{in}\) as inputs and produces a set of refined segmentation masks \(M_{re}\). For the segmentation refinement, we propose an error-estimation-then-refinement scheme, formalized as follows. 1. **Segmentation errors (\(e_{in}\))**: These represent the discrepancies between the initial segmentation masks \(M_{in}\) and their ground-truth masks. The discrepancies could include inaccurately classified masks or boundaries. 2. **Error estimator (\(\mathcal{E}\))**: Denoted as \(\mathcal{E}:(I,M_{in})\to\hat{e}_{in}\approx e_{in}\), this module generates the segmentation error estimates \(\hat{e}_{in}\) for the initial segmentation \(M_{in}\). These estimates, which may include the predicted errors and their features, highlight areas that require refinement, thus enabling more accurate and targeted refinement. 3. **Error-informed refiner (\(\mathcal{H}\))**: This module, denoted as \(\mathcal{H}:(I,M_{in},\hat{e}_{in})\to M_{re}\), refines the initial segmentation \(M_{in}\) based on the error estimates \(\hat{e}_{in}\), with the aim of producing more accurate segmentation masks. INSTA-BEEER serves as a segmentation refinement model \(\mathcal{F}_{re}\). It enhances the initial segmentation from various UOIS methods through a two-step process: it first predicts error estimates and then refines the segmentation based on them.
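Viewed as code, the scheme above is simply the composition of the two modules; the following type-level Python sketch mirrors the definitions (the protocol and function names are ours, purely for illustration, and the formal statement follows in Eq. (1)).

```python
from typing import List, Protocol
import numpy as np

class ErrorEstimator(Protocol):
    def __call__(self, image: np.ndarray, init_masks: List[np.ndarray]) -> np.ndarray:
        """Return estimated error maps e_hat for the initial segmentation."""
        ...

class ErrorInformedRefiner(Protocol):
    def __call__(self, image: np.ndarray, init_masks: List[np.ndarray],
                 error_estimates: np.ndarray) -> List[np.ndarray]:
        """Return refined instance masks M_re."""
        ...

def refine(image, init_masks, estimator: ErrorEstimator, refiner: ErrorInformedRefiner):
    """Error-estimation-then-refinement: F_re(I, M_in) = H(I, M_in, E(I, M_in))."""
    return refiner(image, init_masks, estimator(image, init_masks))
```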
The process is formulated as follows: \[\mathcal{F}_{re}(I,M_{in})=\mathcal{H}(I,M_{in},\mathcal{E}(I,M_{in})) \tag{1}\] \[=\mathcal{H}(F_{enc}(I,M_{in}),\mathcal{E}(F_{enc}(I,M_{in})))=M_{re}\] where \(\mathcal{F}_{re}(I,M_{in})\) denotes the INSTA-BEEER model, while the second and third terms represent the sequence of feature extraction, error estimation, and refinement. The function \(F_{enc}(I,M_{in})\) indicates the encoded features of \(I\) and \(M_{in}\) derived from the feature extractor \(F_{enc}\). Leveraging this error-estimation-then-refinement scheme enables more targeted and accurate refinement. In particular, we introduce boundary explicit errors as the segmentation errors \(e_{in}\) and propose an Error Estimates Fusion (EEF) module to seamlessly incorporate these estimated errors into the refinement process. Further details are explained in Section IV. ## IV INSTA-BEEER ### _Architecture Overview_ The INSTA-BEEER model follows the error-estimation-then-refinement scheme and comprises three components (Fig. 2): 1) an initial segmentation encoder-decoder (\(F_{enc}\)), 2) an error estimator (\(\mathcal{E}\)), and 3) an error-informed refiner (\(\mathcal{H}\)). The initial segmentation encoder-decoder \(F_{enc}\) takes the RGB, depth (D), and initial segmentation (IS) as inputs and extracts the IS features for error estimation and refinement. The error estimator \(\mathcal{E}\) predicts the segmentation errors \(e_{in}\) from the IS features to guide the refinement process. Thereafter, the error-informed refiner \(\mathcal{H}\) generates the final refined segmentation masks using the error estimates. The proposed model re-segments the IS by leveraging the fast, end-to-end, single-shot instance segmentation network Panoptic-DeepLab [33]. This re-segmentation approach enables the flexibility to add, delete, merge, or split instances as well as sharpen the boundaries. As the original Panoptic-DeepLab aims at panoptic segmentation from RGB images, we introduce new ideas for category-agnostic refinement from RGB-D and IS. First, we used RGB-IS and Depth-IS encoders to fuse the RGB, depth, and IS effectively and extract the IS features through multi-scale fusion. Second, we used a class-agnostic foreground branch instead of a separate semantic decoder to reduce computational costs. Finally, we propose an error estimator and EEF to facilitate refinement informed by the error. ### _Instance and Segmentation Error Representation_ #### IV-B1 Center and offset maps Following [33], we employ center and offset maps (Figs. 3d and 3e) as the instance representation to ensure a uniform representation of the IS, independent of the total number of instances. The instance center map (\(C\in\mathcal{R}^{w\times h\times 1}\)) gives, for each pixel, the probability of being an instance center. The instance offset map (\(O\in\mathcal{R}^{w\times h\times 2}\)) represents the x and y offsets from each pixel to its closest instance center. Using these, we represent the initial segmentation (\(IS\in\mathcal{R}^{w\times h\times 3}\)) by concatenating the instance center and offset maps. It also serves as the prediction target for our model. Here, \(w\) and \(h\) represent the image width and height, respectively. #### IV-B2 Boundary explicit error For a detailed representation of segmentation errors \(e_{in}\), we introduce the concept of explicit errors (Figs.
3h and 3i), which are pixel-wise calculations of TP, TN, FP, and FN derived from the IS and the corresponding ground truth. Previous methods [37, 41, 43, 44] in error estimation predict binary errors (true, false), as depicted in Fig. 3g. However, by estimating explicit errors, the model learns to predict not only whether pixels are segmented correctly or not, but also correctly segmented pixels (TP, TN), incorrectly segmented object pixels (FP), and missing object pixels (FN). Fig. 2: INSTA-BEEER follows an error-estimation-then-refinement scheme and consists of 1) an initial segmentation encoder-decoder to extract the initial segmentation feature, 2) an error estimator to predict the pixel-wise boundary explicit errors, and 3) an error-informed refiner to refine the initial segmentation using error estimates fusion (EEF). This approach enables the model to analyze the segmentation errors in a more structured and comprehensive manner. We specifically utilized the boundary explicit errors (\(e_{in}^{b}\in\{0,1\}^{w\times h\times 4}\)) to represent the segmentation errors, which are pixel-wise TP, TN, FP, and FN maps for all instance boundaries (Fig. 3i). The instance boundaries are determined using the dilation of the instance contour [45], providing an effective way to pinpoint instance segmentation errors at instance overlaps and boundaries. A common method for measuring semantic segmentation error is the mask error [37, 41, 43, 44], which represents the discrepancy between all the segmentation masks and their ground truths. However, it is difficult to identify errors in the overlapping instances using the mask errors (Figs. 3g and 3h), making it less suitable for instance segmentation. Conversely, the boundary error map highlights errors in overlapping instances, leading to better segmentation performance. ### _Network Architecture_ #### IV-C1 Initial segmentation encoder-decoder Our encoder-decoder \(F_{enc}\) effectively fuses RGB, depth, and IS information, generating the IS features necessary for error estimation and refinement. The process begins by concatenating the IS with the RGB and depth data. These combined inputs are then independently fed into two distinct ResNet-50 [46] backbones pre-trained on ImageNet [47]: one as an RGB-IS encoder and the other as a Depth-IS encoder. These encoders extract and progressively reduce the spatial resolution of the RGB-IS and Depth-IS features. The feature fusion process occurs at stages C2, C3, and C5. Here, the RGB-IS and Depth-IS features are combined through concatenation, and the IS features are extracted using one \(1\times 1\) followed by two \(3\times 3\) convolutions. The C5 stage uses only \(1\times 1\) convolutions owing to the large number of parameters. Next, the IS features at the C5 stage are fed into an atrous spatial pyramid pooling module [16] for multi-scale context extraction. Finally, our decoder, similar to that of Panoptic-DeepLab [33], upsamples the resolution by a factor of 2 using a \(5\times 5\) depthwise-separable convolution [48]. During this upscaling, the lower-level IS features from the C2 and C3 encoder stages are reintegrated using a \(1\times 1\) convolution. #### IV-C2 Error estimator and error-informed refiner Following our strategy of error-estimation-then-refinement, the error estimator \(\mathcal{E}\) first predicts the boundary explicit error \(e_{in}^{b}\).
The Error Estimates Fusion (EEF) within the error-informed refiner then combines these error estimates with the IS features, enabling the refiner to adjust the predictions based on these errors. Subsequently, the error-informed refiner predicts the foreground, center, and offset and, through post-processing, merges these predictions to produce the refined segmentation. Further, the error estimator takes as input IS features of shape \(c\times p\times q\) (where \(c\), \(p\), and \(q\) represent the channels, width, and height of the features, respectively) and uses \(5\times 5\) depthwise separable convolutions [48] to create error features of the same shape. These are then processed using \(1\times 1\) convolutions to predict the boundary explicit error maps of shape \(4\times p\times q\). These IS features, error features, and error maps are concatenated, and the EEF module reduces its channels from \(2c+4\) to \(c\) using a \(1\times 1\) convolution, followed by feature extraction through three \(3\times 3\) convolutions. Each prediction branch of the error-informed refiner \(\mathcal{H}\) has its own EEF module and uses the resulting EEF features to predict the foreground, instance center, and instance offset. These predictions are performed using \(5\times 5\) depthwise separable [48] and \(1\times 1\) convolutions, similar to the approach in [33]. Finally, as part of the post-processing (also inspired by [33]), the center maps undergo non-maximum suppression and a hard threshold, effectively filtering out the non-confident centers. Subsequently, the foreground pixels are grouped with their nearest center, using the center and offset maps to form the final refined instance masks. #### IV-C3 Training and segmentation refinement details During the training of the INSTA-BEEER model, we used several loss functions: a Dice loss for error estimation (\(L_{err}\)), a cross-entropy loss for foreground segmentation (\(L_{fg}\)), an MSE loss for the center map (\(L_{ctr}\)) [49], and an L1 loss for the offset map (\(L_{off}\)) [50]. The total loss \(L\) is computed as follows. \[L=\lambda_{err}L_{err}+\lambda_{fg}L_{fg}+\lambda_{ctr}L_{ctr}+\lambda_{off}L_{off} \tag{2}\] To keep the losses on a similar scale, we set \(\lambda_{err}=1\), \(\lambda_{fg}=1\), \(\lambda_{ctr}=200\), and \(\lambda_{off}=0.01\), following the values in [33]. For training, we used the UOAIS-SIM dataset (Fig. 3) [8], consisting of 50,000 photorealistic synthetic images. This dataset is well-suited for our research owing to its realistic textures and sim-to-real transfer performance. The model was trained for 90,000 iterations using the Adam optimizer [51] with a learning rate of 0.000125 and a batch size of 8. The training takes approximately 21 hours using two RTX 3090 GPUs. Fig. 3: **Input and error representations.** Our model uses the boundary explicit error (TP, TN, FP, FN in (i)) as the prediction target for segmentation errors, instead of the mask binary error (true, false in (g)). We applied a random perturbation to the ground truths and used it as the IS input (Figs. 2(b) and 2(c)). This allows us to mimic the segmentation errors and refine the IS derived from the various UOIS methods. To simulate inaccurate boundaries, we applied mask perturbation [19], including contour subsampling, random dilation, and erosion. For instance-level errors, we used random mask removal, splitting neighboring masks [21], and adding false positives obtained from graph segmentation [26].
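To make the prediction targets of Sec. IV-B2 concrete, the following sketch shows one way the pixel-wise boundary TP/TN/FP/FN maps could be derived from binary boundary masks of the initial and ground-truth segmentations; the dilation width and the exact boundary extraction are our assumptions, not the authors' settings.

```python
import numpy as np
import cv2

def boundary_explicit_errors(init_boundary: np.ndarray, gt_boundary: np.ndarray,
                             dilation: int = 5) -> np.ndarray:
    """Compute pixel-wise TP/TN/FP/FN maps between initial and ground-truth
    instance boundaries (cf. Sec. IV-B2). Illustrative sketch only.

    init_boundary, gt_boundary: binary (H, W) maps of all instance boundary pixels.
    Returns an (H, W, 4) array stacking [TP, TN, FP, FN].
    """
    kernel = np.ones((dilation, dilation), np.uint8)
    pred = cv2.dilate(init_boundary.astype(np.uint8), kernel) > 0  # widen predicted boundary
    gt = cv2.dilate(gt_boundary.astype(np.uint8), kernel) > 0      # widen ground-truth boundary
    tp = pred & gt
    tn = ~pred & ~gt
    fp = pred & ~gt
    fn = ~pred & gt
    return np.stack([tp, tn, fp, fn], axis=-1).astype(np.float32)
```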
We also used color [52] and depth augmentation [53] for the sim-to-real transfer. In line with the approach proposed in [8], a lightweight foreground segmentation model [54] was trained on the TOD dataset [2] as part of INSTA-BEEER. This model aids in filtering out background instances, such as objects outside the table, using an overlap ratio of 0.3. The INSTA-BEEER method comprises a main model with 79.5M parameters and an additional 1.3M parameters for foreground segmentation. As in [2], [3], noisy masks smaller than 500 pixels were removed. We used 0.3 for the center threshold during post-processing. ## V Experiments We conducted experiments to verify the following aspects of our INSTA-BEEER model: 1) refining segmentation from various initial segmentation methods and datasets, 2) achieving superior accuracy and speed compared to existing refinement methods, and 3) assessing the effectiveness of boundary explicit error estimation and EEF. ### _Datasets and Metrics_ We evaluated the models on two real-world cluttered datasets: OCID [30] and OSD [27]. The OCID dataset consists of 2,346 images of both tabletop and floor scenes with semi-automated labels and features an average of 7.5 objects per image, with a maximum of 20 objects. The OSD dataset includes 111 images of tabletop scenes with human-annotated ground truths and averages 3.3 objects per image, with a maximum of 15 objects. To compare the segmentation performances, we used standard metrics [21]: Object Size Normalized (OSN) overlap P/R/F (\(P_{n}^{O},R_{n}^{O},F_{n}^{O}\)) and OSN boundary P/R/F (\(P_{n}^{B},R_{n}^{B},F_{n}^{B}\)). These metrics enable us to evaluate the segmentation performance over masks and boundaries, independent of the object sizes. Given a Hungarian assignment \(A\) between the \(N\) predicted and \(M\) ground truth instance masks, the OSN overlap measures are computed as follows: \[P_{n}^{O}=\frac{\sum\limits_{(i,j)\in A}P_{ij}}{N},\quad R_{n}^{O}=\frac{\sum\limits_{(i,j)\in A}R_{ij}}{M},\quad F_{n}^{O}=\frac{\sum\limits_{(i,j)\in A}F_{ij}}{\max(M,N)} \tag{3}\] where \(P_{ij}\), \(R_{ij}\), and \(F_{ij}\) represent the precision, recall, and F-measure between predicted and ground truth instance masks. Similarly, the OSN boundary P/R/F are computed using the boundary pixels through a dilation operation. We also used the percentage of segmented objects with OSN overlap \(F_{n}^{O}\geq 0.75\) (\(F_{n}^{O}@.75\)) to measure the percentage of objects segmented with high accuracy [55]. The reported metrics for INSTA-BEEER are an average of three measurements. Considering the significance of computational requirements in robotics applications, we also evaluated the runtime required to process a single frame, including post-processing, on a single RTX 3090 GPU and an Intel Xeon Gold 6248R processor. For SegFix, we used different hardware due to compatibility issues: a TITAN RTX GPU and an Intel Xeon Gold 6248R processor. It should be noted that the runtime can vary based on dataset characteristics, as CascadePSP refines the segmentation per instance, whereas the sample tree in the RICE method grows with the number of instances. The runtime of INSTA-BEEER also varies because of the post-processing involved in merging instances.
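As a reference for Eq. (3), the OSN overlap P/R/F can be computed along the following lines; this sketch uses a Hungarian assignment on per-pair F-measures and is our own illustration (function names and the exact matching cost are assumptions), not the official evaluation code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def osn_overlap_prf(pred_masks, gt_masks):
    """Object Size Normalized overlap P/R/F (cf. Eq. (3)).

    pred_masks, gt_masks: lists of binary (H, W) numpy arrays.
    """
    N, M = len(pred_masks), len(gt_masks)
    if N == 0 or M == 0:
        return 0.0, 0.0, 0.0
    P, R, F = np.zeros((N, M)), np.zeros((N, M)), np.zeros((N, M))
    for i, pm in enumerate(pred_masks):
        for j, gm in enumerate(gt_masks):
            inter = np.logical_and(pm, gm).sum()
            p = inter / max(pm.sum(), 1)
            r = inter / max(gm.sum(), 1)
            P[i, j], R[i, j] = p, r
            F[i, j] = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    rows, cols = linear_sum_assignment(-F)        # Hungarian matching maximizing F-measure
    return (P[rows, cols].sum() / N,              # P_n^O
            R[rows, cols].sum() / M,              # R_n^O
            F[rows, cols].sum() / max(M, N))      # F_n^O
```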
SegFix [18] detects the boundary and direction map and refines the initial masks by substituting the boundary predictions with corresponding interior pixels, assuming that predictions of interior pixels are more reliable. RICE [21] uses segmentation perturbations, including splitting, merging, adding, and deleting, and selects the best-scored segmentation using a graph neural network, enabling more flexible refinement than CascadePSP and SegFix. For RICE and CascadePSP, we utilized pre-trained weights from their official codes. SegFix, which requires dataset-specific training, was trained on the UOAIS-SIM [8] with the HRNet-W48 [56] backbone and carefully optimized hyperparameters for the best performance. For the initial segmentation models, we used state-of-the-art UOIS models: UOIS-Net-3D [3], UOAIS-Net [8], MSMFormer [13] and UCN [9] (last two models are with zoom refinement). We also utilized pre-trained weights from their official codes. The image size was 640\(\times\)480 for both training and testing. Table I compares the UOIS performances and runtimes of the proposed INSTA-BEEER with those of state-of-the-art segmentation refinement methods across various initial segmentation methods in the OCID and OSD datasets. INSTA-BEEER achieved superior segmentation performance compared to SegFix and CascadePSP, demonstrating state-of-the-art results similar to RICE. Although RICE performed slightly better than the proposed model in OCID with UCN, our approach is significantly faster (\(\sim\)0.1s) than RICE (\(\sim\)2.3s), striking a favorable balance between accuracy and speed. It is also worth highlighting that our INSTA-BEEER consistently enhances the segmentation quality, regardless of the initial segmentation methods used. Considering the substantial runtime of RICE (10-15s) and the runtime requirements of various initial segmentation methods (0.13s for UOAIS-Net on Titan XP, 0.2-0.33s for UOIS-Net-3D on RTX 2080, approximately 0.2s + 0.05K for UCN on Titan XP, and 0.278s + 0.072k on A100, where k denotes the number of instances) as reported in respective papers, our INSTA-BEEER approach offers efficient and effective segmentation refinement. Our models have fewer parameters than RICE (68.5M for refinement + 45.4M for foreground masks) but more parameters than CascadePSP (67.7M) and SegFix (65.7M). However, our methods have faster runtimes as they process all instances in a single shot and an end-to-end manner, making them more suitable for real-time robotics applications. Further, CascadePSP and SegFix face difficulties in enhancing \(F_{n}^{O}@.75\), as they focus primarily on refining segmentation details without adding or removing instances. Because the initial segmentation tends to capture fine boundaries between objects and backgrounds by leveraging the depth input, the main challenge lies in effectively handling the boundaries between objects, which requires instance manipulation capabilities. SegFix, in particular, struggles with refining segmentation because it relies heavily on boundary segmentation quality, which is a challenging task, as discussed in [20], particularly in cluttered scenes. The deterioration in the performance of the proposed model on OCID can be attributed to the inherent difficulties posed by the UOAIS-SIM training set, as reported in [13, 29]. ### _Ablation Studies_ Table II presents the ablation analysis of the proposed error estimation and EEF in the INSTA-BEEER model on the OCID dataset, with UCN [9] as the initial segmentation model. 
Without error estimation and EEF, where a model similar to the original Panoptic-DeepLab [33] is used, the resulting segmentation is less accurate than the initial segmentation obtained from UCN. This observation suggests that the performance improvements are not solely attributed to the Panoptic-DeepLab architecture. On the other hand, when error estimation is used without EEF, the performance is similar to that of UCN. However, when error estimation and EEF are employed together, the INSTA-BEEER model surpasses the performance of the initial segmentation. This highlights the significance of error estimation and EEF in enhancing the overall segmentation performance. Table III presents the ablation study of the proposed explicit and boundary errors on the OCID dataset using UCN as the initial segmentation model. We compared the performance of three different methods: 1) predicting the binary error in boundaries, represented in the first row of the table, 2) predicting the explicit error in masks, depicted in the second row, and 3) predicting the explicit error in boundaries, shown in the third row. The results indicate that the best performance is achieved when both the explicit error and the boundary error are used, demonstrating the effectiveness of the proposed boundary explicit error method. \begin{table} \begin{tabular}{|l|c c c|c c c|} \hline Method & Seg. Refine. & Error Estim. & EEF & \(F_{n}^{O}\) & \(F_{n}^{B}\) & \(F_{n}^{O}@.75\) \\ \hline \hline UCN \(\dagger\) & ✗ & ✗ & ✗ & 84.2 & 83.2 & 85.0 \\ UCN & ✗ & ✗ & ✗ & 84.1 & 83.0 & 84.9 \\ UCN + INSTA-BEEER & ✓ & ✗ & ✗ & 82.2 & 77.2 & 81.0 \\ UCN + INSTA-BEEER & ✓ & ✓ & ✗ & 84.7 & 81.3 & 85.3 \\ UCN + INSTA-BEEER & ✓ & ✓ & ✓ & **86.1** & **83.7** & **87.6** \\ \hline \end{tabular} \end{table} TABLE II: Effect of error estimation and error estimates fusion (EEF) on OCID.
(\(\dagger\): UCN [9] paper values)

| Dataset | Refinement Models | Runtime (seconds) | UOIS-Net-3D [3] | UOAIS-Net [8] | MSMFormer [13] | UCN [9] |
|---|---|---|---|---|---|---|
| OCID | No refinement | - | 77.4 / 74.2 / 76.3 | 66.7 / 64.4 / 66.5 | 81.9 / 81.3 / 81.6 | 84.1 / 83.0 / 84.9 |
| OCID | SegFix [18] | 0.614±0.001 | 76.8 / 71.7 / 76.2 | 66.4 / 64.3 / 66.5 | 81.3 / 81.1 / 81.5 | 83.2 / 82.1 / 84.5 |
| OCID | CascadePSP [19] | 1.735±1.208 | 78.5 / 76.5 / 77.7 | 66.5 / 64.7 / 66.4 | 81.5 / 80.5 / 81.9 | 83.2 / 82.5 / 84.7 |
| OCID | RICE [21] | 2.349±1.610 | 82.9 / 79.1 / 84.3 | 69.4 / 66.6 / 70.0 | 84.8 / **83.4** / 85.7 | **86.4** / **84.8** / **87.8** |
| OCID | **INSTA-BEEER (Ours)** | **0.091±0.017** | **84.2** / **81.5** / **84.6** | **78.6** / **75.7** / **78.0** | **85.6** / 83.2 / **86.7** | 86.1 / 83.7 / 87.6 |
| OCID | RICE \(\dagger\) [21] | - | 83.5 / 79.9 / 85.5 | - | - | 87.0 / 85.4 / 88.4 |
| OSD | No refinement | - | 75.0 / 69.8 / 70.4 | 75.7 / 68.6 / 71.5 | 77.0 / 63.4 / 75.7 | 76.4 / 64.7 / 75.5 |
| OSD | SegFix [18] | 0.547±0.001 | 74.3 / 68.6 / 70.0 | 75.6 / 68.3 / 71.8 | 76.9 / 63.6 / 75.7 | 76.6 / 64.8 / 75.8 |
| OSD | CascadePSP [19] | 1.120±1.076 | 75.4 / 71.1 / 70.5 | 76.6 / 72.9 / 71.8 | 78.7 / **73.8** / 76.0 | 79.0 / 73.5 / 77.1 |
| OSD | RICE [21] | 2.342±2.015 | 78.4 / 71.4 / 76.1 | 77.6 / 69.6 / 74.5 | 79.3 / 64.3 / **79.2** | 79.9 / 67.5 / 79.6 |
| OSD | **INSTA-BEEER (Ours)** | **0.083±0.015** | **80.5** / **73.5** / **77.1** | **81.8** / **74.7** / **79.0** | **81.0** / 73.2 / 78.4 | **82.9** / **75.3** / **81.0** |
| OSD | RICE \(\dagger\) [21] | - | 78.7 / 72.3 / 77.4 | - | - | 78.2 / 66.1 / 78.9 |

TABLE I: Comparison of UOIS performance on OCID [30] and OSD [27] datasets, using various initial segmentation and refinement models, including INSTA-BEEER. Each cell under an initial segmentation model reports \(F_{n}^{O}\) / \(F_{n}^{B}\) / \(F_{n}^{O}@.75\). (\(\dagger\): RICE [21] paper values, **Bold**: Highest)

| Error Features | Error Maps | \(P_{n}^{O}\) | \(R_{n}^{O}\) | \(F_{n}^{O}\) | \(P_{n}^{B}\) | \(R_{n}^{B}\) | \(F_{n}^{B}\) | \(F_{n}^{O}@.75\) |
|---|---|---|---|---|---|---|---|---|
| ✗ | ✗ | 89.0 | 84.4 | 82.2 | 80.1 | 80.4 | 77.2 | 81.0 |
| ✗ | ✓ | 88.8 | 89.0 | 85.5 | 84.8 | 86.1 | 82.4 | 87.0 |
| ✓ | ✗ | 88.7 | 89.2 | 85.6 | 85.5 | 86.8 | 83.1 | 87.1 |
| ✓ | ✓ | 88.9 | **89.8** | | | | | |

TABLE IV: Comparison of fusion targets (error features and error maps) in the EEF module on OCID.

Table IV presents a performance comparison of INSTA-BEEER with different fusion targets on the OCID dataset using UCN [9]. The EEF module, which combines both the error features and the error maps in the error-informed refiner, performs best.
This indicates that the dense guidance from the error estimator to the error-informed refiner is important for segmentation accuracy. ### _Qualitative Results_ Figure 4 depicts a visualization of INSTA-BEEER and the other refinement methods when using UCN [9] as the initial segmentation. The proposed method effectively refines the segmentation boundaries by adding, splitting, and sharpening instances. However, failure cases were also observed (Fig. 5), particularly when dealing with small objects with challenging textures or cases where separate masks have to be merged. ## VI Conclusion This study introduced INSTA-BEEER, a novel method to refine UOIS using a process of error estimation followed by refinement. We proposed the concept of boundary explicit error estimation, which predicts TP, TN, FP, and FN pixels on instance boundaries. We also proposed error estimates fusion to refine the segmentation guided by errors. The proposed method successfully refined the initial segmentation obtained from various UOIS methods with state-of-the-art accuracy and speed. In future studies, we plan to utilize continuous learning on large datasets for domain adaptability. We expect that INSTA-BEEER will open up possibilities for advancing UOIS and its practical robotic applications.
2310.02879
Online Mechanism Design with Predictions
Aiming to overcome some of the limitations of worst-case analysis, the recently proposed framework of "algorithms with predictions" allows algorithms to be augmented with a (possibly erroneous) machine-learned prediction that they can use as a guide. In this framework, the goal is to obtain improved guarantees when the prediction is correct, which is called \emph{consistency}, while simultaneously guaranteeing some worst-case bounds even when the prediction is arbitrarily wrong, which is called \emph{robustness}. The vast majority of the work on this framework has focused on a refined analysis of online algorithms augmented with predictions regarding the future input. A subsequent line of work has also successfully adapted this framework to mechanism design, where the prediction is regarding the private information of strategic agents. In this paper, we initiate the study of online mechanism design with predictions, which combines the challenges of online algorithms with predictions and mechanism design with predictions. We consider the well-studied problem of designing a revenue-maximizing auction to sell a single item to strategic bidders who arrive and depart over time, each with an unknown, private, value for the item. We study the learning-augmented version of this problem where the auction designer is given a prediction regarding the maximum value over all agents. Our main result is a strategyproof mechanism whose revenue guarantees are $\alpha$-consistent with respect to the highest value and $(1-\alpha^2)/4$-robust with respect to the second-highest value, for $\alpha \in [0,1]$. We show that this tradeoff is optimal within a broad and natural family of auctions, meaning that any $\alpha$-consistent mechanism in that family has robustness at most $(1-\alpha^2)/4$. Finally, we extend our mechanism to also achieve expected revenues proportional to the prediction quality.
Eric Balkanski, Vasilis Gkatzelis, Xizhi Tan, Cherlin Zhu
2023-10-04T15:17:28Z
http://arxiv.org/abs/2310.02879v2
# Online Mechanism Design with Predictions+ ###### Abstract Aiming to overcome some of the limitations of worst-case analysis, the recently proposed framework of "algorithms with predictions" allows algorithms to be augmented with a (possibly erroneous) machine-learned prediction that they can use as a guide. In this framework, the goal is to obtain improved guarantees when the prediction is correct, which is called _consistency_, while simultaneously guaranteeing some worst-case bounds even when the prediction is arbitrarily wrong, which is called _robustness_. The vast majority of the work on this framework has focused on a refined analysis of online algorithms augmented with predictions regarding the future input. A subsequent line of work has also successfully adapted this framework to mechanism design, where the prediction is regarding the private information of strategic agents. In this paper, we initiate the study of online mechanism design with predictions, which combines the challenges of online algorithms with predictions and mechanism design with predictions. We consider the well-studied problem of designing a revenue-maximizing auction to sell a single item to strategic bidders who arrive and depart over time, each with an unknown, private, value for the item. We study the learning-augmented version of this problem where the auction designer is given a prediction regarding the maximum value over all agents. Our main result is a strategyproof mechanism whose revenue guarantees are \(\alpha\)-consistent with respect to the highest value and \((1-\alpha^{2})/4\)-robust with respect to the second-highest value, for \(\alpha\in[0,1]\). We show that this tradeoff is optimal within a broad and natural family of auctions, meaning that any \(\alpha\)-consistent mechanism in that family has robustness at most \((1-\alpha^{2})/4\). Finally, we extend our mechanism to also obtain expected revenue that is proportional to the prediction quality. Introduction One of the well-established shortcomings of worst-case analysis is that it often leads to overly pessimistic conclusions. On the other hand, any non-trivial performance guarantee that can be established through worst-case analysis is very robust, since it holds no matter what the input may be. In an attempt to overcome the limitations of worst-case analysis without compromising its robustness, the recently proposed framework of "algorithms with predictions" allows algorithms to be augmented with a machine-learned prediction that they can use as a guide [28]. Crucially, this prediction may be highly inaccurate, so depending too heavily on it can lead to very poor performance in the worst case. Therefore, the goal in this framework is to use such a prediction so that a strong performance can be guaranteed whenever the prediction is accurate (known as the _consistency_ guarantee), while simultaneously maintaining non-trivial worst-case guarantees even if the prediction is inaccurate (known as the _robustness_ guarantee). During the last five years since this framework was introduced, a surge of work has utilized it toward a refined analysis of algorithms, data structures, and mechanisms (see [25] for a frequently updated list of papers in this rapidly growing literature). The vast majority of this work has focused on the design and analysis of online algorithms, i.e., algorithms that need to process their input piece-by-piece and make irrevocable decisions without knowing the whole input. 
Learning-augmented online algorithms are enhanced with a prediction regarding the future input, which they can potentially use to make more informed decisions, while carefully managing the risk of being misguided by it. An even more recent line of work has successfully adapted this framework for the design and analysis of mechanisms interacting with strategic bidders [1, 34]. One of the canonical problems in mechanism design is the design of auctions for selling goods to a group of strategic bidders, aiming to maximize the revenue. The main obstacle in achieving this goal is the fact that the amount that each bidder is willing to pay is private information that the designer needs to carefully elicit. Learning-augmented mechanisms are therefore enhanced with predictions regarding the value of this private information, which can potentially overcome these obstacles. In this work, we initiate the study of online mechanism design with predictions, bringing together the two lines of work on online algorithms with predictions and mechanism design with predictions. Specifically, we consider the problem of selling goods to strategic bidders that arrive and depart over time. This problem combines the challenges of both lines of work since the designer needs to carefully elicit the unknown, private, value of each bidder, while also not knowing (and being unable to elicit) the values of the bidders who have not yet arrived. In fact, designing an auction for such dynamic settings can be more demanding because, apart from the combined information limitations that the designer faces, the bidders may not only strategically misreport their value for the good(s) being sold, but they may also strategically misrepresent their arrival and departure times. The study of online mechanism design (without predictions) has previously received a lot of attention, given the many important applications that involve dynamic settings with bidders that arrive and depart over time [32]. For example, the sale of airplane and theater seats or the sale of cars usually takes place over a period of time, during which interested buyers join the market and depart from it. As this happens, the seller may gradually adjust the prices of the goods being sold aiming to maximize the revenue. These adjustments can be a function of the demand that the seller observes over time, but it is quite natural to assume that the designer may also have access to some prediction regarding this demand, e.g., using historical data. Our goal in this paper is to design online auctions enhanced with such a prediction and to evaluate the extent to which they can yield strong performance guarantees in terms of consistency and robustness.

### Our results

Our main goal is to evaluate the potential impact of the learning-augmented model on the performance of auctions in dynamic environments. To achieve this goal, we revisit the well-studied model of online mechanism design, where the bidders arrive and depart over time [32, 20]. This model poses several realistic and non-trivial obstacles for the auction designer: 1) the bidders can lie about their value for the good(s) being sold (the standard obstacle in mechanism design), 2) during the execution of the auction, the auctioneer has no information regarding bidders who have not yet arrived (the standard obstacle in online problems), and 3) the bidders can also lie regarding their arrival and departure times (an obstacle that is specific to the online mechanism design setting).
Within this model, we focus on the problem of selling a single item aiming to maximize revenue. Each bidder \(i\) has a value \(v_{i}\) for the item being sold and this value is the largest amount she would be willing to pay for it. In the absence of any predictions, the best revenue that one can guarantee, even in an offline setting, is equal to the second-highest value over all bidders.1 Using this as a benchmark, prior work proposed an online single-item auction that guarantees revenue at least \(1/4\) of the second-highest value [20]. Aiming to refine this result and achieve stronger guarantees, we adopt the learning-augmented framework and consider the design of online auctions that are enhanced with a (possibly very inaccurate) prediction regarding the highest value over all bidders. The goal is to guarantee more revenue whenever the prediction is accurate (the consistency guarantee), while also achieving some non-trivial revenue guarantee even if the prediction is highly inaccurate (the robustness guarantee, which is equivalent to the worst-case guarantee studied in prior work). Footnote 1: This can be achieved by the classic Vickrey (second-price) auction. Targeting a more ambitious benchmark, we use the highest value over all bidders (the first-best revenue) as a benchmark for our consistency guarantee, while maintaining the second-highest value (the second-best revenue) as the benchmark for robustness (as in prior work). The Three-Phase learning-augmented online auction.Our first main result is the Three-Phase auction: a learning-augmented online auction parameterized by some value \(\alpha\in[0,1]\), which takes place in three phases. During the first phase, the auction observes the values of the first \(\lceil\frac{1-\alpha}{2}n\rceil\) departing bidders in order to "learn" an estimate regarding what an appropriate price may be. In the second phase, the auction "tests the prediction" by giving each active bidder the opportunity to clinch the item if their value is at least as high as the prediction. After \(\lfloor\alpha n\rfloor\) more bidders have departed, if the item remains unsold the auction enters the third and last phase. During this phase, any active bidder is given the opportunity to clinch the item at a price equal to the maximum value observed during the first two phases. This learning-augmented online auction achieves the following trade-off between consistency and robustness. **Theorem**.: _The Three-Phase learning-augmented online auction is deterministic, strategyproof, and for any \(\alpha\in[0,1]\) such that \(\alpha n\in\mathbb{N}\) and \(\frac{1-\alpha}{2}n\in\mathbb{N}\) its revenue guarantees \(\alpha\)-consistency with respect to the first-best revenue benchmark and \((1-\alpha^{2})/4\)-robustness with respect to the second-best revenue benchmark._ Note that, although we focus on revenue maximization throughout this paper, as a corollary of our analysis, we also obtain a social welfare guarantee that is also \(\alpha\)-consistent and \((1-\alpha^{2})/4\)-robust, where consistency and robustness are both with respect to the highest value. If we let \(\alpha=0\) our auction retrieves the robustness guarantee of \(1/4\) that was achieved in prior work, but provides no consistency guarantees. 
On the other extreme, if we let \(\alpha=1\) then we get a perfect consistency of \(1\) (since our auction reduces to a posted-price auction that offers every bidder a price equal to the prediction, so, if the prediction is correct, it extracts the first-best revenue) but without any robustness guarantees. Figure 1 exhibits the convex combination of these two extreme solutions, as well as the improved tradeoff achieved by the Three-Phase auction. A tight impossibility result. Our other main result is an impossibility result proving the optimality of our Three-Phase auction within a broad and natural family of learning-augmented online auctions called PAF auctions (Prediction or Any-so-Far auctions). Having no information regarding the bidders' values beyond the observed values of previous bidders and the prediction, online auctions are limited in terms of what is a "reasonable" price for them to offer. The PAF class contains all auctions such that the prices offered are equal to any value of previous bidders or the prediction (see Definition 16). Although one could technically define auctions outside this class (e.g., auctions that just post an arbitrary price outside this set), PAF captures the Three-Phase auction, as well as existing auctions from previous work [20, 14]. **Theorem**.: _For any \(\alpha\in[0,1]\), there is no PAF auction that is \(\alpha\)-consistent with respect to the first-best revenue benchmark and \((1-\alpha^{2})/4+\omega(\frac{1}{n})\)-robust with respect to the second-best revenue benchmark._ Note that optimality results for secretary and online auction problems are often obtained through LP duality arguments [15, 14, 2]. The LP formulations for these problems rely on strong history-independence properties. In our problem, these history-independence properties do not hold because, for example, the probability of a bidder accepting a price equal to the prediction crucially depends on how many bidders have previously rejected an offered price equal to the prediction. Such dependencies make it challenging to give an LP formulation of our problem. Instead, we use an interchange argument to show that, for any \(\alpha\in[0,1]\), there exists an \(\alpha\)-consistent PMF auction that achieves an optimal robustness and satisfies a three-phase structure identical to our auction. We then optimize for the optimal time thresholds for auctions that satisfy this three-phase structure.

Figure 1: The robustness-consistency trade-off achieved by the Three-Phase auction and the trade-off achieved by convex combinations of the auction that optimizes consistency by completely trusting the predictions and the auction that optimizes robustness by ignoring the predictions.

### Related work

Online mechanism design. Due to the many applications with strategic agents who arrive in an online fashion, online mechanism design is an important subfield of mechanism design (see Chapter 16 by Parkes [32] of the Algorithmic Game Theory textbook [31] for an overview). The problem of online auctions with bidders who arrive dynamically and might misreport their arrival and departure times was introduced by Hajiaghayi et al. [20]. For revenue maximization, their main results are a \(1/4\)-competitive strategyproof mechanism and a \(2/3\) impossibility result in the single item setting, and a constant factor competitive strategyproof mechanism for the \(k\)-item setting. Since then, different variations of the problem have been considered. Buchbinder et al.
[14] examine agents with private arrival times and values, along with an unrestricted strategy space, and propose a strategyproof auction that achieves a competitive ratio of \(3/16\) in terms of revenue. Krysta and Telelis [24] improve the \(k\)-item competitive ratio of [20] to \(1/26e\) for the special case where the active times of bidders do not overlap. Additional related models for online mechanism design include unlimited supply, digital goods [9, 11, 10, 23]; two-sided auctions with both buyers and sellers online [13, 12]; and interdependent value environments [16]. Online algorithms with predictions.The line of work on algorithms with predictions, also called learning-augmented algorithms, is an exciting emerging literature (see [29] for a survey of early contributions and [25] for an updated list of papers in this field). Numerous classic online algorithm design problems have been revisited, including online paging [26], scheduling [33], optimization problems involving covering [7] and knapsack constraints [21], as well as Nash social welfare maximization [8], the secretary problem [2, 17, 18], and a range of graph-related challenges [3]. Among these previous works, the most closely related to our setting is by Antoniadis et al. [2], who consider the value-maximizing secretary problem augmented with a prediction regarding the maximum value of the agents. In fact, their proposed learning-augmented algorithm follows a three-phase structure which is similar to the one used in our auction. However, our setting and our proposed solution, differ in several significant ways. The most significant one is the fact that just focusing on the online aspect of the problem, we need to deal with the important additional obstacle that the agents are strategic and can misreport their value, as well as their arrival and departure time. Furthermore, our goal is to maximize revenue whereas the goal in the secretary problem is to choose the agent with maximum value. Finally, our setting does not assume that each agent departs before the arrival of the next one, like the secretary setting does; this makes the problem of designing strategyproof mechanisms significantly more delicate. To achieve strategyproofness, our auction must very carefully determine the time at which the item is allocated, the bidder who receives the item, and the item's price in order to handle bidders who might be active during multiple phases, which is the main technical (and novel) challenge in our setting. Mechanism design with predictions.Mechanism design with predictions regarding the private information of bidders is an even more recent line of work that was initiated by [1] and [34]. It includes strategic facility location [1, 34, 22], price of anarchy of cost-sharing protocols [19], strategic scheduling [34, 6], auctions [27, 34], and bicriteria (social welfare and revenue) mechanism design [4]. We refer to [5] for a reading list of this line of work. Preliminaries We consider the problem of designing an auction to sell a single item to a set \(N=\{1,2,\ldots,n\}\) of \(n\) bidders who arrive and depart over time. Each bidder \(i\) arrives at some time \(a_{i}\), departs at some time \(d_{i}\geq a_{i}\), and has value \(v_{i}\) for the item being sold. We refer to the interval \([a_{i},d_{i}]\) as the _active time_ for bidder \(i\). For simplicity, we assume that the bidders are indexed based on their order of departure (i.e., bidder \(i\) is the \(i\)-th bidder to depart). 
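To keep the later discussion concrete, the sketches in this document use a small Python container for a bidder's private information; the class and attribute names below are our own illustrative choices and are not part of the paper's formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bidder:
    """A bidder's private information: arrival a_i, departure d_i >= a_i,
    and value v_i (the largest amount she would pay for the item)."""
    arrival: float
    departure: float
    value: float

    def is_active(self, t: float) -> bool:
        """The bidder is active throughout her interval [a_i, d_i]."""
        return self.arrival <= t <= self.departure
```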
We also let \(\pi\) be an arbitrary total order over the set of bidders, which we use for tie-breaking, and let \(i\succ j\) denote the fact that \(i\) is ranked before bidder \(j\) according to \(\pi\). Our objective is to maximize the revenue from the sale. The main obstacle is that all the relevant information of each bidder \(i\), i.e., their "type" \(\theta_{i}=(a_{i},d_{i},v_{i})\), is private information that is unknown to the auction designer, so the auction needs to elicit it from each bidder. However, the bidders can misreport their types and the auction needs to be designed to ensure that they cannot benefit by doing so. Specifically, apart from misreporting her value \(v_{i}\) for the item (which is the standard type of manipulation considered in mechanism design), a bidder can also misreport her arrival and departure times: adopting the original model introduced by Hajiaghayi et al. [20], we assume that each bidder \(i\) can delay the announcement of her arrival (essentially reporting a delayed arrival time \(\hat{a}_{i}>a_{i}\)), and she can report a false departure time \(\hat{d}_{i}\) (either earlier or later than her true departure time, \(d_{i}\)). Upon arrival, each bidder \(i\) declares a type \(\hat{\theta}_{i}\) (potentially different than \(\theta_{i}\)) and the auction needs to determine who the winner is (i.e., which bidder will be allocated the item), at what time \(t\) the item should be allocated to the winner, as well as the amount \(p\) that the winner should pay. Apart from the information limitations that the auction faces due to the private nature of the bidders' types, the auction also needs to be implemented in an _online_ fashion. This means that if it decides to allocate the item at some time \(t\), then this decision is irrevocable, and both this allocation decision and the payment amount requested from the winner can depend only on information regarding bidders with arrival time \(a_{i}\leq t\). In other words, the allocation and payment cannot in any way depend on the types of bidders that have not yet arrived. If the auction allocates the item to some bidder \(i^{*}\) at some time \(t\) for a price of \(p\), then this bidder's utility is equal to \(v_{i^{*}}-p\), as long as \(t\in[a_{i^{*}},d_{i^{*}}]\), i.e., as long as \(i^{*}\) is active at time \(t\). Otherwise, if \(i^{*}\) is allocated the item outside her (real) active time, then she receives no value from it, and her utility is \(-p\). All other bidders receive no item and contribute no payment, so their utility is \(0\). An auction is _strategyproof_ if for every bidder \(i\), truthfully reporting her type is a dominant strategy. This means that no matter what types \(\hat{\Theta}_{-i}=(\hat{\theta}_{1},\ldots,\hat{\theta}_{i-1},\hat{\theta}_{i+1},\ldots,\hat{\theta}_{n})\) the other bidders report, the utility of bidder \(i\) is maximized if she reports her true type, \(\theta_{i}\). To emphasize the added difficulty for achieving strategyproofness in online auctions, relative to static ones, prior work often distinguishes between _value-strategyproofness_, which ensures that bidders will not want to misreport their value, and _time-strategyproofness_, which is the additional requirement to ensure that bidders cannot benefit by misrepresenting their arrival or departure times either.
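As a quick illustration of the utility model just described, the following small helper (again our own naming, a sketch rather than anything from the paper) computes a bidder's utility from an outcome; strategyproofness asks that no misreported type can yield a strictly larger value of this quantity than truthful reporting.

```python
def bidder_utility(value: float, arrival: float, departure: float,
                   allocated: bool, alloc_time: float = 0.0,
                   price: float = 0.0) -> float:
    """Utility under the model above: v_i - p if the item is received while the
    bidder is truly active, -p if it is received outside [a_i, d_i], and 0 if
    the bidder does not receive the item (and pays nothing)."""
    if not allocated:
        return 0.0
    if arrival <= alloc_time <= departure:
        return value - price
    return -price
```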
To evaluate the performance of online auctions with respect to the revenue they extract, prior work focused on a model where a set \(I=\{[a_{1},d_{1}],[a_{2},d_{2}],...,[a_{n},d_{n}]\}\) of \(n\) arrival-departure intervals and a set \(V\) of \(n\) values are generated adversarially, and the values of \(V\) are then matched to arrival-departure intervals from \(I\) uniformly at random. Note that, if the intervals are all non-overlapping, this reduces to the classic _random ordering_ model, where the values of the bidders are determined adversarially and the order of their arrival is random. Therefore, our setting generalizes the classic setting of the "secretary problem". We let \(\mu(V,I)\) denote the random matching of values to intervals and \(\mathbb{E}_{\Theta\sim\mu(V,I)}(\mathtt{Rev}(M(\Theta)))\) denote the _expected_ revenue of an auction \(M\) with respect to this random matching. Also, let \(v_{(1)}\) and \(v_{(2)}\) denote the highest and second-highest values in \(V\), which are important benchmarks since the former is the highest feasible revenue (no bidder would pay more than that) and the latter is the "offline Vickrey" benchmark (this corresponds to the amount of revenue that is actually achievable via the classic Vickrey auction in offline settings). In this work we adopt the learning-augmented framework and study online auctions that are also equipped with a (potentially very inaccurate) prediction \(\tilde{v}_{(1)}\) regarding the highest value, \(v_{(1)}\), in \(V\). We denote the expected revenue of an auction, \(M\), as \(\mathbb{E}_{\Theta\sim\mu(V,I)}(\mathtt{Rev}(M(\Theta,\tilde{v}_{(1)})))\) and we evaluate the performance of \(M\) using its _consistency_ and _robustness_. Consistency refers to the competitive ratio of the expected revenue achieved by the algorithm when the prediction it is provided with is accurate, i.e., whenever \(\tilde{v}_{(1)}=v_{(1)}\). The benchmark we use for consistency is the highest value in \(V\), often referred to as the _first-best_ revenue. Formally: \[\text{consistency}(M)=\min_{V,I}\frac{\mathbb{E}_{\Theta\sim\mu(V,I)}\left(\mathtt{Rev}\left(M\left(\Theta,v_{(1)}\right)\right)\right)}{v_{(1)}}.\] Robustness refers to the competitive ratio of the expected revenue given an adversarially chosen, inaccurate, prediction. The benchmark we use for robustness is the best revenue achievable via any (offline) strategyproof auction, i.e., the second highest value \(v_{(2)}\), often referred to as the _second-best_ revenue. Formally: \[\text{robustness}(M)=\min_{V,I,\tilde{v}_{(1)}}\frac{\mathbb{E}_{\Theta\sim\mu(V,I)}\left(\mathtt{Rev}\left(M\left(\Theta,\tilde{v}_{(1)}\right)\right)\right)}{v_{(2)}}.\]

## 3 The Three-Phase Auction

We propose the Three-Phase auction, which is parameterized by a value \(\alpha\in[0,1]\), with greater values corresponding to higher confidence in the accuracy of the prediction. Our main result in this section shows that for any choice of \(\alpha\) this auction achieves \(\alpha\)-consistency and \((1-\alpha^{2})/4\)-robustness, while simultaneously guaranteeing both value-strategyproofness and time-strategyproofness. The Three-Phase auction considers the bidders based on the order of their departure (i.e., the order of their indices) and comprises three separate phases.
1. During the **first phase**, the auction observes the values of the first \(\lceil\frac{1-\alpha}{2}n\rceil\) bidders to depart (without allocating the item to any of them), aiming to "learn" an estimate regarding what a reasonable price for the item may be. If, during this first phase, the auction observes a value that exceeds the predicted maximum, \(\tilde{v}_{(1)}\) (implying that the prediction is inaccurate), then, after the first phase is complete, the auction essentially skips the second phase and moves directly onto the third phase. If, on the other hand, the first phase does not prove the prediction to be inaccurate, then the auction proceeds to the second phase.
2. During the **second phase**, the auction "tests" the prediction. Specifically, during this phase (which terminates after \(\lfloor\alpha n\rfloor\) more bidders have departed) it asks all active bidders whether they would be willing to pay a price equal to the prediction. If any active bidder is willing to pay this price, then they secure the item and they are guaranteed to pay a price no more than that. The exact payment of bidders who secure the item during the second phase, however, may need to be lower than that to guarantee strategyproofness; we discuss this important subtlety later on. Finally, if none of the \(\lfloor\alpha n\rfloor\) bidders is willing to pay a price equal to the prediction during the second phase, then the auction enters its third phase.
3. During the **third phase**, the auction offers a take-it-or-leave-it price equal to the highest value observed over all the bidders that have previously departed, and any active bidder can claim the item at that price.

Before going into more detail regarding each of the phases, we note that the auction has a simple description for the special case where no two bidders have overlap with respect to their active intervals (i.e., there is just one active bidder at a time). In this case, the auction is a posted price mechanism that posts price \(\infty\) to the first \(\lceil\frac{1-\alpha}{2}n\rceil\) bidders, then posts price \(\max\{v_{\max},\tilde{v}_{(1)}\}\) to the next \(\lfloor\alpha n\rfloor\) bidders and, finally, if the item remains unsold, it posts price \(v_{\max}\) to the remaining bidders, where \(v_{\max}\) is the maximum value of bidders who have previously departed. We note that the allocation rule induced by these posted prices is a generalization of the threshold-based algorithm for the classic secretary problem. The main challenge, and the main technical portion of our auction, is to handle the cases where there is an overlap between bidders. The time at which the item is allocated, the bidder who receives the item, and the item's price must all be carefully designed to handle bidders who might be active during multiple phases (in particular the second and third phases) and are competing against other bidders. Irrespective of the stage of the auction where the winner \(i^{*}\) is determined, the item is allocated to \(i^{*}\) at the time of her (reported) departure, \(d_{i^{*}}\), to guarantee time-strategyproofness. If the winner is determined during the third phase, then her final price is the take-it-or-leave-it price that they accepted during this phase. If, on the other hand, the winner is determined during the second phase, the final price needs to be carefully determined in order to guarantee the strategyproofness of the auction.
Specifically, if the winner remains active after the transition into the third phase and no other bidder would have claimed the item during the second phase, then the Three-Phase auction may need to reduce the winner's payment to be equal to the take-it-or-leave-it price that would have been offered during the third phase if we were to remove \(i^{*}\) and simulate the outcome of the auction without them. For clarity, we formally present the allocation and the payment rule of the auction separately: * Process 1 is the execution of the **allocation rule**, i.e., it determines who should receive the item. This process maintains a value \(v_{\max}\), corresponding to the maximum value observed among the bidders that have departed so far, and a threshold value \(\tau\). If any active bidder has value at least \(\tau\), then they can secure the item (tie-breaking using \(\pi\) if there are multiple such active bidders). The threshold \(\tau\) is \(\infty\) during the first phase, then \(\max\{v_{\max},\tilde{v}_{(1)}\}\) during the second phase, and finally \(v_{\max}\) during the third phase. This process returns the winner \(i^{*}\), if any, and the threshold \(\tau\) at which \(i^{*}\) secured the item. Furthermore, to make the formal definition of the payment rule easier, we also let this process return a Boolean variable, "active-winner," which is true only if \(i^{*}\) secured the item right after the transition between two phases. Specifically, this Boolean variable is set to true if the item was secured after a departure of an agent rather than an arrival of one, which implies that the departure caused the transition from one phase to another, leading to a drop in the threshold value, \(\tau\), and the winner was already active. * Process 2 is the execution of the **payment rule**, i.e., it determines how much the winner, if any, should pay for the item. The price is initially set to be equal to the threshold \(\tau\) at which the item was secured and the final price will be no more than that. However, under some circumstances, the price is reduced to guarantee strategyproofness. Specifically, if the winner secured the item during the second phase and remains active during the third phase, they may receive a lower price. In this case, the price is determined by simulating the allocation process without the winning bidder, \(i^{*}\). If the new winner \(i^{\prime}\), in the absence of \(i^{*}\), either i) is not active during the transition into the third phase or ii) loses to \(i^{*}\) in tie-breaking, then the price \(p\) is lowered to the threshold \(\tau^{\prime}\) at which \(i^{\prime}\) would have secured the item. Intuitively, if neither of these two conditions hold and we did not offer \(i^{*}\) the reduced price, then \(i^{*}\) could report a value of \(\tau^{\prime}\) instead of her true value and secure the item at that lower price right after the transition into the third phase. 
```
Input : types Θ of the n bidders, consistency parameter α ∈ [0,1], prediction ṽ_(1) ≥ 0

 1   A ← ∅                                   // the set of active bidders
 2   L ← ∅                                   // the set of bidders who have departed
 3   v_max ← 0                               // the maximum value observed so far
 4   τ ← ∞
 5   i* ← 0
 6   active-winner ← false
 7   while L ≠ N do
 8       if some bidder i arrives (tie-break using π) then
 9           A ← A ∪ {i}
10           if v_i ≥ τ then
11               i* ← i and then break from while-loop
12
13       else if some bidder i departs (tie-break using π) then
14           A ← A \ {i} and L ← L ∪ {i}
15           v_max ← max{v_max, v_i}
             // threshold update if we just entered the second phase
16           if |L| = ⌈(1-α)/2 · n⌉ then
17               τ ← max{v_max, ṽ_(1)}
             // threshold update if we just entered the third phase
18           else if |L| = ⌊(1+α)/2 · n⌋ then
19               τ ← v_max
             // check for potential winner among active bidders
20           if there exists i ∈ A such that v_i ≥ τ then
21               active-winner ← true
22               i* ← i (tie-break using π if needed in choosing i) and then break from while-loop
23   return i*, τ, active-winner
```
**Process 1:** Alloc: the allocation rule of the Three-Phase auction

**Theorem 1**.: _The Three-Phase auction can be implemented in an online fashion._ Proof.: It is easy to verify that the allocation rule is online implementable, since the auction maintains a threshold \(\tau\) at any point and decides the winner when some active bidder's value is above the threshold; this requires no future information. We now argue that the payment rule is online implementable as well. Crucially, note that the winner is allocated the item at the time of their departure, so all we need to argue is that the price that they need to pay can be determined at that point. To verify this fact, note that if \(i^{*}\) is not active during the transition from the second phase to the third phase, then her price is just \(\tau\). If, on the other hand, \(i^{*}\) is active during that transition, then the auction can also check the value of any other bidder that is also active up to that transition to determine \(i^{\prime},\tau^{\prime}\) and active-winner\({}^{\prime}\), without needing to simulate any portion of the allocation rule beyond the departure of \(i^{*}\). Our main result in this section shows that the Three-Phase auction not only guarantees value- and time-strategyproofness, but it also achieves a non-trivial tradeoff between robustness and consistency. **Theorem 2**.: Three-Phase _is a value-strategyproof and time-strategyproof online auction that, given any parameter \(\alpha\in W_{n}\), simultaneously guarantees \(\alpha\)-consistency and \(\frac{1-\alpha^{2}}{4}\)-robustness._ In Section 3.1, we prove the consistency and robustness guarantees achieved by our auction. In Section 3.2, we show that it is strategyproof. Finally, in Section 3.3, we give an extension of our auction that achieves revenue guarantees as a function of the prediction quality. For presentation purposes we use \(i_{1}\) to denote \(\frac{1-\alpha}{2}n\) and \(i_{2}\) to denote \(\frac{1+\alpha}{2}n\) in the following analysis.

### Revenue Guarantees

In this section we analyze the performance of our auction in terms of consistency and robustness.
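Before the formal analysis, here is a minimal runnable sketch (our own illustrative code, not the paper's implementation) of the posted-price form that the auction takes in the no-overlap special case described above, together with a Monte Carlo estimate of the two revenue ratios studied in this subsection. The function names, the adversarial prediction used for the robustness estimate, and the test instance are all our own choices.

```python
import math
import random
from typing import List, Optional, Tuple

def three_phase_no_overlap(values: List[float], alpha: float,
                           prediction: float) -> Tuple[Optional[int], float]:
    """Posted-price form of the Three-Phase auction when bidders arrive one at
    a time: price infinity for the first ceil((1-alpha)/2 * n) bidders, then
    max(v_max, prediction) for the next floor(alpha * n) bidders, then v_max.
    values[i] is the value of the i-th departing bidder.  Returns the index of
    the winner (or None) and the revenue collected."""
    n = len(values)
    i1 = math.ceil((1 - alpha) / 2 * n)      # departures observed in phase 1
    i2 = i1 + math.floor(alpha * n)          # last position offered the phase-2 price
    v_max = 0.0                              # maximum value among departed bidders
    for i, v in enumerate(values):
        if i < i1:
            price = math.inf
        elif i < i2:
            price = max(v_max, prediction)
        else:
            price = v_max
        if v >= price:
            return i, price                  # bidder i clinches the item at this price
        v_max = max(v_max, v)                # bidder departs without the item
    return None, 0.0

def estimate_ratios(values: List[float], alpha: float,
                    trials: int = 50_000) -> Tuple[float, float]:
    """Average revenue / v_(1) with an accurate prediction and average
    revenue / v_(2) with a grossly over-estimated prediction, over uniformly
    random arrival orders.  The guarantees below say these averages are at
    least alpha and roughly at least (1 - alpha**2) / 4, respectively."""
    v1, v2 = sorted(values, reverse=True)[:2]
    cons = rob = 0.0
    for _ in range(trials):
        order = random.sample(values, len(values))
        _, rev_acc = three_phase_no_overlap(order, alpha, prediction=v1)
        _, rev_bad = three_phase_no_overlap(order, alpha, prediction=10 * v1)
        cons += rev_acc / v1
        rob += rev_bad / v2
    return cons / trials, rob / trials

# Example: n = 8 and alpha = 1/2, so alpha * n and (1 - alpha)/2 * n are integral.
print(estimate_ratios([3.0, 8.0, 1.0, 5.0, 2.0, 7.0, 4.0, 6.0], alpha=0.5))
```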
We focus on the values of \(\alpha\) in the set \(W_{n}=\{\alpha\in[0,1]:\alpha n\in\mathbb{N}\text{ and }\frac{1-\alpha}{2}n\in\mathbb{N}\}\), which make \(\alpha n\) and \(\frac{1-\alpha}{2}n\) (the numbers of bidder departures in the first two phases) integral. **Lemma 3**.: _For any \(\alpha\in W_{n}\), Three-Phase is \(\alpha\)-consistent._ Proof.: Assume that the prediction is correct, i.e., \(\tilde{v}_{(1)}=v_{(1)}\). First, observe that \(\tilde{v}_{(1)}\geq v_{i}\) for all \(i\in[n]\), so \(\tau=\tilde{v}_{(1)}=v_{(1)}\) during Phase 2, and no one would be above the threshold besides the highest value bidder. Additionally, note that no bidder is allocated the item in the first phase. Then the auction would be able to extract revenue \(v_{(1)}\) if the highest value bidder is allocated the item during Phase 2 and pays the price \(v_{(1)}\). Let \(i^{*}\) be the bidder with the highest value; it is sufficient to guarantee the aforementioned outcome if her departure time is between the \(i_{1}\)-th departure and the \(i_{2}\)-th departure, i.e., \(i^{*}\in[i_{1}+1,i_{2}]\), which, based on our random-ordering assumption, occurs with a probability of \(\frac{i_{2}-i_{1}}{n}=\alpha\). **Lemma 4**.: _For any \(\alpha\in W_{n}\), Three-Phase is \(\frac{(1-\alpha^{2})}{4}\)-robust._ Proof.: Recall that robustness is measured relative to \(v_{(2)}\). We now consider the cases where we under-predict and over-predict separately. **Case one: \(\tilde{v}_{(1)}>v_{(1)}\).** Since \(\tau=\tilde{v}_{(1)}>v_{i}\) for any bidder \(i\), we have that bidders can only be above the threshold in phase 3. In this case, by the payment rule, it is sufficient to extract \(v_{(2)}\) if \(\tau=v_{(2)}\), which requires the second highest bidder to be amongst the first \(i_{2}\) bidders to depart and the bidders with the highest value to be amongst the last \(n-i_{2}\) to depart. Then with probability \(\frac{i_{2}}{n}\frac{n-i_{2}}{n-1}=\frac{1-\alpha^{2}}{4}+O(\frac{1}{n})\), revenue \(v_{(2)}\) is extracted. **Case two: \(\tilde{v}_{(1)}\leq v_{(1)}\).** Observe that in the under-predicted case, the prices in phases 2 and 3 are guaranteed to fall in the range \([v_{(2)},v_{(1)}]\) if the second highest bidder is amongst the first \(i_{1}\) bidders to depart. Bidders with the highest value may be allocated the item if they are present at any point in phases 2 and 3, meaning they are amongst the last \(n-i_{1}\) to depart. Then with probability \(\frac{i_{1}}{n}\frac{n-i_{1}}{n-1}=\frac{1-\alpha^{2}}{4}+O(\frac{1}{n})\), revenue \(v_{(2)}\) is extracted.

### Strategyproofness

In this section we show our auction is both value-strategyproof and time-strategyproof. All the missing proofs are deferred to Appendix A. We first show that the bidders with the first \(i_{1}\) departure times have no incentive to misreport. **Lemma 5**.: _Consider some bidder \(i\) and any \(\hat{\Theta}_{-i}\). If bidder \(i\)'s true departure time is in the first \(i_{1}\), then bidder \(i\) has no incentive to misreport her type \(\theta_{i}\)._ Proof.: First note that if such bidder \(i\) reports her type truthfully, she won't receive the item since \(\tau=\infty\) before the departure of the \(i_{1}\)-th bidder. Let \(\hat{d}\) be the reported departure time of the \(i_{1}+1\)-th bidder based on the departure schedule. The only way for bidder \(i\) to possibly obtain the item is to report a later departure time, denoted as \(\hat{d}_{i}\), such that \(\hat{d}_{i}>\hat{d}\geq d_{i}\).
However, the auction allocates at her reported departure time \(\hat{d}_{i}\), which falls outside her active time. Based on our assumption, she receives no value from the item. Such a bidder therefore has no incentive to change her type to obtain the item. We now make the following observation: if there exists a value in the first \(i_{1}\) bidders that is weakly more than the prediction, then the price the winner pays is fixed. **Observation 6**.: _Let \(v_{\max}^{\leq i_{1}}\) be the maximum value of the first \(i_{1}\) departed bidders. If in Line 16, \(\tau=v_{\max}^{\leq i_{1}}\), then the price the winner pays is \(p=v_{\max}^{\leq i_{1}}\)._ We refer to the above scenario as the _single-threshold case_ since \(\tau\) is effectively only updated once (from \(\infty\) to \(v_{\max}^{\leq i_{1}}\)); analogously, we refer to the other scenario as the _two-threshold case_. We note that only the first \(i_{1}\) bidders can define \(v_{\max}^{\leq i_{1}}\); therefore, they (together with the prediction) decide which case the rest of the bidders are in. We first show that the rest of the bidders have no incentive to lie in the single-threshold case. The next two lemmas focus on bidders with true value below and above the threshold, respectively. **Lemma 7**.: _Consider some bidder \(i\), and any \(\hat{\Theta}_{-i}\) that results in the single-threshold case and let \(\tau\) be the threshold defined in Line 16. If bidder \(i\) has a value \(v_{i}\leq\tau\), she has no incentive to misreport her type \(\theta_{i}\)._ **Lemma 8**.: _Consider some bidder \(i\), and any \(\hat{\Theta}_{-i}\) that results in the single-threshold case and let \(\tau\) be the threshold defined in Line 16. If bidder \(i\) has a value \(v_{i}>\tau\), she has no incentive to misreport her type \(\theta_{i}\)._ We now discuss the more involved case, the two-threshold case. For the ease of presentation, we will denote the threshold defined in Line 16 as \(\tau_{1}\) and the threshold defined in Line 18 as \(\tau_{2}\). Note that \(\tau_{2}<\tau_{1}\) (\(\tau_{2}=\tau_{1}\) is equivalent to the single-threshold case). We first show that winners in this case can't manipulate the price via misreporting. **Lemma 9**.: _Consider some bidder \(i\), and any \(\hat{\Theta}_{-i}\) that results in the two-threshold case. If \(i\) is the winner with her true type \(\theta_{i}\), then bidder \(i\) has no incentive to misreport her type \(\theta_{i}\)._ Proof.: Consider the two possible prices the bidder is paying, \(\tau_{1}\) and \(\tau_{2}\). If bidder \(i\) wins and pays \(\tau_{2}\) (the cheaper one), she has no incentive to deviate as it is the best outcome. Consider the cases where bidder \(i\) wins with price \(\tau_{1}\). By the payment rule, she either left before the \(\tau\) update (she is ranked before \(i_{2}\) with respect to departure time) or there must exist a bidder \(i^{\prime}\) that wins in Line 3 of the payment rule. Consider the three possible cases below: **Case one:** \(i\leq i_{2}\). In this case the bidder \(i\) departs before the threshold drops to \(\tau_{2}\). To possibly get the lower price, she has to report a departure time \(\hat{d}_{i}>\hat{d}_{i_{2}}\geq d_{i}\). However, the auction allocates at her reported departure time \(\hat{d}_{i}\) which falls outside her active time. Based on our assumption, she receives no utility from the item. **Case two:** \(\tau^{\prime}=\tau_{1}\). This means that some other bidder \(i^{\prime}\) is above the threshold \(\tau_{1}\) before it drops.
In this case, the winner can't reduce the price since the existence of such a bidder is independent of her report. **Case three:** active-winner = true and \(i^{\prime}\succ i\). This case happens when \(i^{\prime}\) becomes available right after the threshold drops to \(\tau_{2}\). Since both the tie-breaking rule and the existence of bidder \(i^{\prime}\) are independent of bidder \(i\)'s report, bidder \(i\) can't get a better price via misreporting. We now demonstrate that the losing bidders cannot benefit from misreporting as well. The next lemma shows that bidders with values below \(\tau_{2}\) have no incentive to lie. The proof is almost identical to the proof of Lemma 7. **Lemma 10**.: _Consider some bidder \(i\), and any \(\hat{\Theta}_{-i}\) that results in the two-threshold case and let \(\tau_{1}\) and \(\tau_{2}\) be the thresholds defined in Line 16 and Line 18 respectively. If bidder \(i\) has a value \(v_{i}\leq\tau_{2}\), she has no incentive to misreport her type \(\theta_{i}\)._ The next lemma shows that the losing bidder with values above \(\tau_{1}\) has no incentive to lie. The proof is almost identical to the proof of Lemma 8 regarding the losing bidders. **Lemma 11**.: _Consider some bidder \(i\), and any \(\hat{\Theta}_{-i}\) that results in the two-threshold case and let \(\tau_{1}\) and \(\tau_{2}\) be the thresholds defined in Line 16 and Line 18 respectively. If bidder \(i\) has a value \(v_{i}\geq\tau_{1}\) and she is not the winner, she has no incentive to misreport her type \(\theta_{i}\)._ The next lemma shows that bidders with a value in between the two thresholds have no incentive to lie. **Lemma 12**.: _Consider some bidder \(i\), and any \(\hat{\Theta}_{-i}\) that results in the two-threshold case and let \(\tau_{1}\) and \(\tau_{2}\) be the thresholds defined in Line 16 and Line 18 respectively. If bidder \(i\) has a value \(v_{i}\) such that \(\tau_{2}<v_{i}<\tau_{1}\) and she is not the winner, she has no incentive to misreport her type \(\theta_{i}\)._ Proof.: Consider a bidder \(i\) with a value such that \(\tau_{1}>v_{i}>\tau_{2}\) who is not the winner. The only outcome that is strictly better for her is winning the item with a price of \(\tau_{2}\). For the rest of the proof we show that such an outcome is not obtainable by such bidders through misreporting. Since \(i\) is not the winner, there must be another bidder \(i^{*}\) who either has a value above the threshold before \(i\) or \(i^{*}\) is above the threshold at the same time as \(i\) but \(i^{*}\succ i\). Since \(i\)'s value is less than \(\tau_{1}\), she can only be above the threshold after the departure of the \(i_{2}\)-th bidder if she reports her true type. **Case one:** If bidder \(i^{*}\) is above the threshold before bidder \(i\), two scenarios are possible. First, bidder \(i^{*}\) is above the threshold before the arrival time of bidder \(i\), in which case there is no way bidder \(i\) can misreport and win the item, as we assume bidders cannot report an arrival time earlier than their actual arrival time. If bidder \(i^{*}\) is not above the threshold before bidder \(i\)'s arrival time, but is above the threshold earlier than bidder \(i\) is, it must be that bidder \(i\)'s arrival time is before the threshold drops to \(\tau_{2}\), and bidder \(i^{*}\) is above the threshold \(\tau_{1}\). In this case, bidder \(i\) indeed can report a value \(\hat{v}_{i}\geq\tau_{1}\) to win the item.
However, due to the presence of bidder \(i^{*}\), the price she needs to pay is \(p=\tau_{1}\) by Line 5 of the payment subroutine, implying that bidder \(i\) would reduce her utility by misreporting in this way. **Case two:** If bidder \(i\) loses to bidder \(i^{*}\) in the tie-breaking, this can only occur when the threshold drops to \(\tau_{2}\). The only way for bidder \(i\) to win the item is if she has an arrival time before the threshold drop and reports a value \(\hat{v}_{i}\geq\tau_{1}\) to avoid the tie-breaking. However, due to the presence of bidder \(i^{*}\), we would get active-winner \(=\) true and \(i^{\prime}\succ i\), making \(p=\tau=\tau_{1}\). Even if bidder \(i\) obtains the item, her utility is non-positive. We are now ready to show the main lemma of the subsection. **Lemma 13**.: Three-Phase _is both value-strategyproof and time-strategyproof._ Proof.: Combining Lemmas 7 and 8, we get that in the single-threshold case, no bidder \(i\) has incentive to misreport her type \(\theta_{i}\). Combining Lemmas 9, 10, 11 and 12, we get that in the two-threshold case, no bidder \(i\) has incentive to misreport her type \(\theta_{i}\).

### The Error-Tolerant Auction

We now show that our auction can be easily extended to achieve an improved revenue guarantee not only when the prediction is perfectly accurate, but even when it is approximately accurate. Given a prediction \(\tilde{v}_{(1)}\) regarding the maximum bidder value \(v_{(1)}\), we use \(q(\tilde{v}_{(1)},v_{(1)})\), or just \(q\), to capture the prediction quality, defined as the relative under- or over-prediction: \[q=\min\left\{\frac{\tilde{v}_{(1)}}{v_{(1)}},\frac{v_{(1)}}{\tilde{v}_{(1)}}\right\}.\] Note that \(q\in[0,1]\) and that higher values of \(q\) correspond to better predictions. We start by describing the Error-Tolerant auction, which is an extension of the Three-Phase auction. This auction takes as input an additional parameter \(\gamma\in[0,1]\), called the error-tolerance parameter, whose value is chosen by the auction designer. The only change from Three-Phase to Error-Tolerant is that Line 16 is changed from \(\tau=\max\{v_{\max},\tilde{v}_{(1)}\}\) to \(\tau=\max\{v_{\max},\gamma\cdot\tilde{v}_{(1)}\}\). The main result for the Error-Tolerant auction is that when the prediction quality \(q\) is at least the error-tolerance \(\gamma\), then the auction achieves a revenue guarantee of \(\max\{\alpha\gamma q\cdot v_{(1)},\frac{1-\alpha^{2}}{4}v_{(2)}\}\). Thus, in that case, a competitive ratio of \(\alpha\gamma q\) is guaranteed against the first-best revenue benchmark \(v_{(1)}\), even if the prediction is not exactly correct. In addition, a competitive ratio of \(\frac{1-\alpha^{2}}{4}\) against the second-best revenue benchmark \(v_{(2)}\) is always maintained. We defer the proof to Appendix B. **Theorem 14**.: Error-Tolerant _is a value-strategyproof and time-strategyproof online auction that, given any parameter \(\alpha\in W_{n}\), \(\gamma\in[0,1]\), and the actual quality \(q\) of the prediction, achieves expected revenue at least_ \[\mathbb{E}(\text{Rev}(M,\Theta,\tilde{v}_{(1)}))\geq\begin{cases}\max\left\{\alpha\gamma q\cdot v_{(1)},\frac{1-\alpha^{2}}{4}v_{(2)}\right\}&\text{if $q\geq\gamma$,}\\ \frac{1-\alpha^{2}}{4}v_{(2)}&\text{if $q<\gamma$.}\end{cases}\]

## 4 A Tight Impossibility Result

In this section, we show that the tradeoff achieved by our auction is optimal for a natural family of auctions.
To define this family of auctions, we first need to define the family of instances \(\mathcal{I}_{\text{no}}\), called the no-overlap instances. A no-overlap instance is such that at each time step \(i\in[n]\), there is a single active bidder \(i\) such that \(a_{i}=d_{i}=i\) (the values can be arbitrary). For the remainder of this section, we implicitly assume that we are only considering no-overlap instances and show that the tight impossibility result holds on this restricted family of instances. We refer to bidder \(i\) as the bidder who arrives and departs at time \(i\in[n].\) The analysis of the impossibility result considers two nested families of auctions. The first is called the family of Prediction or Maximum-so-Far (PMF) auctions. **Definition 15**.: _Consider the following three allocation rules: \(x_{1}^{i}\) never allocates the item to bidder \(i\), \(x_{2}^{i}\) allocates to \(i\) if \(v_{i}\geq\max(\tilde{v}_{(1)},v_{\max}^{<i})\), and \(x_{3}^{i}\) allocates to \(i\) if \(v_{i}\geq v_{\max}^{<i}\). An auction \(M\) is in the family of Prediction or Maximum-so-Far (PMF) auctions \(\mathcal{M}_{m}\) if, for every bidder \(i\in[n]\), there is an allocation rule \(x^{i}\in\{x_{1}^{i},x_{2}^{i},x_{3}^{i}\}\) such that, for all no-overlap instances \(\mathcal{I}_{\text{no}}\), if the item is not allocated to a bidder \(j<i\) then \(M\) allocates to \(i\) according to \(x^{i}\)._ It is easy to verify that our auction, as well as online auctions in previous work that are without predictions [20, 14], are PMF auctions. Our impossibility result holds for a family of auctions that generalizes PMF auctions. In a Prediction or Any-so-Far (PAF) auction \(M\in\mathcal{M}_{a}\), the allocation rules can depend on the \(j^{th}\) highest value seen so far \(v_{(j)}^{<i}\), for any \(j\in[i-1]\). **Definition 16**.: _Consider the following allocation rules for bidder \(i\):_ * \(x_{1}^{i}\) _never allocates the item to bidder_ \(i\)_,_ * _for all_ \(j\in[i-1]\)_,_ \(x_{2,j}^{i}\) _allocates to_ \(i\) _if_ \(v_{i}\geq\max(\tilde{v}_{(1)},v_{(j)}^{<i})\)_, and_ * _for all_ \(j\in[i-1]\)_,_ \(x_{3,j}^{i}\) _allocates to_ \(i\) _if_ \(v_{i}\geq v_{(j)}^{<i}\)_._ _Let \(A^{i}=\{x_{1}^{i}\}\cup\{x_{2,j}^{i}\}_{j\in[i-1]}\cup\{x_{3,j}^{i}\}_{j\in[i- 1]}\). An auction \(M\) is in the family of Prediction or Any-so-Far (PAF) auctions \(\mathcal{M}_{a}\) if, for every bidder \(i\in[n]\), there is an allocation rule \(x^{i}\in A^{i}\) such that, if the item is not allocated to a bidder \(j<i\), \(M\) allocates to \(i\) according to \(x^{i}\) for all no-overlap instances \(\mathcal{I}_{\text{no}}\)._ Observe that \(\mathcal{M}_{m}\subset\mathcal{M}_{a}\subset\mathcal{M}\). The main result in this section is the following. **Theorem 17**.: _For any \(\alpha\in[0,1]\), there is no auction \(M\) in the PAF family of auctions \(\mathcal{M}_{a}\) that is \(\alpha\)-consistent and \((\frac{1-\alpha^{2}}{4}+\omega(\frac{1}{n}))\)-robust._ We conjecture that the above result also holds for all strategyproof auctions. Note that, even without predictions, there is still a gap between the best-known \(\frac{1}{4}\)-competitive auction and the \(\frac{2}{3}\) impossibility result of [20]. Thus, showing that Theorem 17 holds for all strategyproof auctions would also close the gap for the setting without predictions. 
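To make the PMF family of Definition 15 concrete, the following sketch (again our own illustrative code; the rule names and helper functions are not from the paper) runs an arbitrary PMF auction on a no-overlap instance and computes its expected revenue ratios exactly by averaging over all \(n!\) arrival orders, in the spirit of the permutation-counting argument used in the next subsection.

```python
import itertools
from math import factorial
from typing import List, Tuple

# The three per-bidder allocation rules of a PMF auction (Definition 15).
NEVER, PRED_OR_MAX, MAX_SO_FAR = "x1", "x2", "x3"

def run_pmf(values: List[float], rules: List[str], prediction: float) -> float:
    """Revenue of a PMF auction on a no-overlap instance: bidder i is offered a
    posted price determined by rules[i] and the values seen so far, and the
    first bidder whose value meets her price receives the item."""
    v_max = 0.0                              # highest value among earlier bidders
    for v, rule in zip(values, rules):
        if rule == PRED_OR_MAX:
            price = max(prediction, v_max)
        elif rule == MAX_SO_FAR:
            price = v_max
        else:                                # NEVER: this bidder is never served
            price = float("inf")
        if v >= price:
            return price
        v_max = max(v_max, v)
    return 0.0

def revenue_ratios(values: List[float], rules: List[str],
                   bad_prediction: float) -> Tuple[float, float]:
    """Expected revenue / v_(1) under an accurate prediction and expected
    revenue / v_(2) under the given inaccurate prediction, averaged over all
    n! arrival orders of this instance (consistency and robustness are the
    worst case of these ratios over instances and predictions)."""
    v1, v2 = sorted(values, reverse=True)[:2]
    cons = rob = 0.0
    for order in itertools.permutations(values):
        cons += run_pmf(list(order), rules, prediction=v1) / v1
        rob += run_pmf(list(order), rules, bad_prediction) / v2
    n_fact = factorial(len(values))
    return cons / n_fact, rob / n_fact

# The Three-Phase auction is the PMF auction with this rule pattern
# (here n = 8, i1 = 2, i2 = 6, i.e., alpha = 1/2).
rules = [NEVER] * 2 + [PRED_OR_MAX] * 4 + [MAX_SO_FAR] * 2
print(revenue_ratios([3.0, 8.0, 1.0, 5.0, 2.0, 7.0, 4.0, 6.0], rules, bad_prediction=100.0))
```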
Overview of the proof. By Myerson's Lemma [30], we have that for any PAF auction \(M\) and bidder \(i\), there is a price \(p_{i}\in\{\infty,\{\max(\tilde{v}_{(1)},v_{(j)}^{<i})\}_{j\in[i-1]},\{v_{(j)}^{<i}\}_{j\in[i-1]}\}\) such that, if the item is not allocated to a bidder \(j<i\), \(M\) posts price \(p_{i}\) to bidder \(i\) for all no-overlap instances \(\mathcal{I}_{\mathrm{no}}\). We say that an \(\alpha\)-consistent auction \(M\) is robustness-optimal among \(\mathcal{M}\) if \(M\in\mathcal{M}\) and there is no \(\alpha\)-consistent auction \(M^{\prime}\in\mathcal{M}\) that achieves strictly better robustness than \(M\). 1. We first show that, for any PAF auction \(M\in\mathcal{M}_{a}\), there exists a PMF auction that achieves consistency and robustness that are no worse than those achieved by \(M\) (Section 4.1). Thus, impossibility results for PMF auctions extend to PAF auctions. 2. We then show that, for any \(\alpha\in[0,1]\), there exist \(i_{1},i_{2}\in[n]\) and an \(\alpha\)-consistent, robustness-optimal auction among auctions in \(\mathcal{M}_{m}\) that posts price \(\infty\) at each time \(i\in[1,i_{1}]\), then price \(\max(\tilde{v}_{(1)},v_{(1)}^{\leq i_{1}})\) at each time \(i\in[i_{1}+1,i_{2}]\), and finally price \(v_{(1)}^{\leq i_{2}}\) at each time \(i\in[i_{2}+1,n]\) (Section 4.2). This is the main part of the proof. 3. Finally, we show that, for the auction structure described in the second step, the optimal thresholds for maximizing robustness are \(i_{1}=\frac{1-\alpha}{2}n\) and \(i_{2}=\frac{\alpha+1}{2}n\), achieving robustness at most \(\frac{1-\alpha^{2}}{4}+O(\frac{1}{n})\) (Section 4.3). ### The reduction from PAF to PMF auctions We start by giving a simple formula for the consistency and robustness of auctions in \(\mathcal{M}_{a}\) over the family of instances \(\mathcal{I}_{\mathrm{no}}\). Observe that for instances in \(\mathcal{I}_{\mathrm{no}}\), the random matching of values to intervals is equivalent to drawing a random permutation that maps the values \(V\) to indices \([1,\ldots,n]\). Let \(\Sigma\) be the set of all permutations over the \(n\) bidders. Given a permutation \(\sigma\in\Sigma\), we define \(\sigma(i)\) as the rank of the \(i\)th arriving bidder. In particular, \(\sigma(i)=1\) if bidder \(i\) is the bidder with the highest value. If multiple bidders have equal value, we break ties arbitrarily and consistently. Then we have that \(\sigma^{-1}(j)\) denotes the position of the \(j\)th highest ranked bidder. For \(M\in\mathcal{M}_{a}\), we let \(C^{M}:=\{\sigma:M\text{ posts price }\tilde{v}_{(1)}\text{ to }\sigma^{-1}(1)\text{ under order }\sigma\}\) and \(R^{M}:=\{\sigma:M\text{ posts price }v_{(2)}\text{ to }\sigma^{-1}(1)\text{ under order }\sigma\}\). **Lemma 18**.: _Consider an auction \(M\in\mathcal{M}_{a}\). Over the family of no-overlap instances \(\mathcal{I}_{\mathrm{no}}\), its consistency is \(|C^{M}|/(n!)\) and its robustness is \(|R^{M}|/(n!)\)._ Proof.: Observe that for each \(\sigma\in C^{M}\), the auction achieves revenue \(v_{(1)}\) when the prediction is correct. In addition, for each \(\sigma\in R^{M}\), the auction achieves revenue \(v_{(2)}\), even when the prediction is incorrect. Then consistency and robustness are lower bounded by the probability of drawing \(\sigma\in C^{M}\) and \(\sigma\in R^{M}\) respectively from \(\Sigma\), which are precisely \(|C^{M}|/|\Sigma|=|C^{M}|/(n!)\) and \(|R^{M}|/|\Sigma|=|R^{M}|/(n!)\). 
To show that consistency is at most \(|C^{M}|/(n!)\), it suffices to find a single instance where equality holds. Consider the instance where the values are \(v_{(1)}=1,v_{(2)}=\cdots=v_{(n)}=0\) and prediction \(\tilde{v}_{(1)}=1\) (we will denote this \(I_{1}\)). Observe from our definition of PAF auctions, the only (noninfinite) prices that can be posted to bidder \(i\) are in the set \(\{v_{1},v_{2},\ldots,v_{i-1},\tilde{v}_{(1)}\}\). For this instance, only two of these may be nonzero, \(\tilde{v}_{(1)}\) or \(v_{j}=1\) if the highest bidder arrives at step \(j<i\) bidder. In the first case, the only bidder who can accept this price is the highest bidder, and revenue of \(v_{(1)}\) is extracted. In the second case, we know the highest bidder has already departed, so bidder \(i\) must have value \(v_{i}=0\) and no revenue can be gained. Thus the only way revenue is gained in this instance is by posting \(\tilde{v}_{(1)}\) to the highest bidder at step \(\sigma^{-1}(1)\), and the revenue is precisely \(v_{(1)}\), so consistency is exactly \(|C^{M}|/(n!)\). Similarly, for robustness consider the instance where the values are \(v_{(1)}=1,v_{(2)}=\varepsilon,v_{(3)}=\cdots=v_{(n)}=0\) for some \(\varepsilon<1\) and the prediction is \(\tilde{v}_{(1)}=v_{(1)}+1\) (we will denote this \(I_{2}\)). No revenue is gained by posting \(\tilde{v}_{(1)}\) since no bidder would accept that price. The only other positive prices that can be posted to bidder \(i\) are \(v_{j}=1\) if the highest bidder arrives at step \(j<i\) or \(v_{j}=\varepsilon\) if the second highest bidder arrives at step \(j<i\). The first case is the same as above. As for \(v_{j}=\varepsilon\), the only bidder who can accept this price is the highest bidder, and revenue of \(v_{(2)}\) is gained. Thus the only way revenue is gained in this instance is by posting price \(v_{(2)}\) to the highest bidder at step \(\sigma^{-1}(1)\), and the revenue is precisely \(v_{(2)}\), so robustness is \(|R^{M}|/(n!)\). **Lemma 19**.: _For every \(M\in\mathcal{M}_{a}\), there exists some \(M^{\prime}\in\mathcal{M}_{m}\) such that consistency\((M^{\prime})\geq\text{consistency}(M)\) and robustness\((M^{\prime})\geq\text{robustness}(M)\)._ Proof.: We will construct \(M^{\prime}\) from \(M\) as follows. We determine the allocation rule \(M^{\prime}\) uses for bidder \(i\): \[x^{\prime i}=\begin{cases}x_{1}^{i}&x^{i}\in\{x_{1}^{i}\}\\ x_{2}^{i}&x^{i}\in\{x_{2,j}^{i}\}_{j\in[i-1]}\\ x_{3}^{i}&x^{i}\in\{x_{3,j}^{i}\}_{j\in[i-1]}\end{cases}\] Note that \(x_{2}^{i}\) and \(x_{3}^{i}\) allocate the item to bidder i if \(v_{i}\) is at least \(\max(\tilde{v}_{(1)},v_{(1)}^{\leqslant i})\) and \(v_{(1)}^{\leqslant i}\) respectively. Next, we show that consistency\((M^{\prime})\geq\text{consistency}(M)\). First, we have that consistency\((M^{\prime})=|C^{M^{\prime}}|/(n!)\) by Lemma 18, so it is sufficient to show that \(|C^{M^{\prime}}|\geq|C^{M}|\). Consider any \(\sigma\in C^{M}\) with \(\sigma^{-1}(1)=i\). Observe that if \(M\) does not allocate the item prior to step \(i\), neither does \(M^{\prime}\) because at any \(j<i\), \(M^{\prime}\) posts to \(j\) a price at least as high as the price \(M\) posts to \(j\). Since price \(\tilde{v}_{(1)}\) is posted by \(M\) to \(i\), we know that \(x^{i}\in\{x_{2,j}^{i}\}_{j\in[i-1]}\), and subsequently \(x^{\prime i}=x_{2}^{i}\), so \(M^{\prime}\) also posts price \(\tilde{v}_{(1)}\) to \(i\). Thus \(\sigma\in C^{M^{\prime}}\), and therefore \(|C^{M^{\prime}}|\geq|C^{M}|\). 
Similarly, we show that robustness\((M^{\prime})\geq\text{robustness}(M)\) by proving that \(|R^{M^{\prime}}|\geq|R^{M}|\). Consider any \(\sigma\in R^{M}\) with \(\sigma^{-1}(1)=i\). By the same argument as above, if \(M\) does not allocate the item prior to step \(i\) neither does \(M^{\prime}\). Since \(M\) posts price \(v_{(2)}\) to \(i\), there are two cases for \(x^{i}\). Case 1 is \(x^{i}=x_{2,1}^{i}\) if \(\tilde{v}_{(1)}\leq v_{(2)}\). Note that we know \(j=1\) because \(v_{(1)}\) must be seen at time i. Then \(x^{\prime i}=x_{2}^{i}\) and also posts \(v_{(2)}\) to bidder \(i\). Case 2 is \(x^{i}=x_{3,1}^{i}\), and \(j=1\) by the same reasoning. Then \(x^{\prime i}=x_{3}^{i}\) and again posts \(v_{(2)}\) to bidder \(i\). Thus \(\sigma\in R^{M^{\prime}}\) and \(|R^{M^{\prime}}|\geq|R^{M}|\). By Lemma 19, impossibility results for \(\mathcal{M}_{m}\) extend to \(\mathcal{M}_{a}\). ### The main lemma for the impossibility result The main lemma for the impossibility result shows that there exists an \(\alpha\)-consistent auction that is robustness-optimal among auctions in \(\mathcal{M}_{a}\) and has, on no-overlap instances, a three-phase structure (as our auction). **Lemma 20**.: _There exists an \(\alpha\)-consistent auction that is robustness-optimal among auctions in \(\mathcal{M}_{a}\) and satisfies the following structure: it posts price \(\infty\) at each time \(i\in[1,i_{1}]\), then price \(\max(\tilde{v}_{(1)},v_{\max}^{<i})\) at each time \(i\in[i_{1}+1,i_{2}]\), and finally price \(v_{\max}^{<i}\) at each time \(i\in[i_{2}+1,n]\)._ The remainder of Section 4.2 is devoted to the proof of Lemma 20. Overview of the proof of Lemma 20.The proof follows an interchange argument that shows that if an auction \(M\in\mathcal{M}_{m}\) does not post prices in the order specified by Lemma 20, then there are two positions \(i\) and \(i+1\) that violate this order and the prices posted at these time steps can be swapped without decreasing \(|C^{M}|\) and \(|R^{M}|\), and therefore without decreasing consistency and robustness. There are three potential violations of the ordering specified by Lemma 20. In Lemma 22, we consider the case where \(v_{\max}^{<i}\) is posted to bidder \(i\) and \(\max(\tilde{v}_{(1)},v_{\max}^{<i+1})\) to bidder \(i+1\), in Lemma 23 the case where \(v_{\max}^{<i}\) is posted to bidder \(i\) and \(\infty\) to bidder \(i+1\), and in Lemma 24 the case where \(\max(\tilde{v}_{(1)},v_{\max}^{<i})\) to bidder \(i\) and \(\infty\) to bidder \(i+1\). We now define the interchange function \(f_{i}:\Sigma\to\Sigma\). For fixed index \(i\) and any permutation \(\sigma\), let \[f_{i}(\sigma)(j)=\begin{cases}\sigma(i+1)&j=i\\ \sigma(i)&j=i+1\\ \sigma(j)&\text{else},\end{cases}\] which is a bijective function that swaps the values of the ith and (i+1)th bidders. We first state a trivial fact regarding the revenue achieved from the first \(i-1\) bidders for two auctions that are identical up to step \(i\). This fact will be repeatedly used in the proof of the next lemmas. **Lemma 21**.: _Consider two auctions \(M,M^{\prime}\in\mathcal{M}_{m}\) that are identical for steps up to \(i\). Then \(M\) under order \(\sigma\) and \(M^{\prime}\) under order \(f_{i}(\sigma)\) gain the same revenue before step \(i\)._ Proof.: Observe that \(f_{i}(\sigma)\) does not affect the values that appear before \(i\). 
Then \(M^{\prime}\) sees the same ranks before \(i\) under \(f_{i}(\sigma)\) as \(M\) does under \(\sigma\), and since they follow the same rules the revenue gained at each step before \(i\) is the same. The first potential violation of the ordering specified by Lemma 20 is when \(v_{\max}^{<i}\) is posted to bidder \(i\) and \(\max(\tilde{v}_{(1)},v_{\max}^{<i+1})\) to bidder \(i+1\). **Lemma 22**.: _Consider an auction \(M\in\mathcal{M}_{m}\) that posts price \(v_{\max}^{<i}\) at some step \(i\) and \(\max(\tilde{v}_{(1)},v_{\max}^{<i+1})\) at step \(i+1\). Let \(M^{\prime}\) be the same auction as \(M\) except that it posts price \(\max(\tilde{v}_{(1)},v_{\max}^{<i})\) at step \(i\) and \(v_{\max}^{<i+1}\) at step \(i+1\). Then, \(|C^{M^{\prime}}|\geq|C^{M}|\) and \(|R^{M^{\prime}}|\geq|R^{M}|\)._ Proof.: First, we show that if \(\sigma\in C^{M}\), then \(f_{i}(\sigma)\in C^{M^{\prime}}\), meaning \(f_{i}(C^{M})\subseteq C^{M^{\prime}}\). Since \(f_{i}\) is a bijective function, it follows that \(|C^{M^{\prime}}|\geq|C^{M}|\). Assume that \(\tilde{v}_{(1)}=v_{(1)}\). Consider any \(\sigma\in C^{M}\); then \(\tilde{v}_{(1)}\) is posted to the highest ranked bidder who arrives at step \(\sigma^{-1}(1)\). Observe that to prove \(f_{i}(\sigma)\in C^{M^{\prime}}\), it is sufficient to show that price \(\tilde{v}_{(1)}\) is posted to bidder \(f_{i}(\sigma)^{-1}(1)\), or that revenue \(\tilde{v}_{(1)}\) is extracted by \(M^{\prime}\) under ordering \(f_{i}(\sigma)\). There are three cases. First, if \(\sigma^{-1}(1)<i\), observe that there is no difference between \(M\) and \(M^{\prime}\) before step \(i\). Then it follows from Lemma 21 that since \(M\) extracts revenue \(\tilde{v}_{(1)}\) at step \(\sigma^{-1}(1)<i\), \(M^{\prime}\) extracts the same revenue under \(f_{i}(\sigma)\) before step \(i\). The second case is if \(\sigma^{-1}(1)\in\{i,i+1\}\). Recall that \(\sigma\in C^{M}\) implies that price \(\tilde{v}_{(1)}\) is posted to bidder \(\sigma^{-1}(1)\). Then since \(M\) posts \(v_{\max}^{<i}<\tilde{v}_{(1)}\) to bidder \(i\), \(\sigma^{-1}(1)=i+1\) is the only possibility, and indeed \(M\) posts \(\max(\tilde{v}_{(1)},v_{\max}^{<i+1})=\tilde{v}_{(1)}\) to \(i+1\). Now we must show that \(M^{\prime}\) posts price \(\tilde{v}_{(1)}\) to \(f_{i}(\sigma)^{-1}(1)\). By our definition of the interchange function, \(f_{i}(\sigma)(i)=\sigma(i+1)=1\), so \(f_{i}(\sigma)^{-1}(1)=i\). Note that if \(M\) reaches step \(i\), then \(M^{\prime}\) does as well. Then, since \(M^{\prime}\) posts price \(\max(\tilde{v}_{(1)},v_{\max}^{<i})=\tilde{v}_{(1)}\) at step \(i\), we get \(f_{i}(\sigma)\in C^{M^{\prime}}\). The third and last case is if \(\sigma^{-1}(1)>i+1\). Observe that it is sufficient to show that under \(M^{\prime}\), bidders \(i\) and \(i+1\) do not receive the item because at steps after \(i+1\), the auctions see the same order of bidders and make the same posts. Clearly if under \(M\) bidder \(i+1\) does not accept its posted price \(\tilde{v}_{(1)}\), i.e., \(v_{i+1}<\tilde{v}_{(1)}\), then under \(M^{\prime}\), bidder \(i\) with value \(v_{(f_{i}(\sigma)(i))}=v_{(\sigma(i+1))}=v_{i+1}\) will not accept its posted price \(\tilde{v}_{(1)}\). Now we consider bidder \(i+1\) under \(M^{\prime}\). If we let \(\overline{v}_{\max}^{<\ell}\) be the value of the highest ranked bidder seen before step \(\ell\) given ordering \(f_{i}(\sigma)\), we can see that \(\overline{v}_{\max}^{<i+1}\geq\overline{v}_{\max}^{<i}=v_{\max}^{<i}\). 
Then if bidder \(i\) does not accept the price \(v_{\max}^{<i}\) posted under \(M\), i.e., \(v_{i}<v_{\max}^{<i}\), then \(v_{(f_{i}(\sigma)(i+1))}=v_{(\sigma(i))}=v_{i}<v_{\max}^{<i}\leq\overline{v}_{\max}^{<i+1}\) and under \(M^{\prime}\) bidder \(i+1\) also does not accept its posted price \(\overline{v}_{\max}^{<i+1}\). Next, to show the second part of the lemma, we show that \(f_{i}(R^{M})\subseteq R^{M^{\prime}}\). If \(\sigma\in R^{M}\), then \(v_{\max}^{<i}=v_{(2)}\) is posted to the bidder at step \(\sigma^{-1}(1)\). Similar to the consistency proof, to prove \(f_{i}(\sigma)\in R^{M^{\prime}}\), it is sufficient to show that price \(v_{(2)}\) is posted to bidder \(f_{i}(\sigma)^{-1}(1)\), or that revenue \(v_{(2)}\) is extracted by \(M^{\prime}\) under ordering \(f_{i}(\sigma)\) (for any value of \(\tilde{v}_{(1)}\)). Cases 1 and 3 are exactly the same as for consistency. Then consider \(\sigma^{-1}(1)\in\{i,i+1\}\). If \(\tilde{v}_{(1)}>v_{(1)}\), then under \(M\) price \(\max(\tilde{v}_{(1)},v_{\max}^{<i+1})>v_{(1)}\) is posted to and rejected by bidder \(i+1\), so \(\sigma^{-1}(1)=i\) is the only possibility. Indeed, price \(v_{\max}^{<i}=v_{(2)}\) may be posted to \(i\) if the second highest bidder arrives before \(i\). After the interchange, bidder \(i\) is offered price \(\tilde{v}_{(1)}>v_{(1)}\) under \(M^{\prime}\) and they reject it. Observe that if \(\sigma^{-1}(1)=i\), then \(f_{i}(\sigma)^{-1}(1)=i+1\). Then under \(M^{\prime}\), bidder \(i+1\) sees price \(\overline{v}_{\max}^{<i+1}=v_{\max}^{<i}=v_{(2)}\). Now if \(\tilde{v}_{(1)}<v_{(2)}\), then there are two scenarios. First, consider \(\sigma^{-1}(1)=i\); then since \(M\) posts \(v_{\max}^{<i}\) at \(i\) we know that \(v_{\max}^{<i}=v_{(2)}\). Observe that since the two highest bidders arrive by step \(i\), we have \(v_{i+1}<v_{(2)}\). Under \(f_{i}(\sigma)\), bidder \(i\) has value \(v_{(f_{i}(\sigma)(i))}=v_{(\sigma(i+1))}=v_{i+1}\). Then when \(M^{\prime}\) posts to bidder \(i\) price \(\max(\tilde{v}_{(1)},\overline{v}_{\max}^{<i})=v_{\max}^{<i}=v_{(2)}\), it is rejected. \(M^{\prime}\) then posts price \(\overline{v}_{\max}^{<i+1}\), which is exactly \(v_{(2)}\) because bidder \(i\) has value below \(v_{(2)}\), to the bidder with rank \(f_{i}(\sigma)(i+1)=\sigma(i)=1\). If instead \(\sigma^{-1}(1)=i+1\), then it is impossible for bidder \(i\) to have the second highest value or else they would accept their price \(v_{\max}^{<i}<v_{(2)}\). Then for \(\max(\tilde{v}_{(1)},v_{\max}^{<i})=v_{(2)}\) to hold, the second highest bidder must arrive before \(i\) and \(v_{\max}^{<i}=v_{(2)}\). Thus under \(M^{\prime}\), price \(\overline{v}_{\max}^{<i}=v_{\max}^{<i}\) is posted at step \(i\) to the bidder with rank \(f_{i}(\sigma)(i)=\sigma(i+1)=1\). Observe that if \(v_{(2)}\leq\tilde{v}_{(1)}\leq v_{(1)}\), given that \(v_{\max}^{<i}=v_{(2)}\), selling by posting \(\max(\tilde{v}_{(1)},v_{\max}^{<i})\) and \(v_{\max}^{<i}\) both result in at least \(v_{(2)}\) revenue, so swapping these two prices does not lower robustness. The second potential violation of the ordering specified by Lemma 20 is when \(v_{\max}^{<i}\) is posted to bidder \(i\) and \(\infty\) to bidder \(i+1\). **Lemma 23**.: _Consider an auction \(M\in\mathcal{M}_{m}\) that posts price \(v_{\max}^{<i}\) at some step \(i\) and \(\infty\) at step \(i+1\). Let \(M^{\prime}\) be the same auction as \(M\) except that it posts price \(\infty\) at step \(i\) and \(v_{\max}^{<i+1}\) at step \(i+1\). 
Then, \(|C^{M^{\prime}}|\geq|C^{M}|\) and \(|R^{M^{\prime}}|\geq|R^{M}|\)._ Proof.: We first show that \(|C^{M^{\prime}}|\geq|C^{M}|\) by proving that \(f_{i}(C^{M})\subseteq C^{M^{\prime}}\). Let \(\tilde{v}_{(1)}=v_{(1)}\). Consider any \(\sigma\in C^{M}\), so \(\tilde{v}_{(1)}\) is posted to bidder \(\sigma^{-1}(1)\). There are again three cases. The first case, \(\sigma^{-1}(1)<i\), is as in Lemma 22. The second case is \(\sigma^{-1}(1)\in\{i,i+1\}\). Observe that it is impossible to post \(\tilde{v}_{(1)}\) to bidder \(\sigma^{-1}(1)\) by posting \(\infty\) or \(v_{\max}^{<i}\). This is because \(v_{\max}^{<i}\leq v_{(2)}\). The third and last case is \(\sigma^{-1}(1)>i+1\). It is sufficient to show that \(M^{\prime}\) does not sell the item at time \(i\) or \(i+1\). Clearly the former is true because \(\infty\) is posted to \(i\). Since \(M\) fails to sell the item at step \(i\) by posting price \(v_{\max}^{<i}\), we know that \(v_{i}<v_{\max}^{<i}\). Then under \(M^{\prime}\), the price \(\overline{v}_{\max}^{<i+1}\geq\overline{v}_{\max}^{<i}=v_{\max}^{<i}\) is posted to the bidder with value \(v_{(f_{i}(\sigma)(i+1))}=v_{(\sigma(i))}=v_{i}\), so the item is not sold to bidder \(i+1\) under \(M^{\prime}\). For the second part of the lemma, we show \(f_{i}(\sigma)\in R^{M^{\prime}}\) for any \(\sigma\in R^{M}\). If \(\sigma\in R^{M}\), then \(v_{\max}^{<i}=v_{(2)}\) is posted to the bidder at step \(\sigma^{-1}(1)\). Similar to the consistency proof, to prove \(f_{i}(\sigma)\in R^{M^{\prime}}\), it is sufficient to show that price \(v_{(2)}\) is posted to bidder \(f_{i}(\sigma)^{-1}(1)\), or that revenue \(v_{(2)}\) is extracted by \(M^{\prime}\) under ordering \(f_{i}(\sigma)\) (for any value of \(\tilde{v}_{(1)}\)). Once again, cases one and three are the same. Now consider \(\sigma^{-1}(1)\in\{i,i+1\}\). Since bidder \(i+1\) never accepts \(\infty\), this means that \(\sigma^{-1}(1)=i\). \(M\) posts price \(v_{\max}^{<i}\) to bidder \(i\), so \(v_{\max}^{<i}=v_{(2)}\). Observe that \(f_{i}(\sigma)(i+1)=\sigma(i)=1\). First \(M^{\prime}\) posts \(\infty\) at step \(i\), and then \(\overline{v}_{\max}^{<i+1}\), which equals \(v_{(2)}\) since the highest bidder cannot be at step \(i\) under \(f_{i}(\sigma)\). Then \(v_{(2)}\) is posted to bidder \(i+1=f_{i}(\sigma)^{-1}(1)\). The third and last potential violation of the ordering specified by Lemma 20 is when \(\max(\tilde{v}_{(1)},v_{\max}^{<i})\) is posted to bidder \(i\) and \(\infty\) to bidder \(i+1\). **Lemma 24**.: _Consider an auction \(M\in\mathcal{M}_{m}\) that posts price \(\max(\tilde{v}_{(1)},v_{\max}^{<i})\) at some step \(i\) and \(\infty\) at step \(i+1\). Let \(M^{\prime}\) be the same auction as \(M\) except that it posts price \(\infty\) at step \(i\) and \(\max(\tilde{v}_{(1)},v_{\max}^{<i+1})\) at step \(i+1\). Then, \(|C^{M^{\prime}}|\geq|C^{M}|\) and \(|R^{M^{\prime}}|\geq|R^{M}|\)._ Proof.: We again first show that \(|C^{M^{\prime}}|\geq|C^{M}|\) by proving that \(f_{i}(C^{M})\subseteq C^{M^{\prime}}\). Let \(\tilde{v}_{(1)}=v_{(1)}\). Consider any \(\sigma\in C^{M}\), so \(\tilde{v}_{(1)}\) is posted to bidder \(\sigma^{-1}(1)\). The first case, \(\sigma^{-1}(1)<i\), is as in Lemma 22. The second case is \(\sigma^{-1}(1)\in\{i,i+1\}\). Since auction \(M\) posts \(\infty\) to \(i+1\), \(\sigma^{-1}(1)=i\) is the only possibility, and indeed \(\max(\tilde{v}_{(1)},v_{\max}^{<i})=\tilde{v}_{(1)}\) is posted at \(i\). First observe that if \(M\) reaches step \(i\), then \(M^{\prime}\) does as well. 
\(M^{\prime}\) subsequently posts \(\infty\) to bidder \(i\), effectively skipping them. We know that \(f_{i}(\sigma)(i+1)=\sigma(i)=1\). Then when under \(f_{i}(\sigma)\) auction \(M^{\prime}\) posts \(\max(\tilde{v}_{(1)},v_{\max}^{<i+1})=\tilde{v}_{(1)}\) at step \(i+1\), it is to the highest bidder. The third and last case is \(\sigma^{-1}(1)>i+1\). We use the same argument as in Lemma 23, except that instead of \(v_{\max}^{<i}\) being rejected by bidder \(i\) under \(M\) and by bidder \(i+1\) under \(M^{\prime}\), here \(\max(\tilde{v}_{(1)},v_{\max}^{<i})\geq v_{\max}^{<i}\) is rejected by bidder \(i\) under \(M\) and by bidder \(i+1\) under \(M^{\prime}\). For the second part of the lemma, we show that \(f_{i}(\sigma)\in R^{M^{\prime}}\) for \(\sigma\in R^{M}\). If \(\sigma\in R^{M}\), then price \(v_{(2)}\) is posted to the bidder at step \(\sigma^{-1}(1)\). Similar to the consistency proof, to prove \(f_{i}(\sigma)\in R^{M^{\prime}}\), it is sufficient to show that price \(v_{(2)}\) is posted to bidder \(f_{i}(\sigma)^{-1}(1)\), or that revenue \(v_{(2)}\) is extracted by \(M^{\prime}\) under ordering \(f_{i}(\sigma)\) (for any value of \(\tilde{v}_{(1)}\)). Observe that the proofs for cases 1 and 3 are the same as above. Case two is impossible if \(\tilde{v}_{(1)}>v_{(1)}\) because no bidder accepts prices \(\infty\) or \(\max(\tilde{v}_{(1)},v_{\max}^{<i})>v_{(1)}\). Then if \(\tilde{v}_{(1)}<v_{(1)}\), we have \(\sigma^{-1}(1)=i\), as auction \(M\) posts \(\infty\) to bidder \(i+1\). In order for a price of at least \(v_{(2)}\) (but below \(v_{(1)}\)) to be posted at step \(i\) by \(M\), we need \(\max(\tilde{v}_{(1)},v_{\max}^{<i})\geq v_{(2)}\), so it is sufficient for the second highest bidder to arrive before \(i\) and therefore \(v_{\max}^{<i}=v_{(2)}\). Observe that \(f_{i}(\sigma)(i+1)=\sigma(i)=1\). Under \(M^{\prime}\), \(\infty\) is posted to bidder \(i\), and then \(\max(\tilde{v}_{(1)},\overline{v}_{\max}^{<i+1})\) is posted to bidder \(i+1\). We know that \(\overline{v}_{\max}^{<i+1}=\overline{v}_{\max}^{<i}=v_{\max}^{<i}\) by the same reasoning as in Lemma 23. Then \(M^{\prime}\) posts \(\max(\tilde{v}_{(1)},v_{\max}^{<i+1})\in[v_{(2)},v_{(1)}]\) to bidder \(i+1\), who has the highest value. We are now ready to prove Lemma 20. _Proof of Lemma 20._ Consider an \(\alpha\)-consistent robustness-optimal auction \(M\in\mathcal{M}_{m}\). If it does not satisfy the structure specified by Lemma 20, then there exist time steps \(i\) and \(i+1\) such that \(M\) either posts prices \(v_{\max}^{<i}\) and \(\max(\tilde{v}_{(1)},v_{\max}^{<i+1})\), or prices \(v_{\max}^{<i}\) and \(\infty\), or prices \(\max(\tilde{v}_{(1)},v_{\max}^{<i})\) and \(\infty\) to \(i\) and \(i+1\). Lemma 22, Lemma 23, and Lemma 24 show that for each of these cases, the two prices can be swapped without decreasing \(|R^{M}|\) and \(|C^{M}|\). By repeating this swapping process, we obtain an auction \(M^{\prime}\) such that \(|C^{M^{\prime}}|\geq|C^{M}|\) and \(|R^{M^{\prime}}|\geq|R^{M}|\). Thus, by Lemma 18, \(M^{\prime}\) is also an \(\alpha\)-consistent robustness-optimal auction among \(\mathcal{M}_{m}\) with the structure presented in Lemma 20. By Lemma 19, \(M^{\prime}\) is also robustness-optimal among \(\mathcal{M}_{a}\). ### The optimal thresholds For auctions in \(\mathcal{M}_{a}\) constructed as in Lemma 20, the time thresholds \(i_{1}\) and \(i_{2}\) set to \(\frac{1-\alpha}{2}n\) and \(\frac{1+\alpha}{2}n\) achieve \(\alpha\)-consistency and \(\frac{1-\alpha^{2}}{4}+O(\frac{1}{n})\)-robustness. 
We show that for \(\alpha\in[0,1]\), no other thresholds lead to a better robustness, which then shows the impossibility result for PAF auctions. We note that our auction also use these same thresholds. Proof of Theorem 17.: Let us first introduce some notation. Let \(R_{i}(M)\) be the event that step \(i\) is reached under auction \(M\) and let \(P_{i}^{\tilde{v}_{(1)}}(M)\) be the event that \(\tilde{v}_{(1)}\) is posted at step \(i\) under auction \(M\). For a fixed \(\alpha\in[0,1]\), consider an \(\alpha\)-consistent auction \(M\in\mathcal{M}_{a}\) that is optimal with respect to robustness and is structured as in Lemma 20 with time thresholds \(i_{1}\) and \(i_{2}\). Observe that if \(\tilde{v}_{(1)}=v_{(1)}\), then the consistency achieved by \(M\) is \[\sum_{i=1}^{n}\mathbb{P}(\sigma(i)=1)\cdot\mathbb{P}(R_{i}(M)| \sigma(i)=1)\cdot\mathbb{P}(P_{i}^{\tilde{v}_{(1)}}(M)|R_{i}(M),\sigma(i)=1) =\frac{1}{n}\sum_{i=i_{1}+1}^{i_{2}}\mathbb{P}(R_{i}(M)|\sigma(i) =1)\] \[=\frac{i_{2}-i_{1}}{n}.\] where the first equality is because \(\tilde{v}_{(1)}\) is posted only at steps \(i\in[i_{1}+1,i_{2}]\) and the highest ranking bidder is equally likely to be at any step. The second equality holds because \(M\) posting \(\infty\) up to step \(i_{1}\) and bidders within \([i_{1}+1,i-1]\) failing to accept price \(v_{(1)}\). Thus, \(M\) achieves \(\alpha\)-consistency if \(i_{2}\geq i_{1}+\alpha n\). In our auction, we use \(i_{1}=\frac{1-\alpha}{2}n\) and \(i_{2}=\frac{1+\alpha}{2}n\), and we show no other pair \(i_{1}^{\prime},i_{2}^{\prime}\) can improve robustness. Recall from Lemma 18 that the robustness of \(M\) is precisely \(|R^{M}|/(n!)\). We consider two cases. The first is if \(i_{1}^{\prime}\leq i_{1}\). Observe that if \(\tilde{v}_{(1)}<v_{(2)}\), and letting \(\sigma\) being a uniformly random permutation in \(\Sigma\), then we have that \[\frac{|R^{M}|}{n!}=\mathbb{P}_{\sigma}(\sigma\in R^{M}) =\mathbb{P}_{\sigma}(M\text{ posts price }v_{(2)}\text{ to }\sigma^{-1}(1))\] \[=\mathbb{P}_{\sigma}(\sigma^{-1}(2)\leq i_{1}^{\prime})\cdot \mathbb{P}_{\sigma}(\sigma^{-1}(1)>i_{1}^{\prime}|\sigma^{-1}(2)\leq i_{1}^{ \prime})\] \[=\frac{i_{1}^{\prime}}{n}\frac{n-i_{1}^{\prime}}{n-1}\] where the first equality is by definition of \(\sigma\) and the second by definition of \(R^{M}\). The third equality is since we need \(\sigma^{-1}(1)>i_{1}^{\prime}\) to not post \(\infty\) to \(\sigma^{-1}(1)\), \(\sigma^{-1}(2)\leq\sigma^{-1}(1)\) so that \(\max(\tilde{v}_{(1)},v_{\max}^{<\sigma^{-1}(1)})=v_{\max}^{<\sigma^{-1}(1)}=v_{ (2)}\) is posted to \(\sigma^{-1}(1)\), and \(\sigma^{-1}(2)\not\in[i_{1}^{\prime},\sigma^{-1}(1)]\) to not sell to \(\sigma^{-1}(2)\) and reach \(\sigma^{-1}(1)\). Differentiating \(\frac{i_{1}^{\prime}}{n}\frac{n-i_{1}^{\prime}}{n-1}\) with respect to \(i_{1}^{\prime}\), we get \(\frac{n-2i_{1}^{\prime}}{n(n-1)}\), which is positive for \(i_{1}^{\prime}\leq\frac{n}{2}\). Then since \(i_{1}^{\prime}\leq\frac{1-\alpha}{2}n\leq\frac{n}{2}\), we get that the robustness of \(M\) is \(\frac{i_{1}^{\prime}}{n}\frac{n-i_{1}^{\prime}}{n-1}\leq\frac{i_{1}}{n}\frac{n- i_{1}}{n-1}=\frac{n}{n-1}\frac{1-\alpha^{2}}{4}\). The second case is if \(i^{\prime}_{1}>i_{1}\). Since \(i_{2}=i_{1}+\alpha n\), then \(i^{\prime}_{2}\geq i^{\prime}_{1}+\alpha n\geq i_{2}\). 
Observe that if \(\tilde{v}_{(1)}>v_{(1)}\), and letting \(\sigma\) be a uniformly random permutation in \(\Sigma\), then we have that \[\frac{|R^{M}|}{n!} =\mathbb{P}_{\sigma}(M\text{ posts price }v_{(2)}\text{ to } \sigma^{-1}(1))\] \[=\mathbb{P}_{\sigma}(\sigma^{-1}(2)\leq i^{\prime}_{2})\cdot \mathbb{P}_{\sigma}(\sigma^{-1}(1)>i^{\prime}_{2}|\sigma^{-1}(2)\leq i^{\prime}_{2})\] \[=\frac{i^{\prime}_{2}}{n}\frac{n-i^{\prime}_{2}}{n-1}\] where the second equality is since we need \(\sigma^{-1}(1)>i^{\prime}_{2}\) to not post \(\infty\) or \(\tilde{v}_{(1)}\) to \(\sigma^{-1}(1)\), \(\sigma^{-1}(2)\leq\sigma^{-1}(1)\) so that \(v_{\max}^{<\sigma^{-1}(1)}=v_{(2)}\) is posted to \(\sigma^{-1}(1)\), and \(\sigma^{-1}(2)\not\in[i^{\prime}_{2},\sigma^{-1}(1)]\) to not sell to \(\sigma^{-1}(2)\) and reach \(\sigma^{-1}(1)\). Differentiating \(\frac{i^{\prime}_{2}}{n}\frac{n-i^{\prime}_{2}}{n-1}\) with respect to \(i^{\prime}_{2}\), we get \(\frac{n-2i^{\prime}_{2}}{n(n-1)}\), which is negative for \(i^{\prime}_{2}\geq\frac{n}{2}\). Since \(i^{\prime}_{2}\geq\frac{1+\alpha}{2}n\geq\frac{n}{2}\), we obtain that the robustness achieved by \(M\) is \(\frac{i^{\prime}_{2}}{n}\frac{n-i^{\prime}_{2}}{n-1}\leq\frac{i_{2}}{n}\frac{n-i_{2}}{n-1}=\frac{n}{n-1}\frac{1-\alpha^{2}}{4}\). Thus, an \(\alpha\)-consistent auction \(M\in\mathcal{M}_{a}\) that is optimal with respect to robustness and is structured as in Lemma 20 achieves a robustness that is at most \(\frac{n}{n-1}\frac{1-\alpha^{2}}{4}=\frac{1-\alpha^{2}}{4}+O(\frac{1}{n})\). By Lemma 20, we conclude that this robustness bound holds for any PAF auction.
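As an illustrative sanity check on this bound (ours, not an experiment from the paper), the following Monte Carlo sketch estimates the consistency and robustness of the three-phase structure of Lemma 20 over random no-overlap instances; the instance values, thresholds, and tie handling are assumptions made purely for illustration.

```python
import random

def three_phase_revenue(values, prediction, i1, i2):
    """Run the Lemma 20 structure on one arrival order (0-indexed steps): price infinity
    on [0, i1), max(prediction, best-so-far) on [i1, i2), and best-so-far afterwards."""
    best = 0.0
    for i, v in enumerate(values):
        if i < i1:
            price = float('inf')
        elif i < i2:
            price = max(prediction, best)
        else:
            price = best
        if price != float('inf') and v >= price:
            return price
        best = max(best, v)
    return 0.0

def estimate(alpha, n=200, trials=50000, seed=0):
    rng = random.Random(seed)
    i1, i2 = round((1 - alpha) / 2 * n), round((1 + alpha) / 2 * n)
    v1, v2 = 1.0, 0.5
    base = [v1, v2] + [rng.uniform(0.0, 1e-6) for _ in range(n - 2)]
    cons = rob = 0
    for _ in range(trials):
        order = base[:]
        rng.shuffle(order)
        cons += three_phase_revenue(order, prediction=v1, i1=i1, i2=i2) == v1        # accurate prediction
        rob += three_phase_revenue(order, prediction=v1 + 1.0, i1=i1, i2=i2) == v2   # overestimated prediction
    return cons / trials, rob / trials

# estimate(0.5) should return roughly (0.50, 0.19), matching alpha and (1 - alpha^2)/4.
```

Counting only the orders in which revenue exactly \(v_{(1)}\) (respectively \(v_{(2)}\)) is extracted mirrors the definitions of \(C^{M}\) and \(R^{M}\) used in Lemma 18.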
2307.01767
Localized Data Work as a Precondition for Data-Centric ML: A Case Study of Full Lifecycle Crop Disease Identification in Ghana
The Ghana Cashew Disease Identification with Artificial Intelligence (CADI AI) project demonstrates the importance of sound data work as a precondition for the delivery of useful, localized datacentric solutions for public good tasks such as agricultural productivity and food security. Drone collected data and machine learning are utilized to determine crop stressors. Data, model and the final app are developed jointly and made available to local farmers via a desktop application.
Darlington Akogo, Issah Samori, Cyril Akafia, Harriet Fiagbor, Andrews Kangah, Donald Kwame Asiedu, Kwabena Fuachie, Luis Oala
2023-07-04T15:14:59Z
http://arxiv.org/abs/2307.01767v1
# Localized Data Work as a Precondition for Data-Centric ML: A Case Study of Full Lifecycle Crop Disease Identification in Ghana ###### Abstract The Ghana Cashew Disease Identification with Artificial Intelligence (CADI AI) project demonstrates the importance of sound data work as a precondition for the delivery of useful, localized data-centric solutions for public good tasks such as agricultural productivity and food security. Drone-collected data and machine learning are utilized to determine crop stressors. Data, model and the final app are developed jointly and made available to local farmers via a desktop application. Cashew is a significant cash crop in Ghana (Rabany et al., 2015), with small and medium farmers relying on it for income. Cashew cultivation is concentrated in specific regions of Ghana. However, farmers face challenges including insect, plant disease and abiotic stress factors that reduce their yields (ICAR; Jayaprakash et al., 2023; Mensah et al., 2023; Timothy et al., 2021). To address these issues, the Cashew Disease Identification With Artificial Intelligence (CADI AI) project was launched to provide a data-centric solution. The project encompasses three stages. Comprehensive data work encompassed stakeholder consultation, data collection, data annotation and labelling. The collected drone data is open for researchers and data scientists to develop innovative machine learning applications to improve food security. Model work involved the training of an object detection model to diagnose and detect stress factors in cashew crop images. Finally, the model was integrated into a desktop application for farmers, allowing them to input their own data and receive diagnoses. The application also displays the precise location of the image, enabling farmers to identify affected areas on their farms. ## 1 Data work The data was collected from cashew farms in the Bono Region of Ghana, necessitating two separate trips to the farms to accommodate seasonal variations and diversity of data. In total, the data collection process spanned six days. The dataset is diverse in maturity stages, camera angles, time of capture, and various types of stress morphology. All images were captured with the P4 Multi-spectral drone (Dji) at an image resolution of 1600 x 1300 pixels. The images comprise close-up shots and shots from a distance of cashew plant abnormalities. The total number of images collected is 4,736. Full details and the datasheet are in the appendices. Further improvements to the dataset could be made by capturing across more regions during blooming cycles or varying devices for robustness testing (Oala et al., 2023). Figure 1: A visual summary of the application lifecycle: data work (data collection with farmers, data annotation and labelling), model work (model training and fine-tuning), and UI application (software deployment and release to farmers). The data was annotated by the project team with labelling tools makesense (Makesense) and roboflow (Roboflow). Refer to appendix A.1 for annotation guidelines developed by an agricultural scientist with expertise in crop health and disease management from a local Ghanaian university. Each stress instance is associated with a class label based on the status of the crop. The labels are **"insect"**, **"disease"** and **"abiotic"** respectively as depicted in Figure 2. The data was split into train, validation, and test sets of 3788, 710, and 238 images, respectively. During training, it was found that the dataset is significantly skewed towards the abiotic class. 
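The class imbalance noted above can be inspected directly from the annotation files. The sketch below (ours, not from the paper) assumes YOLO-format .txt label files, a particular directory layout, and a particular class-index order; none of these are specified in the text, so they should be adjusted to the released dataset's actual structure.

```python
from collections import Counter
from pathlib import Path

# Assumed (hypothetical) class-index mapping; the released dataset may order classes differently.
CLASS_NAMES = {0: "abiotic", 1: "disease", 2: "insect"}

def label_distribution(label_dir: str) -> Counter:
    """Count bounding-box labels in YOLO-format files: one 'class x y w h' row per box."""
    counts = Counter()
    for label_file in Path(label_dir).glob("*.txt"):
        for line in label_file.read_text().splitlines():
            if line.strip():
                cls = int(line.split()[0])
                counts[CLASS_NAMES.get(cls, str(cls))] += 1
    return counts

if __name__ == "__main__":
    for split in ("train", "valid", "test"):           # hypothetical folder names
        print(split, label_distribution(f"dataset/{split}/labels"))
```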
## 2 Model work We utilized the YOLO v5X (Jocher, 2020) architecture, known for its strong performance on object detection benchmarks, as the foundation for this study. The experiments were conducted on the high-performance DFKI GIZ cluster.The dataset has a significant skew towards the abiotic class (see Figure 2), and measures were taken to balance the data by augmenting other classes. However, preserving the skewness was important to reflect the higher occurrence of abiotic factors on farms. The best model achieved a mean average precision (mAP) of 0.648. See Table 1 and Figure 4 in the Appendices for detailed experimental evaluations and baselines. The model has a few limitations that affect its performance in distinguishing between the disease class and the abiotic class. The primary challenge lies in the similarity between these two classes within a typical farm setting. The model may encounter difficulties in accurately differentiating between them due to their overlapping characteristics. This limitation is an inherent challenge in the dataset and can impact the model's accuracy when classifying these instances. However, it is worth noting that the model exhibits strong performance when it comes to the insect class. This is attributed to the distinct characteristics of insect class, which make them easier to identify and classify accurately. ## 3 Closing the loop: UI application A software application built with Flutter (Google, 2019) was infused with the CNN model trained on the collected data, allowing farmers to use the model on new data for crop disease management. Our objective is to make the CADI AI project's data, model, and software widely accessible to maximize their impact within the agriculture community. The data is available through prominent platforms such as Kaggle and Hugging Face dataset hub. These platforms provide user-friendly interfaces, allowing researchers, developers, and enthusiasts to access the data for their specific needs. Furthermore, the model itself can be accessed through the Hugging Face platform, enabling users to leverage its capabilities in their own ML applications. A summary of all resources is in the appendices. Figure 3: Screenshot of the final UI application. For more details see appendices. Figure 2: Top: Sample instances from the annotated dataset. For a higher resolution sample see the appendices. Bottom: Distribution of labels in the annotated data.
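Since the released detector is a YOLOv5 checkpoint, one plausible way to reproduce the app's diagnosis step outside the desktop UI is through the standard ultralytics/yolov5 torch.hub interface, as sketched below; the checkpoint filename and image path are placeholders rather than the project's actual artifact names.

```python
import torch

# Load a custom YOLOv5 checkpoint via torch.hub; 'cadi_ai_yolov5x.pt' is a hypothetical filename.
model = torch.hub.load("ultralytics/yolov5", "custom", path="cadi_ai_yolov5x.pt")
model.conf = 0.25  # detection confidence threshold

results = model("sample_cashew_drone_image.jpg")  # path to a drone image of a cashew crop
results.print()                                   # per-class detection summary
detections = results.pandas().xyxy[0]             # bounding boxes with class names and scores
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```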
2303.02941
Planetary Orbit Eccentricity Trends (POET). I. The Eccentricity-Metallicity Trend for Small Planets Revealed by the LAMOST-Gaia-Kepler Sample
Orbital eccentricity is one of the basic planetary properties, whose distribution may shed light on the history of planet formation and evolution. Here, in a series of works on Planetary Orbit Eccentricity Trends (dubbed POET), we study the distribution of planetary eccentricities and their dependence on stellar/planetary properties. In this paper, the first work of the POET series, we investigate whether and how the eccentricities of small planets depend on stellar metallicities (e.g., [Fe/H]). Previous studies on giant planets have found a significant correlation between planetary eccentricities and their host metallicities. Nevertheless, whether such a correlation exists in small planets (e.g. super-Earth and sub-Neptune) remains unclear. Here, benefiting from the large and homogeneous LAMOST-Gaia-Kepler sample, we characterize the eccentricity distributions of 244 (286) small planets in single (multiple) transiting systems with the transit duration ratio method. We confirm the eccentricity-metallicity trend that eccentricities of single small planets increase with stellar metallicities. Interestingly, a similar trend between eccentricity and metallicity is also found in the radial velocity (RV) sample. We also found that the mutual inclination of multiple transiting systems increases with metallicity, which predicts a moderate eccentricity-metallicity rising trend. Our results of the correlation between eccentricity (inclination) and metallicity for small planet support the core accretion model for planet formation, and they could be footprints of self (and/or external) excitation processes during the history of planet formation and evolution.
Dong-Sheng An, Ji-Wei Xie, Yuan-Zhe Dai, Ji-Lin Zhou
2023-03-06T07:22:29Z
http://arxiv.org/abs/2303.02941v1
Planetary Orbit Eccentricity Trends (POET). I. The Eccentricity-Metallicity Trend for Small Planets Revealed by the LAMOST-Gaia-Kepler Sample. ###### Abstract Orbital eccentricity is one of the basic planetary properties, whose distribution may shed light on the history of planet formation and evolution. Here, in a series of works on Planetary Orbit Eccentricity Trends (dubbed POET), we study the distribution of planetary eccentricities and their dependence on stellar/planetary properties. In this paper, the first work of the POET series, we investigate whether and how the eccentricities of small planets depend on stellar metallicities (e.g., [Fe/H]). Previous studies on giant planets have found a significant correlation between planetary eccentricities and their host metallicities. Nevertheless, whether such a correlation exists in small planets (e.g. super-Earth and sub-Neptune) remains unclear. Here, benefiting from the large and homogeneous LAMOST-Gaia-Kepler sample, we characterize the eccentricity distributions of 244 (286) small planets in single (multiple) transiting systems with the transit duration ratio method. We confirm the eccentricity-metallicity trend that eccentricities of single small planets increase with stellar metallicities. Interestingly, a similar trend between eccentricity and metallicity is also found in the radial velocity (RV) sample. We also found that the mutual inclination of multiple transiting systems increases with metallicity, which predicts a moderate eccentricity-metallicity rising trend. Our results of the correlation between eccentricity (inclination) and metallicity for small planet support the core accretion model for planet formation, and they could be footprints of self (and/or external) excitation processes during the history of planet formation and evolution. eccentricity, planet, stellar metallicity + Footnote †: journal: AAA 0000-0002-8070-7883]Dong-Sheng An (Xie ) 0000-0002-4880-7886]Ji-Wei Xie 0000-0002-4880-7886]Yuan-Zhe Dai (Xie ) 0000-0002-0002-3878]Ji-Lin Zhou ## 1 Introduction Orbital eccentricity is one of the fundamental parameters in planetary dynamics, which provides crucial constraints on planet formation and evolution. Based on the fact that the solar system's planets have small orbital inclinations (mean \(\sim 3^{\circ}\)) and eccentricities (mean \(\sim 0.06\)), Kant and Laplace in the 18th century put forward that the solar system formed from a nebula disk, laying the foundation for the modern theory of planet formation. Since the discovery of 51 Pegasi b by Mayor & Queloz (1995), the radial velocity (RV) method has been widely used to detect exoplanets and to measure their orbital eccentricities. In contrast to the near circular orbits of solar system planets, exoplanets detected by RV are commonly found on eccentric orbits (mean eccentricity \(\sim 0.3\)), which may imply that some violent dynamical processes, e.g., planet-planet scattering (Chatterjee et al., 2008; Ford & Rasio, 2008; Raymond et al., 2010) may occur in the history of exoplanet formation and evolution. Although the RV method plays an important role in measuring exoplanet eccentricity, it suffers from some notable biases and degeneracies which can cause considerable systematical uncertainties of eccentricity distributions (Shen & Turner, 2008; Anglada-Escude et al.,
2303.13694
Ensemble Gaussian Processes for Adaptive Autonomous Driving on Multi-friction Surfaces
Driving under varying road conditions is challenging, especially for autonomous vehicles that must adapt in real-time to changes in the environment, e.g., rain, snow, etc. It is difficult to apply offline learning-based methods in these time-varying settings, as the controller should be trained on datasets representing all conditions it might encounter in the future. While online learning may adapt a model from real-time data, its convergence is often too slow for fast varying road conditions. We study this problem in autonomous racing, where driving at the limits of handling under varying road conditions is required for winning races. We propose a computationally-efficient approach that leverages an ensemble of Gaussian processes (GPs) to generalize and adapt pre-trained GPs to unseen conditions. Each GP is trained on driving data with a different road surface friction. A time-varying convex combination of these GPs is used within a model predictive control (MPC) framework, where the model weights are adapted online to the current road condition based on real-time data. The predictive variance of the ensemble Gaussian process (EGP) model allows the controller to account for prediction uncertainty and enables safe autonomous driving. Extensive simulations of a full scale autonomous car demonstrated the effectiveness of our proposed EGP-MPC method for providing good tracking performance in varying road conditions and the ability to generalize to unknown maps.
Tomáš Nagy, Ahmad Amine, Truong X. Nghiem, Ugo Rosolia, Zirui Zang, Rahul Mangharam
2023-03-23T22:13:12Z
http://arxiv.org/abs/2303.13694v2
# Ensemble Gaussian Processes ###### Abstract Driving under varying road conditions is challenging, especially for autonomous vehicles that must adapt in real-time to changes in the environment, e.g., rain, snow, etc. It is difficult to apply offline learning-based methods in these time-varying settings, as the controller should be trained on datasets representing all conditions it might encounter in the future. While online learning may adapt a model from real-time data, its convergence is often too slow for fast varying road conditions. We study this problem in autonomous racing, where driving at the limits of handling under varying road conditions is required for winning races. We propose a computationally-efficient approach that leverages an ensemble of Gaussian processes (GPs) to generalize and adapt pre-trained GPs to unseen conditions. Each GP is trained on driving data with a different road surface friction. A time-varying convex combination of these GPs is used within a model predictive control (MPC) framework, where the model weights are adapted online to the current road condition based on real-time data. The predictive variance of the ensemble Gaussian process (EGP) model allows the controller to account for prediction uncertainty and enables safe autonomous driving. Extensive simulations of a full scale autonomous car demonstrated the effectiveness of our proposed EGP-MPC method for providing good tracking performance in varying road conditions and the ability to generalize to unknown maps. L + Footnote †: footnote (2018); Rodriguez et al. (2021), as we leverage a library of GPs to achieve lower prediction error. We show that in the presence of changing environments, a single GP is unable to achieve the best estimate, while our method can compute accurate predictions. Second, we provide a weight smoothing algorithm for ensembling the library of pre-trained GPs. Finally, we demonstrate how these algorithms can be used to design a model predictive controller for autonomous driving in varying surface conditions. The paper is organized as follows. In Section 2, we introduce the problem formulation. Section 3 describes the proposed GP ensemble strategy. The control design methodology is presented in Section 4. Finally, simulation results are discussed in Section 5. ## 2 Problem Formulation We consider the following nonlinear dynamic system \[x_{k+1}=f(x_{k},u_{k};\theta^{n})+\epsilon_{k} \tag{1}\] where, at time \(k\) and mode \(n\), \(x_{k}\) is the system state, \(u_{k}\) is the control input, \(\epsilon_{k}\) is the noise, and \(\theta^{n}\) is the set of system parameters. The system is subject to state and input constraints \[x_{k}\in\mathcal{X}\text{ and }u_{k}\in\mathcal{U},\] for all time step \(k\in\{0,1,\ldots\}\). The parameters \(\theta^{n}\) are assumed to be unknown at runtime. For controlling the system (1), we aim to learn a data-driven model of it. This is particularly challenging as the system parameters \(\theta^{n}\) are both unknown and varying over time. To tackle this challenge, we propose an ensemble data-driven modeling approach that combines several estimated models of the system, which are trained offline from system data obtained under different operating conditions, and adapts their combination in real time. Suppose that we can collect experimental data of the system in \(N\) different operating conditions with unique sets of parameters \(\theta^{1},\ldots,\theta^{N}\). 
We do not assume that the parameters are known, however, in each controlled operating condition, the parameters are constant so that the collected data is consistent with the specific operating condition. For each operating condition with parameters \(\theta^{n}\), we collect time series of input-state pairs of the system in the form \[\mathcal{D}^{n}=\left\{(\hat{u}_{1}^{n},\hat{x}_{1}^{n}),\ldots,(\hat{u}_{m}^ {n},\hat{x}_{m}^{n})\right\}. \tag{2}\] In the context of autonomous driving in this paper, we consider the road friction as the system parameter \(\theta^{n}\) as it can greatly affect a vehicle's dynamics and its driving performance, but is often unknown in real time. Furthermore, in practical driving, road friction varies due to various factors, such as the type of road and the weather condition. Our proposed ensemble learning approach builds a library of offline models trained for different friction surfaces then combines them to adapt to the actual friction under the real time driving conditions. We then leverage the adaptive ensemble model in a model predictive control framework to drive the vehicle autonomously. The overall pipeline of our approach is illustrated in Figure 1. This work utilizes GPs as the data-driven models of vehicle dynamics due to their many advantages (Jain et al., 2021; Rodriguez et al., 2021; Hewing et al., 2018). ## 3 Ensemble Gaussian Process As the model changes as a function of a hidden parameter \(\theta^{n}\), estimating a single GP for all operating conditions may not be possible - see simulation results in Section 5. Thus, we propose to use an ensemble of GPs. Given \(N\) GPs \(\mathcal{GP}^{1}\ldots\mathcal{GP}^{N}\), we would like to obtain a model which is an ensemble of these GPs, denoted by \(\mathcal{GP}^{E}\). The resulting model is a valid GP as it is a linear combination of \(N\) GPs. Given a vector of weights \(w=[w_{1},w_{2},\ldots,w_{N}]^{T}\), we can find the maximum likelihood estimate of output given the input \(\hat{y}_{k+1|x_{k}}\) and its variance \(\hat{\sigma}_{k+1|x_{k}}^{2}\) as follows: \[\hat{y}_{k+1|x_{k}} =\sum_{n=1}^{N}w_{n}\hat{\mu}_{k+1|x_{k}}^{n} \tag{3a}\] \[\hat{\sigma}_{k+1|x_{k}}^{2} =\sum_{n=1}^{N}w_{n}^{2}(\hat{\sigma}_{k+1|x_{k}}^{n})^{2} \tag{3b}\] Where \(\hat{\mu}_{k+1|x_{k}}^{n}\) is the mean of the \(n^{th}\) GP \(\mathcal{GP}^{n}\) and \(\hat{\sigma}_{k+1|x_{k}}^{n}\) is the corresponding standard deviation of that GP. Since differentiation is a linear operator, we can approximate the ensembled mean and variance by using Taylor expansion about a nominal input point \(x^{l}\). Let \(\nabla_{x^{l}}f_{\mu}^{n}\) be the Jacobian of the mean function \(f_{\mu}^{n}\) of the \(n^{th}\) GP with inputs \(x^{i}\). We can use this jacobian to find a linear approximation of \(\hat{\mu}^{n}\) as \(\tilde{\mu}^{n}\approx x^{l}+\Delta_{x}^{T}\nabla_{x}f_{\mu}^{n}\). As this is now a linear function of \(x\), we can now express \(\hat{y}_{k+1|x_{k}}\) as \(\tilde{y}_{k+1|x_{k}}\), the linear combination of the \(\tilde{\mu}^{n}\) functions as follows: \[\tilde{y}_{k+1|x_{k}}=x^{l}+\Delta_{x}^{T}\sum_{n=1}^{N}w_{n}\nabla_{x}f_{\mu}^ {n} \tag{4}\] ### Model weights adaptation To calculate the prediction of the ensembled model, we need to first calculate the vector of weights \[w=[w_{1},w_{2},\ldots,w_{N}]^{T}. \tag{5}\] Let \(\hat{y}^{n}\) be the prediction of the \(n^{th}\) GP \(\hat{y}^{n}=f_{\mu}^{n}(x^{i})\), where \(x^{i}\) is the input to the GP. Let \(y\) be the true value of the output. 
Given the history of the length \(K\) of the output-input pairs \([(y_{k-1},x^{i}_{k-1}),\ldots,(y_{k-K},x^{i}_{k-K})]\) our goal is to calculate the combination of models that provide the best representation over the history. We can formulate this problem as the following optimization program: \[w^{*}=\underset{w}{\text{argmin}} \left\|Y-Fw\right\|_{2}^{2}+\alpha\|w-w_{k-1}\|_{1}\] (6a) subject to \[0\leq w\leq 1, \tag{6b}\] \[\mathbf{1}w^{T}=1, \tag{6c}\] where \(Y\) is the vector of true output values \[Y=\left[y_{t-1}\ y_{t-2}\ \cdots\ y_{t-K}\right]^{T}, \tag{7}\] \(F\) is a matrix of predictions from all of the models over the whole history \[F=\begin{bmatrix}f_{\mu}^{1}(x^{i}_{k-1})&\cdots&f_{\mu}^{N}(x^{i}_{k-1})\\ f_{\mu}^{1}(x^{i}_{k-2})&\cdots&f_{\mu}^{N}(x^{i}_{k-2})\\ \vdots&\vdots\\ f_{\mu}^{1}(x^{i}_{k-K})&\cdots&f_{\mu}^{N}(x^{i}_{k-K})\end{bmatrix}, \tag{8}\] and \(\alpha\) is a regularization parameter that minimizes the distance between the previous estimate of the weights \(w_{k-1}\) and the new weights. ## 4 Ensemble Gaussian Process Model Predictive Control (EGP-MPC) ### System modeling In this section, we present the system identification strategy. We consider a vehicle with states: \[x=[p^{x},p^{y},v_{x},\psi,v_{y},\omega,\delta], \tag{9}\] where \(p^{x}\) and \(p^{y}\) are the position in Cartesian coordinates, \(\psi\) is the orientation, \(v_{x}\) and \(v_{y}\) are the longitudinal and lateral velocities, \(\omega\) is the yaw rate, and \(\delta\) is the steering angle. The control input is \(u=[F^{x},\dot{\delta}]\), where \(F^{x}\) is the engine drive force, and \(\dot{\delta}\) is the steering velocity. To estimate a discrete-time model, we exploit the kinematic equations of motion and construct a data-driven model of the dynamics using GPs. The main advantage of using GPs is that it is possible to reason about the uncertainty of the model prediction. The kinematic equations of motion are defined as follows: \[\dot{p}_{x} =v_{x}\cos(\psi)-v_{y}\sin(\psi), \tag{10a}\] \[\dot{p}_{y} =v_{x}\sin(\psi)+v_{y}\cos(\psi),\] (10b) \[\dot{\psi} =\omega,\] (10c) \[\delta =\dot{\delta}. \tag{10d}\] To describe the dynamics of the system using equations of motion, we would need to perform a system identification campaign for all of the physical parameters. However, system identification for tire parameters is time-consuming as it requires designing specific experiments for data collection as demonstrated by Van Gennip (2018). Therefore, we choose to use a data-driven approach using GPs. We discretize (10) and model each of the dynamic states \(v_{x}\), \(v_{y}\), and \(\omega\) as an independent GP directly in a discretized form. Discretized model equations have the following form \[p_{x}[k+1] =f_{px}=p_{x}[k]+(v_{x}\cos(\psi)-v_{y}\sin(\psi))dt,\] \[p_{y}[k+1] =f_{py}=p_{y}[k]+(v_{x}\sin(\psi)+v_{y}\cos(\psi))dt,\] \[v_{x}[k+1] =f_{vx}=v_{x}[k]+f_{\mu,vx}(v_{x},v_{y},\omega,\delta,F^{x},\dot {\delta};\theta),\] \[\psi[k+1] =f_{\psi}=\psi[k]+\omega dt,\] \[v_{y}[k+1] =f_{vy}=v_{y}[k]+f_{\mu,vy}(v_{x},v_{y},\omega,\delta,F^{x},\dot {\delta};\theta),\] \[\omega[k+1] =f_{\omega}=\omega[k]+f_{\mu,\omega}(v_{x},v_{y},\omega,\delta,F^ {x},\dot{\delta};\theta),\] \[\delta[k+1] =f_{\delta}=\delta[k]+\delta dt, \tag{11}\] where \(dt\) is the discretization time step, and \(f_{\mu,vx}\), \(f_{\mu,vy}\), and \(f_{\mu,\omega}\) are mean functions of the GPs for the longitudinal velocity, lateral velocity and yaw-rate respectively. 
In the above equation, \(\theta\) represents the friction that affects the dynamics of the system. We can write the system model from (11) in a more compact way as \[x_{k+1}=f(x_{k},u_{k};\theta)=\begin{bmatrix}f_{px}(x_{k},u_{k})\\ f_{py}(x_{k},u_{k})\\ f_{vx}(x_{k},u_{k};\theta)\\ f_{\psi}(x_{k},u_{k})\\ f_{vy}(x_{k},u_{k};\theta)\\ f_{\omega}(x_{k},u_{k};\theta)\\ f_{\delta}(x_{k},u_{k})\end{bmatrix}. \tag{12}\] As discussed in Section 2, we propose an ensemble of models to solve the problem of driving under changing road friction \(\theta\). We assume that we have \(N\) datasets defined as in (2), each collected under different friction parameters \(\theta^{1},\ldots,\theta^{N}\). Leveraging these datasets, we compute a library of models \(f^{1}(x,u;\theta^{1}),\ldots,f^{N}(x,u;\theta^{N})\) using (12). Then, we compute model weights \(w\) using (6) and the resulting ensemble model is: \[\hat{f}(x,u;\theta)=\sum_{n=1}^{N}w_{n}f(x,u;\theta^{n}). \tag{13}\] Figure 1: System pipeline: At every time step, we have a vector of past observations (inputs and outputs for the GPs) \([(x_{k-1}^{i},y_{k-1}),\ldots,(x_{k-K}^{i},y_{k-K})]\). We use inputs to the GPs \([x_{k-1}^{i},\ldots,x_{k-K}^{i}]\) and a library of GPs to predict outputs \(\hat{y}_{k}\). Then we use measured outputs \(y\) and predicted outputs \(\hat{y}\) to calculate the combination of models (weights \(w\)) that best represents the real behavior over the history of observations. We use weights \(w\) and the library of GP models to create ensemble matrices \(\hat{A}\), \(\hat{B}\), and \(\hat{C}\) which we use for the MPC. ### Control Synthesis In this section, we describe the EGP-MPC algorithm. To reduce the computational complexity, we leverage a linearized version of the ensemble model from (13). The linearization of the ensemble of GPs from Section 3 can be used to construct a linearized system model about the nominal point \(x^{l}\). This is achieved by computing the system Jacobian using the values of linearized dynamic states from (4). Thus, at every time \(k\), we choose an operating trajectory \(x^{l}_{t|k}\), \(u^{l}_{t|k}\), \(t\in\{0,1,\ldots,T\}\), where \(T\) is the prediction horizon, around which we linearize the system defined by the models \(f^{1}(x,u;\theta^{1}),\ldots,f^{N}(x,u;\theta^{N})\). Then, we compute model matrices \[[A^{1}_{t|k},B^{1}_{t|k},C^{1}_{t|k};\ldots;A^{N}_{t|k},B^{N}_{t|k},C^{N}_{t|k}], \tag{14}\] where \(N\) is the number of models, and \[A^{n}_{t|k} =\nabla_{x}f^{n}(x^{l}_{t|k},u^{l}_{t|k};\theta^{n}), \tag{15}\] \[B^{n}_{t|k} =\nabla_{u}f^{n}(x^{l}_{t|k},u^{l}_{t|k};\theta^{n}),\] \[C^{n}_{t|k} =f^{n}(x^{l}_{t|k},u^{l}_{t|k};\theta^{n})-A^{n}_{t|k}x^{l}_{t|k}-B^{n}_{t|k}u^{l}_{t|k}.\] Finally, using weights \(w\) we create a linearized ensemble model in the form: \[x_{t+1|k}=\hat{A}_{t|k}x_{t|k}+\hat{B}_{t|k}u_{t|k}+\hat{C}_{t|k}, \tag{16}\] where \[\hat{A}_{t|k}=\sum_{n=1}^{N}w_{n}A^{n}_{t|k},\ \ \ \hat{B}_{t|k}=\sum_{n=1}^{N}w_{n}B^{n}_{t|k}, \tag{17}\] \[\hat{C}_{t|k}=\sum_{n=1}^{N}w_{n}C^{n}_{t|k}.\] Next, we leverage the above linearized ensemble model matrices \(\hat{A}_{t|k}\), \(\hat{B}_{t|k}\), \(\hat{C}_{t|k}\) to design the EGP-MPC. 
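As a concrete illustration of how the weight-adaptation problem (6) and the ensemble matrices (17) can be set up, the sketch below uses CVXPY with the OSQP solver, which the implementation section reports using; the function names, array shapes, and the stacking of the multi-dimensional outputs into a single vector are our assumptions, not the authors' code.

```python
import numpy as np
import cvxpy as cp

def adapt_weights(Y, F, w_prev, alpha=0.1):
    """Solve (6): fit the model combination over the sliding window, with an L1 penalty
    keeping w near the previous weights, constrained to the probability simplex.
    Y: (K,) stacked observed outputs; F: (K, N) corresponding per-model predictions."""
    n_models = F.shape[1]
    w = cp.Variable(n_models)
    objective = cp.Minimize(cp.sum_squares(Y - F @ w) + alpha * cp.norm1(w - w_prev))
    constraints = [w >= 0, w <= 1, cp.sum(w) == 1]
    cp.Problem(objective, constraints).solve(solver=cp.OSQP)
    return np.asarray(w.value).ravel()

def ensemble_matrices(A_list, B_list, C_list, w):
    """Form (17): a convex combination of the per-model linearizations at one time step."""
    A_hat = sum(w_n * A_n for w_n, A_n in zip(w, A_list))
    B_hat = sum(w_n * B_n for w_n, B_n in zip(w, B_list))
    C_hat = sum(w_n * C_n for w_n, C_n in zip(w, C_list))
    return A_hat, B_hat, C_hat
```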
At every time \(k\), given the initial state \(x_{k}\) we solve the following finite-time optimal control problem (FTOCP): \[\mathbf{u}^{*},\mathbf{x}^{*}= \tag{18a}\] \[\operatorname*{argmin}_{u_{t|k},x_{t|k}} \sum_{t=1}^{T-1}(||x_{t|k}-x^{r}_{t|k}||_{Q}+||u_{t|k}||_{R})\] \[+\sum_{t=2}^{T-1}||u_{t|k}-u_{t+1|k}||_{R_{d}}+||x_{T|k}-x^{r}_{T| k}||_{Q_{T}}\] subject to \[x_{0|k}=x_{k},\] (18b) \[x_{t+1|k}=\hat{A}_{t|k}x_{t|k}+\hat{B}_{t|k}u_{t|k}+\hat{C}_{t|k},\] (18c) \[x_{t|k}\in\mathcal{X}\] (18d) \[u_{t|k}\in\mathcal{U},\] (18e) \[\forall t\in\{0,1,\cdots,T-1\}, \tag{18f}\] where \(\mathbf{x}^{*}=[x^{*}_{t|k},\ldots,x^{*}_{t+T|k}]\), \(\mathbf{u}^{*}=[u^{*}_{t|k},\ldots,u^{*}_{t+T|k}]\), and \(\mathbf{x}^{*}=[x^{r}_{t|k},\ldots,x^{r}_{t+T|k}]\) is a reference trajectory which in our case consists of reference position, velocity, and orientation, \(\mathcal{X}\) is the state space, \(\mathcal{U}\) is the input space, and the norm \(\|z\|_{Q}:=z^{T}Qz\). The proposed approach is shown in Algorithm 1. First, we initialize the GP library \(GP_{ib}\) and all necessary variables (lines 1-4). At each time step, we compute the matrices for the ensemble using (17) (lines 6-9). Then, we solve the FTOCP (18) and store the optimal solution (lines 10-11). Next we store the current vector of inputs to the GPs \(x^{i}_{b}\) and apply to the system the first element of the optimizer vector \(\mathbf{u}^{*}\) from (20) (lines 12-13). We then observe the system state and store the vector of inputs to the GPs after applying the control input \(x^{i}_{a}\) (line 14). Now, we calculate the difference between \(x^{i}_{a}\) and \(x^{i}_{b}\) and store the first three elements which are the dynamic state transition (lines 15-16). Finally, we update the history of measurements \(H\), calculate the weights \(w\) using a sliding window over \(H\) as in (6), and store \(w\) as \(w_{prev}\) for the next iteration (lines 17-19). ``` 1:\(GP_{ib}\leftarrow\mathcal{GP}^{1}\ldots\mathcal{GP}^{N}\) 2:\(x_{prev}\leftarrow[0,0,\ldots,0]\), \(u_{prev}\leftarrow[0,0,\ldots,0]\) 3:\(w\leftarrow[\frac{1}{N},\frac{1}{N},\ldots,\frac{1}{N}]\), \(w_{prev}\leftarrow[\frac{1}{N},\frac{1}{N},\ldots,\frac{1}{N}]\) 4:\(H\leftarrow\emptyset\) 5:for each time step \(k\)do 6:for each \(n^{th}\) GP do 7: Update \(A^{n}_{t|k}\), \(B^{n}_{t|k}\) and \(C^{n}_{t|k}\) using (15) 8:endfor 9: Update \(\hat{A}_{t|k}\), \(\hat{B}_{t|k}\), \(\hat{C}_{t|k}\) using (17) and \(w\) 10:\(u^{*}\), \(x^{*}\leftarrow\) solution of (18) 11:\(x_{prev}\), \(u_{prev}\gets x^{*}\), \(u^{*}\) 12:\(x^{i}_{a}\leftarrow[v^{x},v^{y},\omega,\delta,F^{x},\delta]\) 13: Apply \(u^{*}\) to the vehicle for a \(dt\) 14:\(x^{i}_{b}\leftarrow[v^{x},v^{y},\omega,\delta,F^{x},\delta]\) 15:\(d\leftarrow(x^{i}_{b}-x^{i}_{a})\) 16:\(y\leftarrow[d[0],d[1],d[2]]\) 17: Append tuple \((y,x^{i}_{1})\) to \(H\) 18: Update \(w\) using \(w_{prev}\) and \(H\) according to (6) 19:\(w_{prev}\gets w\) 20:endfor ``` **Algorithm 1** EGP-MPC ## 5 Results We test our approach on the F1Tenth gym simulator O'Kelly et al. (2020) with a multi-body model from Althoff et al. (2017) and vehicle parameters of the vehicle ID: 1 from Althoff et al. (2017). With this setup, we use a full-scale car simulator with complex multi-body dynamics to test the performance of our approach which is using a simpler model for the controller. 
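The weight update in the last step of Algorithm 1 amounts to solving the small convex program (6). A minimal sketch using CVXPY and OSQP (the interface and solver mentioned below) is given here; stacking the three-dimensional GP outputs over the sliding window into a single vector, the function name, and the default value of \(\alpha\) are illustrative assumptions and not the exact implementation.

```python
# A minimal sketch of the weight-estimation problem (6).
import cvxpy as cp
import numpy as np

def estimate_weights(Y, F, w_prev, alpha=1e-2):
    """Y      : (K,) observed outputs over the sliding window (stacked)
    F      : (K, N) corresponding predictions of the N library models
    w_prev : (N,) previous weight estimate w_{k-1}"""
    N = F.shape[1]
    w = cp.Variable(N)
    objective = cp.Minimize(cp.sum_squares(Y - F @ w)
                            + alpha * cp.norm1(w - w_prev))
    constraints = [w >= 0, w <= 1, cp.sum(w) == 1]   # (6b)-(6c)
    cp.Problem(objective, constraints).solve(solver=cp.OSQP)
    return w.value

# Example call with two models and a window of K samples:
# w = estimate_weights(Y, F, w_prev=np.array([0.5, 0.5]))
```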
We then show that using one GP is not sufficient for driving on different conditions, as the ensemble of GPs can achieve lower prediction errors in the environment that they are trained on. We also show that the ensemble GP can achieve similar prediction error for new, unseen surface conditions. We validated our controller by driving on a custom racetrack with a time-optimal raceline optimized using Christ et al. (2021). Additionally, we tested our controller by tracking the Sao Paolo raceline from Betz et al. (2022) as well as performing several dual-lane change ISO3888-2:2011 (2011) standard maneuvers. Finally we show that the learned controller can generalize to new tracks and surface conditions that it was not trained on. All code for the EGP-MPC was implemented in Python. GPyTorch Gardner et al. (2018) was used for GP training and inference. GPyTorch provides fast variance prediction via Lanczos Variance Estimates (LOVE) Pleiss et al. (2018) as well as fast kernel operations through KeOps Ragan-Kelley et al. (2017). GPyTorch also interfaces with Pytorch autograd functionality allowing for easy computation of Jacobians, gradients, and Hessians. The FTOCP, as well as the model weight estimation problem, was solved using OSQP Stellato et al. (2020) with CVXPY Diamond and Boyd (2016); Agrawal et al. (2018) as the interface. CVXPY provides an easy-to-use interface to OSQP and is capable of generating C code for optimization problems using CVXPYgen. Using the linearized models of 2 GPs and a sliding window size \(K=11\), the EGP-MPC achieves an average control frequency of 30Hz. All code can be found here: [https://github.com/atomyks/multisurface-racing](https://github.com/atomyks/multisurface-racing). ### Comparison with Standard GP In this section, we show that the ensemble GP outperforms a single GP when the vehicle is operating on surfaces with different friction. We also show that an ensemble GP model is able to generalize to surfaces with frictions it was not trained on. We start by creating five datasets [\(D^{A}\), \(D^{B}\), \(D^{T}\), \(D^{C}\), \(D^{D}\)], for five friction values [0.3, 0.6, 0.7, 0.8, 1.0] respectively. First, we define bounded subsets of the state and input spaces. Then, we randomly sample states and inputs from these subsets. We drive the car initialized at the sampled state for 0.1 seconds, while applying the sampled control input. After each time step (\(k=0.02\) seconds), we store a sequence of two consecutive inputs and outputs for the GPs, i.e., we store tuples \([(y_{k-1},x^{i}_{k-1}),(y_{k},x^{i}_{k})]\) where \[x^{i}_{k} =[v_{x}[k],v_{y}[k],\omega[k],\delta[k],F^{x}[k],\delta[k]],\] \[y_{k} =[v_{x}[k]-v_{x}[k-1],v_{y}[k]-v_{y}[k-1],\omega[k]-\omega[k-1]].\] We repeat this process until we have collected 1000 tuples. Then, we split the datasets [\(D^{A}\), \(D^{B}\), \(D^{C}\), \(D^{D}\)] into training sets (70% of samples) [\(D^{A}_{train}\), \(D^{B}_{train}\), \(D^{C}_{train}\), \(D^{D}_{train}\)], and validation sets (30% of samples) [\(D^{A}_{val}\), \(D^{B}_{val}\), \(D^{D}_{val}\)], while \(D^{T}\) is kept as a testing set. We train five GP models [\(\mathcal{GP}^{A}\), \(\mathcal{GP}^{B}\), \(\mathcal{GP}^{C}\), \(\mathcal{GP}^{D}\), \(\mathcal{GP}^{\mathcal{P}^{\cup}}\)] trained on the datasets [\(D^{A}_{train}\), \(D^{B}_{train}\), \(D^{C}_{train}\), \(D^{D}_{train}\), \(D^{A}_{train}\cup D^{D}_{train}\)]. 
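A minimal GPyTorch sketch of how one such model can be fit is shown below: each dynamic state (\(v_x\), \(v_y\), \(\omega\)) gets its own exact GP trained on the \((x^{i},y)\) tuples of a single dataset. The zero mean, the ARD RBF kernel, and the optimizer settings are illustrative assumptions rather than the exact training configuration.

```python
# A minimal sketch of fitting one GP of the library with GPyTorch: one exact
# GP per dynamic state, trained on the (x^i, y) tuples of a single dataset.
import torch
import gpytorch

class DynamicsGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ZeroMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel(ard_num_dims=train_x.shape[-1]))

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

def train_gp(train_x, train_y, iters=200, lr=0.05):
    """train_x: (M, 6) GP inputs x^i, train_y: (M,) one output dimension."""
    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = DynamicsGP(train_x, train_y, likelihood)
    model.train(); likelihood.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
    for _ in range(iters):
        optimizer.zero_grad()
        loss = -mll(model(train_x), train_y)
        loss.backward()
        optimizer.step()
    return model, likelihood
```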
To validate our trained models, we test our GPs on their respective validation sets [\(D^{A}_{val}\), \(D^{B}_{val}\), \(D^{C}_{val}\), \(D^{D}_{val}\), \(D^{A}_{val}\cup D^{D}_{val}\)]. In Figure 2, we only visualize \(\mathcal{GP}^{A}\), \(\mathcal{GP}^{D}\) and \(\mathcal{GP}^{\cup}\) for clarity. From Figure 2, we can see that \(\mathcal{GP}^{A}\) and \(\mathcal{GP}^{D}\) perform well on their respective validation sets but perform poorly when validated on datasets collected from a friction different from the friction encountered during training. We can also see that \(\mathcal{GP}^{\cup}\), which was trained on data collected from both friction sets \(D^{A}_{train}\cup D^{D}_{train}\), performs better than the worst-case performance of \(\mathcal{GP}^{A}\) and \(\mathcal{GP}^{D}\), but much worse than their best-case performance. Our proposed approach using \(\mathcal{GP}^{E}\), which was ensembled from \(\mathcal{GP}^{A}\) and \(\mathcal{GP}^{D}\), outperforms \(\mathcal{GP}^{\cup}\). To validate the ensemble approach, we first compute weights \(w\) according to (6) using models \(\mathcal{GP}^{A}\) and \(\mathcal{GP}^{D}\), and the \((y_{k-1},x^{i}_{k-1})\) samples from \(D^{A}_{val}\cup D^{D}_{val}\). Then, using weights \(w\) and the models \(\mathcal{GP}^{A}\) and \(\mathcal{GP}^{D}\), we calculate the ensembled prediction from \(x^{i}_{k}\) and validate it against \(y_{k}\) from \(D^{A}_{val}\cup D^{D}_{val}\). As we can see, the ensembled model \(\mathcal{GP}^{E}\) performs much better than the worst-case performance of each of the GPs, \(\mathcal{GP}^{A}\) and \(\mathcal{GP}^{D}\), and is almost as good as their best-case performance. To test generalization to unseen friction parameters, we use dataset \(D^{T}\), which has a friction coefficient of 0.7. Then, we test the model ensembled from \(\mathcal{GP}^{A}\) and \(\mathcal{GP}^{D}\) on the dataset \(D^{T}\). Note that this friction value lies between the friction values of datasets \(D^{A}\) and \(D^{D}\). As we use a convex combination of models - see Section 3.1 - our method can only predict the evolution of the vehicle on unseen frictions that are between the frictions used to construct the training datasets. It can be seen that the accuracy of the prediction does not change much when we introduce a new friction surface. To test whether the accuracy of the ensembled model \(\mathcal{GP}^{E}\) on the unseen friction surface increases as we increase the number of ensembled GPs, we try three variants of \(\mathcal{GP}^{E}\): \(\mathcal{GP}^{E}\) ensembled from [\(\mathcal{GP}^{A},\mathcal{GP}^{D}\)], \(\mathcal{GP}^{E}\) ensembled from [\(\mathcal{GP}^{A},\mathcal{GP}^{B},\mathcal{GP}^{D}\)], and \(\mathcal{GP}^{E}\) ensembled from [\(\mathcal{GP}^{A},\mathcal{GP}^{B},\mathcal{GP}^{C},\mathcal{GP}^{D}\)]. We can see that the accuracy of the ensembled model increases and tends towards the training accuracy as we ensemble more GPs. This happens because the newly added GPs cover more of the parameter space and are closer to the unseen friction. ### Multi-surface Driving and Generalization We test the proposed strategy on three scenarios: **Driving on a track with varying speed:** In this experiment, we drive the car following an optimized trajectory on a custom racetrack. We trained two models for frictions \(\theta^{1}=0.5\) and \(\theta^{2}=1.1\). The training data are collected by driving on the track with a single surface friction and gradually increasing speed. 
At test time, there are three friction zones \((0.5,0.8,1.1)\) and the proposed strategy computes the weight \(w_{1}\) and \(w_{2}\) using (6). The friction zones on the track and the weights used by EGP-MPC are shown in Figure 3. We can see that our controller is correctly choosing the GP models for the specific friction as the weights used in the ensemble correspond to the different friction surfaces. We compared the EGP-MPC with single GP-MPC and kinematic MPC controllers. Driving speeds and tracking errors are presented in Figure 5 and 4. The Figure 2: Comparison of root-mean-square (RMS) error for single GPs trained and validated on datasets with different frictions (Blue, Orange, Red, Purple), single GP trained on a mixture of frictions and validated on that mixture (Green), our proposed approach which ensembles multiple single GPs trained on different frictions and validated on a mixture of those frictions (Brown), and our proposed approach validated on unseen frictions with multiple GPs models (Pink, Grey, Olive). EGP-MPC is the only controller that successfully follows the trajectory while tracking the desired speed. **Testing on an unseen track:** To show that the controller can generalize across multiple trajectories, we test the EGP-MPC that we trained in the previous experiment on the Sao Paolo ractrack shown in Figure 8. We do not retrain the models on any parts of the new track. The reference speed for this experiment is constant at 14 m/s. As we can see in Figures 8 and 9, the algorithm is still able to perform well even when driving on a completely new track. **Lane change maneuver:** We also test our algorithm on the standard dual lane change maneuver ISO3888-2:2011 (2011). For this experiment, we train two new models by driving this maneuver at different friction coefficients: the first model is trained on data collected at a coefficient of friction \(\theta^{1}=0.5\), while the second model is trained on data collected at a coefficient of friction \(\theta^{2}=1.1\). For testing, we split up the maneuver into four segments of friction coefficients \([1.1,0.7,0.5,1.1]\). The reference speed is set to \(15m/s\) and the proposed strategy computes the weight \(w_{1}\) and \(w_{2}\) using (6). Figure 6 shows the reference trajectory, friction coefficients of the trajectory, and the weight estimation by EGP-MPC. During the lane change maneuver, the dynamics of the vehicle are well excited. Therefore, the weight in the ensemble corresponds to the friction with less fluctuation than in Figure 9. A comparison of the tracking performance of different algorithms can be seen in Figure 7. It is important to underline that all tested algorithms except for the proposed EGP-MPC failed the lane change maneuver. **Effect of System Excitation** In this experiment we demonstrate how the EGP-MPC works with 4 models and how system excitation can affect weight transition. Models are created from data collected at friction coefficients \(\theta^{1}=0.5\), \(\theta^{2}=0.6\), \(\theta^{3}=0.9\), and \(\theta^{4}=1.1\). We use the dual lane change maneuver described in the previous experiment. The target velocity is set to \(15m/s\) and the maneuver is split into five segments of friction coefficients \([1.1,0.5,0.8,0.6,1.0]\). Friction segments and weight estimation by EGP-MPC algorithm are shown in Figure 10. Locations of the first two friction transitions are specifically chosen to demonstrate how weight estimation depends on the system excitation. 
The first friction change is during the turn. This corresponds to the spike in the lateral acceleration (orange line 1, middle graph, Figure 10), and immediate weight change (orange line 1, top graph, Figure 10). The second friction transition is on the straight line, so the system is not well excited (orange line 2, middle graph, Figure 10), and it takes much longer for the weight to start changing (orange line 2, top graph, Figure 10). **Computation time scaling** Finally we discuss the runtime of the current implementation of the EGP-MPC algorithm. Because we are calculating the model ensemble before solving the MPC, the MPC solving time does not change with an increasing number of models (Figure 12). The current implementation calculates the linearization of models sequentially so the EGP-MPC computation time increases linearly (Figure 12). However, it is possible to compute this linearization in parallel which would improve computation time scaling with increasing number of models. estimated by the algorithm do not necessarily correlate with the actual value of the parameters of the system. This can be attributed to a lack of excitation of the system dynamics as can be seen when driving in straight lines at constant speed on the Sao Paolo racetrack (i.e Progress on the track 550m to 1000m). of pre-trained models, each corresponding to a different surface, to achieve higher state prediction accuracy. During the online update, it chooses the convex combination of models that best represents the surface that the car currently drives on. As shown in the results, the model combination clearly corresponds to different surface types. It is also shown that the algorithm behaves well on surfaces and trajectories that the algorithm was not trained on. With the use of GPs, it is also possible to reason about prediction uncertainty. For future work, EGP-MPC can be further constrained using the ensembled variance of the GPs to accommodate for this uncertainty. ## Acknowledgements We would like to thank Mr. Johannes Betz from the University of Pennsylvania for his help with the initial formulation of the kinematic MPC. _This work was supported in part by NSF CCRI #1925587 and DARPA #FA8750-20-C-0542 (Systemic Generative Engineering). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government._
2306.09012
Yes, we CANN: Constrained Approximate Nearest Neighbors for local feature-based visual localization
Large-scale visual localization systems continue to rely on 3D point clouds built from image collections using structure-from-motion. While the 3D points in these models are represented using local image features, directly matching a query image's local features against the point cloud is challenging due to the scale of the nearest-neighbor search problem. Many recent approaches to visual localization have thus proposed a hybrid method, where first a global (per image) embedding is used to retrieve a small subset of database images, and local features of the query are matched only against those. It seems to have become common belief that global embeddings are critical for said image-retrieval in visual localization, despite the significant downside of having to compute two feature types for each query image. In this paper, we take a step back from this assumption and propose Constrained Approximate Nearest Neighbors (CANN), a joint solution of k-nearest-neighbors across both the geometry and appearance space using only local features. We first derive the theoretical foundation for k-nearest-neighbor retrieval across multiple metrics and then showcase how CANN improves visual localization. Our experiments on public localization benchmarks demonstrate that our method significantly outperforms both state-of-the-art global feature-based retrieval and approaches using local feature aggregation schemes. Moreover, it is an order of magnitude faster in both index and query time than feature aggregation schemes for these datasets. Code: \url{https://github.com/google-research/google-research/tree/master/cann}
Dror Aiger, André Araujo, Simon Lynen
2023-06-15T10:12:10Z
http://arxiv.org/abs/2306.09012v3
# Yes, we CANN: Constrained Approximate Nearest Neighbors for local feature-based visual localization ###### Abstract Large-scale visual localization systems continue to rely on 3D point clouds built from image collections using structure-from-motion. While the 3D points in these models are represented using local image features, directly matching a query image's local features against the point cloud is challenging due to the scale of the nearest-neighbor search problem. Many recent approaches to visual localization have thus proposed a hybrid method, where first a global (per image) embedding is used to retrieve a small subset of database images, and local features of the query are matched only against those. It seems to have become common belief that global embeddings are critical for said image-retrieval in visual localization, despite the significant downside of having to compute two feature types for each query image. In this paper, we take a step back from this assumption and propose Constrained Approximate Nearest Neighbors (CANN), a joint solution of k-nearest-neighbors across both the geometry and appearance space using only local features. We first derive the theoretical foundation for k-nearest-neighbor retrieval across multiple metrics and then showcase how CANN improves visual localization. Our experiments on public localization benchmarks demonstrate that our method significantly outperforms both state-of-the-art global feature-based retrieval and approaches using local feature aggregation schemes. Moreover, it is an order of magnitude faster in both index and query time than feature aggregation schemes for these datasets. Code will be released. ## 1 Introduction In this paper we focus on the problem of image retrieval for visual localization. Modern visual localization approaches are predominantly based on 3D point clouds that represent the geometry and appearance of large scale scenes [30, 19, 43, 31]. These 3D points are estimated from image collections using Structure-from-Motion (SfM), where each 3D point has an associated descriptor derived from pixels. To localize a query image against such 3D models, a set of local features is extracted from it and 2D-3D correspondences are estimated based on descriptor similarity. In practice, this data association problem suffers from various challenges: visual aliasing, scene change, noise, etc. Because the final localization solution is computed using geometric inference from these 2D-3D correspondences, not finding enough correct matches can lead the entire localization process to fail. Simply establishing many more matches per query keypoint (red points in Fig. 1) however causes long runtime in geometric verification [3]. It is thus important to find Figure 1: The proposed Constrained Approximate Nearest Neighbor algorithm allows to find the best subset of 3D points that are both close to query features in appearance space and that are consistently seen by the same camera, leading to high overlap with the initially unknown query camera pose (shaded area). Jointly solving for these two metrics in a single search algorithm is a long-known open question in the community and CANN provides to the best of our knowledge the first practical solution. Red points in the figure show neighbors retrieved by an unconstrained search using the features from the query image (bottom right). Using CANN it’s more likely to retrieve points that are inliers to geometric verification (green) and less likely to fetch unrelated outlier points (yellow). 
a small 2D-3D set which has high probability to contain "good" matches (yellow/green points in Fig. 1): In fact we know that the 3D points of "good" matches should all lie inside one (unknown) camera frustum which is the one of the query image (shaded area in Fig. 1). There exist several approximations to this problem, ranging from clustering nearest-neighbor matches in the 3D model's covisibility graph [42] to using image retrieval methods to obtain a small set of candidate images for which local features are matched subsequently [41]. The latter approach, leveraging recent advances in global (per image) embeddings, has gained substantial traction recently [18, 42, 41, 8], to a degree that it appears the community has abandoned the idea of finding a solution that jointly solves for appearance and geometry using local features only. For example, the benchmark we evaluate on [18] didn't even consider local feature based retrieval approach at publication time. We don't consider the case of using local features closed and therefore propose an approach to obtain matches that are close in appearance space while obtaining geometric consistency at the same time - which is a long-known open question in the community. **Contributions.** In this paper we make three contributions: **(1)** Our first and main contribution is a new method, referred to as Constrained Approximate Nearest Neighbors (CANN), that efficiently obtains a high quality, small set of 2D-3D correspondences. CANN performs nearest neighbor search in descriptor space in a constrained manner, so that matches are compact in 3D space. We provide both a brute-force solution as well as an efficient implementation and associated complexity analysis of this _colored_ nearest neighbor search algorithm. **(2)** Our second contribution is to make the connection of colored nearest neighbor search to the problem space of image retrieval and localization, proposing a metric to rank cameras, which can serve as a way to evaluate future work in this area. **(3)** Lastly we provide an extensive evaluation of both global and local feature based methods on four large scale datasets from [18]: "Baidu-Mall","Gangnam Station","RobotCar Seasons" and "Aachen Day-Night v1.1". We demonstrate that local feature based methods are not only competitive, but in fact strongly outperform global embedding based approaches; which goes contrary to the trend in the community. We hope to provide new impulse to techniques that aim for jointly searching in appearance and geometry space, which is more efficient and elegant than previously proposed two-step approaches. ## 2 Related Work **Visual Localization using local features without image retrieval:** A large body of work in visual localization [44, 46, 61, 45, 29, 28, 47, 31, 6, 50] is based on sparse 3D point clouds built from image collections using Structure-from-Motion (SfM). These methods directly establish 2D-3D matches between local features from the query image and the descriptors associated with 3D points in the model. As mentioned before, these matches often contain many outliers and thus directly feeding them to geometric verification is typically impractical[3]. Therefore several post-filtering techniques have been proposed, such as clustering in the SfM covisibility graph [45, 46] or applying voting in the space of camera poses [61, 31]. 
Remaining 2D-3D matches typically have a sufficiently low fraction of outliers, so that they can be efficiently processed by geometric verification, using minimal pose solvers [26, 51] in a RANSAC [14] scheme. **Visual Localization using local features for retrieval and 2D-3D matching:** Image retrieval approaches promise to both reduce the cost of matching against features in the SfM model and achieving high quality matches by limiting the search to only a subset of the model[37]. Such approaches either remove features that don't belong to top-ranking images or perform an additional matching step to top-ranking images before running geometry verification using the obtained local feature matches. Our proposed algorithm provides an alternative to these two-step filtering approaches, by directly optimizing for compactness of nearest neighbor matches in the covisibility graph or 3D space _during_ the search. **Visual Localization using global features for retrieval and local features for 2D-3D matching:** Image retrieval using local features however has most recently lost attention from the community and instead global features (_e.g._, DELG-GLDv2 [9] and AP-GeM [38]) have dominated benchmarks [18]. While using global features offers significant speedups due to the much smaller database size, the full-image embeddings are not appropriate for high quality localization due to their global nature [18]. In order to obtain an accurate localization result, some approaches [41, 42] compute additionally local features, which are matched only between the query image and top-ranking images from the database. While there are attempts to concurrently compute local and global features to reduce cost/latency [9], the accuracy of the local feature keypoints remain inferior to approaches that compute dedicated local features [43]. **Local feature-based image retrieval techniques:** Despite the image retrieval community's recent focus on global features, local feature-based retrieval has a long history, with well-established methods [49, 36, 24, 55, 34]. Among these, the most relevant method today is the Aggregated Selective Match Kernels (ASMK), which continues to be explored recently in conjunction with deep-learned local features [52, 56, 58]. ASMK (like VLAD [5]) performs local descriptor aggregation and essentially produces high-dimensional global image representations, which are however sparse and can be searched efficiently. In contrast, our method operates directly on local descriptor space and avoids aggregation, which makes it more suitable to match against partial views and unique details that do not get lost in aggregation. **Approximate nearest neighbor methods:** Another related field is the area of proximity problems in high dimensional spaces with its many applications in computer vision [23, 7, 11, 2] (to name a few). The most common of this kind is nearest neighbor search, where given a set \(P\) of \(n\) points in a high-dimensional space \(R^{d}\) we wish to find the point(s) in \(P\) closest to a query point \(q\). Extensive research on this problem has led to a variety of interesting solutions, both exact and approximate [21]. In many use cases, indexed points in the "database" are equipped with additional attributes, such vector-valued attributes or simple scalars, such as an ID ("color") that indicates a grouping of points. 
The term Constrained Approximate Nearest Neighbors that we propose in this paper refers to a way to apply nearest neighbors in one space given constraints in the space of these attributes. The simplest such case is "colored nearest neighbor search": each point in \(P\) is assigned with an ID and for a given query point \(q\) (with or without colors), we want to use the IDs of points in \(P\) as constraints during the search. A simple example, which is the use case in this work, is to return nearest neighbors for all query points, provided that all of the neighbors have the same ID. The optimal result are those points in \(P\) that all have the same ID and optimize some metric, such as the sum of distances to the query points. Colored range searching and nearest neighbors (also known as "categorical range searching", or "generalized range searching") have been extensively studied in computational geometry since the 1990s [22, 16, 17]. The colored versions of nearest neighbor (or range searching) problems tend to be harder than their uncolored counterparts and several different problems and solutions were proposed, see e.g. [35]. To the best of our knowledge, no previous problem and solution fits into the requirement that we need in this work and the Constrained Approximate Nearest Neighbor problem we address here is new. ## 3 Method ### Ranking Images for Visual Localization using Constrained Approximate Nearest Neighbors We first propose a natural metric to rank cameras and then show that this ranking can be efficiently computed during the feature-matching stage instead of requiring post processing. For simplicity of presentation we consider the case of a single optimal camera/image from the index. This is without loss of generality, since in practice, we may use \(k\)-best cameras or simply weight matches by the rank of each image. **The metric:** We are given a large \(d\)-dimensional space containing local feature descriptors extracted from all images in a large collection. Denote \(I=\{0,1,2,\ldots\}\) the set of image IDs in that collection. We assign each local descriptor the ID of the image, \(i\in I\), it was computed from, so we obtain the set \(P\) of "ID-colored" points (see colors in Fig. 2 on the left). Then, at query time, for a query image with a set of features \(Q=\{q_{j}\}\) extracted from it, let \(d_{ij}=d(q_{j},NN_{i}(q_{j}))/R\) be the (normalized) Euclidean distance in descriptor space between the feature \(q_{j}\) to its nearest neighbor descriptor in image \(i\). \(R\) is some fixed maximum distance that we use for normalization such that \(d_{ij}\in[0,1]\). We then compute a score for each image \(i\) in the dataset \[s_{i}=\sum_{j}(1.0-d_{ij}^{\frac{p}{1-p}})^{\frac{1-p}{p}} \tag{1}\] and use it to rank all images with respect to the query image features \(q_{j}\in Q\). To obtain this per-image compact set of descriptors from the set of all indexed descriptors \(P\) (with their "ID-color"), we have to develop an efficient colored version of nearest neighbors. Such algorithm obtains the nearest neighbor of each \(q_{j}\) for all colors at once, provided that its distance is at most \(R\). We observe that depending on a tuned parameter \(p\), we can crop the distances at \(R\) such that all distances larger than \(R\) have score at most some very small value (say \(10^{-6}\)). This allows to get good bound on the runtime of the search for \(NN\). Figure 3 shows our metric. 
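Put concretely, once the per-image 1-NN distances are available, the score in (1) is a simple accumulation. A minimal sketch is given below; the dictionary representation of the nearest-neighbor distances and the default value of \(p\) are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of the ranking score (1): given, for each query feature
# q_j, the normalized distance d_ij to its nearest neighbor in image i
# (only for images with a neighbor within R), accumulate a per-image score.
import numpy as np

def image_scores(nn_dists, num_images, p=0.5):
    """nn_dists : list over query features; each entry maps image id i
    to the normalized 1-NN distance d_ij in [0, 1].
    Returns an array of scores s_i used to rank the database images."""
    e = p / (1.0 - p)
    scores = np.zeros(num_images)
    for per_image in nn_dists:
        for i, d in per_image.items():
            scores[i] += (1.0 - d ** e) ** (1.0 / e)
    return scores
```

Images with many features close to the query's features in descriptor space accumulate a large score, while neighbors beyond \(R\) contribute essentially nothing, which is why the fixed-radius search described next is sufficient.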
### Preliminaries To explain the proposed Constrained Approximate Nearest Neighbors algorithm we refer to standard tools like Approximate Nearest Neighbors (ANN) and Approximate Range searching (RS) and make the (common) assumption that there is a maximum distance, \(R\), known at local descriptor indexing time. We also assume that randomization is allowed, i.e. all results are correct with (arbitrary) high Figure 2: A visual depiction of CANN: the image on the left shows 3D points colored by the camera from which they were reconstructed. CANN leverages this information to retrieve feature matches that are consistently seen in the same camera. This contrasts with prior art (on the right), where unconstrainted feature matching returns many unrelated outlier matches (red), which then need to be filtered out subsequently by geometric verification to obtain inlier matches (green). probability. Details on the exact definitions of ANN and RS for the case of bounded distance can be found in [12]. We can assume (for simplicity of presentation) that ANN and RS data structures can be created in \(O(C_{I}(d,c)*n)\) and a point query takes \(O(C_{q}(d,c)+k)\) time, \(C_{q}(d,c)\) is a constant depending on the dimension \(d\) and the approximation factor, \(c\) and \(k\) is the maximum number of items it returns. For image retrieval, this runtime is multiplied by the number of features in the image, \(|Q|\). **Colored Nearest Neighbors vs Colored Range Searching** As can be seen from Equation 1, we need a colored NN data structure to compute the scores for all relevant images given one query point \(q_{j}\). Such algorithm returns for each \(q_{j}\) the set of \(1\)-NN in all cameras within radius \(R\). We see from the metric that cameras without such neighbor don't contribute to the sum, so we want as many neighbors with as low Euclidean distance from the query as possible. We are not aware of any efficient algorithm to perform this with a better time complexity than a brute force method using \(|I|\) separate NN structures (See Section 3.4). Fortunately, we can reduce this colored NN problem to a fixed \(R\) colored range searching which can be implemented efficiently. A reduction from the fixed radius decision problem: "is there a point within distance \(R\) from the query" to the approximate NN is well known from LSH [20] using a form of binary search over several \(R\)'s. While this approach isn't directly applicable for colored searches, we can use similar ideas as outlined in the following section. ### Colored Range Searching In this section we explain the colored nearest neighbor search for computing the scores in Eq. (1). While there are multiple versions of this problem, we're specifically interested in _colored range reporting:_ For a set of colored points in \(R^{d}\), report all the distinct colors in the query range. Even with approximations, this problem is computationally hard with \(O(C_{q}(d,c)+|I|)\)[35, 53] as lower bound on the runtime. For a large set of images this bound can be very high, yet in practice it can be solved quite efficiently by introducing the threshold distance \(R\). The most recent work [53] on this exact problem shows that the problem is already hard for low dimensional spaces, even with integer coordinates and considering only orthogonal queries (an axis-aligned box vs. a sphere). 
For a set of \(n\) colored points in three dimensions, the authors of [53] describe a randomized data structure with \(O(n*polylog(n))\) space that can report the distinct colors within an axis-aligned box using \(O(k*polylogog(n))\) time, with \(k\) as number of distinct colors in the range, assuming that coordinates are in \(\{1,...,n\}\). In this paper we show that with \(R\) known at index time and allowing for approximation, we can develop a more efficient data structure for the colored range search that allows us to efficiently compute the 1-NN across all images at once. Besides being essential for solving the Constrained Nearest Neighbors problem we believe that this data-structure is interesting on its own and beyond the problem of image localization. ### A brute force method using ANN There exist two straightforward algorithms for colored range searching: First build \(|I|\) separate regular nearest neighbor structures, one for each color in \(O(C_{q}(d,c)*|P_{I}|*|I|)\) indexing time, with \(|P_{I}|\) as the color \(I\) in \(P\). Then call them sequentially for each query point \(q_{j}\) with cost \(O(C_{q}(d,c)\times|I|)\). This is independent of \(R\) and thus much worse than the above lower bound. The advantage is that the runtime of this version is asymptotically independent of the threshold radius \(R\). The second simple algorithm, that we call CANN-RS, is applicable for small thresholds \(R\) using Randomized Range Searching [12]: We index points into a RS data-structure for radius \(R\) and then for each query feature, enumerate all neighbors within \(R\), keeping a tally of neighbors per image \(I\). Because we only retrieve a small subset of points within radius \(R\) we only obtain a few colors (cameras or images) with number of features, much less than \(|I|\). This approach has runtime \(O(C_{q}(d,c)+k)\), here, \(k\) is the expected number of neighbors in each such range query over all images. The drawback is that for each query feature we must enumerate _all indexed points in the range \(R\)_ where most of them do not realize any nearest neighbor for any color. This number (\(k\) above) can still be quite large. ### Efficient algorithm using Random Grids To implement an efficient variant of the algorithm (CANN-RG), we leverage Random Grids [2, 1], an efficient \(c\)-approximate nearest neighbor algorithm based on applying randomized rotations and shifts to high-dimensional vectors prior to hashing them to a set of keys. We extend the Random Grids to support colored range searching. We show that our algorithm avoids the enumeration of all points in Figure 3: Our score for \(R=1\) and various \(p\) different values in Equation 2. \(p\) is a parameter of our metric that we tune upfront and is used to compute \(s_{i}\) for all \(d_{i,j}\). the range \(R\) (as in CANN-RS) and doesn't require distance computation in descriptor space which can take most of the time in practice due to high dimensional feature descriptors. Our algorithm works as follows: For each query point \(q_{j}\) CANN-RG should report all colors in the range \(R\) from \(q_{i}\) approximately by factor \(c>1\), i.e. any color that has a feature at distance at most \(R\) is reported with high probability and any reported color (image) has a feature at distance at most \(cR\). The points are indexed using Algorithm 1, where we store a set of distinct integers using hash-sets and use hashing to create a key for each non-empty grid cell in a high dimensional space following [2, 1]. 
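A rough Python sketch of the indexing and query steps (Algorithms 1 and 2) may help make the data structure concrete. The class structure, the QR-based random rotations, and all parameter choices below are illustrative assumptions; the implementation evaluated in Section 4 is in C++.

```python
# A minimal sketch of colored range-search indexing and querying with
# random grids: only the color (image id) of each point is stored per cell.
import numpy as np
from collections import defaultdict

class ColoredRandomGrids:
    def __init__(self, R, c, d, L, seed=0):
        rng = np.random.default_rng(seed)
        self.w = R * c / np.sqrt(d)                      # grid cell size
        # L random rotations (orthogonal matrices) and random shifts
        self.rots = [np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(L)]
        self.shifts = [rng.uniform(0, self.w, size=d) for _ in range(L)]
        self.grids = [defaultdict(set) for _ in range(L)]

    def _key(self, p, l):
        cell = np.floor((self.rots[l] @ p + self.shifts[l]) / self.w)
        return tuple(cell.astype(int))

    def index(self, points, colors):
        """Store the color of each point in every grid cell it falls into."""
        for p, col in zip(points, colors):
            for l in range(len(self.grids)):
                self.grids[l][self._key(p, l)].add(col)

    def query(self, q):
        """Return the set of distinct colors with a point within roughly cR of q."""
        found = set()
        for l in range(len(self.grids)):
            found |= self.grids[l].get(self._key(q, l), set())
        return found
```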
At query time we retrieve points from the grid indices using Algorithm 2. Note that since we're only interested in the color of points within range \(R\), the index only holds point colors not point coordinates and the query results similarly only comprise colors without exact distances. ``` Data:\(P\): \(d\)-points, \(\{p_{i}\}\) with colors \(color(p_{i})\) for each \(p_{i}\in P\), \(R>0\): Range, \(c>1\): Approximation factor Result: Index \(RG\) for the colors \(color(p)\) such that for a given query point \(q\), all colors at distance at most \(R\) from \(q\) are reported quickly. FunctionIndexColors(\(P\), \(R\), \(c\)): Impose a grid of cell size \(w=R*c/\sqrt{d}\) on \(P\) Create \(L\) such grids for \(L\) randomly rotated and translated versions of \(P\) /* \(L\) is determined from the analysis below */ for all\(p_{i}\in P\)do Add \(color(p_{i})\) to a distinct set of colors hashed in each of \(L\) cells the transformed \(p_{i}\) falls into end for /* Each cell now contains a set of distinct colors of all images that have point inside it */ /* NOTE: We do not store the coordinates of the points, just their color (16 bits per color) */ return The set of hashed cells with their lists of colors ``` **Algorithm 1**Efficient colored Range Searching indexing (CANN-RG-INDEX-R) AnalysisIn this section we analyze indexing and query algorithms of CANN-RG. First we make concrete the constants \(C_{I}(d,c)\) and \(C_{q}(d,c)\) which appear in all Random Grids implementations: For the grid cell of size \(l*c/\sqrt{d}\), a random vector of length \(l\) in \(R^{d}\) will be captured in a given cell with probability at least \(e^{-\sqrt{d}/w}=e^{-d/c}\)[1]. We thus need \(L=e^{d/c}\) random grids in Algorithm 1 to ensure that, if there is a point in \(P\) at distance at most \(R\) from \(q\), its color will be found in at least one of the grid cells with constant probability. On the other hand, any color of a point found in a grid cell that also contains \(q_{t}\) (the rotated and translated version of \(q\) for that grid) is at distance at most \(cR\) from \(q\) due to the size of the grid cells. Because we do not care about the coordinates of indexed points, we only store each color at most once per grid cell. Therefore the data structure build time \((C_{I}(d,c)*|P|)=O(e^{d/c}*|P|)\) and storage \(O(e^{d/c}*|P|)\) are linear in \(|P|\). For each query point \(q\), we retrieve the points in the grid cells where all rotated and shifted versions of \(q\) fall into. The runtime is then \(O(e^{d/c}+k_{c})=O(C_{q}(d,c)+k_{c})\) ignoring the constant for matrix rotation that depends on \(d\). Note that for Random Grids implementation we have \(C_{I}(d,c)=C_{q}(d,c)\). In contrast to \(k\) in CANN-RS, \(k_{c}\) here refers to the number of _distinct colors_ found in the enumerated cells. As in [2], the probability of success can be amplified to \(1-\gamma\) by repeating the randomized indexing \(\ln(1/\gamma)\) times, which increases the data structure, space and query time accordingly. The number of grids that we need in practice is much smaller than the above worst case depending on the intrinsic dimension of the data [2]. Constructing and querying CANN-RGThe above algorithms allow indexing colors of \(P\) for a given \(R\) such that for any query point \(q\), the colors that have points at distance at most \(R\) from \(q\) are reported quickly. Given that we omitted the computation of point distances to enable efficient queries, we're still missing a way to compute the scores in Equation 1. 
We now show how we move from fixed radius Range Search to 1-NN. To fill this gap, let \(r\) be a constant denoting the minimum distance between points in \(P\) that we aim to distinguish. For each \(l\in\{rc^{0},rc^{1},...,R\}\), we generate a sequence of random grid indexes \(B^{l}=\{B^{l}_{i},...,B^{l}_{n}\}\) of radius \(l\). Then, given query point \(q\), we query \(q\) in all \(B_{i}\) in order and keep only the closest (first observed) color. This maps the list of colors to the \(B_{i}^{l}\) they came from and thus to a \(c\)-approximate distance from the query. Given these minimum distances, Equation 1 provides a score per point and thus a ranking of all index images by summing over all \(q_{j}\in Q\). This scoring operation increases the runtime of the query by logarithmic factor of \(R/r\). Note that CANN-RG is output sensitive on \(k_{c}\), the number of actual neighbor colors we find for each query. ## 4 Experiments ### Experimental setup Datasets:We evaluated our method on four public datasets from [18], "Baidu-Mall","Gangnam Station","RobotCar Seasons" and "Aachen Day-Night v1.1". These datasets demonstrate performance in "regular" outdoor scenarios as well as repetitive indoor environments. "RobotCar Seasons" and "Aachen Day-Night v1.1" have day and night subsets. Metrics:We evaluated two metrics: (1) The image retrieval performance using the same equal weighted barycenter (EWB) interpolation as in [18] which is based solely on the retrieved images and their known poses. (2) The effect on final localization quality using the existing localization pipeline from [18] where camera localization is computed using only features from the top-k ranking images. Local and global feature baselines:Following [18, 33], we compared our method against state-of-the-art global features AP-GeM [38], DELG [9], DenseVLAD [57], NetVLAD [4]. For local-features we compare performance and cost for both indexing and query to ASMK [54] with HOW and FIRE local features. Results for the latter were not previously published and only recently made available on the codebase for image retrieval methods. R2D2 features were computed using code from the same codebase. Storage cost for the baselines is discussed analytically. Local feature types:We experiment with three state-of-the-art local image features: HOW [56], FIRE [59] and R2D2 [40]. These three approaches have different operation characteristics and thus show the power of CANN in being adaptable to different local features. HOW and FIRE are designed for image retrieval, and are not suitable to the local feature matching part of the visual localization pipeline. R2D2, on the other hand, is designed for image matching tasks and a common choice in structure-from-motion and visual localization evaluations [25, 18]. We use a recent and lighter R2D2 version (referred to as "Feather2d2 20k") described in [18]'s codebase, where we can download the local features (the model is not publicly available). When using HOW and FIRE, our visual localization system requires indexing two different feature types: HOW for retrieval and R2D2 for matching. When using R2D2, we only need to index one feature type - which is appealing since it simplifies the overall system. For our experiments we used \(1000\) per image for all indexed and query images and all methods. Implementation details:We implemented CANN-RS and CANN-RG (Section 3) in C++, given that it performs well for low intrinsic dimensions of the features: \(32\)D for R2D2 and \(128\)D for HOW. 
Even though CANN-RS can be implemented with any of-the-shelf range search data structures, we used Random Grids also here as it has the ability to exploit the fact that we know the range in advance. The Random Grids were adjusted to different intrinsic dimensions by tuning its parameters, which is also required to trade off performance vs runtime using the \(c\)-approximation. Both our algorithms are very simple, trivially parallelized and are very fast (down to 20ms per query image). Tuning:The parameters of our metric are \(p\) and \(R\) and we tune them for each feature type separately. Note that in contrast to ASMK which creates a codebook that depends on the distribution of the data, CANN-RG and CANN-RS only tune for the metric itself. One can therefore provide theoretic bounds of the (approximate) algorithmic result quality for a given metric. This may make CANN more resilient to different datasets which is not the case for codebook methods, even though the latter can perform better if the distribution of features between query and training set matches. For CANN-RS, we set the grid cell size to slightly above \(1/\sqrt{d}\) and the number of grids accordingly to balance result quality and runtime (see Section 3.4). For CANN-RG we set \(c=1.1\) in all datasets and the metric parameters (\(p,R\)) were tuned using a subset of \(500\) queries from "Baidu-Mall" separately per local feature type. To the best of our knowledge, the datasets of [18] provide no tune/eval/test split and only the "Baidu-Mall" has ground-truth available to enable tuning. For ASMK we only evaluated R2D2 features, taking results for other features from [18] or used previously unpublished results provided by the authors. We train the ASMK codebook on "GangnamStyle" as it is the largest set among the four. To validate generalization, we used the same set of parameters for evaluation on all other datasets. ### Results As mentioned above, we evaluate the CANN-RG and CANN-RS algorithms on four large-scale datasets, in an outdoor, urban setting and covering an indoor scenario. Following [18, 33] we evaluate across two regimes/metrics ("EWB" and "SFM") discussed above. Figure 3 shows our results of all methods and datasets with one figure per each metric. In general, we can observe local features outperforming global features almost everywhere and by a large margin. Datasets that are more appropriate for global features are those that have many approximately similar viewpoints in the index so there is almost always one close neighbor for a given query image. Local features are naturally better where the query contains only partial overlap with the indexed images. Qualitative results are available in the appendix. RuntimeOne of the main advantages of CANN-RG (and CANN-RS as well) comparing to ASMK for image retrieval using local features is its simplicity and its runtime in both indexing and query. Table 1 shows numbers across datasets using HOW features. Since our implementation of CANN-RG and CANN-RS does not use GPU, we compared runtime on CPU using 48 cores. The table does not contain the codebook creation for ASMK and tuning for CANN-RG. CANN-RG has a nice run-time/quality trade-off: In its upper bound quality, we have the results of CANN-RS and with CANN-RG can pay in quality for much better runtime. The significance of this is that CANN-RG can achieve runtimes of a few milliseconds for query image, which is otherwise only possible with global features. 
Table 1 provides results demonstrating the trade-off of runtime and quality. To obtain a cheaper, yet representative quality measure, we compute the EWB using the top-1 retrieved image. The indexing time for CANN-RG is larger due to the fact that we have factor \(O(\log R)\) more data structures. Preliminary results on general image retrievalTo re-emphasize the generalization of the algorithm and it's scalability (20-50ms per query image), we also evaluated it for general image retrieval on the ROxford dataset. Global retrieval benchmarks evaluate the full rank of all indexed images, which requires also scoring the tail of the retrieved images. Since ranking the tail of the index is not typically meaningful for local features, we evaluated a combination of CANN with global features by computing a weighted average of DELG and CANN-RG+HOW or CANN-RG+FIRE, for all image scores. We compare CANN and this combined approach to the SOTA for global/local features. Very recently, a new method called Correlation Verification [27] was published which is, to our knowledge the best performing method on the ROxford dataset. Correlation Verification however includes (significantly expensive) spatial verification of local features and is thus not comparable to CANN-RG which doesn't use geometry or spatial reasoning of features (out of the cameras). Like for localization, spatial reasoning is an additional step that can be applied on top of CANN-RG. Table 2 shows comparisons of SOTA approaches including [27] with our proposed approach (bold). Limitations.Using local features throughout the stack requires that the entire map fit in memory. Approaches that use global features can be more easily scaled, in that the local features per spatial region are kept out-of-memory and are only loaded after image retrieval. ## 5 Conclusions In this paper, we proposed CANN, a novel nearest neighbor searching approach that finds the best matches in both appearance and geometry space to improve visual localization using only local features. Unlike the state-of-the-art in the field, which uses global features for image retrieval and local features for 2D-3D matching, our approach uses only local features, while providing significantly better performance than the state-of-the-art at very competitive runtime cost. By providing the relevant metric and theoretical foundation of the algorithm, as well as two efficient algorithmic solutions, we hope to inspire a revived interest in solving visual localization with local features only. \begin{table} \begin{tabular}{|l|c c c|} \hline & \multicolumn{3}{c|}{Index} \\ \hline Dataset & CANN-RS & CANN-RG & ASMK \\ \hline Baidu & 0.88 & 9.08 & 253.34 \\ Gangnam & 6.41 & 168.49 & 1467.17 \\ Aachen & 11.19 & 244.09 & 2782.43 \\ Robotcar & 33.02 & 852.12 & 8104.98 \\ \hline & \multicolumn{3}{c|}{Query} \\ \hline Baidu & 0.37(12.47) & 0.02(12.12) & 0.47(12.52) \\ Gangnam & 1.6(12.66) & 0.05(11.35) & 0.41(11.03) \\ Aachen & 1.38(29.1) & 0.06(28.8) & 0.48(28.0) \\ Robotcar & 5.29(94.2) & 0.04(93.6) & 0.53(91.0) \\ \hline \end{tabular} \end{table} Table 1: Indexing and average runtime per query image (seconds) for CANN-RS, CANN-RG and ASMK using HOW features. An indication of quality/runtime trade-off can be taken from the simplified EWB metric, computed using the top-1 retrieved image and provided in parentheses. 
\begin{table} \begin{tabular}{l c c} \hline (A) Local feature aggregation & Medium & Hard \\ DELF-D2R-R-ASMK* (GLDv1) [52] & 73.3 & 47.6 \\ R50-HOW-ASMK,n=2000 [15] & 79.4 & 56.9 \\ \hline (B) Global feature & & \\ R101-GeM [13, 48] & 65.3 & 39.6 \\ R101-GeM-AP (GLDv1) [39] & 66.3 & 42.5 \\ R101-GeM+SOLAR (GLDv1) [32] & 69.9 & 47.9 \\ R50-DELG (Global-only, GLDv2-clean) [10] & 73.6 & 51.0 \\ R101-DELG (Global-only, GLDv2-clean) [10] & 76.3 & 55.6 \\ R50-DOLG (GLDv2-clean) [60] & 80.5 & 58.8 \\ R101-DOLG (GLDv2-clean) [60] & 81.5 & 61.1 \\ R101-CVNet-Global (GLDv2-clean) [27] & 80.2 & 63.1 \\ \hline **DELG+CANN-FIRE (weighted)** & **82.4** & 62.3 \\ **DELG+CANN-HOW (weighted)** & **83.3** & **64.2** \\ \hline \end{tabular} \end{table} Table 2: Results of DELG+CANN compared to state-of-the-art reranking (local aggregation and global features) on ROxford (numbers for related work from [27]) ## Appendix A Additional Qualitative Results We include additional qualitative results in Figures 4, 5, 6, 7, 8, and 9, taken from all datasets, showing that CANN retrieves good results also in images with heavy occlusions. Cases like these, where there is only partial overlap between the query image and the database images, are very difficult for global features. We use HOW [56] for local features with both CANN-RG (ours) and ASMK [55]. The query image is on the left and the top 5 retrieved images are on the right. Our method retrieves all correct images, while other methods occasionally retrieve incorrect images ranked high among the top 5. We see that some global methods retrieve incorrect images due to scene clutter or high-frequency textures, while CANN provides a diverse set of correct results. In several cases, we see that CANN+HOW outperforms ASMK+HOW. Retrieved images are marked red (bad) or green (good). Figure 4: Robotcar Figure 5: Gangnam Figure 6: Baidu Figure 7: Baidu Figure 8: Aachen Figure 9: Aachen
2308.06855
Distance preservation in state-space methods for detecting causal interactions in dynamical systems
We analyze the popular ``state-space'' class of algorithms for detecting causal interaction in coupled dynamical systems. These algorithms are often justified by Takens' embedding theorem, which provides conditions under which relationships involving attractors and their delay embeddings are continuous. In practice, however, state-space methods often do not directly test continuity, but rather the stronger property of how these relationships preserve inter-point distances. This paper theoretically and empirically explores state-space algorithms explicitly from the perspective of distance preservation. We first derive basic theoretical guarantees applicable to simple coupled systems, providing conditions under which the distance preservation of a certain map reveals underlying causal structure. Second, we demonstrate empirically that typical coupled systems do not satisfy distance preservation assumptions. Taken together, our results underline the dependence of state-space algorithms on intrinsic system properties and the relationship between the system and the function used to measure it -- properties that are not directly associated with causal interaction.
Matthew O'Shaughnessy, Mark Davenport, Christopher Rozell
2023-08-13T22:35:52Z
http://arxiv.org/abs/2308.06855v1
# Distance preservation in state-space methods for detecting causal interactions in dynamical systems ###### Abstract We analyze the popular "state-space" class of algorithms for detecting casual interaction in coupled dynamical systems. These algorithms are often justified by Takens' embedding theorem, which provides conditions under which relationships involving attractors and their delay embeddings are continuous. In practice, however, state-space methods often do not directly test continuity, but rather the stronger property of how these relationships preserve inter-point distances. This paper theoretically and empirically explores state-space algorithms explicitly from the perspective of distance preservation. We first derive basic theoretical guarantees applicable to simple coupled systems, providing conditions under which the distance preservation of a certain map reveals underlying causal structure. Second, we demonstrate empirically that typical coupled systems do not satisfy distance preservation assumptions. Taken together, our results underline the dependence of state-space algorithms on intrinsic system properties and the relationship between the system and the function used to measure it -- properties that are not directly associated with causal interaction. ## 1 Introduction Many topics of scientific inquiry involve the fundamental mathematical problem of using observational time series data to infer the causal structure underlying coupled dynamical systems [1, 2, 3]. For example, determining regions of the brain in which seizures originate requires inferring causal relationships in neural activity that is frequently modeled by coupled dynamical systems [4]. Developing mechanistic understanding of climate systems involves detecting causal interactions in similar coupled dynamical system models [2]. Analogous applications arise in fields from engineering and economics to political science and public policy. Mathematically, suppose we have two potentially coupled dynamical systems with states \(x\) and \(y\): \[\begin{split}\dot{x}&=f(x,y)\\ \dot{y}&=g(x,y).\end{split} \tag{1}\] We say that \(x\) drives \(y\) (\(x\to y\)) if the differential equation governing \(y\) has a nontrivial dependence on \(x\), so that a perturbation to \(x\) produces a change in the trajectory of \(y\)[5, 6]. Our goal is to determine whether \(x\to y\) from the time series observations \(\{x_{t},y_{t}\}_{t=1}^{T}\). The "state-space" category of algorithms infer causal structure in coupled dynamical systems using an idea, inspired by Takens' theorem, that has been termed the "closeness principle" [7]. These procedures aim to determine whether \(x\to y\) from the relationship between inter-point distances on subsystems' delay embeddings. (See Sections 2.1 and 2.2 for a precise description.) But while these algorithms are typically based on how this relationship between delay embeddings _preserves distances_, Takens' theorem guarantees only the weaker property of its _continuity_ (Figure 1). There is thus a gap between the distance-preservation-based operation of state-space algorithms and the continuity-based theory they are inspired by, making it difficult for practitioners to know when state-space algorithms might be appropriate for a problem at hand. In this paper we aim to reduce this gap by studying the closeness principle explicitly through the lens of distance preservation. 
We do so by making use of the recent _stable_ Takens' theorem [8, 9], which shows that, under stronger conditions on the underlying dynamical systems, a delay embedding preserves not only _topological_ structure (which is not directly tested by closeness-principle-based methods), but also _geometric_ structure (which is more directly tested by closeness-principle-based methods). Our goal is to use the stable Takens' theorem to gain understanding about the settings under which the closeness principle holds and causal inference techniques based on it may be theoretically justified. The main contributions of this paper are twofold. We first theoretically connect the causal structure of a system with properties of a key mapping in the restricted case of linear systems and observation functions (Section 3.1), and provide a theoretically-grounded test for causal interaction based on the closeness principle applicable to both linear and nonlinear systems (Section 3.2). Our theoretical results build on the stable Takens' theorems of [8], [9] by applying them to heuristics used by state-space methods, and complement recent theoretical work on these algorithms that is based on continuity rather than distance preservation [10]. Second, we provide empirical results that show the effectiveness of many state-space algorithms eludes straightforward distance-preservation-based explanations (Section 4). _The main message of this paper is that, while it is possible to provably identify causal interaction by analyzing how distances are preserved between delay embeddings, such tests are strongly dependent on system- and measurement-dependent properties that can vary significantly between systems and are often impossible to determine in practice._ ## 2 Background Sections 2.1 and 2.2 describe the delay embedding procedure, the closeness principle, and related tests for causal interaction. Sections 2.3 and 2.4 then set the stage for our theoretical results with precise statements of Takens' theorem and the linear stable Takens' theorem. ### Delay embeddings and detecting causal interaction Let the evolution of states \(x\in\mathbb{R}^{n_{x}}\) and \(y\in\mathbb{R}^{n_{y}}\) be described by dynamical systems \(\dot{x}=f(x,y)\) and \(\dot{y}=g(x,y)\). We say that \(x\) drives \(y\) (\(x\to y\)) if \(g\) depends nontrivially on \(x\) (so that a perturbation to \(x\) will affect the trajectory of \(y\)), and similarly that \(y\) drives \(x\) (\(y\to x\)) if \(f\) depends nontrivially on \(y\) (so that a perturbation to \(y\) will affect the trajectory of \(x\)). We use the notation \(x\leftrightarrow y\) to indicate that both \(x\to y\) and \(y\to x\), and the notation \(x\perp y\) to indicate that neither \(x\to y\) nor \(y\to x\). In this paper we restrict our attention to systems that are governed by deterministic dynamics and constrained to a low-dimensional attractor after some time \(t^{\prime}\). We denote by \(M_{x}\), \(M_{y}\), and \(M_{xy}\) the attractors of \(x\), \(y\), and \((x,y)\) such that (perhaps after a period of transient behavior) \(x(t)\in M_{x}\subset\mathbb{R}^{n_{x}}\), \(y(t)\in M_{y}\subset\mathbb{R}^{n_{y}}\), and \((x,y)(t)\in M_{xy}\subset\mathbb{R}^{n_{x}+n_{y}}\).1 Throughout the paper we assume that data is collected once the system \((x,y)\) has reached its attractor -- that is, that the observed time series do not contain transients.
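To make this setting concrete, the following minimal sketch (our code, not from the paper; the particular maps and parameters are arbitrary illustrative choices) simulates a toy unidirectionally coupled pair of maps in which \(x\to y\) and discards the initial transient, so the retained samples approximate points on the attractor as assumed above.

```python
import numpy as np

def simulate_coupled_maps(C=0.4, T=5000, transient=1000, seed=0):
    """Toy unidirectionally coupled logistic-type maps (x -> y).

    x evolves autonomously; y is driven by x with coupling strength C.
    The first `transient` samples are discarded so the returned series
    contain no transients, matching the standing assumption in the text.
    """
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(0.2, 0.8, size=2)
    xs, ys = [], []
    for t in range(T + transient):
        x_next = 3.8 * x * (1.0 - x)
        y_next = (1.0 - C) * 3.7 * y * (1.0 - y) + C * x
        x, y = x_next, y_next
        if t >= transient:
            xs.append(x)
            ys.append(y)
    return np.array(xs), np.array(ys)

x_series, y_series = simulate_coupled_maps()
```

Setting \(C=0\) decouples the two maps (\(x\perp y\)); any \(C>0\) makes \(x\) a driver of \(y\) in the sense defined above.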
Figure 1: Takens’ theorem guarantees the mapping between contemporaneous points on delay embeddings is diffeomorphic (continuous), but state-space methods are based on a stronger test of whether close distances are preserved between embeddings. This paper examines state-space methods through the lens of the stable Takens’ theorem [8], [9], which provides conditions under which distance preservation can be guaranteed. Using a scalar observation function \(\varphi\colon M\to\mathbb{R}\), points on an attractor \(M\) can be mapped to points on a _delay embedding_ in \(N\subset\mathbb{R}^{m}\) as [11] \[\Phi_{\varphi}(x_{t},y_{t})=\left[\varphi(x_{t},y_{t}),\ \varphi(x_{t-\tau},y_{t-\tau}),\ \ldots\ \varphi(x_{t-(m-1)\tau},y_{t-(m-1)\tau})\right]^{T}, \tag{2}\] where the observation time lag \(\tau\) and embedding dimension \(m\) are parameters typically chosen to maximize some heuristic notion of information preservation [12, 13].2 We will be interested in observation functions with either \(x\), \(y\), or \((x,y)\) as their domains; for example, Footnote 2: Note that we have suppressed the dependence of \(\Phi_{\varphi}\) on past time lags of \(x_{t}\) and \(y_{t}\). \[\Phi_{\gamma_{x}}(x_{t})=\left[\gamma_{x}(x_{t}),\ \gamma_{x}(x_{t-\tau}),\ \ldots\ \gamma_{x}(x_{t-(m-1)\tau})\right]^{T}\] uses the scalar observation function \(\gamma_{x}\colon M_{x}\to\mathbb{R}\) to map \(M_{x}\) to the delay embedding \(N_{x}\subset\mathbb{R}^{m}\). Takens' Theorem shows that, under conditions described in Section 2.3, there is a diffeomorphic relationship between an attractor \(M\) and its delay embedding \(N=\Phi_{\varphi}(M)\)[14, 11]. This implies that the mapping \(\Phi_{\varphi}\colon M\to N\) is diffeomorphic, and thus continuous with continuous inverse. These relationships between attractors and their embeddings expose a property that state-space methods leverage to detect causal interactions [15, 16, 17]. Suppose that \(x\to y\), so that the dynamical systems governing \(x\) and \(y\) can be written as \(\dot{x}=f(x)\) and \(\dot{y}=g(x,y)\) and the attractors \(M_{x}=\{x_{t}\}\) and \(M_{xy}=\{(x_{t},y_{t})\}\) are well-defined. With a fortuitous choice of observation functions, Takens' theorem then guarantees (see Section 2.3) that the delay mappings \(\Phi_{\gamma_{x}}\colon M_{x}\to N_{x}\) and \(\Phi_{\varphi_{y}}\colon M_{xy}\to N_{y}\) are bicontinuous. Because the projection \(\pi_{x}\colon M_{xy}\to M_{x}\) defined as \((x_{t},y_{t})\mapsto x_{t}\) is also continuous, there exists a continuous mapping from \(N_{y}\) to \(N_{x}\) \[N_{y}\stackrel{{\Phi_{\varphi_{y}}^{-1}}}{{\longrightarrow}}M_{xy}\stackrel{{\pi_{x}}}{{\longrightarrow}}M_{x}\stackrel{{\Phi_{\gamma_{x}}}}{{\longrightarrow}}N_{x}\] defined as \(\Psi_{y\to x}=\Phi_{\gamma_{x}}\circ\pi_{x}\circ\Phi_{\varphi_{y}}^{-1}\) (Figure 2) [7, 10]. The converse, however, does not necessarily hold when \(x\to y\): the corresponding mapping from \(N_{x}\) to \(N_{y}\), \(\Psi_{x\to y}=\Phi_{\varphi_{y}}\circ\iota_{y}\circ\Phi_{\gamma_{x}}^{-1}\), involves the inclusion map \(\iota_{y}\colon x_{t}\mapsto(x_{t},y_{t})\). Because \(\iota_{y}\) is not (in general3) continuous, when \(x\to y\) the map \(\Psi_{y\to x}\colon N_{y}\to N_{x}\) is continuous while \(\Psi_{x\to y}\colon N_{x}\to N_{y}\) is not. Footnote 3: Note, however, that when coupling from \(x\) to \(y\) becomes strong enough, the system can enter “generalized synchrony” [16] in the sense that there exists a function \(H\) such that \(y(t)=H(x(t))\).
In this case \(y\) can also be predicted from \(x\) and it becomes impossible to distinguish between the cases \(x\to y\) and \(y\to x\). The fact that \(\Psi_{y\to x}\) is continuous when \(x\to y\), but (generally) not when \(y\to x\) suggests a method for inferring causal structure from time series observations of \((x,y)\). However, it is difficult to determine the existence or nonexistence of this continuous mapping from data. In practice, heuristics based on _relative distances_ on \(N_{x}\) and \(N_{y}\) are commonly used [7, 16, 17, 18, 19, 20, 21, 22]. We state the general principle used by these methods as a conjecture: **Conjecture 1** ("Closeness principle").: _When \(x\to y\) in the underlying system, pairs of points that are close to each other on \(N_{y}\) will correspond to pairs of points that are also close on \(N_{x}\). In notation, when \(x\to y\) the map \(\Psi_{y\to x}\colon N_{y}\to N_{x}\) preserves close distances._ Figure 2: Attractors of dynamical systems and their delay-coordinate embeddings when \(x\to y\). A critical distinction is the part of system \((x,y)\) that each observation function collects. Observation function \(\gamma_{x}\colon M_{x}\to\mathbb{R}\) has only the \(x\) subsystem as its domain. Observation function \(\varphi_{y}\colon M_{xy}\to\mathbb{R}\), meanwhile, formally has the complete system \((x,y)\) as its domain, but depends nontrivially only on the \(y\) subsystem. (That is, \(\varphi_{y}\) can be written as the composition of a function \(\gamma_{y}\colon M_{y}\to\mathbb{R}\) with the projection \(\pi_{y}\colon(x,y)\mapsto y\).) We will be concerned with the conditions under which \(\Phi_{\varphi_{y}}\) reveals information about the complete system \((x,y)\) despite collecting only information about \(y\). ### Closeness-principle-based methods for causal inference A growing literature of state-space methods has applied versions of Conjecture 1 using distance-based heuristics such as how distances are preserved between pairs of points on \(N_{y}\) and their contemporaneous point pairs on \(N_{x}\)[18, 19, 20, 23, 16], recurrence plots [24, 25], or more direct comparisons of corresponding distances on \(N_{y}\) and \(N_{x}\)[7, 15]. Here we outline a few representative approaches that are discussed further in Sections 4 and 5. One family of techniques operationalizes a version of the closeness principle that is based on how nearest neighbor relationships are preserved.4 Denote the index of the \(j^{\text{th}}\) spatial nearest neighbor of \(N_{x}[i]\) by \(n_{ij}^{(x)}\), and the index of the \(j^{\text{th}}\) spatial nearest neighbor of \(N_{y}[i]\) by \(n_{ij}^{(y)}\).5 Define \(D_{i}(X)=\frac{1}{T^{\prime}-1}\sum_{j=1,j\neq i}^{T^{\prime}}\|N_{x}[i]-N_{x}[j]\|_{2}^{2}\) to be the mean-squared distance between points on \(N_{x}\), \(D_{i}^{k}(X)=\frac{1}{k}\sum_{j=1}^{k}\left\|N_{x}[i]-N_{x}[n_{ij}^{(x)}]\right\|_{2}^{2}\) to be the mean-squared distance from a point on \(N_{x}\) to its \(k\) nearest neighbors, and \(D_{i}^{k}(X\mid Y)=\frac{1}{k}\sum_{j=1}^{k}\left\|N_{x}[i]-N_{x}[n_{ij}^{(y)}]\right\|_{2}^{2}\) to be the mean-squared distance from a point on \(N_{x}\) to its \(k\) "mutual" neighbors. Based on prior measures of [18, 23], Andrzejak et al. propose Footnote 4: While based on distance preservation, methods based on nearest neighbor preservation may be more robust to system properties such as observation noise.
The preservation of nearest neighbors underlies other algorithms commonly applied to delay embeddings such as the false nearest neighbors method for determining embedding dimension [26]. Footnote 5: In computing these nearest neighbor indices, the \(W\) temporal neighbors (the “Theiler window”) are often excluded. \[M(X\mid Y)=\max\left\{\frac{1}{T^{\prime}}\sum_{i=1}^{T^{\prime}}\frac{D_{i}(X)-D_{i}^{k}(X\mid Y)}{D_{i}(X)-D_{i}^{k}(X)},\,0\right\} \tag{3}\] as a measure of directional influence of \(x\) on \(y\)[19]. This metric reflects the intuition that as the coupling from \(x\) to \(y\) grows stronger, points that are spatially close on \(N_{y}\) are more likely to correspond to points that are spatially close on \(N_{x}\): when \(x\perp y\), \(D_{i}^{k}(X\mid Y)\approx D_{i}(X)\), but when \(x\to y\), nearest neighbor relationships on \(N_{y}\) become informative of nearest neighbor relationships on \(N_{x}\) so that \(D_{i}^{k}(X\mid Y)\approx D_{i}^{k}(X)\).6 Statistical tests based on surrogate techniques are proposed in [19]. Rank-based versions of these measures have been proposed as variants that are more robust to attractor geometry and noise statistics. Define \(g_{ij}\) to be the rank of the distance from \(N_{x}[i]\) to \(N_{x}[j]\), and let \(G_{i}^{k}(X\mid Y)=\frac{1}{k}\sum_{j=1}^{k}g_{i,n_{ij}^{(y)}}\). Chicharro and Andrzejak [20] propose \[L(X\mid Y)=\frac{1}{T^{\prime}}\sum_{i=1}^{T^{\prime}}\frac{G_{i}(X)-G_{i}^{k}(X\mid Y)}{G_{i}(X)-G_{i}^{k}(X)}, \tag{4}\] where \(G_{i}(X)=\frac{T^{\prime}}{2}\) is the mean rank between points in \(N_{x}\) and \(G_{i}^{k}(X)=\frac{k+1}{2}\) is the mean rank of a point to one of its \(k\) nearest neighbors. A Wilcoxon signed rank test can be performed to evaluate the significance of \(\Delta L=L(X\mid Y)-L(Y\mid X)>0\). The popular convergent cross-mapping (CCM) algorithm of [17] evaluates how well \(N_{y}\) can construct an estimate of \(x\) using the mutual-neighbor-based "simplex projection" method of [28]. Here, the estimate \[\widehat{M}_{x}[i]\mid N_{y}=\sum_{j=1}^{m+1}w_{j}M_{x}[n_{ij}^{(y)}]\] is computed using the exponential weights \[w_{j}=\frac{\exp\left(-\left\|N_{y}[t]-N_{y}[n_{i,j}^{(y)}]\right\|/\left\|N_{y}[t]-N_{y}[n_{i,1}^{(y)}]\right\|\right)}{\sum_{k=1}^{m+1}\exp\left(-\left\|N_{y}[t]-N_{y}[n_{i,k}^{(y)}]\right\|/\left\|N_{y}[t]-N_{y}[n_{i,1}^{(y)}]\right\|\right)}.\] The existence of the relationship \(x\to y\) is inferred based on whether the estimate \(\widehat{M}_{x}\mid N_{y}\) increases in fidelity to \(M_{x}\) when the attractor "fills in" as more points are used in the reconstruction. Variants of this technique improve inference by adding time delay information [29], using partial correlations to separate direct and indirect influences [30], evaluating map smoothness using neural network accuracy [21], and compensating for short time series using neural network models of dynamics [31]. Other methods make more direct use of relative distances of point pairs on \(N_{y}\) and their contemporaneous point pairs on \(N_{x}\). In [15], [24], recurrence patterns -- measures based on the probability that \(y\) returns to a neighborhood of \(y_{t}\) when \(x\) returns to the neighborhood of \(x_{t}\), and vice-versa -- are used to assess the relative complexity of the dynamics of \(x\) and \(y\).
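As a concrete illustration of these nearest-neighbor quantities, the sketch below (our code; a bare-bones rendering with arbitrary parameters, no Theiler window, and no surrogate-based significance test) builds delay embeddings of two scalar series per (2) and evaluates the measure \(M(X\mid Y)\) of (3) using scipy's k-d tree.

```python
import numpy as np
from scipy.spatial import cKDTree

def delay_embed(series, m=3, tau=1):
    """Delay embedding of a scalar series: row t is [s[t], s[t-tau], ..., s[t-(m-1)tau]]."""
    series = np.asarray(series)
    idx = np.arange((m - 1) * tau, len(series))
    return np.column_stack([series[idx - k * tau] for k in range(m)])

def nn_measure(x_series, y_series, m=3, tau=1, k=5):
    """Rough version of M(X|Y) in (3): distances from each point on N_x to its own
    k nearest neighbors versus to the 'mutual' neighbors selected on N_y."""
    Nx, Ny = delay_embed(x_series, m, tau), delay_embed(y_series, m, tau)
    T = len(Nx)
    # query k+1 neighbors because the nearest neighbor of a point is itself
    _, nn_x = cKDTree(Nx).query(Nx, k=k + 1)
    _, nn_y = cKDTree(Ny).query(Ny, k=k + 1)
    ratios = []
    for i in range(T):
        D_i = np.sum(np.sum((Nx - Nx[i]) ** 2, axis=1)) / (T - 1)        # D_i(X)
        D_ik = np.mean(np.sum((Nx[nn_x[i, 1:]] - Nx[i]) ** 2, axis=1))    # D_i^k(X)
        D_ik_y = np.mean(np.sum((Nx[nn_y[i, 1:]] - Nx[i]) ** 2, axis=1))  # D_i^k(X|Y)
        ratios.append((D_i - D_ik_y) / (D_i - D_ik))
    return max(np.mean(ratios), 0.0)
```

Values near zero are consistent with \(x\perp y\); values near one indicate that neighbors on \(N_{y}\) are informative of neighbors on \(N_{x}\), the signature of \(x\to y\) described above.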
In [7], [25], causal structure is inferred from properties of the joint distribution of distances \(d(x(t_{1}),x(t_{2}))\) and \(d(y(t_{1}),y(t_{2}))\). In perhaps the most rigorous approach, [10] shows that causal relationships can be identified based on the existence of continuous noninjective surjective mappings between delay coordinate maps. To operationalize this theory, the authors apply the heuristic of [32] to detect the existence of continuous mappings between corresponding points on different delay coordinate maps. The approach of [33] also examines the continuity and (non)injectivity of this mapping by quantifying the relative influence of a system's intrinsic and external dynamics by examining the expansiveness of the mapping \(\Psi_{y\to x}\colon N_{y}\to N_{x}\). ### Takens' theorem The guarantees provided by Takens' theorem [14] are a key mathematical idea behind closeness-principle-based algorithms. Let \(C^{k}(M,N)\) be the set of continuous functions mapping \(M\) to \(N\) with \(k\) continuous derivatives, and let \(D^{k}(M,N)\) be the set of diffeomorphisms mapping \(M\) to \(N\) (i.e., functions mapping \(M\) to \(N\) for which both the function and its inverse are in \(C^{k}\)). **Theorem 2** (Takens' Theorem [14]).: _Let \(M\) be a compact manifold. For pairs \((\phi,\varphi)\) of dynamical system \(\phi\in D^{2}(M,M)\) and measurement function \(\varphi\in C^{2}(M,\mathbb{R})\), the map \(\Phi_{\varphi}\colon M\to\mathbb{R}^{m}\) defined in (2), where \(m=2\mathrm{dim}(M)+1\), is an embedding for generic \((\phi,\varphi)\in D^{2}(M,M)\times C^{2}(M,\mathbb{R})\)._ That \(\Phi_{\varphi}(M)\) is an _embedding_ of \(M\) means that the mapping \(\Phi_{\varphi}\) is a homeomorphism: a continuous bijection with continuous inverse. That this holds for _generic_ pairs \((\phi,\varphi)\) means that the set of dynamical systems \(\phi\) and measurement functions \(\varphi\) for which \(\Phi_{\varphi}\) produces an embedding of \(M\) is open and dense in the \(C^{1}\) topology. This genericity can be roughly thought of as telling us that "most" pairs \((\phi,\varphi)\) produce valid embeddings. The work of [34] reinforces this intuition by proving a version of the theorem that applies to a measure-theoretic _prevalent_ set of \((\phi,\varphi)\). The _forced_ Takens' theorem extends this guarantee to the particular type of measurement functions and forced systems that we are often concerned about when detecting unidirectional coupling. Takens' theorem guarantees that a generic choice of \((\phi,\varphi)\) produces an embedding. However, the delay embeddings used by closeness-principle-based methods depend on _fixed_ measurement functions that in some cases use only information about either \(x\) or \(y\). (By using "only information about \(x\)," we mean that a measurement function \(\varphi_{x}\) can be written as \(\varphi_{x}=\gamma_{x}\circ\pi_{x}\) for some \(\gamma_{x}\), where \(\pi_{x}\) is the projection \((x,y)(t)\mapsto x(t)\).) The forced Takens' theorem of [35] shows that, under weak additional conditions on \(\phi\), \(\varphi_{y}\colon M_{xy}\to\mathbb{R}\) satisfies the conclusions of Takens' theorem when \(x\to y\), and similarly that \(\varphi_{x}\colon M_{xy}\to\mathbb{R}\) satisfies the conclusions of Takens' theorem when \(y\to x\).
More formally, consider a forced dynamical system \((x,y)\subset M_{x}\times M_{y}\) of the form \[x_{t+1} =f(x_{t})\] \[y_{t+1} =g(x_{t},y_{t})\] so that its underlying causal structure is \(x\to y\). Takens' theorem holds for \((\phi,\varphi)\) in open and dense subsets of \(D^{2}(M_{x}\times M_{y},M_{x}\times M_{y})\times C^{2}(M_{x}\times M_{y}, \mathbb{R})\). However, if a delay embedding is constructed using only observations of \(y\) -- as is done by state-space algorithms -- we would like to know when Takens' theorem holds for \((\phi,\varphi)\) in open and dense subsets of \(D^{2}(M_{x},M_{x})\times D^{2}(M_{x}\times M_{y},M_{y})\times C^{2}(M_{y}, \mathbb{R})\). The set \(D^{2}(M_{x},M_{x})\times D^{2}(M_{x}\times M_{y},M_{y})\) is not generic in \(D^{2}(M_{x}\times M_{y},M_{x}\times M_{y})\) and the \(C^{2}(M_{y},\mathbb{R})\) is not generic in \(C^{2}(M_{x}\times M_{y},\mathbb{R})\), so Takens' theorem does not necessarily apply to this forced system. The _forced_ Takens' theorem of [35] provides (mild) additional assumptions under which the conclusions of Takens' theorem hold when only values of \(y\) are observed: **Theorem 3** (Forced Takens' Theorem [35, Thm. 3.1]).: _Let \(M_{x}\) and \(M_{xy}\) be compact manifolds of dimension \(n_{x}\) and \(n_{x}+n_{y}\geq 1\), respectively. Suppose that the periodic orbits of period \(<2d\) of \(f\in D^{2}(M_{x})\) are isolated and have distinct eigenvalues, where \(d\geq(n_{x}+n_{y})+1\). Then for \(r\geq 1\), there exists an open and dense set of \((f,\varphi)\in D^{r}(M_{x}\times M_{y},M_{x})\times C^{r}(M_{x},\mathbb{R})\) for which the map \(\Phi_{f,\varphi}\) is an embedding._ For our purposes, the upshot of Takens' theorem and the forced Takens' theorem is that for "most" well-behaved \((\phi,\varphi)\) pairs -- even when the measurement function \(\varphi\) collects information only from the \(y\) subsystem -- the delay embedding mapping \(\Phi_{\varphi}\) is a bijection, and both \(\Phi_{\varphi}\) and \(\Phi_{\varphi}^{-1}\) are continuous. ### Stable Takens' theorem Although Takens' theorem shows that \(\Phi_{\varphi}\) is a homeomorphism between an attractor \(M\) and delay embedding \(N\), it does not guarantee that this map preserves distances: points close to each other on \(M\) may map to points that are far from each other on \(N\) and vice-versa. This limitation is significant when analyzing closeness-principle-based state-space methods because in practice these algorithms rely on comparing how various mappings preserve distances between point pairs. To understand the implications of this obstacle we employ the _stable Takens' theorems_ of [8, 9], which show that, under more restrictive conditions on the measurement function and dynamical system, the delay map \(\Phi_{\varphi}\) can preserve distances. Specifically, we say that the mapping \(\Phi_{\varphi}\) is a _stable_ (i.e., distance-preserving) embedding of \(M\) with conditioning \(\delta\in[0,1)\) if for all distinct \(p,q\in M\) and a constant \(C>0\), \[C(1-\delta)\leq\frac{\left\|\Phi_{\varphi}(p)-\Phi_{\varphi}(q)\right\|_{2}^{2 }}{\|p-q\|_{2}^{2}}\leq C(1+\delta). \tag{5}\] We refer to the lower and upper bounds on the amount by which \(\Phi_{\varphi}\) can expand or contract distances between point pairs -- here, \(C(1-\delta)\) and \(C(1+\delta)\) -- as the lower and upper isometry constants. For clarity, we begin by considering the simpler _linear_ stable Takens' theorem [8]. We then derive more general results that apply to nonlinear systems. 
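In practice the two-sided bound in (5) can be probed numerically. The sketch below (our code; the random pair-sampling scheme and function names are illustrative assumptions, not the authors' implementation) estimates empirical lower and upper isometry constants of a map from samples of its domain and image, the same style of computation used later for Example 6 and in Section 4.

```python
import numpy as np

def empirical_isometry_constants(domain_pts, image_pts, n_pairs=50_000, seed=0):
    """Estimate min/max of ||Phi(p) - Phi(q)||^2 / ||p - q||^2 over random point pairs.

    domain_pts[i] and image_pts[i] are assumed to correspond, i.e.
    image_pts[i] = Phi(domain_pts[i]) for the map Phi under study.
    """
    rng = np.random.default_rng(seed)
    n = len(domain_pts)
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    d_dom = np.sum((domain_pts[i] - domain_pts[j]) ** 2, axis=1)
    d_img = np.sum((image_pts[i] - image_pts[j]) ** 2, axis=1)
    keep = d_dom > 0                      # discard degenerate pairs with p == q
    ratios = d_img[keep] / d_dom[keep]
    return ratios.min(), ratios.max()
```

Because only sampled pairs are examined, the sampled minimum and maximum lie inside the true isometry interval, so these estimates are best read as optimistic assessments of a map's stability.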
The linear stable Takens' theorem applies to a class of linear dynamical systems observed with a linear measurement procedure: **Definition 4** (Class \(\mathcal{A}(d)\) of oscillatory linear dynamical systems [8]).: _A linear dynamical system with system matrix \(A\in\mathbb{R}^{n\times n}\) is said to be in class \(\mathcal{A}(d)\), \(d\leq\frac{n}{2}\), if \(A\) is (a) real, full rank, and has distinct eigenvalues; and (b) has only \(d\) strictly imaginary conjugate pairs of eigenvalues and the remaining eigenvalues have strictly negative real components. The strictly imaginary conjugate pairs of eigenvalues are denoted \(\{\pm j\theta_{i}\}_{i=1}^{d}\), \(\theta_{1},\ldots,\theta_{d}>0\); their corresponding eigenvectors are denoted \(v_{1},v_{1}^{*},\ldots,v_{d},v_{d}^{*}\). We define \(\Lambda=\operatorname{diag}\{j\theta_{1},-j\theta_{1},\ldots,j\theta_{d},-j\theta_{d}\}\) and \(V=\left[v_{1},v_{1}^{*},\ldots,v_{d},v_{d}^{*}\right]\in\mathbb{C}^{n\times 2d}\) so that \(AV=V\Lambda\)._ Define \(A_{1}=\lambda_{\min}(V^{H}V)\) and \(A_{2}=\lambda_{\max}(V^{H}V)\) to be the minimum and maximum eigenvalues of \(V^{H}V\). Using a linear measurement function \(\varphi(\cdot)=\langle h,\cdot\rangle:\mathbb{R}^{n}\to\mathbb{R}\), denote the minimum and maximum alignment of system eigenvectors with the measurement function as \(\kappa_{1}=\min_{i=1,\ldots,d}\ \frac{1}{\|h\|_{2}^{2}}\left|v_{i}^{H}h\right|\) and \(\kappa_{2}=\max_{i=1,\ldots,d}\ \frac{1}{\|h\|_{2}^{2}}\left|v_{i}^{H}h\right|\). Finally, define \[\nu=\max_{i\neq j}\biggl\{\left|\sin(\theta_{i}T_{s})\right|^{-1},\ \left|\sin\left(\frac{(\theta_{i}-\theta_{j})T_{s}}{2}\right)\right|^{-1},\ \left|\sin\left(\frac{(\theta_{i}+\theta_{j})T_{s}}{2}\right)\right|^{-1}\biggr\}.\] **Theorem 5** (Stable Takens' theorem for linear systems [8]).: _Let there be a linear dynamical system defined by system matrix \(A\in\mathbb{R}^{n\times n}\) of class \(\mathcal{A}(d)\) that has reached its attractor, a sampling interval \(T_{s}>0\), and a measurement function \(\varphi(\cdot)=\langle h,\cdot\rangle:\mathbb{R}^{n}\to\mathbb{R}\) such that \(\|h\|_{2}^{2}=\frac{2d}{m}\). Suppose that (a) \(m>(2d-1)\frac{A_{2}\kappa_{2}^{2}}{A_{1}\kappa_{1}^{2}}\nu\); (b) \(\{\pm j\theta_{i}\}\) are distinct and strictly complex; and (c) \(v_{i}^{H}h\neq 0\) for all \(i=1,\ldots,d\). Then for all distinct \(p,q\in M\), the delay embedding map \(\Phi_{\varphi}\) satisfies (5) with \(C=d\left(\frac{\kappa_{1}^{2}}{A_{2}}+\frac{\kappa_{2}^{2}}{A_{1}}\right)\) and \(\delta=\delta_{0}+\delta_{1}(m)\), where_ \[\delta_{0}=\frac{A_{2}\kappa_{2}^{2}-A_{1}\kappa_{1}^{2}}{A_{2}\kappa_{2}^{2}+A_{1}\kappa_{1}^{2}}\quad\text{and}\quad\delta_{1}(m)=\frac{(2d-1)\nu}{m}\left(\frac{2A_{2}\kappa_{2}^{2}}{A_{2}\kappa_{2}^{2}+A_{1}\kappa_{1}^{2}}\right).\] ## 3 Theoretical results In this section, we provide analytical results related to the closeness principle. We begin in Section 3.1 by focusing on the key mapping \(\Phi_{\varphi_{y}}\colon M_{xy}\to N_{y}\) (see Figure 3) in the restricted class of linear systems \(\mathcal{A}(d)\) described by Definition 4. Using both a proof of concept and a formalized result, we show that the stability of \(\Phi_{\varphi_{y}}\) depends on the causal structure relating \(x\) and \(y\). Section 3.2 then builds on this intuition, developing a test for causal interaction based on the expansivity of \(\Psi_{y\to x}\) for general (potentially nonlinear) systems.
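For reference, a small numerical sketch (ours; it simply transcribes the formulas stated above and makes no attempt to verify the class-\(\mathcal{A}(d)\) conditions) of how the constants \(C\) and \(\delta=\delta_{0}+\delta_{1}(m)\) in Theorem 5 could be evaluated for a given system matrix \(A\) and measurement vector \(h\):

```python
import itertools
import numpy as np

def theorem5_constants(A, h, Ts, m):
    """Evaluate the quantities appearing in Theorem 5 for a system matrix A
    (assumed to be in class A(d)) and a linear measurement vector h."""
    eigvals, eigvecs = np.linalg.eig(A)
    # keep one representative (+j*theta_i) of each strictly imaginary conjugate pair
    idx = [k for k, lam in enumerate(eigvals) if abs(lam.real) < 1e-10 and lam.imag > 0]
    thetas = np.array([eigvals[k].imag for k in idx])
    d = len(thetas)
    # V stacks the eigenvectors of the imaginary pairs and their conjugates
    V = np.hstack([np.column_stack([eigvecs[:, k], eigvecs[:, k].conj()]) for k in idx])
    gram = np.linalg.eigvalsh(V.conj().T @ V)
    A1, A2 = gram.min(), gram.max()
    align = np.array([abs(np.vdot(eigvecs[:, k], h)) for k in idx]) / np.dot(h, h)
    k1, k2 = align.min(), align.max()
    # nu: worst case of the sine terms over single frequencies and frequency pairs
    terms = [1.0 / abs(np.sin(t * Ts)) for t in thetas]
    for ti, tj in itertools.combinations(thetas, 2):
        terms += [1.0 / abs(np.sin((ti - tj) * Ts / 2)), 1.0 / abs(np.sin((ti + tj) * Ts / 2))]
    nu = max(terms)
    C = d * (k1**2 / A2 + k2**2 / A1)
    delta0 = (A2 * k2**2 - A1 * k1**2) / (A2 * k2**2 + A1 * k1**2)
    delta1 = (2 * d - 1) * nu / m * (2 * A2 * k2**2) / (A2 * k2**2 + A1 * k1**2)
    return C, delta0 + delta1
```

For instance, for the autonomous \(x\) subsystem of Example 6 below, the paper reports that this kind of computation gives \(C=1\) and \(\delta(m)\to 0\) as \(m\to\infty\).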
### Causal structure affects stability of \(\varphi_{y}\) in linear systems We begin by exploring the mapping \(\Phi_{\varphi_{y}}\colon M_{xy}\to N_{y}\) (see Figure 3) in the restricted class of linear systems \(\mathcal{A}(d)\). Specifically, we show that \(\Phi_{\varphi_{y}}\) cannot be stable when \(x\not\to y\). This formalizes the key intuition that a delay embedding of \(M_{xy}\) constructed with only information about \(y\) can only be stable when the lags of \(y\) "contain information about the full system" because \(x\to y\). In this subsection we consider the class of linear coupled systems in which \(x\to y\) but \(y\not\to x\), \[\begin{bmatrix}\dot{x}\\ \dot{y}\end{bmatrix}=\begin{bmatrix}A_{xx}&0\\ A_{yx}&A_{yy}\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}. \tag{6}\] Here \(A_{xx}\in\mathbb{R}^{n_{x}\times n_{x}}\) describes the internal dynamics of \(x\), \(A_{yy}\in\mathbb{R}^{n_{y}\times n_{y}}\) describes the internal dynamics of \(y\), and \(A_{yx}\in\mathbb{R}^{n_{y}\times n_{x}}\) describes the external dynamics of \(y\) due to \(x\)'s forcing. #### 3.1.1 Example: A simple system analytically satisfies the closeness principle The following example constructs a forced linear system with linear measurement functions by extending a simple autonomous system from [8]: **Example 6** (Well-behaved forced linear system in which \(x\to y\)).: _Let \(A=V\Lambda V^{-1}\in\mathbb{R}^{4\times 4}\) be the system matrix of a forced system in \(\mathcal{A}(d=2)\) [see (6)] with matrix of eigenvectors_ \[V=\begin{bmatrix}\frac{1}{\sqrt{2}}v_{xx}&\frac{1}{\sqrt{2}}v_{xx}^{*}&0&0\\ \frac{1}{\sqrt{2}}v_{yx}&\frac{1}{\sqrt{2}}v_{yx}^{*}&v_{yy}&v_{yy}^{*}\end{bmatrix},\] _where \(v_{xx}=v_{yx}=v_{yy}=\frac{1}{\sqrt{2}}\begin{bmatrix}1\\ j\end{bmatrix}\) and \(j=\sqrt{-1}\); matrix of eigenvalues_ \[\Lambda=\mathrm{diag}\left([j\theta_{1},\ -j\theta_{1},\ j\theta_{2},\ -j\theta_{2}]\right),\] _where \(\theta_{1}=2.3129\), \(\theta_{2}=0.1765\); and initial condition \((x_{0},y_{0})=V\cdot 1_{4\times 1}\). (This initial condition ensures that the system does not exhibit transient behavior.)_ _As in [8] we use linear measurement functions \(\varphi\) of the form \(\varphi(\cdot)=\langle h,\cdot\rangle\), with vectors \(h\) constructed to approximately align with the eigenvectors of \(A\) as_ \[c_{\varphi_{xy}}=(1+r_{1})\mathrm{Re}(V_{1})+(1+r_{2})\mathrm{Im}(V_{1})+(1+r_{3})\mathrm{Re}(V_{3})+(1+r_{4})\mathrm{Im}(V_{3}),\] _where \(V_{1}\) and \(V_{3}\) denote the first and third columns of \(V\), and \(r_{i}\sim\mathcal{N}(0,0.1)\) for \(i=1,\dots,4\). We construct normalized \(h_{\varphi_{xy}}\in\mathbb{R}^{n_{x}+n_{y}}\), \(h_{\varphi_{x}}\in\mathbb{R}^{n_{x}+n_{y}}\), \(h_{\varphi_{y}}\in\mathbb{R}^{n_{x}+n_{y}}\), \(h_{\gamma_{x}}\in\mathbb{R}^{n_{x}}\), and \(h_{\gamma_{y}}\in\mathbb{R}^{n_{y}}\) by first extracting or setting to zero the appropriate parts of \(c_{\varphi_{xy}}\),_ \[c_{\varphi_{x}}=\begin{bmatrix}c_{\varphi_{xy}}[1]\\ c_{\varphi_{xy}}[2]\\ 0\\ 0\end{bmatrix},\ c_{\varphi_{y}}=\begin{bmatrix}0\\ 0\\ c_{\varphi_{xy}}[3]\\ c_{\varphi_{xy}}[4]\end{bmatrix},\ c_{\gamma_{x}}=\begin{bmatrix}c_{\varphi_{xy}}[1]\\ c_{\varphi_{xy}}[2]\end{bmatrix},\ c_{\gamma_{y}}=\begin{bmatrix}c_{\varphi_{xy}}[3]\\ c_{\varphi_{xy}}[4]\end{bmatrix},\] Figure 3: Mappings between attractors and delay embedding using the observation functions we consider.
and then normalizing so that_ \[h_{\varphi_{xy}}=\sqrt{\frac{4}{m}}\frac{c_{\varphi_{xy}}}{\left\|c_{\varphi_{xy}} \right\|_{2}},\ h_{\varphi_{x}}=\sqrt{\frac{4}{m}}\frac{c_{\varphi_{x}}}{ \left\|c_{\varphi_{x}}\right\|_{2}},\ h_{\varphi_{y}}=\sqrt{\frac{4}{m}}\frac{c_ {\varphi_{y}}}{\left\|c_{\varphi_{y}}\right\|_{2}},\ h_{\gamma_{x}}=\sqrt{ \frac{2}{m}}\frac{c_{\gamma_{x}}}{\left\|c_{\gamma_{x}}\right\|_{2}},\ and\ h_{\gamma_{y}}= \sqrt{\frac{2}{m}}\frac{c_{\gamma_{y}}}{\left\|c_{\gamma_{y}}\right\|_{2}}.\] _These measurement functions are used to construct the delay embeddings shown in Figure 3: \(N_{xy}=\Phi_{\varphi_{xy}}(M_{xy})\), \(N_{x}=\Phi_{\varphi_{x}}(M_{xy})=\Phi_{\gamma_{x}}(M_{x})\), and \(N_{y}=\Phi_{\varphi_{y}}(M_{xy})=\Phi_{\gamma_{y}}(M_{y})\). Each embedding is created using \(m\) time delays of \(\tau=1\)._ The system of Example 6 consists of an autonomous subsystem \(x\) and a forced subsystem \(y\). For the autonomous subsystem \(x\) defined by system matrix \(A_{xx}\), a straightforward computation reveals that the relevant quantities in Theorem 5 are \(C=1\) and \(\lim_{m\to\infty}\delta(m)=0\). Theorem 5 thus guarantees that the embedding \(\Phi_{\gamma_{x}}\colon M_{x}\to N_{x}\) is an exact isometry as \(m\to\infty\). Theorem 5 can similarly be used to bound the stability of \(\Phi_{\varphi_{xy}}\), \(\Phi_{\varphi_{x}}\), and \(\Phi_{\varphi_{y}}\). (The stability of \(\Phi_{\gamma_{y}}\) cannot be bounded analytically because the \(y\) subsystem cannot be separated from the driving \(x\) subsystem.) To compare these theoretical bounds to their empirical values, we simulate Example 6 for 10,000 time steps and compute the isometry constants for the maps in Figure 3 using 50,000 randomly sampled point pairs \((p,q)\). To construct delay embeddings we use \(m=250\) time delays so that the isometry constants are limited more by structural properties of the system than by an insufficient value of \(m\). These analytical bounds and empirically computed isometry constants for the system in Example 6 and maps between attractors and delay embeddings shown in Figure 3 are shown in Table 1. Observe that, as expected, the mapping of the autonomous subsystem \(\Phi_{\gamma_{x}}\) becomes an exact isometry as \(m\to\infty\), while the mapping of the forced subsystem \(\Phi_{\gamma_{y}}\) does not. A key observation about this system is that the stability of the mappings from \(M_{xy}\) to \(N_{x}\) and \(N_{y}\) reveals information about underlying causal structure: the mapping \(\Phi_{\varphi_{y}}\colon M_{xy}\to N_{y}\), which informally produces a delay embedding that "contains information about both \(x\) and \(y\)" because \(x\to y\), is stable _despite being constructed from only measurements of the \(y\) subsystem_. However, the mapping \(\Phi_{\varphi_{x}}\colon M_{xy}\to N_{x}\), which informally produces a delay embedding that "contains information only about \(x\)," is not stable. This intuition carries to the relative stability of the contemporaneous point mappings \(\Psi_{x\to y}\colon N_{x}\to N_{y}\) and \(\Psi_{y\to x}\colon N_{y}\to N_{x}\), which are used by algorithms described in Section 2.2. Table 1 shows that the intuition of the closeness principle indeed holds in this simple example where \(x\to y\). The map \(\Psi_{y\to x}\) has bounded _expansiveness_: pairs of points that are close on \(N_{y}\) are also close on \(N_{x}\). 
By contrast, \(\Psi_{x\to y}\) has bounded _contractiveness_: pairs of points that are close on \(N_{x}\) may be far apart on \(N_{y}\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{Lower isometry constant} & \multicolumn{2}{c}{Upper isometry constant} & \multicolumn{2}{c}{Ratio of isometry constants} \\ \cline{2-7} Mapping & Theoretical & Empirical & Theoretical & Empirical & Theoretical & Empirical \\ \hline \(\Phi_{\varphi_{xy}}\colon M_{xy}\to N_{xy}\) & 0.60 & 1.00 & 6.23 & 5.83 & 10.35 & 5.84 \\ \(\Phi_{\varphi_{x}}\colon M_{xy}\to N_{x}\) & \(<\)\(10^{-3}\) & \(<\)\(10^{-3}\) & 6.23 & 2.00 & \(>\)\(10^{3}\) & \(>\)\(10^{3}\) \\ \(\Phi_{\varphi_{y}}\colon M_{xy}\to N_{y}\) & 0.12 & 0.76 & 7.30 & 5.24 & 61.25 & 6.87 \\ \(\Phi_{\gamma_{x}}\colon M_{x}\to N_{x}\) & 0.99 & 1.00 & 1.01 & 1.00 & 1.01 & 1.00 \\ \(\Phi_{\gamma_{y}}\colon M_{y}\to N_{y}\) & – & 0.50 & – & \(>\)\(10^{3}\) & – & \(>\)\(10^{3}\) \\ \(\Psi_{x\to y}\colon N_{x}\to N_{y}\) & 0.12 & 1.00 & – & \(>\)\(10^{3}\) & – & \(>\)\(10^{3}\) \\ \(\Psi_{y\to x}\colon N_{y}\to N_{x}\) & – & \(<\)\(10^{-3}\) & 8.44 & 1.00 & – & \(>\)\(10^{3}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Analytical and empirical isometry constants for the linear forced system of Example 6 and \(m=250\). #### 3.1.2 Causal structure affects stability of \(\varphi_{y}\) We next formalize the intuition that the stability of \(\Phi_{\varphi_{y}}\) depends on causal structure in the restricted setting of functions in \(\mathcal{A}(d)\). The proof demonstrates that while \(\Phi_{\varphi_{y}}\)'s stability indeed depends on whether or not \(x\to y\) in the underlying system, it is also impacted by properties of the system matrix and its relationship to the measurement operator \(\varphi_{y}\). **Proposition 7**.: _Let \((x,y)\) be a forced linear dynamical system [i.e., of form (6)] in class \(\mathcal{A}(d)\). Denote this system's attractor by \(M_{xy}\), and let its delay embedding \(N_{y}=\Phi_{\varphi_{y}}(M_{xy})\) be constructed with a linear observation function \(\varphi_{y}:M_{xy}\to\mathbb{R}\) that "uses only information about \(y\)." (That is, let the measurement function \(\varphi_{y}\) take the form \(\varphi_{y}(\cdot)=\left\langle h_{\gamma_{y}},\pi_{y}(\cdot)\right\rangle\), where \(\pi_{y}\) is the projection \((x,y)\mapsto y\) and \(h_{\gamma_{y}}\in\mathbb{R}^{n_{y}}\).) Then:_ * _When_ \(x\not\to y\) _in the underlying system,_ \(\Phi_{\varphi_{y}}\) _is not a stable map._ * _When_ \(x\to y\) _in the underlying system,_ \(\Phi_{\varphi_{y}}\) _may be a stable map._ The proof is located in Appendix A. Proposition 7 makes precise the intuition described in Example 6: when constructing a delay embedding using measurements of only one variable of a dynamical system, geometry is preserved only when that one variable "contains information about" the complete system. That is, in a system \((x,y)\) where \(x\to y\), a delay embedding constructed with only observations of \(y\) may be geometry-preserving, but one constructed with only observations of \(x\) cannot be. ### Tests for causal interaction based on expansivity of \(\Psi_{y\to x}\) We next move closer to examining the map \(\Psi_{y\to x}\) that is considered by the closeness principle. The results in this subsection, outlined in Figure 4, relate the inter-point distances on various embeddings to causal interaction.
The results are stated in terms of the distance preservation of the mappings shown in Figure 3; recall that when discussing these mappings we assume that they are operating on systems that have reached their attractors (i.e., that observed time series do not contain transients). The results in this section are not restricted to linear systems. Figure 4: Overview of theoretical results in Section 3.2 relating distance preservation (7)–(10) to conditions for detecting causal interaction from \(\Psi_{y\to x}\). #### 3.2.1 Necessary conditions Suppose we have analytical or empirical bounds on the stability of the maps \(\Phi_{\gamma_{x}},\Phi_{\gamma_{y}}\), \(\Phi_{\varphi_{x}}\), and \(\Phi_{\varphi_{y}}\) (see Figure 3) in hand: \[l_{\gamma_{x}} \leq\frac{\left\|\Phi_{\gamma_{x}}(p)-\Phi_{\gamma_{x}}(q)\right\|_{2}^{2}}{\left\|p-q\right\|_{2}^{2}}\leq u_{\gamma_{x}}\quad\forall\ p,q\in M_{x}, \tag{7}\] \[l_{\gamma_{y}} \leq\frac{\left\|\Phi_{\gamma_{y}}(p)-\Phi_{\gamma_{y}}(q)\right\|_{2}^{2}}{\left\|p-q\right\|_{2}^{2}}\leq u_{\gamma_{y}}\quad\forall\ p,q\in M_{y},\] (8) \[l_{\varphi_{x}} \leq\frac{\left\|\Phi_{\varphi_{x}}(p)-\Phi_{\varphi_{x}}(q)\right\|_{2}^{2}}{\left\|p-q\right\|_{2}^{2}}\leq u_{\varphi_{x}}\quad\forall\ p,q\in M_{xy},\] (9) \[l_{\varphi_{y}} \leq\frac{\left\|\Phi_{\varphi_{y}}(p)-\Phi_{\varphi_{y}}(q)\right\|_{2}^{2}}{\left\|p-q\right\|_{2}^{2}}\leq u_{\varphi_{y}}\quad\forall\ p,q\in M_{xy}. \tag{10}\] Many tests in the literature (see Section 2.2) are stated in terms of properties of the map between delay embeddings \(\Psi_{y\to x}\colon N_{y}\to N_{x}\). The following result provides a test for causal relationships using the stability of this map: **Proposition 8** (Necessary condition for detecting causal interaction using the stable Takens' theorem).: _Suppose the measurement operators \(\gamma_{x}\colon M_{x}\to N_{x}\), \(\gamma_{y}\colon M_{y}\to N_{y}\), \(\varphi_{x}\colon M_{xy}\to N_{x}\), and \(\varphi_{y}\colon M_{xy}\to N_{y}\) are in the generic sets that satisfy Takens' theorem for systems \(x\), \(y\), and \((x,y)\). Further suppose these measurement functions applied to these dynamical systems satisfy (7)-(10) with isometry constants \(l_{\varphi_{x}},l_{\varphi_{y}}\), \(u_{\gamma_{x}}\), and \(u_{\gamma_{y}}\). Then:_ * _If_ \(x\to y\)_, then_ \[\frac{\left\|\Psi_{y\to x}(p)-\Psi_{y\to x}(q)\right\|_{2}^{2}}{\|p-q\|_{2}^{2}} \leq\frac{u_{\gamma_{x}}}{l_{\varphi_{y}}}\] (11) _for all_ \(p,q\in N_{y}\)_._ * _If_ \(y\to x\)_, then_ \[\frac{\left\|\Psi_{y\to x}(p)-\Psi_{y\to x}(q)\right\|_{2}^{2}}{\|p-q\|_{2}^{2}}\geq\frac{l_{\varphi_{x}}}{u_{\gamma_{y}}}\] (12) _for all_ \(p,q\in N_{y}\)_._ The proof, which applies the isometry constants to the compositions of mappings that can comprise \(\Psi_{y\to x}\), is located in Appendix B. The result admits the following test for causal interaction as a simple corollary: **Corollary 9** (Expansivity of \(\Psi_{y\to x}\) implies \(x\not\to y\)).: _Suppose dynamical system \((x,y)\) and the measurement functions used to construct its delay embeddings satisfy the conditions of Proposition 8. If \(\exists\)\(p,q\in N_{y}\) such that_ \[\frac{\left\|\Psi_{y\to x}(p)-\Psi_{y\to x}(q)\right\|_{2}^{2}}{\|p-q\|_{2}^{2}}>\frac{u_{\gamma_{x}}}{l_{\varphi_{y}}},\] _then \(x\not\to y\)._ Proof.: Contrapositive of Proposition 8. Corollary 9 provides a theoretically-grounded test for causality based on distance preservation.
It suggests empirically or theoretically computing the isometry constants \(u_{\gamma_{x}}\) and \(l_{\varphi_{y}}\), then attempting to find a pair of points such that the ratio of their squared distance on \(N_{y}\) and the contemporaneous squared distance on \(N_{x}\) is larger than \(u_{\gamma_{x}}/l_{\varphi_{y}}\). Such a pair of points, if found, serves as a certificate guaranteeing that \(x\not\to y\). However, Corollary 9 provides only a method to guarantee the _nonexistence_ of the causal link \(x\to y\). This reflects the proper interpretation of many state-space causal inference algorithms such as CCM: for these tests, the existence of a causal relationship \(x\to y\) is a necessary, but not sufficient, condition for \(\Psi_{y\to x}\) to have certain properties [17, 22]. In most applications, however, it would be helpful to guarantee the _existence_ of causal links. We next show that this is possible under two strong assumptions. #### 3.2.2 Necessary and sufficient conditions The result below strengthens Proposition 8 to show that \(x\to y\) is both a necessary and sufficient condition for \(\Psi_{y\to x}\) to preserve distances. Showing this requires two assumptions. The first compensates for the looseness of the near-isometries guaranteed by (7)-(10): **Assumption 10** (Expansivity of \(\iota_{x}\) when \(y\to x\)).: _If \(y\to x\), then \(\iota_{x}\colon M_{y}\to M_{xy}\) satisfies_ \[\frac{\left\|\iota_{x}(p)-\iota_{x}(q)\right\|_{2}^{2}}{\|p-q\|_{2}^{2}}>\frac{u_{\gamma_{x}}u_{\gamma_{y}}}{l_{\varphi_{x}}l_{\varphi_{y}}}\] _for some \(p,q\in M_{y}\)._ Assumption 10 states that when \(y\to x\) there must be a pair of points on \(M_{xy}\) that become sufficiently closer together when projected onto \(M_{y}\). To gain intuition, take any two points \(\{y_{t},y_{s}\}\) on \(M_{y}\) and consider their contemporaneous points on \(M_{x}\), \(\{x_{t},x_{s}\}\). Assumption 10 states that for \(t\not=s\), \[\|x_{t}-x_{s}\|_{2}^{2}>\left(\frac{u_{\gamma_{x}}u_{\gamma_{y}}}{l_{\varphi_{x}}l_{\varphi_{y}}}-1\right)\|y_{t}-y_{s}\|_{2}^{2},\] that is, that the distance between \(x_{t}\) and \(x_{s}\) is sufficiently large with respect to the distance between \(y_{t}\) and \(y_{s}\). The second assumption eliminates the possibility that \(\Psi_{y\to x}\) is stable "by coincidence": **Assumption 11** (Causal sufficiency).: _If \(\exists\,p,q\in N_{y}\) such that_ \[\frac{\left\|\Psi_{y\to x}(p)-\Psi_{y\to x}(q)\right\|_{2}^{2}}{\left\|p-q\right\|_{2}^{2}}\leq\frac{u_{\gamma_{x}}}{l_{\varphi_{y}}},\] _then there is a causal connection between the dynamical systems, either \(x\to y\) or \(y\to x\). In other words, if \(\Psi_{y\to x}\) is a stable mapping we cannot have \(x\perp y\)._ Assumption 11 is a type of causal sufficiency assumption; it eliminates the possibility that the systems \(x\) and \(y\) are not directly related but share structure by coincidence or because of an external system. For instance, Assumption 11 precludes the existence of an external confounder \(z\) that affects both \(x\) and \(y\), or a setting in which \(x\) and \(y\) are two independent yet identical systems started at the same time. The strength of this assumption is the reason that many techniques that rely on observational data do not claim to be able to detect "causal" interactions.
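Before strengthening the result, here is a minimal sketch (our code; the pair-sampling search and the function names are illustrative assumptions) of the distance-ratio certificate suggested by Corollary 9. Under Assumptions 10 and 11, the same computation doubles as the necessary-and-sufficient test of the proposition that follows.

```python
import numpy as np

def expansivity_certificate(Ny, Nx, u_gamma_x, l_phi_y, n_pairs=100_000, seed=0):
    """Search sampled point pairs on N_y for a contemporaneous distance ratio on N_x
    exceeding u_gamma_x / l_phi_y. Finding one certifies (per Corollary 9) that x does
    not drive y; Ny[i] and Nx[i] are assumed to be contemporaneous delay vectors."""
    rng = np.random.default_rng(seed)
    n = len(Ny)
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    d_y = np.sum((Ny[i] - Ny[j]) ** 2, axis=1)
    d_x = np.sum((Nx[i] - Nx[j]) ** 2, axis=1)
    keep = d_y > 0                         # discard degenerate pairs with p == q
    ratios = d_x[keep] / d_y[keep]
    threshold = u_gamma_x / l_phi_y
    return bool((ratios > threshold).any()), float(ratios.max()), threshold
```

Because only sampled pairs are examined, a violating pair may be missed; the certificate is one-sided in the same sense as Corollary 9 itself.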
Assumptions 10 and 11 allow us to strengthen Proposition 8: **Proposition 12** (Necessary and sufficient conditions for detecting causal interaction using the stable Takens' theorem).: _Suppose that \((x,y)\) and its measurement functions satisfy Assumption 10, Assumption 11, and the conditions of Proposition 8. Then \(x\to y\) if and only if_ \[\frac{\left\|\Psi_{y\to x}(p)-\Psi_{y\to x}(q)\right\|_{2}^{2}}{\left\|p-q\right\|_{2}^{2}}\leq\frac{u_{\gamma_{x}}}{l_{\varphi_{y}}}\] _for all \(p,q\in N_{y}\)._ The proof, which applies the isometry constants (7)-(10) to the compositions of mappings that can comprise \(\Psi_{y\to x}\) using the additional restrictions of Assumptions 10 and 11, is located in Appendix C. This test suggests empirically or theoretically computing the isometry constants \(u_{\gamma_{x}}\) and \(l_{\varphi_{y}}\), then attempting to find a pair of points such that the ratio of their squared distance on \(N_{y}\) and the contemporaneous squared distance on \(N_{x}\) is larger than \(u_{\gamma_{x}}/l_{\varphi_{y}}\). If Assumptions 10 and 11 are satisfied, then Proposition 12 allows a purely distance-based test to be used to guarantee either the existence or nonexistence of \(x\to y\). ### Discussion of implications The results in this section suggest the following for algorithm developers and practitioners: * By showing that causal structure impacts _whether_ \(\Phi_{\varphi_{y}}\) preserves inter-point distances, Proposition 7 provides some theoretical evidence in support of using the closeness principle to detect causal structure. However, this result depends on the isometry constants (7)-(10), which in turn depend on a multitude of other system properties (as illustrated by the stable Takens' theorem (Theorem 5)). This supports previous empirical observations [36], [37] that _how well_ distances are preserved depends on properties of the system under study (see Section 5 for details). In addition to the system properties highlighted in previous work (see Section 5), our results suggest that "alignment" 7 of a system and the measurement functions used to construct delay embeddings could impact the results of closeness-principle-based algorithms. This indicates an important sensitivity of these methods to how data is collected. Footnote 7: More precisely, by “alignment” in the linear stable Takens’ theorem of [8] we mean the minimum and maximum values of \(\{\kappa_{i}=\frac{1}{\|h\|_{2}^{2}}\left|v_{i}^{H}h\right|\}\) (see Definition 4 and Theorem 5). In the general stable Takens’ theorem of [9] we mean the “stable rank of the attractor” defined in [9, Sec. 3], which captures alignment of the measurement function and system. Intuitively, this means that inter-point distances can be guaranteed to be better preserved when the measurement functions more fully capture information about the system (see [9, Remark 3.2]). * Proposition 7 deals with a restricted class of linear systems and measurement functions. While analogous results with the less restrictive stable Takens' theorem of [9] are beyond the scope of this paper, our initial work suggests that an additional assumption on the injectivity of the map \(\pi_{x}\) is needed in the nonlinear case. This matches the message that algorithms based on the closeness principle are sensitive to a variety of system-dependent properties that may be independent of causal structure, and supports the role of injectivity in methods such as [10, 33].
* The necessity of a causal sufficiency assumption for Proposition 12 indicates that practitioners using methods based on the closeness principle should carefully assess whether results may be due to common confounding, which may be indistinguishable from causal interaction to this class of algorithms.8 ## 4 Typical systems don't satisfy the closeness principle The results in Section 3 provide insight into how causal inference techniques that are motivated by versions of the closeness principle might be justified. In practice, however, we find that many common coupled systems do not satisfy the closeness principle -- providing further evidence that the algorithms reviewed in Section 2.2 are sensitive to properties of the system (and its relationship to the measurement function) other than causal structure. In this section we empirically explore the preservation of geometric structure in common coupled systems and discuss implications for causal inference techniques based on stability. This stability perspective is instructive for understanding when and why closeness-principle-based causal inference methods may be well-justified. We explore three common coupled nonlinear systems. In each, \(x\) and \(y\) are multidimensional, and \(x\) drives \(y\) with coupling strength \(C\). 1. The identical unidirectionally-coupled Henon maps, \[\left\{\begin{aligned} x_{1}[t+1]&=1.4-x_{1}^{2}[t]+0.3x_{2}[t]\\ x_{2}[t+1]&=x_{1}[t]\\ y_{1}[t+1]&=1.4-\left(Cx_{1}[t]y_{1}[t]+(1-C)y_{1}^{2}[t]\right)+0.3y_{2}[t]\\ y_{2}[t+1]&=y_{1}[t],\end{aligned}\right.\] with initial condition \((x_{0},y_{0})=[0.7,0.0,0.91,0.7]^{T}\). To construct delay embeddings, we use the measurements \[\gamma_{x}=(x_{1},x_{2})[t]\mapsto x_{1}[t]\text{ and }\gamma_{y}=(y_{1},y_{2})[t]\mapsto y_{1}[t]\] so that \[\varphi_{x}=(x_{1},x_{2},y_{1},y_{2})[t]\mapsto x_{1}[t]\text{ and }\varphi_{y}=(x_{1},x_{2},y_{1},y_{2})[t]\mapsto y_{1}[t].\] Identical synchronization (i.e., \(x(t)=y(t)\)) is induced just before \(C=0.7\)[38]. This system is used in, e.g., [22, 24, 38, 39, 40, 41]. 2. A Rossler system coupled to a Lorenz system, \[\left\{\begin{aligned} \dot{x}_{1}(t)&=-6\left(x_{2}(t)+x_{3}(t)\right)\\ \dot{x}_{2}(t)&=6\left(x_{1}(t)+0.2x_{2}(t)\right)\\ \dot{x}_{3}(t)&=6\left[0.2+x_{3}(t)\left(x_{1}(t)-5.7\right)\right]\\ \dot{y}_{1}(t)&=10\left(-y_{1}(t)+y_{2}(t)\right)\\ \dot{y}_{2}(t)&=28y_{1}(t)-y_{2}(t)-y_{1}(t)y_{3}(t)+Cx_{2}^{2}(t)\\ \dot{y}_{3}(t)&=y_{1}(t)y_{2}(t)-\frac{8}{3}y_{3}(t),\end{aligned}\right.\] with initial condition \((x_{0},y_{0})=[0.0,0.0,0.4,0.3,0.3,0.3]^{T}\). The system trajectory was integrated with MATLAB's ode45 and a sampling period of \(dt=0.025\) was used to generate simulation data. To construct delay embeddings, we use the measurements \[\gamma_{x}=(x_{1},x_{2},x_{3})(t)\mapsto x_{2}(t)\text{ and }\gamma_{y}=(y_{1},y_{2},y_{3})(t)\mapsto y_{2}(t)\] so that \[\varphi_{x}=(x_{1},x_{2},x_{3},y_{1},y_{2},y_{3})(t)\mapsto x_{2}(t)\text{ and }\varphi_{y}=(x_{1},x_{2},x_{3},y_{1},y_{2},y_{3})(t)\mapsto y_{2}(t).\] Synchronization is induced just before \(C=3\)[22]. This system is used in, e.g., [4, 19, 22, 39, 40, 42, 43]. 3.
The nonidentical unidirectionally coupled Rossler systems, \[\left\{\begin{aligned} \dot{x}_{1}(t)&=-\omega_{1}x_{2}(t)-x_{3}(t)\\ \dot{x}_{2}(t)&=\omega_{1}x_{1}(t)+0.15x_{2}(t)\\ \dot{x}_{3}(t)&=0.2+x_{3}(t)\left(x_{1}(t)-10\right)\\ \dot{y}_{1}(t)&=-\omega_{2}y_{2}(t)-y_{3}(t)+C\left(x_{1}(t)-y_{1}(t)\right)\\ \dot{y}_{2}(t)&=\omega_{2}y_{1}(t)+0.15y_{2}(t)\\ \dot{y}_{3}(t)&=0.2+y_{3}(t)\left(y_{1}(t)-10\right),\end{aligned}\right.\] with \(\omega_{1}=1.015\), \(\omega_{2}=0.985\), and initial condition \((x_{0},y_{0})=[0.0,0.0,0.4,0.0,0.0,0.4]^{T}\). To construct delay embeddings, we use the measurements \[\gamma_{x}=(x_{1},x_{2},x_{3})(t)\mapsto x_{3}(t)\text{ and }\gamma_{y}=(y_{1},y_{2},y_{3})(t)\mapsto y_{3}(t)\] so that \[\varphi_{x}=(x_{1},x_{2},x_{3},y_{1},y_{2},y_{3})(t)\mapsto x_{3}(t)\text{ and }\varphi_{y}=(x_{1},x_{2},x_{3},y_{1},y_{2},y_{3})(t)\mapsto y_{3}(t).\] Generalized synchronization [16] is induced at approximately \(C=0.12\)[44]. This system is used in, e.g., [27], [40], [44]; slight variants of this system are used in, e.g., [16], [22], [25], [43], [45], [46]. We remove transients from the time series of each system by discarding the first 1000 generated samples. ### Characterizing the stability of common systems We first empirically compute lower and upper isometry constants (i.e., the minimum and maximum amounts by which maps expand or contract distances between point pairs) for the relevant maps in Figure 3 for each of the nonlinear coupled systems described above. These isometry constants, shown in Figure 5, are each computed by randomly sampling 5,000 point pairs \((p,q)\) from the domain of each map. The empirical isometry constants (minimum and maximum distance ratios) represented by dark traces in Figure 5 are used in the theory in Section 3. Many closeness-principle-based methods operate not on isometry constants themselves, but on other statistics describing how certain maps tend to preserve distances between point pairs. To provide a more fulsome understanding of the distribution of the distance ratios of each mapping, the dashed lines in Figure 5 show the 5th, 50th, and 95th percentiles of the distance ratio computed for the 5,000 simulated point pairs. We also show, in Figure 6, how very small and large percentiles of these distributions of distance ratios are affected by coupling strength. Statistics such as these may be helpful in practice to reduce the sensitivity of purely distance-based tests to noise. Isometry increasing with coupling strength \(C\) is indicated in Figure 6 by traces moving closer to unity as they become darker. As expected, because the \(x\) subsystem is independent of the coupling strength \(C\), the distance ratio of \(\Phi_{\gamma_{x}}\colon M_{x}\to N_{x}\) remains constant (up to "error" stemming from the Monte-Carlo simulation used to compute distance ratios) as \(C\) increases. The pattern of distance preservation not changing with coupling strength \(C\) is reflected in Figure 5 by roughly horizontal lines (indicating that distance preservation is static as \(C\) increases along the horizontal axis), and in Figure 6 by roughly overlapping lines (indicating that distance preservation is static as \(C\) increases with darker traces). Meanwhile, the stability of \(\Phi_{\varphi_{x}}\), \(\Phi_{\gamma_{y}}\), and \(\Phi_{\varphi_{y}}\), each of which encodes traits of the complete system, does in fact change as \(C\) increases.
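As a rough, self-contained illustration of the computation just described (our code; the embedding parameters, series length, and pair count are placeholders rather than the values used for the figures), the sketch below simulates the coupled Henon maps above at a given coupling strength and reports percentiles of the distance ratio of \(\Psi_{y\to x}\):

```python
import numpy as np

def coupled_henon(C, T=3000, transient=1000):
    """Unidirectionally coupled Henon maps (x -> y); returns post-transient x1, y1."""
    x1, x2, y1, y2 = 0.7, 0.0, 0.91, 0.7
    xs, ys = [], []
    for t in range(T + transient):
        x1n = 1.4 - x1**2 + 0.3 * x2
        y1n = 1.4 - (C * x1 * y1 + (1 - C) * y1**2) + 0.3 * y2
        x1, x2, y1, y2 = x1n, x1, y1n, y1       # x2, y2 take the previous x1, y1
        if t >= transient:
            xs.append(x1)
            ys.append(y1)
    return np.array(xs), np.array(ys)

def delay_embed(s, m=3, tau=1):
    idx = np.arange((m - 1) * tau, len(s))
    return np.column_stack([s[idx - k * tau] for k in range(m)])

def psi_ratio_percentiles(C, m=3, tau=1, n_pairs=5000, seed=0):
    """Percentiles of ||Psi_{y->x}(p) - Psi_{y->x}(q)||^2 / ||p - q||^2 over random
    point pairs on N_y (the style of statistic plotted in Figures 5-6 for this map)."""
    xs, ys = coupled_henon(C)
    Nx, Ny = delay_embed(xs, m, tau), delay_embed(ys, m, tau)
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(Ny), size=n_pairs)
    j = rng.integers(0, len(Ny), size=n_pairs)
    dy = np.sum((Ny[i] - Ny[j]) ** 2, axis=1)
    dx = np.sum((Nx[i] - Nx[j]) ** 2, axis=1)
    r = dx[dy > 0] / dy[dy > 0]
    return np.percentile(r, [5, 50, 95])

for C in (0.0, 0.3, 0.6):
    print(C, psi_ratio_percentiles(C))
```

Against Conjecture 1, one would look for the upper percentiles of this ratio to shrink as \(C\) grows; as reported below, that pattern appears only inconsistently across the three systems.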
Figure 5: Distance ratios \(\|\Phi(p)-\Phi(q)\|_{2}^{2}/\|p-q\|_{2}^{2}\) of various maps \(\Phi\) as coupling strength \(C\) increases. Solid dark traces denote upper and lower isometry constants; lighter dashed traces denote 5th, 50th, and 95th percentiles of distance ratios. Each statistic is computed from 5,000 sampled point pairs. The map \(\Phi_{\varphi_{x}}\colon M_{xy}\to N_{x}\) maps the entire coupled system \((x,y)\) to the delay embedding of only the _driving_ subsystem. As coupling strength \(C\) increases (along the horizontal axis in Figure 5 and as traces become darker in Figure 6), properties of the response system could change in ways that are not reflected in \(N_{x}\). We thus expect \(\Phi_{\varphi_{x}}\) to become less stable as \(C\) begins to increase (allowing \(\varphi_{x}\) to obtain less information about the complete \((x,y)\) system), and then to become more stable as \(C\) increases further and synchronization is induced (allowing the state of the driving subsystem that \(\varphi_{x}\) measures to again reveal the state of the entire coupled system). The magnitude of this effect, however, depends on an amalgamation of system- and measurement-specific traits. In the systems considered here, the second part of the pattern is indeed borne out in the Henon-Henon system, in which _identical_ synchronization occurs just before \(C=0.7\). In the remaining two systems, however, we see a general trend of _increasing_ stability as \(C\) increases -- even before the onset of synchronization. In the cases where \(\Phi_{\varphi_{x}}\) is unstable, it is usually because of an unfavorable _lower_ isometry constant, indicating that \(\Phi_{\varphi_{x}}\) collapses points that are far on \(M_{xy}\) to similar states of \(N_{x}\). This pattern is particularly evident in Figure 6. The map \(\Phi_{\gamma_{y}}\colon M_{y}\to N_{y}\) transforms the response subsystem \(y\) alone to its delay embedding. The effect of coupling strength \(C\) on the stability of this map is to change the properties of the response subsystem. If this increased coupling strength results in a more complex, higher dimensional, or higher activity attractor for the response subsystem, we might expect the stability of the map \(\Phi_{\gamma_{y}}\) to worsen. However, in Figures 5 and 6 we see limited effects of \(C\) on stability, suggesting more optimistically that properties of the \(y\) subsystem that are dependent on \(C\) do not have significant impact on the stability of the delay embedding operator. The map \(\Phi_{\varphi_{y}}\colon M_{xy}\to N_{y}\) transforms the entire coupled system to the delay embedding of only the response subsystem. In this sense, \(\Phi_{\varphi_{y}}\) is of particular interest in understanding the closeness principle because it quantifies -- using only information about stability -- the core principle that as coupling strength \(C\) increases from zero the delay embedding of the _response subsystem alone_ "contains information about" the complete system (see Section 3). In the systems considered here, however, empirical evidence of this effect is modest. In cases where we see increasing isometry as \(C\) increases (reflected in Figure 5 by an increased convergence of traces towards unity and in Figure 6 by traces converging toward unity as they become darker), it is not meaningfully different from those of \(\Phi_{\varphi_{x}}\), suggesting that increasing (undirected) synchronization may be more at play than asymmetric coupling. 
Figure 6: Percentiles of distance ratios \(\left\|\Phi(p)-\Phi(q)\right\|_{2}^{2}/\left\|p-q\right\|_{2}^{2}\) for various maps \(\Phi\) as coupling strength \(C\) increases. Darker colored traces represent stronger coupling. Tightly clustered traces indicate that map stability is less sensitive to \(C\), while more broadly spread traces indicate that map stability is more sensitive to \(C\). In Figure 6, we see that the impact of \(C\) is much different between the three systems. This suggests, yet again, that the strength of the "closeness principle" effect is impacted by system- and measurement-specific properties beyond strength of causal interaction alone. The projection \(\pi_{x}\colon M_{xy}\to M_{x}\) provides insight into the relative magnitudes of the complete system and driving subsystem. As expected, the stability of this map in the Henon-Henon system rapidly increases as synchronization occurs, with distance ratios converging to \(0.5\) as the two subsystems become identical. In the Rossler-Rossler system, \(\pi_{x}\) becomes increasingly isometric as \(C\) increases (particularly when comparing percentiles of distance ratios), suggesting stronger synchronization. As expected, the highest percentiles of distance ratios are all close to \(1\) (reflecting states in \((x,y)\) where \(y\) has large magnitude while \(x\) has low magnitude). However, the variance observed in lower percentiles of distance preservation suggests caution for the applicability of our theory in light of Assumption 10. Most relevant to analyzing closeness-principle-based methods is \(\Psi_{y\to x}\colon N_{y}\to N_{x}\), which empirically describes the relationship that Conjecture 1 is concerned with. Conjecture 1 postulates that when \(C=0\) (i.e., \(x\perp y\)) \(\Psi_{y\to x}\) will not be stable, while when \(C>0\) (i.e., \(x\to y\)) \(\Psi_{y\to x}\) will be more stable. Figures 5-6 indeed reflect some of this pattern: as \(C\) increases, \(\Psi_{y\to x}\) becomes more stable. However, we do not consistently observe \(\Psi_{y\to x}\) becoming _less expansive_ as \(C\) increases, the key pattern shown in our carefully controlled linear test system of Section 3.1.1. As we show next, this limitation is inherited by existing distance-preservation-based heuristic methods for detecting the direction of causal influence. ### Heuristics used by other methods Although Section 4.1 found limited evidence for the existence of a purely-distance-based form of Conjecture 1, many techniques have obtained successful -- if sometimes inconsistent -- results using this principle. In this section we implement a sample of these heuristics and interpret their results in the context of our work. Shown in Figure 7 are results from the relative-distance-based metrics of [19] and [20] described in Section 2.2. Both techniques, roughly speaking, compare how well nearest neighbors on \(N_{y}\) correspond to nearest neighbors on \(N_{x}\) [see (3) and (4)]; [19] directly uses distances while [20] uses ranked distances. We see that both techniques are effective in detecting coupling between \(x\) and \(y\): as \(C\) increases, the average distance to \(k\) mutual neighbors (\(\overline{D_{i}^{k}}(X\mid Y)\)) and \(\overline{G_{i}^{k}}(X\mid Y)\) smoothly decrease from the mean inter-point distance on \(N_{x}\) (\(\overline{D_{i}}(X)\) and \(\overline{G_{i}}(X)\)) to the mean distance to each point on \(N_{x}\)'s \(k\) nearest neighbors (\(\overline{D_{i}^{k}}(X)\) and \(\overline{G_{i}^{k}}(X)\)).
For both the Henon-Henon and Rossler-Rossler systems, these methods effectively detect coupling direction: \(M(X\mid Y)\) and \(L(X\mid Y)\) rise quickly with \(C\), while \(M(Y\mid X)\) and \(L(Y\mid X)\) -- the metrics used to quantify coupling from \(y\to x\) -- begin to increase only at the onset of synchronization. For the Rossler-Lorenz system, however, the metrics quantifying \(x\to y\) and \(y\to x\) rise simultaneously, and it is not possible to detect coupling direction. Previous work [20], [23], [39] has shown how these measures depend on system-specific properties like attractor dimension, so it is unsurprising that we see large differences in efficacy between systems.

Figure 7: Other heuristics based on the distance preservation of \(\Psi_{y\to x}\) for the coupled nonlinear systems defined above as the system coupling strength \(C\) increases. (a–b) directional influence metrics (3) of [19] and means of constituent terms; (c–d) rank-based directional influence metric (4) of [20] and means of constituent terms.

As discussed in the introduction, Takens' theorem guarantees only that \(\Psi_{y\to x}\) is continuous. We next consider two heuristic methods that are more closely tied to the _continuity_ of \(\Psi_{y\to x}\) than to its stability. The convergent cross-mapping (CCM) algorithm of [17] (see Section 2.2) relies on the principle that when \(x\to y\), \(N_{y}\) will become increasingly predictive of the measurements obtained from \(x\) as time series length increases. As shown in Figure 8(a-b), we find that this method is generally effective in the three systems we consider. With the exception of the Rossler-Rossler example, \(N_{y}\) is unable to reconstruct \(x\) when \(C=0\), and cross-map efficacy increases monotonically with \(C\). Meanwhile, as shown in Figure 8(b), the ability of \(x\) to reconstruct \(N_{y}\) does not increase with time series length, and high reconstruction accuracy is achieved only at values of \(C\) close to the onset of synchronization. An alternate continuity heuristic developed by [32] was proposed as part of a rigorous theoretical framework for detecting asymmetric coupling relationships by [10] (see Section 1). This heuristic finds sets on \(N_{x}\) and \(N_{y}\) that satisfy the epsilon-delta definition of continuity, then assesses the probability that these points were found by chance under a specific null hypothesis [32]. Evidence for continuity, denoted by \(\Theta_{C^{0}}(\Psi_{y\to x})\), is provided by this probability increasing to near-unity as \(\varepsilon\) increases. Evidence for continuity of the inverse map \(\Psi_{x\to y}\) is computed similarly. Since a homeomorphism is a continuous, one-to-one map with a continuous inverse, evidence that a map is homeomorphic is provided by the combination of both continuity and inverse continuity. Thus, the product \(\Theta_{C^{0}}(\Psi_{y\to x})\cdot\Theta_{C^{0}}(\Psi_{x\to y})\) increasing to unity with \(\varepsilon\) provides evidence that the map is homeomorphic. As shown in Figure 8(c-e), we find evidence in all three systems of increasing continuity as \(C\) increases. In the Henon-Henon map, \(\Theta_{C^{0}}(\Psi_{y\to x})\) increases with \(C\) much more quickly than \(\Theta_{C^{0}}(\Psi_{x\to y})\) does (until \(C\) becomes large enough for identical synchronization), providing evidence for the relationship \(x\to y\). In the Rossler-Lorenz and Rossler-Rossler systems we similarly see \(\Theta_{C^{0}}(\Psi_{y\to x})\) and \(\Theta_{C^{0}}(\Psi_{x\to y})\) increase with \(C\).
However, here we see less asymmetry between \(\Theta_{C^{0}}(\Psi_{y\to x})\) and \(\Theta_{C^{0}}(\Psi_{x\to y})\), providing less value for determining the _direction_ of asymmetric coupling. This closely follows the results from CCM shown in Figure 8(a-b).

Figure 8: Other heuristics based on continuity-like properties of \(\Psi_{y\to x}\) for the coupled nonlinear systems defined above as the system coupling strength \(C\) increases. (a–b) convergent cross-mapping heuristic of [17]; (c–e) continuity heuristic of [32] for (c) continuity, (d) inverse continuity, and (e) injectivity.

## 5 Discussion

This paper examined the "closeness principle" that underlies many state-space methods for detecting causal interaction in coupled dynamical systems -- that is, the principle that when \(x\to y\), pairs of points that are close together on the delay embedding of \(y\) will correspond to points that are also close on the delay embedding of \(x\). We applied the recent stable Takens' theorems of [8, 9] to develop guarantees for one straightforward method for operationalizing the closeness principle to detect causal interaction. _Theoretical takeaways for closeness-principle-based methods:_ While our results provide a foundation for developing more theoretically-grounded tests for causal interaction, they also suggest that the distance preservation of delay embeddings is strongly sensitive to a medley of system properties that are unrelated to causal structure. In addition, we showed empirically that common coupled dynamical systems do not satisfy literal definitions of the closeness principle, and that typical closeness-principle-based heuristics depend not only on the direction of asymmetric coupling but also on a myriad of other system-specific traits. Continuity-based properties implied by the closeness principle are used to justify many state-space methods in the literature. A less explored fundamental principle operationalized by some other heuristics is the _injectivity_ of maps between delay embeddings. Many approaches (e.g., [33]) are justified on the basis that the causal structure underlying a coupled system often creates "folds" in the joint attractor \(M_{xy}\) [7]. When this is the case, the (non)injectivity of \(\Psi_{y\to x}\) or \(\pi_{x}\) can provide evidence of causal structure. Although our theoretical results for the map \(\Phi_{\varphi_{y}}\) in Section 3.1 focused on linear observation functions and systems, we found that a quantity measuring a graded notion of \(\pi_{x}\)'s noninjectivity plays a key role in developing a nonlinear analogue of Proposition 7 using the more general stable Takens' theorem of [9]. This is a more nuanced analogue to the assumption on the noninjectivity of \(\pi_{x}\) in [10]. The injectivity and invertibility of \(\pi_{x}\) and \(\Psi_{y\to x}\) are also closely related to the synchronization of the response system to the driving system. _Takeaways for practitioners_: Practitioners using state-space algorithms for detecting causal interactions should be aware that individual methods operationalize slightly different proxies for the continuity and injectivity of maps between delay embeddings, each of which is sensitive not only to changes in coupling strength, but also to a large collection of unmeasurable system-specific properties whose effects can easily be conflated with coupling.
Like [3, 46], we suggest that practitioners apply a collection of these techniques rather than relying on the output of a single algorithm in this class of methods which may be overly-sensitive to specific system properties outside of coupling strength. The consistent output of multiple methods can be interpreted as circumstantial evidence for asymmetric coupling. Previous work [22, 27, 36, 39, 46, 47] has studied many of these confounding system properties and suggested that the success of closeness-principle-based methods depends on a smorgasboard of system properties not directly related to causal structure, including observation and process noise statistics [7, 20, 36, 48, 49], relative system dimension and activity [23, 24, 37, 39, 50], the injectivity of the combined system 9[4, 7, 33], the presence of transient dynamics [36] or generalized synchronization [16, 36, 51], the dimension \(m\) and time lag \(\tau\) used to construct delay embeddings [36, 42, 45, 48], and consistency with the (unverifiable) assumptions of Takens' theorem [47]. Our results underline the strong and less explored role of the measurement function used to construct the delay embedding -- in particular, how well the measurement function aligns with structure of a system's attractor. Footnote 9: Specifically, if the projection \(\pi_{x}:M_{xy}\to M_{x}\) is injective, it and \(\Psi_{y\to x}\) may be invertible. The noninjectivity of \(\pi_{x}\) is often described as creating “folds” in \(M_{xy}\)[7, 33] that allow inference; see [10, pg. 344]. It is also important for practitioners to carefully consider the sense in which closeness-principle-based algorithms detect "causal" interactions. Precisely speaking, the causal relationship \(x\to y\) implies that an intervention on \(x\) would produce a corresponding change in \(y\). The class of methods described in Section 2.2 provide only necessary conditions for the existence of causal relationships, so while it is justified to interpret a negative result from these tests as implying the _absence_ of a causal relationship, a positive result does not prove the _existence_ of a causal relationship. This interpretation is consistent with discussion of state-space methods in [10, 17, 22, 52] -- and best practice for interpreting the output of other methods such as transfer entropy [53] and Granger (predictive) causality [54]. In our theoretical results, we required a strong causal sufficiency assumption -- which eliminates the possibility of, for instance, confounding interactions -- to derive necessary and sufficient conditions on the relationships between state space maps for the existence of causal interactions. Methods that do not implicitly or explicitly make this type of assumption instead require data with fortuitous invariance characteristics [55] or the ability to perform (often difficult or impossible) experiments [6]. _Future work_: Our results suggest several avenues for future work. The theoretical results in Section 3 were phrased in terms of simple isometry constants (i.e., minimum and maximum distance ratios); results that consider more nuanced statistics of how mappings between delay embeddings preserve distances may be more useful for developing practical and robust tests. Second, translating results phrased in terms of _distance_ preservation to consider _nearest neighbor_ preservation would help close the gap between the theory we have developed here and many algorithms used in practice. 
Although we expect theoretical guarantees on when nearest neighbor relationships are preserved to be more difficult to derive than purely distance-based guarantees, computing distance ratios for pairs of points is computationally expensive and sensitive to observation noise; nearest-neighbor-based tests have been used in practice both to determine embedding parameters (e.g., [26, 56]) and for detecting causal interaction [19, 23, 39]. Third, our results used the stable Takens' theorems of [8, 9], which provide only _sufficient_ conditions for the preservation of distance ratios between a system and its delay embedding. A key finding of our work was that intrinsic system properties, and the relationship between a system and the measurement function used to observe it, can have large impacts on closeness-principle-based heuristics. The development of _necessary_ conditions for embedding stability would help to disentangle the impact of causal interaction from other system properties. Finally, while our results deal with coupled systems comprised of only two subsystems, the manifold filtration framework of [10] might be used to extend these results to more complex systems. ## Acknowledgements This work was supported by NSF grants CCF-1409422 and CCF-1350954, NIH grant 1R01NS115327, and the National Defense Science & Engineering Graduate (NDSEG) Fellowship. M.R.O. thanks Adam Willats for helpful discussions.
2306.10543
UniMC: A Unified Framework for Long-Term Memory Conversation via Relevance Representation Learning
Open-domain long-term memory conversation can establish long-term intimacy with humans, and the key is the ability to understand and memorize long-term dialogue history information. Existing works integrate multiple models for modelling through a pipeline, which ignores the coupling between different stages. In this paper, we propose a Unified framework for Long-term Memory Conversations (UniMC), which increases the connection between different stages by learning relevance representation. Specifically, we decompose the main task into three subtasks based on probability graphs: 1) conversation summarization, 2) memory retrieval, 3) memory-augmented generation. Each subtask involves learning a representation for calculating the relevance between the query and memory, which is modelled by inserting a special token at the beginning of the decoder input. The relevance representation learning strengthens the connection across subtasks through parameter sharing and joint training. Extensive experimental results show that the proposed method consistently improves over strong baselines and yields better dialogue consistency and engagingness.
Kang Zhao, Wei Liu, Jian Luan, Minglei Gao, Li Qian, Hanlin Teng, Bin Wang
2023-06-18T12:30:50Z
http://arxiv.org/abs/2306.10543v1
# UniMC: A Unified Framework for Long-Term Memory Conversation via Relevance Representation Learning ###### Abstract Open-domain long-term memory conversation can establish long-term intimacy with humans, and the key is the ability to understand and memorize long-term dialogue history information. Existing works integrate multiple models for modelling through a pipeline, which ignores the coupling between different stages. In this paper, we propose a **U**nified framework for Long-term **M**emory **C**onversations (UniMC), which increases the connection between different stages by learning relevance representation. Specifically, we decompose the main task into three subtasks based on probability graphs: 1) conversation summarization, 2) memory retrieval, 3) memory-augmented generation. Each subtask involves learning a representation for calculating the relevance between the query and memory, which is modelled by inserting a special token at the beginning of the decoder input. The relevance representation learning strengthens the connection across subtasks through parameter sharing and joint training. Extensive experimental results show that the proposed method consistently improves over strong baselines and yields better dialogue consistency and engagingness. ## 1 Introduction Open-domain dialogue systems aim to establish long-term connections with users [14]. Current dialogue models based on pre-training [1, 1, 13, 15] perform well on conversational relevance and fluency, with the advantage of capturing domain common sense and background knowledge. However, these models can neither handle long-term conversational context nor establish long-term connections with users. Many studies on persona-based dialogue [11, 13, 14, 15] facilitate in-depth chat by assigning profile information. These works mainly focus on the inconsistency of chatbot persona and context. Nevertheless, the above works can only consider the consistency of the chatbot's persona but ignore the memory and utilization of the user's persona. In addition, these methods lack the long-term persona ability [23]. This ability requires understanding and remembering long-term dialogue history information. After interacting with a human, the chatbot can memorize and update the personas of the user and itself, which is used appropriately to make the response more attractive and maintain the consistency of the dialogues. Recent works have made addressed this issue. xu2022multimulti propose a long-term open-domain conversation task to advance research by constructing a dataset. It builds retrieval-augmented generative models and read-write memory-based models for modeling long-context. Nevertheless, its stored memories are not updated. xu2022multi constructed a multi-turn mutual persona dialogue dataset and proposed a long-term memory conversation task, in which they augmented dynamic summarization and memory updating. However, it fails to summarize dialogue into persona but merely to classify sentences. Moreover, these methods adopt the strategy of submodule execution, which a different model implements, and ignore the coupling between these modules. To address the issues discussed above, we pro Figure 1: Our proposed framework (UniMC) for Open-Domain Long-Term Memory Conversation. pose a unified modeling perspective that decomposes long-term memory conversation into related subtasks. These subtasks learn the interconnections across tasks through multi-task learning. 
In addition, we propose a method to guide the execution of each subtask and strengthen the connection across them by learning relevance representation. As shown in Figure 1, we decompose the main task into three stages from a unified perspective based on probability graphs: 1) Summarization: the model inputs the context and outputs the memory that needs to be summarized. 2) Selection: the model retrieves the relevant memory by matching the context with the summary in the memory pool. 3) Generation: the model generates a response through the retrieved memory and context. We add a special token for model's decoder input, which corresponds to the output representation as relevance representation. This represents the relevance between the query and memory pool proxy, which guides subsequent decoding or classification. In summary, the main contributions of this paper can be summarized as follows: * We propose a unified framework with multi-task learning to compact historical dialogue into memory, judge the relevance between the present query and the memory, and use the relevant memory to generate a response for the present query. * We enhance the connection across subtasks by using the same relevance judgment, which can explicitly guide model outputs. * Extensive experimental results show that the proposed method outperforms strong baselines and yields better dialogue consistency and engagingness. ## 2 Related Work ### Persona-based dialogue Persona-based dialogue generates consistent and attractive responses that utilize profile information. The critical challenge is making the chatbot exhibit a consistent personality to gain users' trust and their long-term confidence Huang et al. (2020). Early work encodes personas into distributed embeddings Li et al. (2016), using latent variables without explicit profile descriptions, which makes generating engaging utterances challenging. In recent years, Zhang et al. (2018) propose a dialogue generation task conditioned on explicit profile information and presented the Persona-Chat dataset, which made chit-chat more engaging. Yavuz et al. (2019) enables the model to generate responses tailored to persona descriptions by extending the pointer generator network See et al. (2017), which allows the decoder to engage and replicate external knowledge hierarchically. Song et al. (2019) propose a memory-augmented architecture to exploit persona information. It combines a conditional variational autoencoder model to address the one-to-many problem. Madotto et al. (2019) regard persona-based dialogue learning as a meta-learning paradigm that can generate personalized responses using only a few dialogue samples collected from the same user. Song et al. (2021) decompose persona-based dialogue generation into response generation and consistency understanding tasks, augmenting coherence constraints with additional natural language inference data. However, these persona-based dialogue methods' personas are static and not summarized and memorized over the dialogue. This paper proposes a unified modelling framework to support the model's ability to update and memorize persona. ### Pre-trained Dialog Models Pre-trained language models (PLM) perform very well on many tasks Radford et al. (2018); Devlin et al. (2019); Raffel et al. (2020); Lewis et al. (2020); Shao et al. (2021), which proves that fine-tuning PLM can yield better performance than training from scratch. 
Due to the discrepancy between the dialogue corpus and the general document, much work on pre-trained dialog models has recently emerged Adiwardana et al. (2020); Roller et al. (2021); Zhou et al. (2021); Gu et al. (2022). Zhang et al. (2020) proposed DialoGPT to overcome bland and uninformative response generation through mutual information maximization training. Bao et al. (2020) proposed PLATO, a pre-training dialog model based on discrete latent variables, to solve the one-to-many problem. And then proposed to use curriculum learning Bao et al. (2021) and larger-scale pre-training model Bao et al. (2021) to improve the model's performance. Chen et al. (2022) introduces continuous latent variables in an enhanced transformer pre-training framework to increase the correlation and diversity of responses. Recently, some works (Thoppilan et al., 2022) have explored models with larger parameter scales and also tried to incorporate more external knowledge to enhance the dialogue. However, most of these models do not possess long-term conversation capacity. In this paper, we increase the long-term ability to interact with users based on pre-training models. ## 3 Methodology ### Task Decomposition Following Xu et al. (2022), we first formalize the long-term memory conversation task definition. Given a dialog context: \(c=\{u_{1},b_{1},u_{2},b_{2},\ldots,u_{t-1},b_{t-1}\}\) and a query \(q=u_{t}\), where \(u\) and \(b\) represent the user and the chatbot, and \(c\) may contain many sessions. Long-term memory conversation aims to predict response \(b_{t}\) based on long-session dialogue, with the formula \(P(b_{t}|x)\), where \(x=[c;q]\). A complete long-term memory conversation model should be able to find relevant memories, generate reasonable responses based on relevant memories, and make timely memory updates by summarization. To model long-term chat between the user and chatbot, we decompose the task into three stages by Bayes' rule: \[P(b_{t}|x)\propto P(m|x_{1})P(z|x_{2},m^{\prime})P(b_{t}|x_{2},m^{\prime},z), \tag{1}\] where \(x=[x_{1};x_{2}]\) is divided into historical sessions \(x_{1}\) and current session \(x_{2}\), \(z\) represents the relevance between memory and query. \(m^{\prime}\in\mathrm{M}_{u}\cup\mathrm{M}_{b}\) represents query-relevant memories. Each speaker has a corresponding memory pool \(\mathrm{M}\), consisting of a series of persona descriptions \(m\). These persona descriptions are summaries of the speaker's historical sessions. We define \(\mathrm{M}_{u}=\{m_{1}^{u},m_{2}^{u},\ldots,m_{m}^{u}\}\) and \(\mathrm{M}_{b}=\{m_{1}^{b},m_{2}^{b},\ldots,m_{n}^{b}\}\) to represent the persona memory pool of user and chatbot, respectively. We expect \(z\) also to guide the model to generate responses consistent with memory. We interpret the first term in Eq.1 as a conversation summarization task, the second as a memory retrieval task, and the last as a memory-augmented generation task. Its corresponds to the three stages of summarization, selection, and generation. The intuition behind the decomposition is that for a complete dialogue modeling between the user and the chatbot, \(x\) may contain many rounds of conversation or multiple sessions. For long-term conversational context, the model needs to summarize the dialogues from the current session, recall (retrieve) them appropriately, and reply based on the retrieved content. 
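To make the decomposition concrete, the pseudocode below sketches how the three stages interact at inference time under a single shared model. The method names (`score_relevance`, `generate_response`, `summarize`) and the top-\(k\) retrieval are illustrative placeholders rather than the actual interface, and de-duplication of newly written memories is omitted for brevity.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MemoryPool:
    """Persona memory pools M_u (user) and M_b (chatbot)."""
    user: List[str] = field(default_factory=list)
    bot: List[str] = field(default_factory=list)

    def all(self) -> List[str]:
        return self.user + self.bot

def chat_turn(model, context: List[str], query: str, memory: MemoryPool, top_k: int = 5) -> str:
    """One long-term memory conversation turn, following the factorization in Eq. 1."""
    # Selection: score each stored summary against the context and query, keep top-k.
    scored = sorted(((model.score_relevance(context, query, m), m) for m in memory.all()),
                    reverse=True)
    relevant = [m for _, m in scored[:top_k]]

    # Generation: reply conditioned on the context and the retrieved memories.
    reply = model.generate_response(context, query, relevant)

    # Summarization: compact the current session into new persona memories.
    new_user, new_bot = model.summarize(context + [query, reply])
    memory.user.extend(new_user)    # de-duplication against existing memories omitted
    memory.bot.extend(new_bot)
    return reply
```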
### Framework

We model the long-term memory conversation task based on the Transformer (Vaswani et al., 2017), where the encoder encodes the context and memory, and the decoder is used for three stages: Conversation Summarization, Memory Retrieval, and Memory-Augmented Generation. The three stages are unified in form and share the same model parameters.

**Encoder.** Given the context \(c\), we encode the context through the Transformer encoder, where a role token is inserted into each utterance to distinguish the user and the chatbot. Similarly, the input of persona memory is also distinguished by inserting role tokens, and [M] is inserted at each starting position to represent the memory input. The corresponding output \(h^{[M]}\) of [M] at the encoder is called the memory pool proxy. When only [M] is encoded, it represents a pattern of all memories; when specific memories are encoded together, it represents the specific pattern of those memories. We use Fusion-in-Decoder (**FiD**) (Izacard and Grave, 2020) for memory integration and conduct an ablation with Fusion-in-Encoder (**FiE**), which concatenates the context and memory as text and feeds them into the encoder.

**Decoder.** According to the task decomposition, we insert a special token [CLS] at the starting position of the decoder input for each subtask to encode the relevance representation. To distinguish the different generation tasks, we insert a special token [CMP] or [GNR] to represent the decoding of summarization or response generation, respectively.

Figure 2: UniMC for the conversation summarization task; the token [M] is the memory pool proxy token at the encoder. The first token [CLS] at the decoder is used to introduce the relevance representation, and the second token [CMP] is used to generate the summary.

#### 3.2.1 Conversation Summarization

Acquiring memory from context can be viewed as conversation summarization. Given the context \(c=\{u_{1},b_{1},\ldots,u_{t-1},b_{t-1}\}\) and query \(u_{t}\), the model outputs the persona memory \(m\). In Eq. 1, we mentioned that the relevance representation captures the semantic association of the query and memory. As shown in Figure 2, we further introduce the memory pool proxy token [M]: \[P(m|x)=P(z|x)P(m|x,z), \tag{2}\] where \(z\) contains the relevance between the memory pool proxy and the query, and \(x=\{[\mathrm{M}];u_{1},b_{1},\ldots,u_{t-1},b_{t-1},u_{t}\}\) is the model input. The memory pool proxy token is viewed as an abstract summary that represents the abstract _pattern_ of all persona memories. We compute \(p(z|x,[\mathrm{M}],w_{0})=\mathrm{softmax}(\mathrm{MLP}(h_{z}))\) as a binary classification task by decoding the relevance representation \(h_{z}\) at the first start token \(w_{0}=[\mathrm{CLS}]\) of the decoder. If the current query \(u_{t}\) is related to the _pattern_, then the query and context can be compacted into a summary, in which case the relevance label \(y_{cs}\) is 1; otherwise, it is 0. We start decoding the sequence at the second token, \(P(m|x,z)=\prod_{t=1}^{|m|}p(w_{t}|x,z,w_{<t})\), where \(w_{0}=[\mathrm{CMP}]\) is interpreted as the task identifier. Finally, we minimize the negative log-likelihood loss for conversation summarization: \[\mathcal{L}_{cs}=-\log P(m|x). \tag{3}\]

#### 3.2.2 Memory Retrieval

In Eq. 1, memory retrieval can be modeled as a sentence pair classification task, which judges the relevance between the current query and the memory.
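Before turning to how positive and negative memories are constructed for the retrieval subtask, the following PyTorch-style sketch shows how the shared relevance representation \(h_{z}\) and the token-level losses can be combined in one head. The module is illustrative: the backbone, the tensor layout, and the alignment of `target_ids` with decoder positions are assumptions rather than a description of an actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelevanceHead(nn.Module):
    """Shared head for the [CLS] relevance logits and the decoding losses.

    `decoder_hidden` is assumed to come from a shared encoder-decoder backbone
    (e.g., CPT), with [CLS] at decoder position 0 and the task token
    ([CMP] or [GNR]) at position 1, followed by the target sequence.
    """
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.relevance_mlp = nn.Sequential(
            nn.Linear(d_model, d_model), nn.Tanh(), nn.Linear(d_model, 2))
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, decoder_hidden, relevance_label, target_ids=None):
        h_z = decoder_hidden[:, 0]                    # relevance representation at [CLS]
        rel_logits = self.relevance_mlp(h_z)          # binary relevance p(z | ...)
        loss = F.cross_entropy(rel_logits, relevance_label)
        if target_ids is not None:                    # summary or response tokens
            lm_logits = self.lm_head(decoder_hidden[:, 1:])
            loss = loss + F.cross_entropy(
                lm_logits.reshape(-1, lm_logits.size(-1)),
                target_ids.reshape(-1),
                ignore_index=-100)                    # padded positions are masked
        return loss, rel_logits
```

The summarization and generation subtasks would use both terms, while the retrieval subtask below uses only the relevance term, so all three subtasks contribute gradients to the same relevance parameters.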
As suggested by [23], the memory in each training sample can be divided into positive persona memory and negative persona memory. The positive memory is defined as the persona used in the current user's utterances and the bot's responses (both the bot persona and the user persona observed by the bot). In contrast, the negative memory is the remaining persona in the current session. As shown in Figure 3, given a context \(c\), a query \(u_{t}\), and a summary \(m\) sampled from the memory pool \(\mathrm{M}\), we learn the relevance representation at the first starting token of the decoder: \[\mathcal{L}_{mr}=-\log P(z|x,m). \tag{4}\] where \(P(z|x,m)=P(z|x,m,w_{0})\) and \(w_{0}=[\mathrm{CLS}]\) is the start token of the decoder input. Similar to the relevance judgment in conversation summarization, the label is determined by whether the persona \(m\) is positive or negative. Note that \(m\) represents just one persona, randomly sampled from both positive and negative personas, and the token [M] is inserted at the starting position of each \(m\).

Figure 3: UniMC for the memory retrieval task, in which the decoder input contains only the token [CLS].

#### 3.2.3 Memory-Augmented Generation

In memory-augmented generation, we use the retrieved memory as external information to generate a more realistic reply. As before, the relevance representation also guides response generation. However, in practice, not all of the memories retrieved from a large-scale memory pool may be relevant to the current conversation turn. To address this issue, we add some noise to the retrieved memories by randomly introducing negative memories. This allows the relevance representation to decide whether to use these memories, which can alleviate the problem of over-recall.

Figure 4: UniMC for the memory-augmented generation task; the first token [CLS] at the decoder input models the relevance representation, and the second token [GNR] generates the response.

The task is then constrained with a classification:
Finally, **Summarization**: perform the conversation summarization according to the context and query, extract the memory of the user and chatbot, respectively. The encoder encodes each memory, and the representation corresponding to the first start token [M] is taken as the memory vector and then written according to the cosine similarity: \[s=cos(h_{m_{i}}^{[M]},h_{m_{j}}^{[M]}), \tag{7}\] where \(m_{i}\in\mathrm{M}\) and \(m_{j}\) is the memory that the model summarizes from the context. When \(s>\lambda\), we replace \(m_{i}\) in \(\mathrm{M}\) by \(m_{j}\), otherwise store \(m_{i}\) directly into \(m_{j}\), where \(\lambda\) is a duplicate check threshold. Explicitly Guided DecodingThe decoded tokens are implicitly influenced by relevance representation for the conversation summarization and memory-augmented generation subtasks. However, relevance representation can represent memory-related semantic information of the decoded text. To discover the role of relevance representation, we propose an explicitly guided decoding by relevance representation method (EG), which improves the generation performance by explicitly guiding the decoding process: \[w_{t}=\operatorname*{arg\,max}_{w\in V^{(k)}}\{p\left(w|w_{<t}\right)+\alpha \cdot p(w|h_{z})\}, \tag{8}\] where \(p\left(w|w_{<t}\right)\) is the probability that the model generates the next token. ## 4 Experiments ### Datasets Our experiments are conducted on DuLeMon Xu et al. (2022), the largest dataset of multi-turn Chinese mutual persona chats currently available. DuLeMon consists of two parts, DuLeMon-SELF and DuLeMon-BOTH. In DuLeMon-SELF, the bot only knows its persona, while in DuLeMon-BOTH, the bot also knows part of the user's persona. ### Evaluation Metrics Automatic evaluation MetricsThe BF1 and Rouge Lin (2004) are used to evaluate the conversation summarization. The BF1(%) denotes the harmonic mean of the precision and recall. It evaluates whether the model recognizes a persona in the context that needs to be summarized. Following Xu et al. (2022), we use AUC and Recall@k to evaluate the ability of persona memory retrieval. The PPL, BLEU Papineni et al. (2002), F1(%), DISTINCT-1/2 Lin et al. (2020), and the model-based BERTScore Zhang et al. (2019) are used to evaluate generated and human-annotated reference responses. Human Evaluation MetricsIn human evaluation, we use three utterance-level measures: coherence, consistency, and engagingness. The metrics is similar to previous research Xu et al. (2022), but we score coherence and consistency on a scale from 0 to 1. For engagingness, we have a different definition. This better reflects the plausibility of the generated responses. Four crowdsourced workers evaluate each response, and a higher score indicates better quality. Our discussion for these three metrics are shown in Appendix A. ### Baselines We evaluate UniMC and several strong baselines on DuLeMon for comparison: * EVA 2.0 (Gu et al., 2022): EVA2.0 is a state-of-the-art Chinese dialogue generation model. * EVA2.0-FT: The EVA2.0 model is fine-tuned on the DuLeMon dataset. * CPT-FT: The CPT model (Shao et al., 2021) fine-tuned on DuLeMon dataset. CPT is a Chinese pre-trained unbalanced transformer model. * UniMC: UniMC is our proposed relevance representation-based unified modelling framework for long-term memory conversation. ### Training Details The large-scale model (CPT-large and EVA2.0-xlarge) is first pre-trained during training, where the pre-trained corpus is the DuLeMon-SELF training set. 
Then, the model is finetuned, and the fine-tuned corpus is the DuLeMon-BOTH train set. The pre-trained checkpoints are chosen based on the perplexity of the DuLeMon-SELF validation set. More details of the model are shown in Appendix B. ### Results and Discussion #### 4.5.1 Results of conversation summarization We evaluate the models on the test set to measure the performance of different models on conversation summarization. We use the user persona (unseen) in the test set as a positive sample of the need to summarize and other dialogues that do not need to summarize memory as negative cases. The samples that need to be summarized and those that are not required are 183 and 162, respectively. As shown in table 1, we can see that the BF1 score exceeds 87.4%, which shows that our method effectively identifies whether the dialogue can is compacted into a summary. Other metrics are to evaluate how well the summary matches the reference persona. It can be seen that the CPT-large model consistently outperforms the CPT-based model due to the larger amount of parameters. The experimental results show that the UniMC can effectively extract persona information from the dialogue. #### 4.5.2 Results of Memory Retrieval In comparing different memory retrieval models, we added the CPM baseline (Xu et al., 2022), a rank model and reported the metrics on the test set. Table 2 shows results that comparison of different baselines on the retrieval task. We find that the UniMC consistently outperform CPM, which UniMC(CPT-large) outperforms CPM by 7% and 10% on AUC and Recall@5, indicating that our unified modeling approach demonstrates significant advantages. The experimental results show that our method efficiently retrieves context-related persona. #### 4.5.3 Results of Memory-Augmented Generation On the memory-augmented generation, we compare UniMC and direct fine-tuning with different baselines. Table 3 shows the comparison results of different models, in which the results of PLATO-FT are copied from the original paper (Xu et al., 2022). From these results, we can draw several conclusions: 1) In the experimental results of different backbone fine-tuning, it can be seen that the performance of different backbones significantly differs. In the perplexity metric, this cannot be compared due to the differences in vocabulary and tokenization. However, in terms of other automatic evaluation indicators, the dialogue generation based on CPT fine-tuning is the best. It shows that the pre-trained model that considers understanding and generation may be better than other pre-trained dialog models in long-term memory conversation. 2) In the model comparison with large-scale parameters, UniMC based on CPT-large performs better than EVA2.0-xlarge and has fewer parameters than EVA2.0-xlarge. This result on automatic metrics further illustrates that non-dialog domain \begin{table} \begin{tabular}{l l c c} \hline \hline Model & Backbone & BF1 & Rouge-1/2/L \\ \hline UniMC & CPT-base & 86.32 & 0.675/ 0.480/ 0.663 \\ UniMC & CPT-large & 87.40 & 0.787/ 0.658/ 0.782 \\ \hline \hline \end{tabular} \end{table} Table 1: Experimental results of different summarization methods. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Backbone & AUC & Recall@5 \\ \hline CPM\(\dagger\) & ERNIE & 0.76 & 0.83 \\ \hline UniMC & CPT-base & 0.80 & 0.91 \\ UniMC & CPT-large & **0.83** & **0.93** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of different memory retrieval models. 
The result CPM \(\dagger\)copied from the original paper(Xu et al., 2022). specific pre-trained models seem to perform better on this task than pre-trained dialog models. This result is somewhat similar to the conclusion in Zheng and Huang (2021) because we insert some special tokens on the decoder input to encode relevance representations and distinguish between different tasks. The experimental results in Table 3 show that UniMC based on CPT-large performs better on most automatic metrics. Therefore, CPT-large-based UniMC is used for our subsequent human evaluation. ### Human Evaluation Automatic metrics are limited in evaluating open-domain dialogue tasks Liu et al. (2016). To further validate the model's performance, we conduct a self-chat evaluation. Self-chats are widely used in evaluating dialogue systems Li et al. (2016); Roller et al. (2021); Bao et al. (2021); Xu et al. (2022), where the model plays the roles of both parties in the dialogue. Following Xu et al. (2022), we use the proposed UniMC as a user simulator and ask all chatbots to chat for better control over variables. Crowdsourcing workers only evaluate the responses generated by the chatbot and not the user simulator. Details are as follows. Each chatbot chats with the user simulator for ten episodes, each episode contains four long sessions, and each session contains 16 rounds. As described in Xu et al. (2022), we do not impose any restrictions on a chat other than specifying session openings. Some conversation openings are pre-selected from the DuLeMon test set. These openings are used to start an interactive conversation and ask the two bots to perform a chat in a given context. Table 4 presents the result, from which we can draw several conclusions: 1) The unified modeling method can significantly improve the coherence and engagingness of the dialogue. Regarding dialogue coherence and engagingness, the model achieves scores of 0.796 and 0.824, which are substantially better than the baseline model EVA2.0. UniMC is 0.058 and 0.263 higher than CPT-FT on these two metrics, which indicates that our method can improve the model's dialogue performance and attract users to chat for multiple rounds. In addition, CPT-FT outperforms EVA2.0-FT on these two metrics, indicating that non-dialogue pre-trained models may be more effective for fine-tuning. 2) The unified modeling method can significantly improve dialogue consistency. In the evaluation, consistency needs to consider context consistency and persona consistency. EVA2.0 is more consistent than the fine-tuned EVA2.0. It is possible that direct fine-tuning makes it difficult for the model to maintain persona consistency in long-term dialogue. UniMC is more consistent than EVA2.0 and CPT-FT, showing that unified modeling of different subtasks can improve dialogue consistency. 3) Explicitly guided decoding can significantly improve the performance of the model. UniMC can achieve scores of 0.796, 0.740, and 0.824 than without EG in coherence, consistency, and attractiveness. Experimental results show that introduced relevance representation may contain higher-level semantic information, which prove our proposed method's effectiveness and potential. Moreover, we show some cases in Appendix C to help illustrate the effectiveness of our model. 
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Model & Backbone & PPL & BLUE-1/2 & DISTINT-1/2 & F1 & BERTScore \\ \hline PLATO-FT † & PLATO-2 & 9.38 & 0.194/0.087 & **0.068/0.296** & 22.61 & - \\ EVA2.0-FT & EVA2.0-base & 21.16 & 0.145/0.050 & 0.042/0.167 & 18.38 & 0.6215 \\ CPT-FT & CPT-base & 15.31 & 0.230/0.096 & 0.055/0.243 & 24.63 & 0.6394 \\ EVA2.0-FT & EVA2.0-large & 13.95 & 0.179/0.088 & 0.044/0.170 & 18.69 & 0.6272 \\ CPT-FT & CPT-large & 12.39 & **0.243/0.102** & 0.057/0.249 & **25.81** & **0.6449** \\ \hline UniMC & EVA2.0-large & 13.61 & 0.174/0.064 & 0.065/0.259 & 20.27 & 0.6336 \\ UniMC & CPT-base & 13.72 & 0.209/0.086 & **0.102/0.392** & 25.38 & 0.6399 \\ UniMC & CPT-large & 9.54 & **0.217/0.090** & 0.084/0.339 & **26.14** & **0.6422** \\ \hline \hline \end{tabular} \end{table} Table 3: Experimental results of the automatic evaluation for response generation and long-term memory conversation. The first five models are fine-tuned based on backbones without the capability of long-term memory conversation, so these methods cannot be directly compared with other long-term memory conversation models. We bold the best results in different scenarios, respectively. Perplexity (PPL) is not comparable between models due to the differences in vocabulary and tokenization. The result PLATO-FT †copied from the original paperXu et al. (2022). ## 5 Ablation Study To further analyze the effectiveness of different components, we conduct ablation studies. The results of the ablation study are shown in Table 5, where our backbone adopts CPT-base. First, we remove EG when decoding, which the model's performance has decreased to varying degrees on the memory-augmented generation and dialogue summarization tasks. These experiments show that explicitly guided decoding can improve the quality of model generation, especially in conversation summarization, where it can significantly increase n-gram recall. Second, we no longer model relevance representations and instead guide each task with a different token. Most metrics drop significantly on the three tasks, which illustrates the importance of relevance representation learning. Next, we replace the memory integration method from FiD to FiE and observe that, while the retrieval task shows some improvement, other metrics have declined. Its shows the advantage of FiE in modelling long texts. Finally, we use different start tokens to model the relevance representation for each task. Using the same token is more advantageous than this method because the same token modelling can further facilitate connections between tasks. Additionally, we conducted different tasks with different decoders, which increased the number of model parameters. The experimental results show that compared to the shared decoder method, there was not much performance improvement, which shows that our multi-task entire parameter sharing method is effective. ## 6 Conclusions and Future Work This paper proposes a unified framework for long-term memory conversation, mainly by introducing relevance representation for multi-task learning. Specifically, we decompose the main task into three subtasks and model each subtask with the same model. In addition, we insert a specific token into the decoder input to learn a relevance representations, which is used to guide the model's output. Our approach can better generate appropriate responses and enables a model to master retrieval and summarization simultaneously. Extensive experiments demonstrate the effectiveness of UniMC. 
Additional human evaluations show the advantages of UniMC. UniMC has limitations in terms of interpretability and cannot provide reasons for why specific memories are used when generating responses, despite the fact that we use a sampling-based training strategy. In the future, we will attempt to fuse neural-symbolic systems with pre-trained models, making the model logical and explainable in dialogue. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Coherence & Consistency & Engagingness \\ \hline EVA2.0 & 0.703 & 0.681 & 0.561 \\ EVA2.0-FT & 0.672 & 0.609 & 0.638 \\ CPT-FT & 0.738 & 0.606 & 0.647 \\ UniMC & **0.796** & **0.740** & **0.824** \\ UniMC w/o EG & 0.790 & 0.699 & 0.727 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of human evaluation metric results for self-chat dialogues between UniMC and baselines. The parameter sizes of EVA 2.0 and CPT are large and large, respectively. The ‘EG’ represents that explicitly guided decoding strategies by relevance representation. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Generation} & \multicolumn{2}{c}{Retrieval} & \multicolumn{4}{c}{Summarization} \\ \cline{2-9} & F1 & BERTScore & AUC & Recall@5 & BF1 & Rouge-1 & Rouge-2 & Rouge-L \\ \hline UniMC & **25.38** & **0.6399** & 0.80 & 0.91 & **86.32** & **0.675** & 0.480 & **0.663** \\ \hline w/o EG & 24.72 & 0.6370 & 0.80 & 0.91 & 84.49 & 0.667 & **0.491** & 0.655 \\ w/o RR & 24.55 & 0.6364 & 0.78 & 0.90 & 82.29 & 0.638 & 0.452 & 0.629 \\ FiD \(\rightarrow\) FiE & 25.31 & 0.6370 & **0.82** & **0.92** & 80.53 & 0.668 & 0.481 & 0.656 \\ diff [CLS] & 24.15 & 0.6348 & 0.79 & 0.90 & 85.07 & 0.663 & 0.475 & 0.651 \\ \hline \hline diff decoder & 24.91 & 0.6380 & 0.82 & 0.92 & 82.92 & 0.680 & 0.482 & 0.667 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation studies result of UniMC. ‘EG’ and ‘RR’ denotes explicitly guided decoding and relevance representation, respectively. FiD\(\rightarrow\) FiE denotes the method of integrating memory is converted from Fusion-in-Decoder to Fusion-in-Encoder. ‘diff [CLS]’ denotes different tokens are used as the beginning of the decoder input. ‘diff decoder’ denotes decoder parameters are not shared.
2305.09542
Increasing Melanoma Diagnostic Confidence: Forcing the Convolutional Network to Learn from the Lesion
Deep learning implemented with convolutional network architectures can exceed specialists' diagnostic accuracy. However, whole-image deep learning trained on a given dataset may not generalize to other datasets. The problem arises because extra-lesional features - ruler marks, ink marks, and other melanoma correlates - may serve as information leaks. These extra-lesional features, discoverable by heat maps, degrade melanoma diagnostic performance and cause techniques learned on one data set to fail to generalize. We propose a novel technique to improve melanoma recognition by an EfficientNet model. The model trains the network to detect the lesion and learn features from the detected lesion. A generalizable elliptical segmentation model for lesions was developed, with an ellipse enclosing a lesion and the ellipse enclosed by an extended rectangle (bounding box). The minimal bounding box was extended by 20% to allow some background around the lesion. The publicly available International Skin Imaging Collaboration (ISIC) 2020 skin lesion image dataset was used to evaluate the effectiveness of the proposed method. Our test results show that the proposed method improved diagnostic accuracy by increasing the mean area under receiver operating characteristic curve (mean AUC) score from 0.9 to 0.922. Additionally, correctly diagnosed scores are also improved, providing better separation of scores, thereby increasing melanoma diagnostic confidence. The proposed lesion-focused convolutional technique warrants further study.
Norsang Lama, R. Joe Stanley, Anand Nambisan, Akanksha Maurya, Jason Hagerty, William V. Stoecker
2023-05-16T15:34:12Z
http://arxiv.org/abs/2305.09542v1
Increasing Melanoma Diagnostic Confidence: Forcing the Convolutional Network to Learn from the Lesion ###### Abstract Deep learning implemented with convolutional network architectures can exceed specialists' diagnostic accuracy. However, whole-image deep learning trained on a given dataset may not generalize to other datasets. The problem arises because extra-isreal features--ruler marks, ink marks, and other melanoma correlates--may serve as information leaks. These extra-isreal features, discoverable by heat maps, degrade melanoma diagnostic performance and cause techniques learned on one data set to fail to generalize. We propose a novel technique to improve melanoma recognition by an EfficientNet model. The model trains the network to detect the lesion and learn features from the detected lesion. A generalizable elliptical segmentation model for lesions was developed, with an ellipse enclosing a lesion and the ellipse enclosed by an extended rectangle (bounding box). The minimal bounding box was extended by 20% to allow some background around the lesion. The publicly available International Skin Imaging Collaboration (ISIC) 2020 skin lesion image dataset was used to evaluate the effectiveness of the proposed method. Our test results show that the proposed method improved diagnostic accuracy by increasing the mean area under receiver operating characteristic curve (mean AUC) score from 0.9 to 0.922. Additionally, correctly diagnosed scores are also improved, providing better separation of scores, thereby increasing melanoma diagnostic confidence. The proposed lesion-focused convolutional technique warrants further study. melanoma, dermoscopy, deep learning, image classification, skin lesion ## I Introduction As estimated 186,680 new cases of invasive melanoma (including 89,070 in-situ melanomas) are expected to be diagnosed in 2023 in the United States [1]. Dermoscopy is an imaging adjunct technique for early skin cancer detection, improving diagnostic accuracy compared to visual inspection by a domain expert [2, 3, 4]. Computer vision techniques have improved appreciably in recent years [5, 6, 7, 8, 9, 10, 11, 12] and have been successfully applied to many medical imaging problems [13, 14, 15, 16, 17, 18, 19, 20]. In the skin cancer domain, deep learning techniques combined with dermoscopy have higher diagnostic accuracy than experienced dermatologists [13, 21, 22, 23, 24]. Pathan _et al_. published a recent review detailing both handcrafted and deep learning (DL) techniques for computer-aided diagnosis of skin lesions [25]. Recent studies show that the fusion of deep learning and handcrafted features can improve accuracy in skin cancer diagnosis [26, 27, 28, 29, 30]. Although convolution neural network (CNN) methods have achieved higher diagnostic accuracy in skin lesion classification, the heatmap visualizations of CNNs have shown that they do not always learn the features from a lesion region in the image; rather from the artifacts present in the image, such as ruler marks, ink marks, stickers, and skin backgrounds. These non-lesional features may serve as information leaks and might potentially cause poor generalization when applied to new test data that are different than training data. Thus, in this study, we propose a novel deep learning method that forces CNN, in particular an EfficientNet-B5 model, to learn the features from the important lesion region in the image during the training. 
The class activation map (CAM) visualizations in Figure 1 shows that the proposed method prevents CNN model focusing on the artifacts. Furthermore, the test results show the proposed method improves the melanoma classification performance and predicts the classification score with a higher diagnostic confidence. Fig. 1: CAM heatmap visualizations of an EfficientNet-B5 model and the proposed method for melanoma classification. Materials and Methods ### _Image Datasets_ In this study, we used a publicly available ISIC2020 [31] melanoma classification dataset. The dataset has 33,126 skin lesion dermoscopic images of two categories - benign and melanoma. Some of the images have duplicates; we created a curated set of 32701 images after removing the duplicates. The dataset is highly imbalanced with only 581 (1.78%) of total images belongs melanoma category The images have varying resolutions from 480\(\times\)640 to 4000\(\times\)6000. Some of the examples are shown in Figure 2. The non-square images were zero padded and resized to 512x512 using a bilinear interpolation. ### _Data Augmentation_ In this study, we applied the data augmentation during the training of convolutional neural network. It increases the variation in training images by randomly applying various image transformations, which eventually helps the model to generalize better. The image transformations used in this study are as follows: * Transpose * Horizontal or Vertical Flip * Height or width shift with a range of (-0.15, +0.15) * Rotation with range between +90\({}^{\circ}\) to -90\({}^{\circ}\) * Zoom with a range of (0.85, 1.15) * Brightness with a range of (0.85, 1.15) * Contrast with a range of (0.85, 1.15) * Hue with a range of (0.85, 1.15) * Saturation with a range of (0.85, 1.15) * CLAHE histogram equalization * Gaussian Noise * Motion Blur * Median Blur * Gaussian Blur Furthermore, the image pixel values were rescaled between 0 and 1 and normalized using the ImageNet [32] parameters. ### _Proposed Method_ The overall flow diagram of the proposed method is shown in Figure 3. It uses a pretrained EfficientNet [12] model as a convolutional neural network (CNN) architecture to classify the skin lesions. It incorporates a novel attention mechanism to force the model to focus more on the lesion region of an image. The proposed attention mechanism, first, computes the class activation map (CAM) [33] to identify the image regions most relevant to the specific class (melanoma in our case) and then uses it with an elliptical lesion mask to compute the attention loss, \(L_{A}\).The attention loss, \(L_{A}\), is combined with the classification loss, \(L_{C}\), to create the composite loss \(L_{T}\). Finally, the convolutional neural network is trained using this composite loss so that the network emphasizes more on the lesion region in the image rather than the background. For a given image, let \(f_{k}(x,y)\) represent an activation of a unit \(k\) in the last convolution layer at a spatial location \((x,y)\). The CAM for class \(c\) is given in Equation 1. \[M_{c}(x,y)=\sum_{k}w_{k}^{c}f_{k}(x,y) \tag{1}\] Where \(w_{k}^{c}\) is the weight corresponding to class \(c\) for the unit \(k\). To generate an elliptical lesion mask, \(M_{E}\), we use an extended bounding box that encloses the skin lesion. 
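As a concrete illustration of Eq. 1 and of the elliptical mask just introduced, the sketch below computes the CAM from the last-convolution feature maps and rasterizes an ellipse inscribed in an enlarged bounding box. The exact ellipse parameterization, the clipping of negative activations, and the function names are illustrative assumptions; the bounding boxes themselves are produced by a separate detector, as described next.

```python
import numpy as np
import torch

def class_activation_map(feature_map: torch.Tensor, class_weights: torch.Tensor) -> torch.Tensor:
    """Eq. 1: M_c(x, y) = sum_k w_k^c f_k(x, y), rescaled to [0, 1].

    feature_map   : (K, H, W) activations of the last convolutional layer
    class_weights : (K,) weights of the melanoma output unit in the FC layer
    """
    cam = torch.einsum('k,khw->hw', class_weights, feature_map)
    cam = torch.clamp(cam, min=0.0)            # clipping negatives is an implementation choice
    return cam / (cam.max() + 1e-8)            # divide by the maximum value

def elliptical_mask(h: int, w: int, box, extend: float = 0.20) -> np.ndarray:
    """Binary ellipse M_E inscribed in a bounding box enlarged by `extend` in area.

    box = (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    x_min, y_min, x_max, y_max = box
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    scale = np.sqrt(1.0 + extend)              # 20% larger area -> sqrt(1.2) larger axes
    a = scale * (x_max - x_min) / 2.0
    b = scale * (y_max - y_min) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    return ((((xs - cx) / a) ** 2 + ((ys - cy) / b) ** 2) <= 1.0).astype(np.float32)
```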
The bounding box of the lesion images are auto generated using a separate lesion detection model, which is a ResNet-50 [9] model trained on the ISIC 2018 [34] lesion segmentation dataset and predicts the bounding box coordinates (\(x_{min},y_{min},x_{max},y_{max}\)) of a lesion. The bounding box is extended by 20% area to allow some background around the lesion. The elliptical mask \(M_{E}\) is resized to same size as that of \(M_{C}\) to compute the attention loss. Here, \(M_{E}\) is a binary mask with a pixel value of 0 or 1. Thus, the data range of \(M_{c}\) is also rescaled between 0 and 1 using a division by maximum value. For \(N\) training images, the attention loss using the Jaccard method is given in Equation 2. \[L_{A}=1-\frac{\sum_{N}M_{C}M_{E}+1}{\sum_{N}M_{C}+M_{E}+1} \tag{2}\] Equation 3 shows the classification loss \(L_{C}\) computed using a cross-entropy method between a sigmoid output of a fully connected (FC) layer, \(S_{FC}\), and a given ground truth label \(Y\). \[L_{C}= BCE\ (S_{FC},Y) \tag{3}\] Equation 4 shows the total composite loss that was used to train the network. \[L_{T}=(1-\lambda)L_{C}+\lambda L_{A} \tag{4}\] Fig. 2: Skin lesion dermoscopy images with ground truth classification labels in the ISIC 2020 skin lesion dataset. The top row shows benign lesions, and the bottom row shows malignant ones. Where \(\lambda\) is the loss weight factor such that \(0<\lambda<1\), and was optimized empirically to 0.66. During the inference, the trained CNN model with the fully connected (FC) layer outputs the sigmoid score with a value between 0 and 1, where the score closer to 0 indicates the lesion being benign and the score closer to 1 indicates lesion being melanoma. ### _Training Details_ All models were built using a PyTorch framework in Python 3 and trained using a single 32GB Nvidia V100 graphics card. The network was trained for 30 epochs using a batch size of 6, a constant learning rate of 0.0001, and the stochastic gradient descent (SGD) optimization algorithm. The loss functions were weighted binary cross entropy for a classification loss and Jaccard loss for an attention loss. To reduce overfitting of a deep neural network model, we used data augmentation (see details in section II.B), a dropout layer, and an early stopping technique. The dropout probability of 0.5 was selected for the dropout layer, which was placed before a FC layer. For the early stopping criterion, we used a patience of 5 epochs to stop the model from overtraining. ## III Experimental Results To evaluate the performance of the proposed method, we trained an Efficient-B5 model with our proposed attention mechanism (AM) using 5-fold cross validation. The 32,701 images from the curated ISIC dataset were randomly split into 5 folds with a class label-based stratification. We used the area under the receiver operating characteristic curve (AUC) to measure the classification performance of the proposed model. Table I shows the performance comparison of the proposed method against the baseline model. The baseline model is the Efficient-B5 model without the attention mechanism. The proposed method improved the mean cross-validated AUC of 0.9 to 0.922. In Figures 4, we show the class activation map (CAM) of the proposed method on the test melanoma images. Although the baseline model has a prediction score greater than 0.5 in all three cases, CAM shows the model focuses on the outer regions (example, ruler marks) rather than the lesion region. 
In contrast, the proposed method focuses mostly inside the lesion bounding box. Also, the prediction scores of 0.818 vs. 0.745, 0.739 vs. 0.703 and 0.90 vs. 0.701 from the proposed model against the baseline model show that the proposed model is more confident in classifying the melanoma lesions as melanoma. Similarly, Figure 5 shows the overlays of CAM on the benign images from both the proposed and baseline models. The proposed model focuses within the lesion region to extract the important information to classify the sample correctly. Conversely, the baseline model focuses on image regions outside the lesion even though it correctly predicts the lesions as benign. Also, the prediction scores are reduced from 0.448 to 0.099, 0.279 to 0.092 and 0.037 to 0.007, showing improved confidence in its classification score. Fig. 3: The overall flow diagram of the proposed melanoma classification method. During training, the attention mechanism computes the class activation map (CAM) using the feature map after the last convolutional layer, which is further used to compute the attention loss \(L_{A}\). The classification loss \(L_{C}\) is computed using an output from the FC layer and combined to create a composite (total) loss \(L_{T}\). ## IV Discussion In this study, we demonstrated that our proposed lesion-focused deep learning method not only improves the melanoma classification performance, but also increases melanoma diagnostic confidence in dermoscopic skin lesion images. As accuracy is not a very useful metric for a binary classification problem on a highly imbalanced dataset, we used the area under the ROC curve (AUC) to evaluate the classification performance of the proposed method. In recent ISIC skin lesion classification challenges, DL methods using CNN architectures such as ResNet [9], ResNeXt [35], SEResNeXt [36], DenseNet [37] and EfficientNet [12] have dominated the submission leaderboards [38, 39]. Although CNN methods have outperformed traditional handcrafted-feature methods in visual recognition tasks, little is known about how they achieve such performance. Various visualization techniques, including saliency maps [40], CAM [33], Grad-CAM [41], Grad-CAM\(++\)[42], and Score-CAM [43], have been devised to observe the specific regions in an image that played a significant role in the classification of a particular class by convolutional neural networks. Cassidy et al. [44] recently analyzed the ISIC image datasets using various convolutional neural networks. In their study, Grad-CAM visualizations showed that the CNN models primarily focus on non-lesion regions in an image, such as ruler marks, ink marks, stickers, and the skin background. Furthermore, we noticed similar behavior of the CNN model in this investigation. Despite CNNs focusing on non-lesion regions, they still manage to make the correct predictions, as shown in Figures 4 and 5. This situation is undesirable since the presence of such artifacts in the training data can lead to information leakage, potentially causing the trained model to perform poorly when applied to new test images from distributions different from the training data. Thus, CNN methods that focus on the lesion regions are warranted to develop better generalized models. Our experimental results showed that the proposed attention mechanism forces the CNN model to learn from the important lesion regions in an image.
The CNN model trained with the proposed attention mechanism achieves improved classification performance compared to the plain (no attention component) CNN model. Also, the model with attention predicts the classification scores with higher confidence than the baseline model. As the proposed CNN model mainly relies on the lesion region in the image to make the final classification prediction, such models can perform well even when clinical artifacts are not present in the test images. However, the generalization capability of the proposed method is not investigated in the current study, as such an investigation can be performed only when new test data with a different distribution than the training data become available in the future. ## V Conclusion In this study, we propose a novel deep learning technique to force a convolutional neural network (CNN) to learn from the important lesion region in dermoscopic skin lesion images. The proposed method employs a new attention mechanism that uses a class activation map and an elliptical lesion mask to compute an attention loss. The attention loss is combined with the classification loss to train the convolutional neural network. The CNN model trained with the combined loss improved the melanoma classification performance. The class activation map showed that the CNN model with the proposed method makes an accurate melanoma prediction by learning features within the lesion rather than from non-lesion regions in the skin background. Fig. 4: Overlays of the class activation map (CAM) on the test melanoma lesion images. The bounding box shows the lesion location. The CAM shows the proposed method focuses within the lesion region. The scores and GT in RED show the proposed method is more confident in classifying the melanoma lesions as melanoma. Fig. 5: Overlays of the class activation map (CAM) on the test benign lesion images. The bounding box shows the lesion location. The CAM shows the proposed method focuses within the lesion region. The scores and GT in RED show the proposed method is more confident in classifying the benign lesions as benign.
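For reference, the stratified five-fold cross-validated AUC protocol described in Section III could be sketched as follows. The `train_and_predict` helper and the image/label arrays are hypothetical placeholders for the actual training pipeline and are not part of the original implementation.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validated_auc(images, labels, train_and_predict, n_splits=5, seed=42):
    """Class-stratified K-fold evaluation; returns per-fold and mean AUC."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    fold_aucs = []
    for train_idx, val_idx in skf.split(images, labels):
        # train_and_predict is a placeholder: it trains the CNN on the training
        # fold and returns sigmoid scores for the validation fold.
        scores = train_and_predict(images[train_idx], labels[train_idx], images[val_idx])
        fold_aucs.append(roc_auc_score(labels[val_idx], scores))
    return fold_aucs, float(np.mean(fold_aucs))
```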
2307.08025
Analysing Gender Bias in Text-to-Image Models using Object Detection
This work presents a novel strategy to measure bias in text-to-image models. Using paired prompts that specify gender and vaguely reference an object (e.g. "a man/woman holding an item") we can examine whether certain objects are associated with a certain gender. In analysing results from Stable Diffusion, we observed that male prompts generated objects such as ties, knives, trucks, baseball bats, and bicycles more frequently. On the other hand, female prompts were more likely to generate objects such as handbags, umbrellas, bowls, bottles, and cups. We hope that the method outlined here will be a useful tool for examining bias in text-to-image models.
Harvey Mannering
2023-07-16T12:31:29Z
http://arxiv.org/abs/2307.08025v1
# Analysing Gender Bias in Text-to-Image Models using Object Detection ###### Abstract This work presents a novel strategy to measure bias in text-to-image models. Using paired prompts that specify gender and vaguely reference an object (e.g. "a man/woman holding an item") we can examine whether certain objects are associated with a certain gender. In analysing results from Stable Diffusion, we observed that male prompts generated objects such as ties, knives, trucks, baseball bats, and bicycles more frequently. On the other hand, female prompts were more likely to generate objects such as handbags, umbrellas, bowls, bottles, and cups. We hope that the method outlined here will be a useful tool for examining bias in text-to-image models. ## 1 Introduction Text-to-image models are neural networks that take as input a text prompt and output an image. This has exciting new applications for storyboarding, image editing, and AI-generated art; however, it also poses risks. These models are capable of depicting stereotypes that have been learned from the training data. For instance, DALL-E, Stable Diffusion, and Midjourney were more prone to producing images of men if the word "powerful" was included in the text prompt [5]. While some research in this field currently uses gender-neutral prompts (e.g. "a photo of a nurse") to examine which gender things are associated with, we take the reverse approach. We use prompts with a specified gender and vague references to objects (e.g. "a girl holding an item"). Object detection can then be used to determine whether certain objects are associated with a specific gender. We hope this method will be helpful in analysing biases held by a specific model. For code, prompts, and results, please visit our GitHub repo. ## 2 Related Work Text-to-image models have recently gained a lot of attention due to their realism and versatility. DALL-E 2 [11] (a popular text-to-image model) works in two stages. Firstly, a CLIP embedding is generated from a text caption. CLIP is a deep learning model that connects images and textual descriptions by learning a shared embedding space [10]. In the second stage, a diffusion model generates an image conditioned on the CLIP embedding. Stable Diffusion [13] similarly encodes text using CLIP; however, the diffusion process in the second stage is performed in the latent space of a pretrained autoencoder, allowing for faster training and inference times. Text-to-image models are now widely used, with DALL-E alone having over a million users [9]. Naturally, researchers are now probing these models to see what biases they contain. This has been done in DALL-Eval [3], which used neutral prompts like "a photo of nurse" and "a person with a beer" to generate images. These images were then analysed using automated gender detection, automated skin detection, and human evaluation. With this pipeline, they determined whether text-to-image models were perpetuating stereotypes. Bias was demonstrated by Bianchi _et al._ [1] using the prompt template "a photo of a face of {x}". Only images of men and only images of women were generated when {x} was set to "a software developer" and "a flight attendant", respectively. This may be because when a prompt is underspecified (i.e. when few details about a person are given) a text-to-image model is forced to "fill in the blanks" with stereotypes learned from the training data [4]. Stable Bias [5] examines gender bias by generating a large number of images and then analysing them with captioning and visual question answering models.
Two metrics, GEP [13] and MCAS [6], have recently been proposed to measure gender bias in text-to-image models. Both use CLIP embeddings to examine what associations men and women have. In our analysis, we instead utilize an object detector and prompt in a more open-ended way. ## 3 Method To determine what associations men and women have in text-to-image models, we generate images using male/female paired prompts. For example, the following two prompts (1) "A man holding an item" (2) "A woman holding an item" will generate similar, but distinct, images. Object detection is then run on the resulting images. By keeping the prompts vague, we can examine how text-to-image models "fill in the blanks" with regard to the objects in the scene. Generating a large number of images, and then analysing them with object detection, can allow us to see what gendered associations exist. We use 50 template prompts that all contain a gendered word and some vague underspecified reference to an object. For example, one of our prompts follows the template: "Things owned by a {gender}" where {gender} is set to either "man", "woman", "boy" or "girl". These four gender words along with the 50 templates give us a total of 200 prompts that can be used to generate images. We generate 1000 images for both Stable Diffusion v2-1 [12] and DALL-E mini [3]. This involves generating 5 images for each prompt. Every time a pair of man/woman or boy/girl prompts is used, the same seed is set. This ensures that the same noise is used in the diffusion process and that the only thing that changes between generations is the gendered word. We selected these lightweight models due to our limitations in cost and computational resources. For object detection we use the You Only Look Once (YOLO) v3 model [11] due to its low compute costs. Figure 1: Example outputs from (a) **Stable Diffusion** and (b) **DALL-E mini** models with their corresponding prompts. Objects detected using YOLOv3 are also shown. YOLOv3 can detect multiple objects within the same image. It also draws a bounding box around each object and assigns a probability to the detection. Example YOLOv3 predictions can be seen in Figure 1. ## 4 Results We generate 1000 images from both Stable Diffusion and DALL-E mini. We then run object detection on every image. Examples of the resulting images and detected objects are shown in Figure 1. Figure 2 shows which objects were detected, and in what quantity, for both male and female prompts. Only the most numerically significant objects are shown, but the full list of objects can be seen in Table 1. For Stable Diffusion, male prompts were more likely to generate objects including ties, backpacks, knives, trucks, chairs, baseball bats, and bicycles. Female prompts were more likely to generate objects including handbags, umbrellas, bowls, bottles, and cups. Many objects in DALL-E mini were ambiguous, leading to far fewer objects being picked up during object detection. As with Stable Diffusion, the most significant result is that ties were much more likely to be generated by male prompts. For each model, we have a male and a female categorical distribution. Figure 2: Object detection was run on text-to-image generated images. The y-axis shows the number of instances of a particular object that occurred in the results. Objects are listed on the x-axis. Blue bars correspond to the objects generated from male prompts and the pink bars correspond to objects generated from female prompts.
Any object that occurred fewer than 9 times was removed from the **Stable Diffusion (top)** plot. Objects with fewer than 4 occurrences were removed from the **DALL-E mini (bottom)** plot. The "person" object was removed from both plots. These two categorical distributions can be compared using the Chi-squared test. The Chi-squared test's p-value describes how similar the two distributions are, and we can therefore use it as a measure of gender bias (with a higher number meaning less bias). A Chi-squared test performed on the Stable Diffusion results gives a p-value of 0.000009. A Chi-squared test for DALL-E mini gives a p-value of 0.04172. This suggests that DALL-E mini contains less gender bias than Stable Diffusion, which may be explained by steps taken by OpenAI to reduce bias in DALL-E [7]. However, this may also be due to fewer objects being detected in DALL-E mini's results. ## 5 Conclusion & Future Work In this work, we propose a new technique for measuring bias in text-to-image models. Using prompts describing a male or a female with an unspecified object, and then running object detection on the results, we can analyse what associations text-to-image models hold about males and females. This technique opens avenues for further investigation into bias in these models. For instance, our findings indicate that Stable Diffusion tends to generate bowls, bottles, and cups more frequently for women than men. This prompts us to question whether Stable Diffusion portrays women in domestic settings more often than men, thus reinforcing certain stereotypes. A likely cause of bias in text-to-image models is bias being present in the training data. Therefore, better curation of this data may be needed to address the issue. These experiments are an initial step towards addressing gender bias. Future work should also include non-binary or trans categories. A good first step in this direction would be to expand the gendered words to "man", "woman", and "person". Future work could also look to measure other biases (e.g. racial, age, social) using the same technique. Finally, the object detector's own biases need closer examination. Could YOLO be missing certain detections because they are paired with men or women? The choice of categories used to train YOLO could also influence the outcomes and interpretations of the experiments. \begin{table} \begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{**SD**} & \multicolumn{2}{c}{**DALL-E**} \\ & Male & Female & Male & Female \\ \hline person & 482 & 515 & 424 & 459 \\ sports ball & 14 & 13 & 2 & 1 \\ handbag & 7 & 25 & 1 & 1 \\ book & 17 & 24 & 1 & 3 \\ vase & 6 & 8 & 2 & 0 \\ boat & 0 & 1 & 0 & 0 \\ donut & 0 & 2 & 0 & 0 \\ frisbee & 6 & 6 & 0 & 2 \\ baseball glove & 4 & 1 & 1 & 1 \\ backpack & 7 & 3 & 0 & 0 \\ car & 14 & 9 & 0 & 0 \\ umbrella & 2 & 5 & 0 & 4 \\ clock & 14 & 8 & 2 & 0 \\ cell phone & 35 & 29 & 9 & 6 \\ orange & 2 & 4 & 1 & 0 \\ dining table & 1 & 3 & 1 & 3 \\ pizza & 0 & 2 & 0 & 0 \\ bed & 2 & 4 & 2 & 1 \\ potted plant & 0 & 3 & 0 & 1 \\ truck & 4 & 1 & 0 & 0 \\ toothbrush & 0 & 4 & 1 & 0 \\ mouse & 1 & 3 & 0 & 0 \\ knife & 6 & 2 & 2 & 4 \\ \hline \hline \end{tabular} \end{table}
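To make the comparison in Section 4 concrete, the sketch below applies a Chi-squared test to male/female object-count distributions with `scipy`. Treating the counts as a 2xK contingency table is one plausible reading of the test described above, and only a handful of counts (taken from the Stable Diffusion columns of Table 1) are used here, so the resulting p-value illustrates the procedure rather than reproducing the values reported in the paper.

```python
import numpy as np
from scipy.stats import chi2_contingency

# A few object counts from the Stable Diffusion columns of Table 1
# (male counts in the first row, female counts in the second).
objects = ["handbag", "book", "backpack", "car", "umbrella", "cell phone"]
male    = [7, 17, 7, 14, 2, 35]
female  = [25, 24, 3, 9, 5, 29]

table = np.array([male, female])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f} (a smaller p suggests more gender bias)")
```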
2304.11024
Merging boundary critical points of a Morse function
In 2015, Borodzik, N\'emethi and Ranicki proved that an interior critical point can be pushed to the boundary, where it splits into two boundary critical points. In this paper, we show that two critical points at the boundary can be, under specific assumptions, merged into a single critical point in the interior. That is, we reverse the original construction.
Maciej Borodzik, Marcin Mielniczuk
2023-04-21T15:16:42Z
http://arxiv.org/abs/2304.11024v1
# Merging boundary critical points of a Morse function ###### Abstract. In 2015, Borodzik, Némethi and Ranicki proved that an interior critical point can be pushed to the boundary, where it splits into two boundary critical points. In this paper, we show that two critical points at the boundary can be, under specific assumptions, merged into a single critical point in the interior. That is, we reverse the original construction. Key words and phrases: manifold with boundary, cobordism, critical points 2020 Mathematics Subject Classification: 57R40, 57R70, 57Q60 ## 1. Introduction While the origins of the study of Morse functions for manifolds with boundary lie in the seventies of the previous century, systematic research on this subject originated only in the first decade of the twenty-first century with Kronheimer and Mrowka [1]. Their analysis of the notions of boundary stable and unstable critical points and the Morse-Witten chain complex for manifolds with boundary is put in the context of Floer theory. In 2015, Némethi, Ranicki and the first author proved that an interior critical point of a Morse function can be pushed to the boundary, where it splits into two boundary critical points; see [1]. A precise statement of the result is the following (we refer to Section 2 for terminology): **Theorem 1.1**.: _Suppose \(M\) is a smooth manifold with boundary and \(g\colon M\to\mathbb{R}\) is a Morse function. Suppose that \(z\notin\partial M\) is a critical point of \(g\) of index \(k\neq 0,\dim M\). Suppose there is a path \(\theta\) in the level set \(g^{-1}(g(z))\) connecting \(z\) and \(\partial M\)._ _In this situation, there exists a Morse function \(f\) such that:_ 1. \(f(x)=g(x)\) _away from a neighborhood_ \(U\) _of_ \(\theta\)_, where_ \(U\) _might be chosen to be as small as we please;_ 2. \(f(x)\) _has precisely two critical points_ \(p\) _and_ \(q\) _in_ \(U\)_. Both_ \(p\) _and_ \(q\) _belong to_ \(\partial M\) _and have index_ \(k\)_. The critical point_ \(p\) _is boundary stable, while the critical point_ \(q\) _is boundary unstable._ 3. _There is a gradient-like Morse-Smale vector field for_ \(f\)_, such that there is precisely one trajectory_ \(\gamma\) _connecting_ \(p\) _to_ \(q\)_._ An embedded analog of the result was proved in [1]. On the conceptual level, the result explains why changing a Seifert surface for a knot by a single surgery results in changing the size of the associated Seifert matrix by \(2\). In the present paper, we prove the reverse result. Namely, under suitable circumstances, a pair of critical points on the boundary can be merged into a single critical point in the interior. The precise statement is the following. **Theorem 1.2**.: _Let \(M\) be a manifold with boundary, let \(f\colon M\to\mathbb{R}\) be a Morse function, and let \(\xi\) be a Morse-Smale gradient-like vector field for \(f\). Suppose that \(p,q\in\partial M\) are critical points of \(f\), both of index \(k\), where \(p\) is boundary stable and \(q\) is boundary unstable. Moreover, suppose there exists a single trajectory \(\gamma\) of the vector field \(\xi\) starting at \(p\) and terminating at \(q\)._ _Then for any neighborhood \(U\) of \(\gamma\), there exists a Morse function \(g\colon M\to\mathbb{R}\) with the same critical points as \(f\) away from \(U\) and with precisely one critical point in \(U\); this critical point lies in the interior of \(M\) and has index \(k\)._
**Acknowledgments.** The paper is a part of the Master's thesis of MM under the supervision of MB. A part of the project was done while MB was visiting the Renyi Institute, whose hospitality he is grateful for. Both authors were supported by the NCN OPUS Grant 2019/B/35/ST1/01120.
## 2. Morse theory for manifolds with boundary Recall that a smooth function \(f\colon M\to\mathbb{R}\), where \(M\) is a closed smooth manifold, is called _Morse_, if any critical point \(x\) of \(f\) is non-degenerate, that is to say, the matrix of the second derivatives \(D^{2}f(x)\) is non-degenerate, which turns out not to depend on the choice of the coordinate system around \(x\). This definition can be generalized to the case of manifolds with boundary; see [13, Section 2.4]. **Definition 2.1** (Morse function).: Let \(M\) be a manifold with boundary, and let \(f\colon M\to\mathbb{R}\) be smooth. We say that \(f\) is Morse if all critical points of \(f\) are non-degenerate and \(f\) restricted to \(\partial M\) is also Morse (in the usual sense). **Theorem 2.2** (Boundary Morse lemma, see [1, Lemma 2.6]).: _Let \(p\in\partial M\) be a non-degenerate critical point of a Morse function \(f\colon M\to\mathbb{R}\). Then there exist an integer \(k=0,\dots,n-1\), \(\epsilon=\pm 1\), and local coordinates \(x_{1},\dots,x_{n-1},y\), defined in an open neighbourhood \(U\ni p\), such that_ 1. \(p=(0,\dots,0)\)_;_ 2. \(y\geq 0\) _on_ \(U\)_;_ 3. \(y=0\) _defines_ \(\partial M\cap U\)_;_ 4. _the following equality holds:_ (2.3) \[f(x_{1},\dots,x_{n-1},y)=f(p)-x_{1}^{2}-\dots-x_{k}^{2}+x_{k+1}^{2}+\dots+x_{ n-1}^{2}+\epsilon y^{2}.\] The next definition comes from [13], we refer to [1, Section 2.4] for a detailed discussion. **Definition 2.4** (Boundary stable and unstable critical points).: Let \(p\) be a boundary critical point of a Morse function \(f\). We say that \(p\) is _boundary stable_ (resp. _boundary unstable_) if \(\epsilon=-1\) (resp. \(\epsilon=+1\)), where \(\epsilon\) is defined as in (2.3). In short, being a boundary stable critical point means that the flow of \(\nabla f\) attracts toward the boundary in the vicinity of the critical point, while in the case of a boundary unstable critical point, the flow of \(f\) repels from the boundary. We now define the index of a boundary critical point. **Definition 2.5** (Index of a critical point).: Let \(p\) be a boundary critical point. The _index_ of a critical point is the dimension of the negative definite subspace of \(D^{2}f(p)\). Put differently, for a boundary stable critical point, the index is \(k+1\), while for the boundary unstable critical point, the index is \(k\). Here, \(k\) is defined as in Theorem 2.2. We now recall the definition of a gradient like vector field. **Definition 2.6** (Gradient-like vector field).: Let \(f\) be a Morse function on a manifold with boundary \(M\). Let \(\xi\) be a vector field on \(M\). We shall say that \(\xi\) is gradient-like with respect to \(f\), if 1. \(\xi\) is tangent to \(\partial M\) at the boundary; 2. \(\partial_{\xi}f>0\) away from the critical points of \(f\); 3. for any critical point \(p\) of \(f\), there exist local coordinates \(x_{1},\dots,x_{n}\) around \(p\), such that \(f\) and \(\xi\) admit the following form (called the Morse normal form): (2.7) \[f(\bar{x})= f(p)-x_{1}^{2}-\dots-x_{k}^{2}+x_{k+1}^{2}+\dots+x_{n}^{2}\] \[\xi(\bar{x})= (-x_{1},\dots,-x_{k},x_{k+1},\dots,x_{n})\] The model situation for \(\xi\) is that \(\xi=\nabla f\), for a suitably chosen Riemannian metric on \(M\). In [1, Section 1.1], it is proved that each Morse function \(f\) admits a gradient-like vector field. We now recall the Morse-Smale condition; see [1, Section 4.3]. 
**Definition 2.8** (Morse-Smale vector field).: A gradient-like vector field \(\xi\) for \(f\) is called _Morse-Smale_, if for any two critical points \(p\) and \(q\), with \(f(p)<f(q)\): * the submanifolds of \(\partial M\): \(W^{u}(p)\cap\partial M\) and \(W^{s}(q)\cap\partial M\) intersect transversally, * the submanifolds of \(\operatorname{Int}M\): \(W^{u}(p)\cap\operatorname{Int}M\) and \(W^{s}(q)\cap\operatorname{Int}M\) intersect transversally. Here \(W^{s}\) and \(W^{u}\) are, respectively, the stable and the unstable manifolds of a critical point of \(\xi\). We note that the Morse-Smale condition is open-dense among all gradient-like vector fields; see [1, Section 4.3]. ## 3. Coordinate neighborhood Throughout Section 3, we let \(f\) be a fixed Morse function satisfiying the assumption of Theorem 1.2. All gradient-like vector fields are assumed to be gradient-like with respect to the function \(f\). The proof of the following result is completely analogous to the proof of [1, Proposition 5.2], with the only difference being that the local behavior of the first coordinate is different. **Proposition 3.1**.: _There exists an open neighborhood \(U_{1}\) of \(\gamma\), a coordinate map \(\varphi\colon U_{1}\to\mathbb{R}_{\geq 0}\times\mathbb{R}^{n-1}\) (with coordinates denoted by \((y,x_{1},\dots,x_{n-1})\)) and a gradient-like vector field \(\xi_{1}\) for \(f\) agreeing with \(\xi\) away from \(U_{1}\), such that:_ * \(\varphi\) _takes_ \(U_{1}\cap\partial M\) _to_ \(\{0\}\times\mathbb{R}^{n-1}\)_;_ * \(\varphi(p)=(0,0,\dots,0)\)_;_ * \(\varphi(q)=(0,1,0,\dots,0)\)_;_ * _the curve_ \(\gamma\) _is mapped to the segment_ \((0,t,0,\dots,0)\)_,_ \(t\in[0,1]\)_, connecting_ \(p\) _with_ \(q\)_;_ * _the map_ \(\varphi\) _takes_ \(\xi_{1}\) _to a vector field on_ \(\varphi(U_{1})\subset\mathbb{R}_{\geq 0}\times\mathbb{R}^{n-1}\) _given in the form_ (3.2) \[(yv(y,x_{1},\dots),w(x_{1}),-x_{2},\dots,-x_{k-1},x_{k},\dots,x_{n-1})\] _for some smooth functions_ \(v,w\) _with the properties listed below._ _;_ * _the function_ \(w\) _is positive for_ \(x_{1}\in(0,1)\) _and negative for_ \(x_{1}\notin[0,1]\)_;_ * _the function_ \(v\) _is positive at_ \(x_{1}=0\) _and negative at_ \(x_{1}=1\)_._ **Remark 3.3**.: The convention in (3.2) is that the first coordinate of \(\xi_{1}\) is the \(\frac{\partial}{\partial y}\)-coordinate, while the next coordinates are directions of \(\frac{\partial}{\partial x_{1}},\dots,\frac{\partial}{\partial x_{n-1}}\). From now on, we assume that such \(U_{1}\) and \(\varphi\) have been chosen. We will now improve \(\xi_{1}\) so that it still has the form (3.2), but the function \(v\) is better behaved. **Lemma 3.4**.: _There exists a smaller neighborhood \(U_{2}\subset U_{1}\) of \(\gamma\) and a gradient-like vector field \(\xi_{2}\), agreeing with \(\xi_{1}\) away from \(U_{1}\), such that, on \(U_{2}\)\(D\varphi(\xi_{2})\) is given by (3.2), but_ \[v(y,x_{1},\dots)=2x_{1}-1. \tag{3.5}\] Proof.: Define \[\xi_{v}=(y(2x_{1}-1),w(x_{1}),-x_{3},\dots,x_{n}).\] Around the critical points, \(f\) is given by (2.3). By direct calculation, we obtain that \(\partial_{\xi_{v}}f\geq 0\) in a neighborhood \(U_{p}\) of \(p\) and in a neighborhood \(U_{q}\) of \(q\). Let \(U_{\gamma}\) be a neighborhood of \(\gamma\). 
The vector field \(\xi_{1}\) is gradient-like for \(f\) and \(\overline{U_{\gamma}\smallsetminus(U_{p}\cup U_{q})}\) does not contain any critical points of \(f\), and so there exists \(C>0\) such that \(\partial_{\xi_{1}}f>C\) everywhere on \(\overline{U_{\gamma}\smallsetminus(U_{p}\cup U_{q})}\). As \(y\equiv 0\) on \(\gamma\), the second coordinate of both \(\xi_{v}\) and \(\xi_{1}\) is small at points that are close to the boundary, that is, we may assume that \(U_{\gamma}\) is small enough that the following two conditions hold on \(U_{\gamma}\): \[\left|yv(y,x_{1},\dots,)\frac{\partial f}{\partial y}\right| <C/3;\] \[\left|y(2x_{1}-1)\frac{\partial f}{\partial y}\right| <C/3.\] By the triangle inequality, we conclude that \(\partial_{\xi_{v}}f>C/3\) everywhere on \(U_{\gamma}\smallsetminus(U_{p}\cup U_{q})\), and so \(\xi_{v}\) is gradient-like for \(f\) on \(U_{\gamma}\cup U_{p}\cup U_{q}\). To define a global vector field \(\xi_{2}\), choose an open neighborhood \(U_{2}\) of \(\gamma\) such that \(\overline{U_{2}}\subset U_{\gamma}\cup U_{p}\cup U_{q}\). Let \(\phi_{2}\colon M\to[0,1]\) be a smooth function supported on \(U_{\gamma}\cup U_{p}\cup U_{q}\) and equal to \(1\) on \(U_{2}\). We finally define \[\xi_{2}=\phi\xi_{v}+(1-\phi)\xi_{1}.\] This vector field clearly has the desired form on \(U_{2}\) and is gradient-like as a convex combination of gradient-like vector fields. The following result is a repetition of [15, Assertion 1]. In the applications, we will set \(W\) to be an open set containing \(\gamma\), whose closure is contained in \(U_{2}\). **Proposition 3.6**.: _Suppose \(\widetilde{\xi}\) is a gradient-like vector field for \(f\). For any open subset \(W\) containing \(\gamma\), there exists a neighborhood \(U\) of \(\gamma\), \(U\subset W\) such that if a trajectory of \(\widetilde{\xi}\) enters \(U\) and leaves \(W\), then it never re-enters \(U\)._ ## 4. Proof of the main theorem We begin with the following auxiliary result. **Proposition 4.1**.: _Suppose \(\xi\) and \(f\) are as in Theorem 1.2. There exists a Morse function \(\widetilde{f}\) having the same critical points as \(f\), such that \(\xi\) is gradient-like for \(\widetilde{f}\) and there is no critical point \(q^{\prime}\) of \(\widetilde{f}\) other that \(p\) and \(q\) such that \(\widetilde{f}(q^{\prime})\in[\widetilde{f}(p),\widetilde{f}(q)]\)._ Proof.: The result follows from the Global Rearrangement Theorem [1, Proposition 4.6]. The rearrangement can be carried out by successively applying the Elementary Rearrangement Theorem [1, Proposition 4.1]. Note that if one chooses the auxilliary function \(\mu\) in the proof of [1, Lemma 4.3] to be preserved by the gradient-like vector field \(\xi\) (instead of \(\nabla F\), as in [1]), then \(\xi\) is a gradient-like vector field for the resulting Morse function \(\widetilde{f}\) obtained by the Global Rearrangement Theorem. The statement of Global Rearrangement Theorem implies that there are no critical points in between \(\widetilde{f}(p)\) and \(\widetilde{f}(q)\). If there are any other critical points on the level set of \(\widetilde{f}(p)\), we use the Elementary Rearrangement Theorem again to push them slightly below that level set. Likewise, any critical point on the level set of \(\widetilde{f}(q)\) other than \(q\) can be pushed slightly above that level set. As previously, \(\xi\) is ensured to be gradient-like for \(\widetilde{f}\). The resulting function satisfies the statement of Proposition 4.1, as desired. 
From now on, assume that such a rearrangement has been made. In Section 3 we fixed the coordinate system on a neighborhood \(U_{2}\) of \(\gamma\) in which the gradient-like vector field \(\xi\) for \(f\) has the form described in Lemma 3.4. We choose now smaller neighborhood \(W\) of \(U_{2}\) containing \(\gamma\): properties of \(W\) will be specified later. As \(x_{1}\) and \(y\) play a special role in the proof of Theorem 1.2, we will use a slightly different notation for coordinates. Namely, we will use the coordinates \((y,x,\bar{u})\) (where \(\bar{u}=(u_{1},\ldots,u_{n-2})\)), with \(x\) playing the role of \(x_{1}\) and \(\bar{u}\) being the vector \((x_{2},\ldots,x_{n})\). That is to say, inside \(U_{2}\) (in particular, inside \(W\)), the vector field \(\xi\) has the form \[(y(2x-1),w(x),-u_{1},\ldots,-u_{k-1},u_{k},\ldots,u_{n-2}), \tag{4.2}\] Set \(a=\inf_{w\in W}f(w)\), \(b=\sup_{w\in W}f(w)\). Upon possibly shrinking \(W\), by using Proposition 4.1, we may assume that the following condition is satisfied. **Condition 4.3**.: There are no critical points \(q^{\prime}\) of \(f\) such that \(f(q^{\prime})\in[a,b]\) and \(q^{\prime}\neq p,q\). Possibly shrinking \(W\) even further we may assume that \(W\) has the form \(W_{\rm II}\times(-\delta,\delta)^{n-2}\) where \(W_{\rm II}\) is a simply-connected open subset of \(\mathbb{R}^{2}\) and \(\delta>0\). Recall that the function \(w\) is positive on \((0,1)\) and negative away from \([0,1]\). We choose \(U\subset W\) as a neighborhood of \(\gamma\) such that a trajectory of \(\xi\) going through \(U\) and \(W\) never returns to \(U\): existence of such \(U\) was proved in Proposition 3.6. First perturb \(\xi\) by the formula. \[\xi_{c}=\xi-c\eta(x,y,\bar{u})\frac{\partial}{\partial x}\] where \(c>0\) is a constant such that the \(x\) component of \(\xi_{c}\) is always negative on \(W\cap\partial M\), and \(\eta\) is a suitable bump function supported in \(W\). For convenience, we will take \(\eta\) of the form \(\eta(x,y,\bar{u})=\alpha(x)\beta(y)\delta(\bar{u})\), where \(\alpha,\beta,\delta\) are such that: * \(\eta\) is equal to \(1\) in a neighborhood of \(\gamma\); * the support of \(\eta\) is contained within \(U\); * \(\beta^{\prime}(y)\neq 0\) for all \(y\) such that \(\beta(y)\neq 0,1\); * \(\alpha(x)=1\) for all \(x\in[0,1]\); * \(\delta\equiv 1\) in a neighborhood of \((0,\dots,0)\). We will now study the properties of \(\xi_{c}\). **Lemma 4.4**.: _The vector field \(\xi_{c}\) is tangent to \(\partial M\)._ Proof.: This follows from the fact that both \(\xi\) and \(\frac{\partial}{\partial x}\) are tangent to \(\partial M\). **Proposition 4.5**.: _The critical points of \(\xi_{c}\) away from \(W\) coincide with the critical points of \(\xi\). In \(W\), \(\xi_{c}\) has a single critical point \(z=(y_{0},x_{0},0,\dots,0)\), where \(x_{0}=\frac{1}{2}\) and \(y_{0}\) is uniquely specified by the condition_ \[\beta(y_{0})=\frac{1}{c}w(x_{0}). \tag{4.6}\] We recall that \(w(x)\) is the function specified by Proposition 3.1; cf. (4.2). Proof.: Clearly, \(\xi\) and \(\xi_{c}\) coincide outside \(U\); in particular, \(\xi_{c}\) has no critical points inside \(W\smallsetminus U\). Note that \(\xi_{c}\) has a vanishing \(x\) coordinate on \(\gamma\), hence neither \(p\) nor \(q\) are critical points of \(\xi_{c}\). Next, suppose \(z=(y_{0},x_{0},\bar{u}_{0})\) satisfies \(\xi_{c}(z)=0\). The vanishing of the \(\bar{u}\) components of \(\xi_{c}(z)\) is equivalent to saying that \(\bar{u}_{0}=(0,\dots,0)\). 
The vanishing of the \(y\) coordinate of \(\xi_{c}(z)\) implies that \(y_{0}(2x_{0}-1)=0\). Now \(y_{0}=0\) would mean that \(z\in U\cap\partial M\), but then the \(x\) component of \(\xi_{c}(z)\) is non-zero. Hence, \(x_{0}=\frac{1}{2}\). This implies that \[\xi_{c}(z)=\left(0,w(x_{0})-c\beta(y_{0}),0,\dots,0\right).\] Consequently, \(y_{0}\) satisfies (4.6). Since \(w(x_{0})>0\), as per Proposition 3.1, and \(\beta\) is strictly decreasing on the set where it takes values between \(0\) and \(1\), such \(y_{0}\) is unique. We now study the critical point \(z\) in greater detail. **Lemma 4.7**.: _The critical point \(z\) is hyperbolic. The linearization \(D_{z}\xi_{c}\) has \(k\) real negative eigenvalues and \(n-k\) real positive eigenvalues._ Proof.: The \(\bar{u}\)-coordinates account for \(k-1\) real negative eigenvalues and \(n-k-1\) real positive eigenvalues. We have \(\delta\equiv 1\) in a neighborhood of \(z\). We can now restrict our attention to the initial two coordinates of \(\xi_{c}\): \[\xi_{2}(y,x)=(y(2x-1),w(x)-c\alpha(x)\beta(y)). \tag{4.8}\] The derivative of \(\xi_{c}\) in these directions is given by \[D_{y,x}\xi_{2}=\begin{bmatrix}\frac{\partial\xi_{2}}{\partial y}&\frac{\partial\xi_{2}}{\partial x}\end{bmatrix}=\begin{bmatrix}2x-1&2y\\ -c\alpha(x)\beta^{\prime}(y)&w^{\prime}(x)-c\alpha^{\prime}(x)\beta(y)\end{bmatrix}\] For \(z=(x_{0},y_{0})\), since \(\alpha\equiv 1\) on \([0,1]\), we get \(\alpha(x_{0})=1,\alpha^{\prime}(x_{0})=0\) and the above simplifies to: \[D_{z}\xi_{2}=\begin{bmatrix}0&2y_{0}\\ -c\beta^{\prime}(y_{0})&w^{\prime}(x_{0})\end{bmatrix} \tag{4.9}\] We know that \(\beta^{\prime}(y_{0})<0\) and \(y_{0}>0\). That is, the matrix (4.9) has a negative determinant, and so, regardless of the sign of \(w^{\prime}(x_{0})\), it has two real eigenvalues, one positive and one negative. The vector field \(\xi_{c}\) has a hyperbolic critical point at \(z\). We need to make sure that \(\xi_{c}\) has the form required by Definition 2.6 near \(z\). The Grobman-Hartman theorem asserts that \(\xi_{c}\) can be linearized at \(z\) by a Hölder continuous map, which is insufficient for our purposes. Instead of pushing the regularity of this linearization map, we replace \(\xi_{c}\) near \(z\) by its linear part. The new vector field has the form (2.7) near \(z\). **Proposition 4.10**.: _For any neighborhood \(U_{z}\subset U\) of \(z\), there exists a vector field \(\xi^{\prime}\), agreeing with \(\xi_{c}\) away from \(U_{z}\), such that, in some system of coordinates on \(U_{z}\), \(\xi^{\prime}\) has the form (2.7). Moreover, \(\xi^{\prime}\) and \(\xi_{c}\) might be as close in \(C^{0}\)-norm as we please._ Proof.: Let \(\xi_{\mathrm{lin}}\) be the linear part of \(\xi_{c}\) at \(z\). Let \(\tau\) be a bump function supported on \(U_{z}\), equal to \(1\) on a smaller neighborhood of \(z\). Set \[\xi^{\prime}=\xi_{c}(1-\tau)+\xi_{\mathrm{lin}}\tau.\] Then, as long as \(\tau\equiv 1\), we have \(\xi^{\prime}=\xi_{\mathrm{lin}}\), which means that in the system of coordinates corresponding to the eigenvectors of \(D_{z}\xi_{c}\), the vector field \(\xi^{\prime}\) has the form (2.7), as desired. Note that the difference between \(\xi^{\prime}\) and \(\xi_{c}\) at a point \(z^{\prime}\) is of order \(\left\|z-z^{\prime}\right\|^{2}\). Therefore, choosing the support \(U_{z}\) sufficiently small, we can guarantee that the difference \(\left\|\xi^{\prime}-\xi_{c}\right\|_{C^{0}}\) is smaller than any predetermined positive constant.
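As a purely illustrative numerical sanity check of the two-dimensional part of Lemma 4.7 (not part of the proof), one can plug sample values with \(y_{0}>0\), \(\beta^{\prime}(y_{0})<0\), \(c>0\) into the matrix (4.9) and confirm that it has one positive and one negative real eigenvalue regardless of the sign of \(w^{\prime}(x_{0})\); the particular numbers below are arbitrary.

```python
import numpy as np

# Illustrative values: y0 > 0, beta'(y0) < 0, c > 0, and w'(x0) of either sign.
y0, dbeta, c = 0.3, -2.0, 1.5
for dw in (-1.0, 0.0, 1.0):
    D = np.array([[0.0,        2.0 * y0],
                  [-c * dbeta, dw      ]])
    eigvals = np.linalg.eigvals(D)
    print(dw, np.sort(eigvals.real), "det =", np.linalg.det(D))
# det D = 2*y0*c*beta'(y0) < 0, so the eigenvalues are real and of opposite
# signs for every choice of w'(x0), as claimed in the lemma.
```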
We will now aim to construct a new Morse function, whose gradient-like vector field will be \(\xi^{\prime}\). The main step towards this result is the following lemma: **Lemma 4.11**.: _Any trajectory of \(\xi^{\prime}\) that enters \(W\), either converges to \(z\) or leaves \(W\) in finite time._ Proof.: The proof is done in three steps. In Step 1, we provide a proof in two-dimensional case for the vector field \(\xi_{c}\). In Step 2, we will show that this argument works also for \(\xi^{\prime}\). In Step 3, we show that the higher-dimensional case can be reduced to the two-dimensional case. **Step 1. The case of \(\xi_{c}\) in two dimensions.** We proceed by analyzing the phase portrait. Recall that in the two-dimensional case, \(\xi_{c}\) is given by the formula (4.8). We denote the coordinates of \(\xi_{c}\) by \[\xi_{c,x} =w(x)-c\alpha(x)\beta(y)\] \[\xi_{c,y} =y(2x-1)\] Let \[\Gamma_{x}=\xi_{c,x}^{-1}(0),\ \Gamma_{y}=\xi_{c,y}^{-1}(0).\] The latter is easily seen as \(\Gamma_{y}=\left\{x=\frac{1}{2}\right\}\cup\{y=0\}\). In order to obtain a more explicit description of \(\Gamma_{x}\), note that \(\beta^{-1}(r)\) is well-defined for any \(r\in(0,1)\). Set \[\kappa(x)=\beta^{-1}\left(\frac{w(x)}{c\alpha(x)}\right).\] Since \(w\) is negative away from \([0,1]\), all points of \(\Gamma_{x}\) satisfy \(x\in[0,1]\) and \(\alpha(x)=1\). Then \((x,\kappa(x))_{x\in(0,1)}\) parametrizes the subset \[\Gamma_{x}^{\kappa}=\left\{(x,y)\in\Gamma_{x}\ |\ \beta(y)\neq 0,\beta(y)\neq 1 \right\}.\] However, if \(\beta(y)=1\), then \[\xi_{c,x}(x,y)=w(x)-c<0\] by construction, so \(\Gamma_{x}\cap\beta^{-1}(y)\) is empty. Similarly, if \((x,y)\in\Gamma_{x}\) and \(\beta(y)=0\), then \[\xi_{c,x}(x,y)=w(x)=0,\] that is, \(x\in\{0,1\}\). Therefore \[\Gamma_{x} =\Gamma_{x}^{\kappa}\cup\Gamma_{x}^{0},\text{where}\] \[\Gamma_{x}^{\kappa} =\left\{(x,\kappa(x))\ |\ x\in(0,1)\right\}\] \[\Gamma_{x}^{0} =\left\{(x,y)\ |\ x\in\{0,1\},\ \beta(y)=0\right\}\] Noticing that \(\beta^{\prime}(y)=0\) iff \(\beta(y)\in 0,1\), an analogous reasoning lets us conclude that \[\kappa^{\prime}(x)=\frac{1}{\beta^{\prime}(\kappa(x))}\text{ is finite for }x\in(0,1). \tag{4.12}\] We will use this fact below. Figure 2. Proof of Lemma 4.11. Step 1. The level sets \(\Gamma_{x}\) and \(\Gamma_{y}\) divide \(W\) into four regions \(\Omega_{1},\ldots,\Omega_{4}\), as in Figure 2. Let \(\theta\) be a forward trajectory of \(\xi_{c}\) starting at \(p_{s}\). Notice that \(\theta\) cannot cross \(\Gamma_{x}^{0}\) because \(\xi_{c}\) is tangent to \(\Gamma_{x}^{0}\). Now: * if \(p_{s}\in\Omega_{1}\), then \(\xi_{c,y}\) remains positive and separated from \(0\) along \(\theta\), so \(\theta\) leaves \(W\) to the right; * if \(p_{s}\in\Omega_{3}\), then \(\xi_{c,x}\) remains negative and separated from \(0\) along \(\theta\), so \(\theta\) always leaves \(W\) to the bottom; * if \(p_{s}\in\Omega_{2}\) or \(p_{s}\in\Omega_{4}\), then \(\theta\) either * hits the critical point \(z\); * leads to one of \(\Omega_{1}\) or \(\Omega_{3}\), and, by the previous considerations, eventually leaves \(W\). In the light of (4.12), \(\frac{\partial}{\partial y}\) always crosses \(\Gamma_{x}^{\kappa}\) transversally. Similarly, \(\frac{\partial}{\partial x}\) always crosses \(\Gamma_{y}\) transversally. Therefore, it is clear from the phase portrait that each trajectory can cross \(\Gamma_{x}^{\kappa}\) or \(\Gamma_{y}\) only once. In other words, any trajectory of \(\xi_{c}\) entering \(W\) either hits \(z\) or leaves \(W\) in finite time. **Step 2. 
The case of \(\xi^{\prime}\) in two dimensions.** We begin with the following observation. For any \(U_{0}\subset W\) containing \(z\) and sufficiently small, there exists another \(U_{1}\subset U_{0}\) being an open neighborhood of \(z\), such that if the trajectory of \(\xi_{c}\) starts in \(U_{1}\) and then leaves \(U_{0}\), then does not return to \(U_{1}\), unless it leaves \(W\). To see this, we use an argument similar to the one used in the proof of Proposition 3.6 given in [11]. Namely, if for all \(U_{1}\) the trajectory of \(\xi_{c}\) leaves \(U_{0}\) and subsequently returns to \(U_{1}\), upon passing to a limit, we would construct a trajectory both starting and terminating at \(z\). It is clear from the phase portrait, that every trajectory starting at \(z\) leads either to \(\Omega_{1}\) or to \(\Omega_{3}\). Similarly, every trajectory entering \(z\), comes from either \(\Omega_{2}\) or \(\Omega_{4}\). We already argued that the trajectory in \(\Omega_{1}\) and \(\Omega_{3}\) cannot enter \(\Omega_{2}\) or \(\Omega_{4}\) without leaving \(W\), hence no trajectory starts and terminates at \(z\) without leaving \(W\). Before choosing an appropriate neighborhood \(U_{0}\), we need to provide some estimates. Let \(v_{+}\) and \(v_{-}\) be length one eigenvectors of the matrix \(D_{z}\xi_{c}\), where \(v_{+}\) corresponds to the positive eigenvalue, while \(v_{-}\) corresponds to the negative eigenvalue. Let \((s,t)\) be local coordinates near \(z\) such that \(v_{+}\) and \(v_{-}\) correspond to \((1,0)\) and \((0,1)\) respectively. In other words, in these coordinates, we have: \[\xi_{\text{lin}}(s,t) =(c_{+}s,c_{-}t)\] \[\xi_{c}(s,t) =(c_{+}s,c_{-}t)+O\left(\left\|\!\left|(s,t)\right|\!\right|^{2}\right)\] for the eigenvalues \(c_{+},c_{-}\) of \(D_{z}\xi_{c}\), satisfying \(c_{+}>0>c_{-}\). Consider the function \[g(s,t)=\frac{1}{2}\left(s^{2}-t^{2}\right).\] With this definition, we have \[\partial_{\xi_{\text{lin}}}g=c_{+}s^{2}-c_{-}t^{2}\geq 0\] with equality only at \((0,0)\). Moreover, \[\partial_{\xi_{c}}g=c_{+}s^{2}-c_{-}t^{2}+O\left(\left\|(s,t)\right\|^{3}\right).\] In particular, there exists a neighborhood \(U_{0}\) of \(z\), such that \(\partial_{\xi_{c}}g>0\) everywhere except at \(z\). Find a smaller \(U_{1}\subset U_{0}\) such that any trajectory of \(\xi_{c}\) entering \(U_{1}\) leaving \(U_{0}\) does not return to \(U_{1}\). Let \(\tau\) be a cut-off function supported in \(U_{1}\) and let \(\xi^{\prime}\) be the vector field constructed in Proposition 4.10 above using this function \(\tau\). As \(\partial_{\xi_{\mathrm{lin}}}g,\partial_{\xi_{c}}g>0\) on \(U_{1}{\smallsetminus}\{z\}\), the same inequality holds for a convex combination thereof. Let \(\theta\) be a trajectory of \(\xi^{\prime}\). * If \(\theta\) stays forever in \(U_{0}\), then \(\partial_{\xi^{\prime}}g\geq 0\) implies that \(\theta\) must hit \(z\); * If \(\theta\) does not enter \(U_{1}\), then it is actually a trajectory of \(\xi_{c}\) that does not hit \(z\), so it must leave \(W\). * If \(\theta\) starts in \(U_{1}\) and does not hit \(z\) and eventually leaves \(U_{0}\), then it becomes a trajectory of \(\xi_{c}\) before leaving \(U_{1}\). Consequently, \(\theta\) leaves \(W\) without returning to \(U_{0}\) by the choice of \(U_{1}\). These case conclude the proof of Step 2. **Step 3. 
The general case.** We assume that \(W\) has a product structure \(W=W_{\mathrm{II}}\times I^{n-2}\), where \(I=(-\delta,\delta)\) and \(W_{\mathrm{II}}\) is an open contractible subset of \(\mathbb{R}^{2}\). Consider the projection \(\Pi\colon W\to W_{\mathrm{II}}\) given by \(\Pi(y,x,\bar{u})=(y,x)\). As passing from \(\xi_{c}\) to \(\xi^{\prime}\) affects only the first two coordinates, we conclude that \(\xi^{\prime}\) has the form \[\xi^{\prime}=(\alpha_{1}(x,y),\alpha_{2}(x,y),-u_{1},\ldots,-u_{k-1},u_{k}, \ldots,u_{n-2}).\] Let \(\xi_{2}^{\prime}\) be the vector field on \(W_{\mathrm{II}}\) with coordinates \[\xi_{2}^{\prime}=(\alpha_{1}(x,y),\alpha_{2}(x,y)).\] Note that \(D\Pi(\xi^{\prime})=\xi_{2}^{\prime}\). If \(\theta\) is a trajectory of \(\xi^{\prime}\) staying in \(W\), then \(\Pi(\theta)\) is a trajectory of \(\xi_{2}^{\prime}\) staying in \(W_{\mathrm{II}}\). By the previous step, any trajectory \(\xi_{2}^{\prime}\) either converges to the critical point \((y_{0},x_{0})\) or leaves \(W_{\mathrm{II}}\). Therefore, if \(\theta\) stays forever in \(W\), \(\Pi(\theta)\) has to converge to \((y_{0},x_{0})\). The \(\bar{u}\) components of \(\xi^{\prime}\) are \((-u_{1},\ldots,-u_{k-1},u_{k},\ldots,u_{n-2})\), that is, if \(\theta\) stays forever in \(W\), the \(\bar{u}\)-coordinates have to converge to \((0,\ldots,0)\). Consequently, if \(\theta\) stays forever in \(W\), then it terminates at \(z\). **Remark 4.13**.: What we proved in Lemma 4.11 concerns forward behavior of a trajectory. However, the backward result is also true: if a trajectory stays forever in the past in \(W\), it has to start at \(z\). The proof is completely analogous. As a corollary, we prove the following result. **Corollary 4.14**.: _Suppose \(a,b\) are such that \(W\subset f^{-1}[a,b]\) and the only critical points of \(f\) in \(f^{-1}[a,b]\) are \(p\) and \(q\). If \(\theta\) is a trajectory of \(\xi^{\prime}\), then:_ * _Either it exits_ \(f^{-1}[a,b]\) _through_ \(f^{-1}(b)\)_, or it terminates at_ \(z\)_._ * _Either it enters_ \(f^{-1}[a,b]\) _through_ \(f^{-1}(a)\)_, or it starts at_ \(z\)_;_ Proof.: If \(\theta\) does not intersect \(U\), it is a trajectory of \(\xi\). Any trajectory of \(\xi\) that does not hit \(U\), flows from \(f^{-1}(a)\) to \(f^{-1}(b)\). Suppose \(\theta\) enters \(U\). By Lemma 4.11, either it hits \(z\), or it leaves \(W\). If it hits \(z\), we are done; if it leaves \(W\), then by Proposition 3.6, it does never return to \(U\), but then, as soon as \(\theta\) leaves \(W\), it actually becomes a trajectory of \(\xi\), and so it must terminate at a critical point. As there are no critical points of \(f\) in \(f^{-1}[a,b]\smallsetminus W\), \(\theta\) must hit a critical point with a critical value not in \([a,b]\). Since \(f\) increases along \(\theta\), we conclude that \(\theta\) terminates above the level set \(f^{-1}(b)\). This proves the first part of the corollary. The proof of the other is analogous. As a final stage, we construct a Morse function \(g\) whose gradient-like vector field is \(\xi^{\prime}\). The construction follows from Vector Field Integration Lemma [1], which in turn generalizes [11, Assertion 5, page 54]. Nevertheless, the proof in [1] has a small technical flaw (namely, the function that is constructed is not necessarily continuous unless the vector field is properly rescaled), therefore we give an independent construction. We note that in [1], there will be given a more general statement of the Vector Field Integration Lemma. 
The following result concludes the proof of Theorem 1.2. **Proposition 4.15**.: _There exists a Morse function \(g\) whose gradient-like vector field is \(\xi^{\prime}\), such that \(g=f\) away from \(f^{-1}(a,b)\)._ Proof.: The high-level idea is to define \(g\) by an explicit formula in a neighborhood of \(z\) and by interpolation elsewhere. Set \(c=\frac{1}{2}(a+b)\). Inside \(U_{z}\) (see Proposition 4.10) define a function \[g_{0}(s,t,u_{1},\dots,u_{n-2})=c+s^{2}-t^{2}-u_{1}^{2}-\dots-u_{k-1}^{2}+u_{k} ^{2}+\dots+u_{n-2}^{2}.\] Write \(\alpha^{2}:=s^{2}+u_{k}^{2}+\dots+u_{n-2}^{2}\), \(\beta^{2}:=t^{2}+u_{1}^{2}+\dots+u_{k-1}^{2}\). For \(\rho>\varepsilon>0\) define (cf. [1, Section 2.4]) the subset of \(U_{z}\): \[H_{\rho,\varepsilon}:=\{-\alpha^{2}+\beta^{2}\in[-\varepsilon^{2},\varepsilon ^{2}],\ \ \alpha^{2}\beta^{2}\leqslant(\rho^{4}-\varepsilon^{4})/4\}.\] For sufficiently small \(\rho\), the set \(H_{\rho,\varepsilon}\) is compact (which is a formal way of saying that it is contained in the interior of \(U_{z}\)). Let us now define the following parts of the boundary of \(H_{\rho,\varepsilon}\); see Figure 3. \[\begin{split} X_{\mathrm{in}}&=\partial H_{\rho, \varepsilon}\cap\{-\alpha^{2}+\beta^{2}=-\varepsilon^{2}\}\subset g_{0}^{-1} (c-\varepsilon^{2})\\ X_{\mathrm{out}}&=\partial H_{\rho,\varepsilon} \cap\{-\alpha^{2}+\beta^{2}=\varepsilon^{2}\}\subset g_{0}^{-1}(c+\varepsilon ^{2})\\ X_{\mathrm{tan}}&=\partial H_{\rho,\varepsilon} \cap\{\alpha^{2}\beta^{2}=(\rho^{4}-\varepsilon^{4})/4\}.\end{split} \tag{4.16}\] It is clear that \(\xi^{\prime}\) is tangent to \(X_{\mathrm{tan}}\) (cf. [1, Lemma 2.31]). Choose sufficiently small values \(\rho>\varepsilon_{1}>\varepsilon_{2}>0\). For brevity, we will write \(H_{\varepsilon_{i}}:=H_{\rho,\varepsilon_{i}}\) and the corresponding subsets of \(\partial H_{\varepsilon_{i}}\) will be denoted by \(X_{\mathrm{in}}^{i},X_{\mathrm{out}}^{i},X_{\mathrm{tan}}^{i}\), as in Figure 4. Rescale the vector field \(\xi^{\prime}\) by a positive factor in such a way that: * if the flow of \(\xi^{\prime}\) does not hit \(H_{\varepsilon_{1}}\), then it takes precisely the time \(b-a\) to get from \(f^{-1}(a)\) to \(f^{-1}(b)\). * if the flow of \(\xi^{\prime}\) starts from \(f^{-1}(a)\) and hits \(X_{\mathrm{in}}^{1}\), then it takes precisely time equal \(c-\varepsilon-a\); * if the flow of \(\xi^{\prime}\) ends at \(f^{-1}(b)\) and hits \(X_{\mathrm{out}}^{1}\) in the past, then it takes precisely time equal \(b-c-\varepsilon\); * The time to reach \(X^{1}_{\mathrm{out}}\smallsetminus X^{2}_{\mathrm{out}}\) from \(X^{1}_{\mathrm{in}}\smallsetminus X^{2}_{\mathrm{in}}\) is equal to \(2\varepsilon\). Choose a point \(x\in f^{-1}(a,b)\) and let \(\gamma_{x}\) be the trajectory of \(\xi^{\prime}\) through \(x\). We now define the function \(g_{1}\) away from \(H_{\varepsilon_{2}}\) as follows: * Suppose \(\gamma_{x}\) travels from \(f^{-1}(a)\) to \(f^{-1}(b)\) without hitting \(H_{\varepsilon_{2}}\). Assume that \(\gamma_{x}(0)\in f^{-1}(a)\), and so \(\gamma_{x}(b-a)\in f^{-1}(b)\). We set \(g_{1}(x)=a+t_{x}\), where \(t_{x}\) is defined by \(\gamma_{x}(t_{x})=x\). * Suppose \(\gamma_{x}\) travels from \(f^{-1}(a)\), passes through \(x\), and then hits \(X^{2}_{\mathrm{in}}\). Assume that \(\gamma_{x}(0)\in f^{-1}(a)\). We set \(g_{1}(x)=a+t_{x}\), where \(t_{x}\) is defined by \(\gamma_{x}(t_{x})=x\). * Suppose \(\gamma_{x}\) travels from \(X^{2}_{\mathrm{out}}\), passes through \(x\), and then hits \(f^{-1}(b)\). 
We assume \(\gamma_{x}(0)\in X^{2}_{\mathrm{out}}\). We set \(g_{1}(x)=c+\varepsilon+t_{x}\), where \(t_{x}\) is defined by \(\gamma_{x}(t_{x})=x\). With this definition, we have defined \(g_{1}(x)\) everywhere in \(f^{-1}[a,b]\) except for the interior of \(H_{\varepsilon_{2}}\). For future reference, observe that the way the vector field \(\xi^{\prime}\) was rescaled leads to the following claim: **Lemma 4.17**.: _The function \(g_{1}\) is equal to \(c-\varepsilon\) on the whole of \(X^{1}_{\mathrm{in}}\), and to \(c+\varepsilon\) on the whole of \(X^{1}_{\mathrm{out}}\). Moreover, \(f\) and \(g_{1}\) coincide on \(f^{-1}(b)\)._ Ideally, we would like to set \(g\) to be equal to \(g_{0}\) on \(H_{\varepsilon_{2}}\) and to \(g_{1}\) away from \(H_{\varepsilon_{2}}\). However, this could potentially introduce a discontinuity at the boundary. (Figure 3: A schematic presentation of \(H_{\rho,\varepsilon}\). Figure 4: Proof of Proposition 4.15.) To avoid this, we choose a smooth cut-off function \(\phi\colon X^{1}_{\text{in}}\to[0,1]\), equal to \(1\) on \(X^{2}_{\text{in}}\) and supported on a compact subset of \(\operatorname{Int}X^{1}_{\text{in}}\). Extend \(\phi\) to the whole of \(H_{\varepsilon_{1}}\) demanding that \(\phi\) be invariant under the flow of \(\xi^{\prime}\). We set \[g(z)=\begin{cases}g_{1}(z)&z\notin H_{\varepsilon_{1}},\\ \left(\phi g_{0}+(1-\phi)g_{1}\right)(z)&z\in H_{\varepsilon_{1}}.\end{cases} \tag{4.18}\] In particular \(g=g_{1}\) outside \(H_{\varepsilon_{1}}\). We only need to check continuity at \(\partial H_{\varepsilon_{1}}\). By Lemma 4.17, \(g\) is continuous at \(X^{1}_{\text{in}}\) and \(X^{1}_{\text{out}}\); the vanishing of \(\phi\) on \(X^{1}_{\text{tan}}\) implies that \(g\) is continuous at \(X^{1}_{\text{tan}}\), and so \(g\) is continuous everywhere. Moreover, after a little technical amendment, that is, after rescaling \(\xi^{\prime}\), \(g\) can be assumed smooth, cf. [1, Proof of Proposition 2.35]. By construction, \(g=g_{0}\) in \(H_{\varepsilon_{2}}\); in particular, it is Morse. We claim that \(\xi^{\prime}\) is gradient-like for \(g\) constructed in such a way. Since \(g\) and \(\xi^{\prime}\) admit the Morse normal form near \(z\) (see (2.7)), we only need to check that \(\partial_{\xi^{\prime}}g\geq 0\) on \(f^{-1}[a,b]\), with the sole equality at \(z\). It is evident on \(H_{\varepsilon_{2}}\), as \(g=g_{0}\). Note that, by construction, \(\partial_{\xi^{\prime}}g_{1}\equiv 1\) on \(f^{-1}[a,b]\diagdown H_{\varepsilon_{2}}\). In particular, \(\partial_{\xi^{\prime}}g>0\) away from \(H_{\varepsilon_{1}}\). Suppose \(z\in H_{\varepsilon_{1}}\diagdown H_{\varepsilon_{2}}\). The function \(\phi\) was chosen to be \(\xi^{\prime}\)-invariant, hence \(\partial_{\xi^{\prime}}\phi=0\). By (4.18): \[\partial_{\xi^{\prime}}g=\partial_{\xi^{\prime}}(\phi g_{0}+(1-\phi)g_{1})=\phi\partial_{\xi^{\prime}}g_{0}+(1-\phi)\partial_{\xi^{\prime}}g_{1}.\] As \(\partial_{\xi^{\prime}}g_{0}(x)>0\) on \(H_{\varepsilon_{1}}\diagdown H_{\varepsilon_{2}}\), and \(\partial_{\xi^{\prime}}g_{1}\equiv 1\), we conclude that \(\partial_{\xi^{\prime}}g>0\) on \(H_{\varepsilon_{1}}\diagdown H_{\varepsilon_{2}}\). In this way, we finish the proof of Proposition 4.15, which was the last step in the proof of Theorem 1.2.
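As a quick sanity check of the step "\(\partial_{\xi^{\prime}}g\geq 0\) is evident on \(H_{\varepsilon_{2}}\), as \(g=g_{0}\)", the computation can be carried out explicitly under the additional assumption (suggested by the Morse normal form (2.7), but not stated verbatim above) that near \(z\) the field \(\xi^{\prime}\) has components \((s,-t,-u_{1},\ldots,-u_{k-1},u_{k},\ldots,u_{n-2})\) in the coordinates of \(U_{z}\): \[\partial_{\xi^{\prime}}g_{0}=s\cdot 2s+(-t)\cdot(-2t)+\sum_{i<k}(-u_{i})\cdot(-2u_{i})+\sum_{j\geq k}u_{j}\cdot 2u_{j}=2\Big(s^{2}+t^{2}+\sum_{i}u_{i}^{2}\Big)\geq 0,\] with equality only at the origin, that is, only at \(z\).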
2310.13974
Automatic Pronunciation Assessment -- A Review
Pronunciation assessment and its application in computer-aided pronunciation training (CAPT) have seen impressive progress in recent years. With the rapid growth in language processing and deep learning over the past few years, there is a need for an updated review. In this paper, we review methods employed in pronunciation assessment for both phonemic and prosodic aspects. We categorize the main challenges observed in prominent research trends, highlight existing limitations, and list available resources. This is followed by a discussion of the remaining challenges and possible directions for future work.
Yassine El Kheir, Ahmed Ali, Shammur Absar Chowdhury
2023-10-21T11:26:24Z
http://arxiv.org/abs/2310.13974v1
# Automatic Pronunciation Assessment - A Review ###### Abstract Pronunciation assessment and its application in computer-aided pronunciation training (CAPT) have seen impressive progress in recent years. With the rapid growth in language processing and deep learning over the past few years, there is a need for an updated review. In this paper, we review methods employed in pronunciation assessment for both phonemic and prosodic aspects. We categorize the main challenges observed in prominent research trends, highlight existing limitations, and list available resources. This is followed by a discussion of the remaining challenges and possible directions for future work. ## 1 Introduction Computer-aided Pronunciation Training (**CAPT**) technologies are pivotal in promoting self-directed language learning, offering constant and tailored feedback for secondary language learners. The rising demand for foreign language learning, with the tide of globalization, fuels the growth in the development of CAPT systems. This surge has led to extensive research and development efforts in the field (Neri et al., 2008; Kang et al., 2018; Rogerson-Revell, 2021). CAPT systems have two main usages: (_i_) pronunciation assessment, where the system is concerned with the errors in the speech segment; (_ii_) pronunciation teaching, where the system is concerned with correcting and guiding the learner to fix mistakes in their pronunciation. This paper addresses the former - focusing on pronunciation assessment, which aims to automatically score non-native speech segments and give meaningful feedback. To build such a robust pronunciation assessment system, the following design aspects should be addressed. **Modelling**: Mispronunciation detection and diagnosis (MDD) is, in many cases, more challenging to model than a vanilla automatic speech recognition (ASR) system, which converts speech into text regardless of pronunciation mistakes. Robust ASR should perform well with all variations, including dialects and non-native speakers. However, MDD should mark phonetic variations from the learner, which may sometimes be subtle differences (Li et al., 2016). **Training Resources**: Recent success in deep learning methods emphasized the need for in-domain training data. Language learners can be divided into two groups: adult secondary-language (L2) learners and child language learners. For the former, a key design choice is whether to build a system that depends on the learner's native language (L1). The latter requires children's speech, which is a challenging corpus to build (Council III et al., 2019; Venkatasubramaniam et al., 2023); even ASR accuracy for children still lags behind adult ASR (Liao et al., 2015). The scarcity and imbalanced distribution of negative mispronunciation classes pose a significant challenge in training data. **Evaluation**: There is no clear definition of right or wrong in pronunciation; instead there is an entire scale from unintelligible to native-sounding speech (Witt, 2012). Given that error in pronunciation is difficult to quantify, it can be split into (a) _Objective evaluations_ - (_i_) phonetic or segmental; (_ii_) prosodic or supra-segmental; and (_iii_) place and manner of articulation, or sub-segmental; and (b) _Subjective evaluations_, in many cases measured through listening tasks followed by human judgment, which can be split into three main classes: (_i_) intelligibility; (_ii_) comprehensibility and (_iii_) accentedness (or linguistic native-likeness).
See Figure 1 for common pronunciation assessment factors. Several studies have summarized advances in pronunciation error detection (Eskenazi, 1999, 2009; Witt, 2012; Li et al., 2016; Chen and Li, 2016; Zhang et al., 2020; Caro Anzola and Mendoza Moreno, 2023). Eskenazi (1999) investigated the potential and limitations of ASR for L2 pronunciation assessment, showcasing its practical implementation using an interface developed at CMU. Furthermore, the study reports different automatic scoring techniques, emphasizing modalities of interaction, associated algorithms, and the challenges. Witt (2012) presented an overview of pronunciation error detection, encompassing various scoring methodologies and assessing commercial CAPT systems. Chen and Li (2016) provided a research summary, focusing on phoneme errors and prosodic error detection. More recently, Zhang et al. (2020) provided a summary of two automatic scoring approaches: (a) ASR-based scoring to calculate confidence measures; and (b) acoustic phonetics scoring focusing on comparing or classifying phonetic segments using various acoustic features. With large transformer-based pre-trained models gaining popularity, re-visiting the existing literature and presenting a comprehensive study of the field is timely. We provide an overview of techniques adapted for detecting mispronunciation in _(a)_ segmental space, _(b)_ assessing pronunciation with supra-segmental measures, along with _(c)_ different data generation/augmentation approaches. Unlike previous overview studies, we also cover a handful of _(d)_ qualitative studies bringing together the notions of intelligibility, comprehensibility, and accentedness. We note the resources and evaluation measures available to the speech community and discuss the main challenges observed within prominent research trends, shedding light on existing limitations. Additionally, we also explore potential directions for future work. ## 2 Nuances of Pronunciation Pronunciation can be defined as "the way in which a word or letter is said, or said correctly, or the way in which a language is spoken".1 Compared to other language skills, learning pronunciation is difficult. Yet, for learners, mastering L2 pronunciation is most crucial for better communication. Historically, pronunciation errors (mispronunciations) are characterized by phonetic (segmental) errors and prosodic (supra-segmental) errors (Witt, 2012; Chen and Li, 2016), as represented in Figure 1. This characterization provides some clear distinctions for pronunciation assessment. Footnote 1: [https://dictionary.cambridge.org/dictionary/english/pronunciation](https://dictionary.cambridge.org/dictionary/english/pronunciation), Accessed: 2023-06-21 ### Pronunciation Errors Phonetic (segmental) errors involve the production of individual sounds, such as vowels and consonants, and include three error types: insertion, deletion, and substitution. These can be attributed to several factors, including negative language transfer, incorrect letter-to-sound conversion, and misreading of text prompts (Meng et al., 2007; Qian et al., 2010; Kartushina and Frauenfelder, 2014; Li et al., 2016). For example, Arabic L1 speakers may find it difficult to differentiate between /p/ and /b/ as the phoneme /p/ is non-existent in Arabic, so words like /park/ and /bark/ might sound similar to Arabic L1 speakers. Similarly, in Spanish, there are no short vowels, so words like /eat/ and /it/ might sound similar to Spanish L1 speakers.
### Prosodic Errors Prosodic features encompass elements that influence the pronunciation of an entire word or sentence, including stress, rhythm, and intonation. Errors related to prosodic features involve the production of larger sound units. For intelligibility, prosodic features play a particularly significant role (Raux and Kawahara, 2002). This is especially true for tonal languages (Dahmen et al., 2023), where variation in pitch can lead to words with different meanings. Prosodic errors are often language-dependent and categorized by: stress (lexical and sentence), rhythm, and intonation. (Figure 1: Types of Pronunciation Errors for Assessment.) _Stress_ is the emphasis placed on certain syllables in a word or sentence. It is articulated by increasing the loudness, duration, and pitch of the stressed syllable. It can be categorized as _lexical stress_, if the stress is placed on syllables within a word, or _sentence stress_, if the stress is placed on words within a sentence. Since Mandarin has contrastive stress at the word level that is absent in Korean, Mandarin learners of English can have an advantage over Korean learners in stress processing of English words (Wang, 2022). _Rhythm_ is the pattern of stressed and unstressed syllables in a word or sentence. A language can be classified as either stress-timed or syllable-timed (Ohata, 2004; Matthews, 2014). In stress-timed languages, the duration of stressed syllables tends to dominate the overall time required to complete a sentence. Conversely, in syllable-timed languages, each syllable receives an equal amount of time during production. \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline Corpus & Languages (L2) & Native Language (L1) & Dur/Utt & \#Speakers & Reported SOTA Results / Relevant Studies \\ \hline \hline ISLE (Menzel et al., 2000) * & English & German and Italian & 18/ & 46 & \# PER: (Hosseini-Kivanani et al., 2021) \\ \hline ERJ (Minematsu et al., 2004) * & English & Japanese & /68,000 & 200 & \# Utterance PCC (Luan et al., 2012). Word Intelligibility (Minematsu et al., 2011). Phoneme Errors (Ito et al., 2005) \\ \hline CU-CHLOE (Meng et al., 2007a) & English & Cantonese and Mandarin & 34.6/18,139 & 210 & Phoneme F1-measure: 80.98\% (Wu et al., 2021) \\ \hline EURONOUNC (Cylwik et al., 2009) & Polish & German & /721 & 18 & \# Utterance rhythm (Wagner, 2014) \\ \hline iCALL (Chen et al., 2015) \({}^{+}\) & Mandarin & 24 countries & 142/90,841 & 305 & FAR: 8.65\%, FRR: 3.09\% (Li et al., 2017). Tone Recognition: (Tong et al., 2015) \\ \hline SingAkids-Mandarin (7) \({}^{+}\) & Mandarin & Singapore (English) & 125/79,843 & 255 & PER: 28.51\%.
Tone Recognition (Tong et al., 2017) \\ \hline SHEFCE (Ng et al., 2017b) * & English, Cantonese & English, Cantonese & 25/ & 31 & Mandarin syllable error rate: 17.3\%, English PER: 34.5\% (Ng et al., 2017a) \\ \hline VoisTUTOR (Yarra et al., 2019; Pal et al., 2022) & English & Kannada, Malayalam, Telugu, Tamil, Hindi and Gujarati & 14/26,529 & 16 & Word Intelligibility Accuracy: 96.58\% (Anand et al., 2023) \\ \hline EpaDB (Vidal et al., 2019a) \({}^{+}\) & English & Spanish & /3,200 & 50 & (Sancinetti et al., 2022) reported MinCost per phoneme \\ \hline SELL-CORPUS (Chen et al., 2019) * & English & Chinese & 31.6/ & 389 & F1-score Accept Detection: Word-level 35\%, Sentence-level 45\% (Kyriakopoulos et al., 2020) \\ \hline L2-ARCTIC (Zhao et al., 2018a) * & English & Hindi, Korean, Mandarin, Spanish, and Arabic & 3.6/ & 24 & F1-score: 63.04\% (Lin and Wang, 2022a) \\ \hline Speechocean762 (Zhang et al., 2021b) * & English & Chinese & /5,000 & 250 & Phone PCC: 65.60\% (Chao et al., 2022). Word Accuracy PCC: 59.80\% (Chao et al., 2022). Word Stress PCC: 32.30\% (Do et al., 2023). Sentence total score PCC: 79.60\% (Chao et al., 2022) \\ \hline LATIC (ZHANG, 2021) * & Mandarin & Russian, Korean, French, and Arabic & /42,579 & 4 & Sentence Accuracy PCC: 69.80\% (Lin and Wang, 2023b) \\ \hline Arabic-CAPT (Algabri et al., 2022) & Arabic & India, Pakistan, Indonesia, Nepal, Afghanistan, Nigeria, Uganda & 2.3/1,611 & 62 & F1-score 70.53\% (Algabri et al., 2022) \\ \hline AraVoicedL2 (EL Kheir et al., 2023b) & Arabic & Turkey, Bangladesh, Malaysia, Nigeria, Indonesia & 5.5/7,062 & 11 & F1-score 60.00\% (EL Kheir et al., 2023b) \\ \hline \hline \end{tabular} \end{table} Table 1: Widely used datasets. * represents a publicly available dataset, + is available on request, # relevant study, Dur: total duration in hours, Utt: total number of utterances, SOTA: the notable reported state-of-the-art for each corpus, FAR: false acceptance rate, FRR: false rejection rate, PCC: Pearson correlation coefficient with human scores. _Intonation_ refers to the melodic pattern and pitch variations in speech. L2 learners of Vietnamese and Mandarin Chinese encounter significant difficulty in acquiring distinct tones, particularly if their native language lacks tonality. Such tonal languages rely on different pitch patterns to convey distinct meanings, making it challenging for learners to accurately grasp and reproduce these tonal variations (Nguyen et al., 2014; Chen et al., 2015). ### Pronunciation Constructs The motivation behind mastering L2 pronunciation is to communicate properly in the target language. Most of the time, these successes are measured using three pronunciation constructs (Uchihara, 2022) - **Intelligibility**, **Comprehensibility**, and **Accentedness**. These are perceived measures that are partially independent, with overlapping features. **Intelligibility** can be defined using the accuracy of the sound, word, and utterance itself along with utterance-level completeness (Abercrombie, 1949; Gooch et al., 2016). Accuracy refers to whether the learner pronounces each phoneme or word in the utterance correctly. In contrast, completeness measures the percentage of words pronounced compared to the total number of words. **Comprehensibility**, on the other hand, is defined based on the perceived ease or difficulty that listeners experience when understanding L2 speech. Fluency, defined by the smoothness of pronunciation and correct usage of pauses Zhang et al.
(2021), is observed to be one of the key factors that determine the level of comprehensibility, along with good linguistic knowledge and discourse-level organization (Trofimovich and Isaacs, 2012; Saito et al., 2016). Among the three constructs, **accentedness** is defined as "listeners' perceptions of the degree to which L2 speech is influenced by their native language and/or colored by other non-native features" (Saito et al., 2016). It is often confused with both comprehensibility and intelligibility, influencing pronunciation assessment. The accent is an inherent trait that defines a person's identity and is one of the first things that a listener notices. It is often observed that most unintelligible speech is identified as highly accented, whereas highly accented speech is not always unintelligible (Derwing and Munro, 1997; Kang et al., 2018; Munro and Derwing, 1995). Thus accents complicate fine-grained pronunciation assessment, as it is harder to pinpoint (supra-)segment-level errors. ## 3 Datasets Obtaining datasets for pronunciation assessment is often challenging and expensive. Most of the available research work focused on private data, leaving only a handful of publicly accessible datasets to the research community. Table 1 provides an overview of available datasets, indicating English as a popular choice for the target language. Within this handful of datasets, a few include phonetic/segmental-level transcription, and even fewer provide manually rated word- and sentence-level prosodic features, fluency, and overall proficiency scores, offering insights into learners' L2 speech intelligibility and comprehensibility (Arvaniti and Baltazani, 2000; Cole et al., 2017; Zhang et al., 2021). More details on datasets and annotation are in Appendices A and B, respectively. ## 4 Research Avenues In this section, we delve into diverse approaches - old, revised, and current methodologies used for pronunciation modeling of both segmental and supra-segmental features, as illustrated in Figure 2 and Figure 3. ### Classification based on Acoustic Phonetics Classifier-based approaches explored both segmental and prosodic aspects of pronunciation. _Segmental_ approaches involve the use of classifiers targeting specific phoneme pair errors, utilizing different acoustic features such as Mel-frequency cepstral coefficients (MFCCs) along with their first and second derivatives, energy, zero-crossing, and spectral features (Van Doremalen et al., 2009; Huang et al., 2020), with different techniques such as Linear Discriminant Analysis (LDA) (Truong et al., 2004; Strik et al., 2009) and decision trees (Strik et al., 2009). _Prosodic_ approaches focus on detecting lexical stress and tones, utilizing features such as energy, pitch, duration, and spectral characteristics, with classifiers like Gaussian mixture models (GMMs) (Ferrer et al., 2015), support vector machines (SVMs) (Chen and Wang, 2010; Shahin et al., 2016), deep neural networks (DNNs) (Shahin et al., 2016), and multi-distribution DNNs (Li et al., 2018). ### Extended Recognition Network (ERN) ERNs extend the recognition network used in automatic speech recognition to capture broader contextual information; they leverage enhanced lexicons in combination with ASR systems. They cover canonical transcriptions as well as error patterns, enabling the detection of mispronunciations beyond standard transcriptions (Meng et al., 2007; Ronen et al., 1997; Qian et al., 2010; Li et al., 2016).
However, ERNs often depend on experts or hand-crafted error patterns, which are typically derived from non-native speech transcriptions as illustrated in (Lo et al., 2010), which makes the approach language-dependent and may limit its generalizability when dealing with unknown languages. ### Likelihood-based Scoring and GOP The initial likelihood-based MD algorithms aim to detect errors at the phoneme level using pre-trained HMM-GMM ASR models. Notably, Kim et al. (1997) introduced a set of three HMM-based scores, including likelihood scores, log posterior scores, and segment-duration-based scores. Among these three, the log-based posterior scores are widely adopted due to their high correlation with human scores, and are also used to calculate the popular 'goodness of pronunciation' (GOP) measure. The GMM-HMM based GOP scores can be defined by Equation 1. \[GOP(p)=P(p|O)=\frac{p(O|p)\;P(p)}{\sum_{q}\;p(O|q)\;P(q)} \tag{1}\] \(O\) denotes a sequence of acoustic features, \(p\) stands for the target phone, and \(Q\) represents the set of phones. These scores are further improved using a forced-alignment framework (Kawai and Hirose, 1998). More details are presented in Witt and Young (2000). ### Reformulations of GOP To further enhance the effectiveness of GOP scoring, Zhang et al. (2008) were the first to propose a log-posterior normalized GOP defined as: \[GOP_{r}(p)=|\frac{p(\alpha_{t}|p)P(p)}{\max_{q}p(\alpha_{t}|q)}| \tag{2}\] Building upon this, Wang and Lee (2012) adopted the GOP formulation and incorporated error pattern detectors for phoneme mispronunciation diagnosis tasks. With the emergence of DNNs in the field of ASR, Hu et al. (2013, 2015, 2015) demonstrated that using a DNN-HMM ASR for GOP yields improved correlation scores surpassing GMM-HMM based GOP. GOP and its reformulations represent a significant milestone. They leverage pre-trained acoustic models on the target language without necessitating knowledge of the speaker's L1. Furthermore, they offer the advantage of being computationally efficient to calculate. However, these scores lack context-aware information that is crucial for accurate pronunciation analysis. To overcome this, Sudhakara et al. (2019) presented a context-aware GOP formulation by adding phoneme state transition probabilities (STP) extracted from the HMM model to the GOP score calculation. Furthermore, Shi et al. (2020) proposed a context-dependent GOP, incorporating a phoneme duration factor \(\alpha_{i}\) and a phoneme transition factor \(\tau\). The formulated GOP score combines all the contextual scores as illustrated in Equation 3. \[E_{t} =-\sum p(q|O)\;\log(p(q|O)) \tag{3}\] \[\tau(p) =\sum_{t}\frac{\frac{1}{E_{t}}}{\sum_{t^{\prime}}\frac{1}{E_{t^{\prime}}}}\log(p(q|O))\] \[GOP(p) =(1-\alpha_{i})*\tau(p)\] For _sentence accuracy_ evaluation, one common approach is to calculate the average GOP score across phonemes (Kim et al., 1997; Sudhakara et al., 2019). However, relying solely on averaging GOP scores at the phoneme level is limited. A recent approach (Sheoran et al., 2023) proposed combining a phone feature score with an audio pitch comparison, using dynamic time warping (DTW) against ideally pronounced speech, as a score to assess prosody, fluency, completeness, and accuracy at the sentence level. Inspired by GOP, Tong et al. (2015) proposed Goodness of Tone (GOT) based on posterior probabilities of tonal phones.
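To make the GOP computation above concrete, the following is a minimal sketch of DNN-posterior-based GOP scoring: average the log posterior of the canonical phone over its force-aligned frames, optionally normalised by the best competing phone (in the spirit of Equation 2). The posteriors, alignment, and phone-set size used here are illustrative assumptions and are not tied to any of the surveyed systems.

```python
import numpy as np

def gop_score(posteriors, segment, phone_idx, normalise=True):
    """posteriors: (T, n_phones) frame-level phone posteriors from an ASR model.
    segment: (start_frame, end_frame) from forced alignment of the canonical phone.
    phone_idx: index of the canonical phone in the model's phone set."""
    frames = posteriors[segment[0]:segment[1]]        # frames aligned to the phone
    log_p = np.log(frames[:, phone_idx] + 1e-10)      # log P(p | o_t) per frame
    if normalise:                                     # ratio to the best competing phone
        log_p -= np.log(frames.max(axis=1) + 1e-10)
    return log_p.mean()                               # segment-averaged GOP

# Toy example: a 5-frame segment aligned to phone index 2 over a 40-phone inventory.
posteriors = np.random.dirichlet(np.ones(40), size=5)
print(gop_score(posteriors, (0, 5), phone_idx=2))
```

A phone whose aligned frames concentrate probability mass on the canonical label yields a GOP close to zero, while mispronounced segments yield strongly negative scores; thresholding such scores per phone is the usual detection rule.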
While efforts have been made to improve the GOP formulation, it is important to acknowledge that the GOP score still has limitations, specifically in its ability to identify specific types of mispronunciation errors (deletion, insertion, or substitution), and it also demonstrates a degree of dependency on the language of the acoustic model. ### End-to-End Modeling In the new era of DNNs and Transformers, researchers have extensively explored leveraging the power of these models to train end-to-end pronunciation systems. Li et al. (2017) introduced an LSTM mispronunciation detector leveraging phone-level posteriors, time boundary information, and posteriors extracted from DNN models trained to classify phonetic attributes (place, manner, aspiration, and voicing). In contrast, Kyriakopoulos et al. (2018) introduced a siamese network with BiLSTM for pronunciation scoring by extracting distance metrics between phone instances from audio frames. A notable approach presented in Leung et al. (2019) introduced a CNN-RNN-CTC model for phoneme mispronunciation detection without any alignment component dependency. Subsequently, Feng et al. (2020) incorporated character embeddings to enhance the CNN-RNN model. Furthermore, Ye et al. (2022) enhanced the latter model using a triplet of features consisting of acoustic, phonetic, and linguistic embeddings. Subsequently, GOP features extracted from pre-trained ASR are enhanced using a Transformer encoder to predict a range of prosodic and segmental scores (Gong et al., 2022), or using additional SSL representation features, energy, and duration within the same architecture (Chao et al., 2022), or using a Conformer encoder (Fan et al., 2023). Moreover, PEPPANET is also a transformer-based mispronunciation model, but it can jointly model the dictation process and the alignment process, and it provides corresponding diagnostic feedback (Yan et al., 2023a). A subsequent improvement of PEPPANET uses knowledge about phone-level articulation traits with a graph convolutional network (GCN) to obtain more discriminative phonetic embeddings (Yan et al., 2023b). Recently, Zhang et al. (2023) proposed a recurrent neural network transducer (RNN-T) for L2 phoneme sequence prediction, along with an extended phoneme set and a weakly supervised training strategy to differentiate similar-sounding phonemes from different languages. Several approaches have also been proposed for _supra-segmental feature scoring_. Yu et al. (2015) proposed a new approach where traditional time-aggregated features are replaced with time-sequence features, such as pitch, to preserve more information without requiring manual feature engineering; a BiLSTM model is used for fluency prediction. Tao et al. (2016) and Chen et al. (2018) studied different DNN models, such as CNN, BiLSTM, and attention BiLSTM, to predict fluency and prosodic scores. Lin and Wang (2021) utilized deep features directly from the acoustic model, instead of relying on complex feature computations like GOP scores, with a scoring module incorporating a self-attention mechanism designed to model human sentence scoring. More recently, Zhu et al. (2023) proposed a BiLSTM model trained to predict the intelligibility score of a given phoneme or word segment using L2 speech with intelligibility annotations obtained via shadowing. Towards _lexical stress_ detection, several methods have been proposed to improve accuracy and performance. Ruan et al.
(2019) proposed a sequence-to-sequence approach using the Transformer model, motivated by the need for long-distance contextual information, to predict the phoneme sequence with stress marks. Furthermore, Korzekwa et al. (2020) introduced an attention-based neural network focusing on the automatic extraction of syllable-level features that significantly improves the detection of lexical stress errors. (Figure 2: Overview of the performance of different phonetic pronunciation detection models on L2-ARCTIC. Figure 3: Overview of the performance of fluency and prosody assessment models on Speechocean762.) _Tone classification_ has received significant attention in Mandarin language learning due to the crucial role that tones play in Mandarin Chinese. To address this challenge, several methods have been proposed. One approach involves training a DNN to classify speech frames into six tone classes (Ryant et al., 2014). Inspired by this, DNNs have been used to map combined cepstral and tonal features to frame-level tone posteriors. These tone posteriors are then fed into tone verifiers to assess the correctness of tone pronunciation (Lin et al., 2018; Li et al., 2018). Another study utilizes a CNN to classify syllables into four Mandarin tones (Chen et al., 2016). Similarly, ToneNet, a CNN-based network, was introduced for Chinese syllable tone classification using the mel-spectrogram as a feature representation (Gao et al., 2019). Additionally, a BiLSTM model was proposed as an alternative to capture long-term dependencies in acoustic and prosodic features for tone classification (Li et al., 2019). ### Self-Supervised Models Motivated by the recent success of self-supervised learning methods (Baevski et al., 2020; Hsu et al., 2021; Chen et al., 2022; Mohamed et al., 2022) in speech recognition and related downstream tasks such as emotion recognition, speaker verification, and language identification (Chen and Rudnicky, 2023; Fan et al., 2020), self-supervised approaches have also been employed in this field. Xu et al. (2021) explored fine-tuning wav2vec 2.0 on frame-level L2 phoneme prediction, where a pretrained HMM-DNN ASR is used to extract frame-level forced alignments. To overcome the dependency on time alignment, Peng et al. (2021) proposed a CTC-based wav2vec 2.0 model to predict L2 phoneme sequences. Building upon this work, Yang et al. (2022) proposed an approach that leverages unlabeled L2 speech using momentum pseudo-labeling. In a contrasting approach, Lin and Wang (2022) combined wav2vec 2.0 features and phoneme text embeddings in a joint learning framework to predict frame-level phoneme sequences and detect boundaries. Recently, EL Kheir et al. (2023) explored multi-view representations, utilizing mono- and multilingual wav2vec 2.0 encoders to capture different aspects of speech production and leveraging articulatory features as auxiliary tasks alongside phoneme sequence prediction. Furthermore, Kheir et al. (2023) introduced a novel L1-aware multilingual architecture, L1-MultiMDD, for addressing mispronunciation in multilingual settings encompassing Arabic, English, and Mandarin, using a pre-trained wav2vec-large model as the acoustic encoder. L1-MultiMDD is enriched with L1-aware speech representations, allowing it to understand the nuances of each speaker's native language. SSL models have proven to be effective in predicting fluency and prosodic scores assigned by human annotators. Kim et al. (2022), Lin and Wang (2023), and Yang et al. (2022) fine-tuned wav2vec 2.0 and HuBERT to predict prosodic and fluency scores.
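The recipe shared by most of these SSL-based scorers is simple: encode the waveform with a pretrained model, pool over time, and regress the human score. The following is a minimal sketch of this idea; the checkpoint name, pooling, head size, and score range are illustrative assumptions and do not reproduce any specific system from the papers cited above.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class ProsodyScorer(nn.Module):
    """Mean-pooled wav2vec 2.0 features followed by a small regression head."""
    def __init__(self, ssl_name="facebook/wav2vec2-base"):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(ssl_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, waveform):                           # waveform: (batch, samples) at 16 kHz
        states = self.encoder(waveform).last_hidden_state  # (batch, frames, hidden)
        pooled = states.mean(dim=1)                        # utterance-level representation
        return self.head(pooled).squeeze(-1)               # predicted fluency/prosody score

# Usage sketch: train with MSE against human-annotated scores.
# model = ProsodyScorer()
# loss = nn.MSELoss()(model(torch.randn(2, 16000)), torch.tensor([7.5, 4.0]))
```

In practice, the encoder is either frozen or partially fine-tuned, and handcrafted cues such as duration and energy can be concatenated to the pooled representation, as several of the studies above do.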
Similarly, another study (Lin and Wang, 2023) jointly predicts the L2 phoneme sequence using a CTC loss and predicts prosodic scores using acoustic representations fused with phoneme embeddings. Subsequently, Lin and Wang (2023) introduced a fusion of language embeddings and representation features and built a unified framework for multi-lingual prosodic scoring. Recently, Chao et al. (2022), Kheir et al. (2023), and Chen et al. (2023) enriched latent speech representations extracted from SSL models with handcrafted frame- and utterance-level non-verbal paralinguistic cues, such as duration and energy, for modeling fluency and prosody scores. ### Unsupervised Approaches It is important to note that the aforementioned approaches for studying mispronunciation detection typically involve the need for expert knowledge, laborious manual labeling, or dependable ASR results, all of which come with significant costs. In contrast, recent years have witnessed considerable endeavors in unsupervised acoustic pattern discovery, yielding sub-optimal outcomes. Lee and Glass (2012) initially investigated a comparison-based approach that analyzes the extent of misalignment between a student's speech and a teacher's speech. Subsequent studies (Lee and Glass, 2015; Lee et al., 2016) explored the discovery of mispronunciation errors by analyzing the acoustic similarities across individual learners' utterances, with a proposed n-best filtering method to resolve ambiguous error candidate hypotheses derived from acoustic similarity clustering. Furthermore, Mao et al. (2018) proposed _k_-means clustering on phoneme-based phonemic posterior-grams (PPGs) to expand the phoneme set in L2 speech. More recently, Sini et al. (2023) introduced a weighted DTW alignment as an alternative to the GOP algorithm for predicting probabilities and the sequence of target phonemes. Their proposed method achieves results comparable to the GOP scoring algorithm; likewise, Anand et al. (2023) explored the alignment distance between wav2vec 2.0 representations of teacher and learner speech using DTW to distinguish between intelligible and unintelligible speech. ### Data Augmentation Two major challenges in this field are L2 data scarcity and the imbalanced distribution of negative classes (mispronunciations). To address these challenges, researchers have opted for data augmentation techniques that have proven to be quite effective in pronunciation assessment. Such methods employed strategies like altering the canonical text by introducing mismatched phoneme pairs while preserving the original word-level speech (Fu et al., 2021). Additionally, a mixup technique is utilized in the feature space, leveraging phone-level GOP pooling to construct word-level training data (Fu et al., 2022). Furthermore, the error distance of the clustered SSL model embeddings is employed to substitute a phoneme sound with a similar sound (Zhang et al., 2022). These latter approaches depend on the reuse of existing information rather than generating novel instances of mispronunciations. In (Fernandez et al., 2017), voice transformations in pitch, vocal-tract, and vocal-source characteristics are used to generate new samples. Furthermore, L2-GEN can synthesize realistic L2 phoneme sequences by building a novel Seq2Seq phoneme paraphrasing model (Zhang et al., 2022). Korzekwa et al. (2020) proposed an augmentation technique by generating incorrectly stressed words using Neural TTS. Furthermore, Korzekwa et al.
(2022) provided an overview of mispronunciation error generation using three methods: phoneme-2-phoneme (P2P), which relies on perturbing the phonetic transcription of the corresponding speech audio; text-2-speech (T2S), which creates speech signals that match the synthetic mispronunciations; and speech-2-speech (S2S), which simulates different aspects of the prosodic nature of speech. Recently, the SpeechBlender framework (EL Kheir et al., 2023) was introduced as a fine-grained data augmentation pipeline that linearly interpolates raw good speech pronunciations to generate mispronunciations at the phoneme level. ## 5 Evaluation Metrics **Phoneme Error Rate (PER):** PER is a common metric used in MD evaluation, measuring the agreement of the predicted phoneme sequence with the human-annotated sequence. However, PER might not provide a comprehensive assessment of model performance when mispronunciations are infrequent, which is the case for MD datasets. **Hierarchical Evaluation Structure:** The hierarchical evaluation structure developed in (Qian et al., 2010) has also been widely adopted in (Wang and Lee, 2015; Li et al., 2016; EL Kheir et al., 2023), among others. The hierarchical mispronunciation detection depends on detecting the misalignment over: _what is said_ (annotated verbatim sequence); _what is predicted_ (model output); along with _what should have been said_ (text-dependent reference sequence). Based on the aforementioned sequences, the false rejection rate, false acceptance rate, and diagnostic error rate are calculated, using: * True acceptance (**TA**): the number of phones annotated and recognized as correct pronunciations. * True rejection (**TR**): the number of phones both annotated and correctly predicted as mispronunciations. The labels are further utilized to measure the diagnostic errors (**DE**) and correct diagnoses (**CD**) based on the prediction output and the text-dependent canonical pronunciation. * False rejection (**FR**): the number of phones wrongly predicted as mispronunciations. * False acceptance (**FA**): the number of phones misclassified as correct pronunciations. As a result, we can calculate the false rejection rate (**FRR**), which indicates the proportion of phones recognized as mispronunciations when the actual pronunciations are correct; the false acceptance rate (**FAR**), which indicates the proportion of phones misclassified as correct but actually mispronounced; and the diagnostic error rate (**DER**), using the following equations: \[FRR=\frac{FR}{TA+FR} \tag{4}\] \[FAR=\frac{FA}{FA+TR} \tag{5}\] \[DER=\frac{DE}{CD+DE} \tag{6}\] **Precision, Recall, and F-measure** are also widely used as the performance measures for mispronunciation detection. These metrics are defined as follows: \[Precision=\frac{TR}{TR+FR} \tag{7}\] \[Recall=\frac{TR}{TR+FA}=1-FAR \tag{8}\] \[F-measure=2\cdot\frac{Precision\cdot Recall}{Precision+Recall} \tag{9}\] A small worked sketch of these counts and rates is given at the end of this review. **Pearson Correlation Coefficient:** PCC is widely used to measure the relation of predicted fluency, stress, and prosody scores (and other supra-segmental and pronunciation constructs) with subjective human evaluations for pronunciation assessment. The human scores are typically averaged across all annotators to provide a comprehensive score. ## 6 Challenges and Future Look There are two significant challenges to advancing the research further: (1) the lack of public resources. Table 1 shows a handful of L2 languages; with 7,000 languages spoken on earth, there is an urgent need for more inclusive coverage of the world's languages in pronunciation assessment.
(2) There is a need for a unified evaluation metric for pronunciation learning; this can be used to establish and continually maintain a detailed leaderboard system, which would serve as a dynamic and multifaceted platform for tracking, ranking, and showcasing advances in the field, guiding researchers from academia and industry to push the boundaries of pronunciation assessment, from unintelligible audio to native-like speech. The advent of AI technology represents a pivotal moment in our technological landscape, offering the prospect of far-reaching and transformative changes that have the potential to revolutionize a wide array of services in CAPT. We list some of the opportunities here: **Integration with Conversation AI Systems**: The progress made in Generative Pre-trained Transformers (GPT) has led to human-like text-based conversational AI. Furthermore, low-latency ASR has enhanced the adoption of speech processing in our daily life. Both have paved the way for the development of a reliable virtual tutor CAPT system, capable of interacting with students and providing them with instant and tailored feedback, thereby enhancing their pronunciation skills and augmenting private tutors. **Multilingual**: Recent advancements in end-to-end ASR enabled the development of multi-lingual code-switching systems (Datta et al., 2020; Chowdhury et al., 2021; Ogunremi et al., 2023). The great progress in SSL expanded ASR capabilities from supporting over \(100\) languages (Pratap et al., 2023) to over \(1,000\) languages (Pratap et al., 2023). Traditional research in pronunciation assessment focused on designing monolingual assessment systems. However, recent advancements in multilingual modeling allowed for the generalization of findings across different languages. Zhang et al. (2021) explored the adaptation of pronunciation assessments from English (a stress-timed language) to Malay (a syllable-timed language). Meanwhile, Lin and Wang (2023) investigated the use of language-specific embeddings for diverse languages, while optimizing the entire network within a unified framework. **Children CAPT**: There is a noticeable imbalance between research on children's pronunciation learning (for example, reading assessments) and research on adults' L2 language learning. This disparity can be attributed to the scarcity of publicly available corpora and the difficulties in collecting children's speech data. **Dialectal CAPT**: One implicit assumption in most of the current research in pronunciation assessment is that the L2 is a language with standard orthographic rules. However, in cases like dialectal Arabic - the native language of every Arab speaker - there is no standard orthography. Since speaking like a native is the ultimate objective for advanced pronunciation learning, there is a growing demand for this task. ## 7 Conclusion This paper serves as a comprehensive resource that summarizes the current research landscape in automatic pronunciation assessment, covering both the segmental and supra-segmental space. The paper offers insights into the following: * Design choices and their effect on performance. * The success of automatic data generation/augmentation pipelines and the lack of consensus on annotation guidelines and labels. The paper also lists resources available to the community, along with the current state-of-the-art performances reported per resource. * The importance of standardised evaluation metrics and steady benchmarking efforts.
With the current trend of end-to-end modeling and multilingualism, we believe this study will provide a guideline for new researchers and a foundation for future advancements in the field. ### Limitations In this overview, we address different constructs of pronunciation and various scientific approaches for detecting errors and predicting prosodic and fluency scores, among others. However, we have not included the corrective feedback mechanism of CAPT systems. Moreover, the paper does not cover, in detail, the limited literature on CAPT user studies or other qualitative studies involving subjective evaluation. Given the fast growth of the field of pronunciation assessment, it is hard to mention all studies and resources. Therefore, we would also like to apologize for any oversights of corpora or major research papers in this study. ## Ethics Statement We discussed publicly available research and datasets in our study. Any biases are unintended.
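As promised in Section 5, the following is a small worked sketch of the hierarchical detection counts and the rates derived from them (Equations 4-9). The counts are assumed for illustration only; the diagnostic error rate is omitted because it additionally requires the DE/CD diagnosis labels.

```python
def detection_metrics(ta, tr, fr, fa):
    """ta/tr/fr/fa: true acceptance, true rejection, false rejection, false acceptance counts."""
    frr = fr / (ta + fr)                          # Eq. 4: correct phones flagged as mispronounced
    far = fa / (fa + tr)                          # Eq. 5: mispronounced phones accepted as correct
    precision = tr / (tr + fr)                    # Eq. 7
    recall = tr / (tr + fa)                       # Eq. 8 (equals 1 - FAR)
    f_measure = 2 * precision * recall / (precision + recall)   # Eq. 9
    return {"FRR": frr, "FAR": far, "Precision": precision, "Recall": recall, "F": f_measure}

# Hypothetical counts from a phone-level detector on a small test set.
print(detection_metrics(ta=900, tr=60, fr=25, fa=15))
```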
2306.11885
Reward Shaping via Diffusion Process in Reinforcement Learning
Reinforcement Learning (RL) models have continually evolved to navigate the exploration-exploitation trade-off in uncertain Markov Decision Processes (MDPs). In this study, I leverage the principles of stochastic thermodynamics and system dynamics to explore reward shaping via diffusion processes. This provides an elegant framework for thinking about the exploration-exploitation trade-off. This article sheds light on relationships between information entropy, stochastic system dynamics, and their influences on entropy production. This exploration allows us to construct a dual-pronged framework that can be interpreted as either a maximum entropy program for deriving efficient policies or a modified cost optimization program accounting for informational costs and benefits. This work presents a novel perspective on the physical nature of information and its implications for online learning in MDPs, consequently providing a better understanding of information-oriented formulations in RL.
Peeyush Kumar
2023-06-20T20:58:33Z
http://arxiv.org/abs/2306.11885v1
# Reward Shaping via Diffusion Process in Reinforcement Learning ###### Abstract Reinforcement Learning (RL) models have continually evolved to navigate the exploration-exploitation trade-off in uncertain Markov Decision Processes (MDPs). In this study, I leverage the principles of stochastic thermodynamics and system dynamics to explore reward shaping via diffusion processes. This provides an elegant framework for thinking about the exploration-exploitation trade-off. This article sheds light on relationships between information entropy, stochastic system dynamics, and their influences on entropy production. This exploration allows us to construct a dual-pronged framework that can be interpreted as either a maximum entropy program for deriving efficient policies or a modified cost optimization program accounting for informational costs and benefits. This work presents a novel perspective on the physical nature of information and its implications for online learning in MDPs, consequently providing a better understanding of information-oriented formulations in RL. ## 1 Introduction In this article, I take inspiration from stochastic thermodynamics to derive a problem formulation for online learning in uncertain MDPs while remaining grounded in system dynamics. The system balances the diffusion process with drift dynamics as a way to formulate the exploration-exploitation trade-off. To this effect, I make an explicit link between the information entropy and the stochastic dynamics of a system coupled to an environment. I analyze various sources of entropy production: due to the decision-maker's uncertainty about the system-environment interaction characteristics; due to the stochastic nature of system dynamics; and due to the interaction of the decision maker's knowledge with system dynamics. This analysis provides a framework that can be formulated either as a maximum entropy program to derive efficient policies that balance the exploration and exploitation trade-off, or as a modified cost optimization program that includes informational costs and benefits. ### Background Markov decision processes (MDPs) are perhaps the most widely studied models of sequential decision problems under uncertainty (Puterman, 1994). In this article, an MDP is described by the tuple \(\mathcal{M}=(S,A,T,R,N)\). Here, \(S\) is a finite set of states; \(A\) is a finite set of actions; \(T\) denotes a transition probability function of the form \(T(s^{\prime}|s,a)\), for \(s,s^{\prime}\in S\) and \(a\in A\); \(R\) denotes a reward function of the form \(R(s^{\prime}|s,a)\), for \(s,s^{\prime}\in S\) and \(a\in A\); and \(N\) denotes a finite planning horizon. This MDP models the following time-invariant, finite-horizon, sequential decision-making problem under uncertainty. A decision-maker observes the state \(s_{t}\in S\) of a system at the beginning of time-slot \(t\in\{1,2,\ldots,N\}\) and then chooses an action \(a_{t}\in A\). The system then stochastically evolves to a state \(s_{t+1}\in S\) by the beginning of slot \(t+1\) with probability \(T(s_{t+1}|s_{t},a_{t})\). As a result of this transition, the decision-maker collects a reward \(R(s_{t+1}|s_{t},a_{t})\). This process of state observation, action selection, state evolution, and reward collection repeats until the end of slot \(N\). A policy trajectory \(\pi=(\pi_{1},\pi_{2},\ldots,\pi_{N})\) is a decision-rule that assigns actions \(\pi_{t}(s_{t})\in A\) to states \(s_{t}\in S\), for \(t=1,2,\ldots,N\).
Note that the set \(\mathcal{P}\) of such policy trajectories is finite. The decision-maker's objective is to find a policy trajectory \(\pi=(\pi_{1},\pi_{2},\ldots,\pi_{N})\in\mathcal{P}\) that maximizes the expected reward \[J_{\pi}(s_{1})=E\Bigg{[}\sum_{t=1}^{N}R(s_{t+1}|s_{t},\pi_{t}(s_{t}))\Bigg{]}.\] It is assumed for simplicity of notation that no terminal reward is earned at the end of slot \(N\). The transition probability function is often unknown to the decision-maker at the outset. This calls for online learning of transition probabilities while the system evolves. For instance, in medical treatment planning, a doctor might not know the uncertain dose-response function of an individual at the beginning of a treatment course, but may want to adaptively make drug selection and dosing decisions over the treatment course (Kotas and Ghate, 2016). Similarly, a seller conducting a sequence of auctions may not know the bidder demand and willingness-to-pay distributions, but must adaptively make auction-design decision such as the minimum bid in each auction (Ghate, 2015). Such problems fall under the broad framework of MDPs under imperfect information, and can be seen as Bayesian adaptive MDPs (BAMDPs) or partially observable MDPs (POMDPs) in some cases (Bertsekas, 2005; Dreyfus and Law, 1977; Krishnamurthy, 2016; Kumar, 1985; Kumar and Varaiya, 2016). The challenge in any Bayesian learning approach is that there is no clear consensus on the actual problem that needs to be solved. Generally, we want to find a policy that _maximizes_ cumulative reward while learning with uncertain or partial information. BAMDPs provide a classic formulation of this problem. But this formulation does not take into account the cost of information gain, and hence intuitively one can find better policies that leverage information gain while learning. The information theoretic methods developed so far rely on the heuristic idea of information ratio, which, I believe, is somewhat ad-hoc. In addition, this ratio does not give a strong insight into the global problem that is being solved. I find a relation between the optimization of reward and the cost of information that is embedded in the dynamics of the system and its interaction with the environment. ## 2 Physical nature of information In order to motivate the idea of the physical nature of information, I dive into the role of information in thermodynamics of gases. To guide the reader, a natural connection is to interpret particle configurations in gas systems as sample trajectories in stochastic system. Physicist Ludwig Boltzmann showed that with time, a system evolves towards lower states of energy, where the energy dispersed increases the entropy of the system due to the nature of statistics [1]. As Wolchover [2017] commented, " _There are many ways for energy to be spread among the particles in a system than concentrated in a few, so as particles move around and interact, they naturally tend toward states in which their energy is increasingly shared. This has been classically understood as the second law of thermodynamics. But Maxwell's letter [18] described a thought experiment in which an enlightened being, called Maxwell's demon (Figure 1), uses its knowledge to lower entropy and violate the second law. The demon knows the positions and velocities of every molecule in a container of gas. 
By partitioning the container and opening and closing a small door between the two chambers, the demon lets only fast-moving molecules enter one side, while allowing only slow molecules to go the other way. The demon's actions divide the gas into hot and cold, concentrating its energy and lowering its overall entropy. The once useless gas can now be put to work. This thought experiment led to questions on how a law of nature could depend on one's knowledge of the positions and velocities of molecules. [This implies that the second law of thermodynamics requires a reinterpretation to include the subjective nature of information.] Charles Bennett [1], building on work by Leo Szilard [1976] and Rolf Landauer [1], resolved the paradox by formally linking thermodynamics to the science of information. Bennett argued that the demon's knowledge is stored in its memory, and memory has to be erased, which takes work. [1] calculated that at room temperature, it takes at least 2.9 zeptojoules of energy for a computer to erase one bit of stored information. In other words, as the demon organizes the gas into hot and cold and lowers the gas's entropy, its brain burns energy and generates more than enough entropy to compensate. The overall entropy of the gas-demon system increases, satisfying the second law of thermodynamics. These findings revealed that, as Landauer put it, "Information is physical" [1]. More information implies that more work can be extracted. Maxwell's demon can wring work out of a single-temperature gas because it has far more information than the average user." (Figure 1: Maxwell's demon.) This interaction of entropy and dynamics, captured by the second law, creates a strong foundation to analyze stochastic systems. There is a natural equivalence between stochastic thermodynamics and stochastic control theory. Any decision process can be modeled as a classic control problem. Generally, the quantities which are of interest are averaged over trajectories of the system rather than sample path behaviors. Thermodynamics has provided an intuitive framework for reasoning about such averaged quantities in stochastic systems. I study this equivalence and bridge gaps in the existing literature on learning in MDPs. I develop an equivalent thermodynamic system and apply an information theoretic framework to find a formulation of the learning problem to compute good policies.
This framework provides compatibility between the control problem and the information theoretic methodology for the intelligent control system using entropy as the common measure. A reformulation of the optimal control problem is based on the idea of expressing the design of the desirable control by the uncertainty of selecting a control law that minimized a given performance index. ## 3 Clairvoyant MDP: an information theoretic perspective Consider the Bellman's equation for MDP \(M=\{S,A,T,R,N\}\). \[V^{*}(s)=\min_{a}\sum_{s^{\prime}}T(s^{\prime}|s,a)[R(s^{\prime}|s,a)+V^{*}(s^ {\prime})]. \tag{1}\] I consider an alternate formulation to this classical MDP, with a small loss of generality. Todorov (2009) proposed a linear problem where actions that are considered symbolic in the above formulation are replaced through making decisions over transition distributions. Therefore, the decision maker specifies a control dynamics distribution \(a(s^{\prime}|s)=T(s^{\prime}|s,a)\). This allows us to write an equivalent reward form as \[q(s,a)=\ell(s)+\underset{s^{\prime}\to a(\cdot|s)}{E}\ln\left(\frac{a(s^{ \prime}|s)}{p(s^{\prime}|s)}\right),\] where the state cost \(\ell(s)\) is an arbitrary function encoding how undesirable different states are and \(p(s^{\prime}|s)\) is an arbitrary transition distribution. Using this construction the Bellman's equation can be rewritten as: \[V^{*}(s)=\min_{a}\left(\ell(s)+\underset{s^{\prime}\to a(\cdot|s)}{E}\left[\ln \frac{a(s^{\prime}|s)}{p(s^{\prime}|s)}+V^{*}(s^{\prime})\right]\right). \tag{2}\] Now, I define the quantity \(G(s)=\underset{s^{\prime}\sim p(\cdot|s)}{E}exp(-V^{*}(s^{\prime}))\). Therefore, through some algebraic manipulation, I get \[\underset{s^{\prime}\sim a(\cdot|s)}{E}\left[\ln\frac{a(s^{\prime}|s)}{p(s^{ \prime}|s)}+V^{*}(s^{\prime})\right]=-\ln(G(s))+\mathbb{KL}\left(a(\cdot|s) \|\frac{p(\cdot|s)\exp(-V^{*}(\cdot))}{G(s)}\right),\] which gives \[V^{*}(s)=\min_{a}\left[\ell(s)-\ln(G(s))+\mathbb{KL}\left(a(\cdot|s)\|\frac{p( \cdot|s)\exp(V^{*}(\cdot))}{G(s)}\right)\right]. \tag{3}\] An interesting observation is that the right hand side of the above function is minimized when the KL divergence is \(0\), which gives the optimality condition as \[a^{*}(s^{\prime}|s) =\frac{p(s^{\prime}|s)\exp(-V^{*}(s^{\prime}))}{G(s)} \tag{4}\] \[=\frac{p(s^{\prime}|s)\exp(-V^{*}(s^{\prime}))}{\sum_{s^{\prime}} p(s^{\prime}|s)exp(-V^{*}(s^{\prime}))} \tag{5}\] Now consider the following Lemma (Theodorou and Todorov, 2012; Theodorou, 2015). **Lemma 1**.: _Consider distributions \(\mathbb{A}\) and \(\mathbb{P}\) defined on the same probability space with sample set \(\Omega\), such that \(\mathbb{A}\) is absolutely continuous with respect to \(\mathbb{P}\), and \(Q:\Omega\mapsto\mathbb{R}\) is a measurable function, then the following inequality holds_ \[\frac{1}{\rho}\ln\left(\underset{\mathbb{P}}{E}\left[e^{\rho Q(s)}\right] \right)\leq\underset{\mathbb{A}}{E}[Q(s)+|\rho|^{-1}\mathbb{KL}(\mathbb{A}| \mathbb{P})],\] _where \(\rho\in\mathbb{R}^{-}\)._ Proof.: The proof is reproduced here for completeness. It is a straightforward derivation from Jensen's inequality. 
Consider \[\ln\left(\underset{\mathbb{P}}{E}\left[e^{\rho Q(s)}\right]\right) =\ln\sum_{s}p(s)e^{\rho Q(s)}\] \[=\ln\left[\sum_{s}a(s)\frac{p(s)}{a(s)}\exp\left(\rho Q(s)\right)\right]\] \[\overset{a}{\geq}\sum_{s}a(s)\ln\left[\frac{p(s)}{a(s)}\exp\left(\rho Q(s)\right)\right]\] \[=\rho\underset{\mathbb{A}}{E}[Q(s)]+\sum_{s}a(s)\ln\frac{p(s)}{a(s)}\] \[=\rho\left(\underset{\mathbb{A}}{E}[Q(s)]-\rho^{-1}\mathbb{KL}(\mathbb{A}\|\mathbb{P})\right),\] where inequality "a" follows from Jensen's inequality and the concavity of the \(\ln\) function. Dividing both sides by \(\rho\in\mathbb{R}^{-}\) gives the required inequality. Next, consider Equation (2), where I substitute \(Q(s^{\prime})=\ell(s)+V^{*}(s^{\prime})\). Using Lemma 1 with \(\rho=-1\), \(\mathbb{P}=p(s^{\prime}|s)\), \(\mathbb{A}=a(s^{\prime}|s)\), I get \[-\ln\left(\underset{s^{\prime}\sim p(\cdot|s)}{E}\left[e^{-\ell(s)-V^{*}(s^{\prime})}\right]\right)\leq\sum_{s^{\prime}}a(s^{\prime}|s)\left[\ell(s)+V^{*}(s^{\prime})+\ln\frac{a(s^{\prime}|s)}{p(s^{\prime}|s)}\right],\] which implies \[-\ln\left(\underset{s^{\prime}\sim p(\cdot|s)}{E}\left[e^{-\ell(s)-V^{*}(s^{\prime})}\right]\right)=\min_{a}\sum_{s^{\prime}}a(s^{\prime}|s)\left[\ell(s)+V^{*}(s^{\prime})+\ln\frac{a(s^{\prime}|s)}{p(s^{\prime}|s)}\right].\] The right hand side of the above equation is the right hand side of the Bellman equation in Equation (2). Therefore \[V^{*}(s)=-\ln\left(\underset{s^{\prime}\sim p(\cdot|s)}{E}\left[e^{-\ell(s)-V^{*}(s^{\prime})}\right]\right).\] This framework can also be used as an estimation framework: instead of minimizing the expected cumulative cost, the decision maker can minimize the KL divergence to the passive dynamics subject to a required level of performance. The optimization problem in Equation (2) then becomes \[\min_{\mathbb{A}}\mathbb{KL}(\mathbb{A}\|\mathbb{P}),\] subject to \[\sum_{s^{\prime}}a(s^{\prime}|s)=1,\] \[V(s)=K,\] where \(K\) is the required performance. In the interest of highlighting an interesting connection, I consider the continuous version of this optimization. Using the Lagrangian method, the optimization program reduces to \[\mathcal{L} =\mathbb{KL}(\mathbb{A}\|\mathbb{P})+\mu(V(s)-K)+\lambda\left(\int_{s^{\prime}}a(s^{\prime}|s)ds^{\prime}-1\right)\] \[=\int_{s^{\prime}}a(s^{\prime}|s)\left(\ln\frac{a(s^{\prime}|s)}{p(s^{\prime}|s)}+\mu V(s)+\lambda\right)ds^{\prime}-\mu K-\lambda.\] Minimizing with respect to \(a(s^{\prime}|s)\) gives \[\ln\frac{a^{*}(s^{\prime}|s)}{p(s^{\prime}|s)}+\mu V(s)+\lambda=0,\] which gives \[a^{*}(s^{\prime}|s)=\exp(-\mu V(s)-\lambda)\,p(s^{\prime}|s).\] Substituting into the first constraint \(\int a(s^{\prime}|s)ds^{\prime}=1\) gives \[\lambda=\ln\int p(s^{\prime}|s)\exp(-\mu V(s))ds^{\prime}.\] Substituting \(\lambda\) back and discretizing gives the optimal solution \(a^{*}\) for a given level of performance. In the case where \(K=V^{*}(s)\), this solution recovers the optimal \(a^{*}\) of Equation (5). This result is very similar to the one derived using the HJB principle in the classic paper by Saridis (1988). For the reader's convenience, I recall that result in the Appendix.

## 4 Thermodynamics of information

This section provides a brief introduction to the relationship between information and thermodynamics. We consider a system \(M\) (such as a gas in a container) that is connected to external reservoirs and other systems. Suppose the microstate of the system (for example, the coordinates and momenta of the particles of the gas) is given by \(x\), and suppose that the information gained as a result of measurement is denoted by \(m\).
This measurement is what helps prepare the state of the system. Let us denote a generic statistical state of the system by \(\rho(x)\) (for example, the distribution over the coordinates and momenta of the gas molecules). I assume that in state \(\rho(x)\) the system is in statistical equilibrium. After making the measurement, the new state of the system is \(\rho(x|m)\), which in general is out of equilibrium. For example, in the context of the Szilard engine described in Section 2, after measurement the statistical state is confined to either the left or right half of the box. Information drives the system away from equilibrium. The thermodynamics of information allows us to reason about this scenario by associating an equivalent energy cost, thus accounting for this movement from an equilibrium to a non-equilibrium state. The most natural quantity relating statistical states to distributions is the entropy of the system. The non-equilibrium entropy is defined (in units where Boltzmann's constant is one) as the Shannon entropy \[S(\rho)=-\sum_{x}\rho(x)\ln\rho(x)=H(X),\] where \(H(X)\) is the Shannon entropy. At equilibrium this entropy coincides with the canonical entropy of the Gibbs state \[\rho(x)=e^{-\beta E(x)}/Z,\] where \(E(x)\) is the Hamiltonian of the system, \(Z\) is the partition function, and \(\beta\) is the inverse temperature. Using this we recover the thermodynamic relationship between the free energy \(\mathcal{F}(\rho)=-\beta^{-1}\ln Z\), the internal energy \(E=E_{\rho}[E(x)]\), and the entropy: \(\mathcal{F}=E-\beta^{-1}S\). The free energy is interpreted as the amount of useful energy that can be used to extract work, taking into account all entropy-related costs. The classical second law of thermodynamics for a non-equilibrium system can therefore be written as \[\Delta S\geq 0\implies W-\Delta\mathcal{F}\geq 0, \tag{6}\] where \(W\) is the average work done on the system. The rest of the section evaluates the change in non-equilibrium free energy due to a measurement \(M\). For this purpose the corresponding information gain is defined as \[I(X;M)=H(X)-H(X|M).\] When an external system changes a system parameter after an observation is made, work can be extracted from the system. The refined second law of thermodynamics then becomes \[W-\Delta\mathcal{F}\geq-\beta^{-1}I(X;M). \tag{7}\] An interesting observation is that, ultimately, the information used to extract work during feedback was supplied as work by the external system during the measurement process.

## 5 Markovian systems and second law of thermodynamics

Now let us consider the second law of thermodynamics \(W\geq\Delta\mathcal{F}\) without feedback (Equation 6), and compare it with Lemma 1: \[\frac{1}{\rho}\ln\left(\underset{\mathbb{P}}{E}\left[e^{\rho Q(s)}\right]\right)\leq\underset{\mathbb{A}}{E}[Q(s)]+|\rho|^{-1}\mathbb{KL}(\mathbb{A}\|\mathbb{P}).\] The quantity on the left hand side plays the role of the free energy change \(\Delta\mathcal{F}\), while the work done on the system is the expected cumulative cost given by the right hand side of the inequality. Substituting the relevant entities for the MDP defined in Section 3 provides a bridge between the MDP and its thermodynamic interpretation. Therefore, using this mathematical equivalence, the policy that minimizes the work done on the system (or, equivalently, maximizes the work extracted from it) gives the optimal solution for the MDP. The above results give sufficient evidence to explore the equivalence between thermodynamic entities and Markov decision processes.
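To make this equivalence concrete, the following is a minimal numerical sketch (not part of the original derivation) on a made-up three-state chain with an absorbing zero-cost goal state. It computes \(V^{*}\) via the fixed point \(V^{*}(s)=\ell(s)-\ln\sum_{s^{\prime}}p(s^{\prime}|s)e^{-V^{*}(s^{\prime})}\), recovers the optimal controlled dynamics of Equation (5), and checks that any other choice of \(a(\cdot|s)\) incurs a "work" (expected cost plus KL term) no smaller than the "free energy" \(V^{*}(s)\), in line with Lemma 1 and Equation (6). The state costs and passive dynamics are invented purely for illustration.

```python
import numpy as np

# Toy linearly-solvable MDP: 3 states, state 2 is an absorbing zero-cost goal.
# (All numbers here are invented for illustration.)
ell = np.array([1.0, 0.5, 0.0])            # state costs l(s)
p = np.array([[0.5, 0.4, 0.1],             # passive dynamics p(s'|s)
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])

# Fixed-point iteration for z(s) = exp(-V*(s)):  z = exp(-l) * (P @ z)
z = np.ones(3)
for _ in range(2000):
    z = np.exp(-ell) * (p @ z)
V = -np.log(z)                              # optimal value function V*(s)

# Optimal controlled dynamics a*(s'|s) = p(s'|s) exp(-V*(s')) / G(s)   (Eq. 5)
G = p @ np.exp(-V)
a_star = p * np.exp(-V)[None, :] / G[:, None]

def work(a, s):
    """Expected one-step cost plus KL control cost plus expected cost-to-go."""
    kl = np.sum(a[s] * np.log(a[s] / p[s]))
    return ell[s] + kl + np.sum(a[s] * V)

a_other = np.array([[0.3, 0.3, 0.4],        # an arbitrary suboptimal controlled dynamics
                    [0.3, 0.3, 0.4],
                    [0.0, 0.0, 1.0]])

for s in range(2):
    # The "free energy" V*(s) is attained by a* and lower-bounds the "work" of any other a.
    assert np.isclose(work(a_star, s), V[s])
    assert work(a_other, s) >= V[s] - 1e-9
    print(f"s={s}: V*={V[s]:.4f}, work(a*)={work(a_star, s):.4f}, "
          f"work(other)={work(a_other, s):.4f}")
```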
In order to develop a learning framework for uncertain MDPs using information theoretic arguments, I develop definitions of thermodynamic quantities at the level of sample trajectories of a Markovian system in the next section.

### 5.1 Second law of thermodynamics for a Markovian system in a heat bath

This section reviews the stochastic thermodynamics of Markovian systems (Ito and Sagawa, 2016). Stochastic thermodynamics is a theoretical framework for defining quantities such as work and heat at the level of sample trajectories. Consider a system \(M\) that evolves stochastically. We assume a physical situation where system \(M\) is connected to a single heat bath at inverse temperature \(\beta\). Also assume that the system \(M\) is driven by an external parameter \(\pi\) and is not subject to non-conservative forces. For simplicity, we will assume discrete time \(t_{k},k=\{1,2,\cdots,N\}\), although the mathematical setup does not require any assumption regarding the continuity of time. Let \(x_{k}\) be the state of the system at time \(t_{k}\), and \(\pi_{k}\) be the external parameter of the system at time \(t_{k}\). Let \(p(x_{k}|x_{k-1},\pi_{k})\) be the conditional probability of state \(x_{k}\) given the past trajectory and external parameter \(\pi_{k}\). Building on thermodynamic principles, we define the Hamiltonian of the system as \(E(x_{k},\pi_{k})\). The change in the Hamiltonian of the system is decomposed into two parts: heat \(Q_{k}\) and work \(W_{k}\). The heat absorbed by the system from the heat bath at time \(t_{k}\) is defined as \[Q_{k}=E(x_{k+1},\pi_{k})-E(x_{k},\pi_{k}),\] and the work done on the system \(M\) is defined as \[W_{k}=E(x_{k},\pi_{k})-E(x_{k},\pi_{k-1}).\] For a given trajectory \(\{x_{1},x_{2},\cdots,x_{N}\}\) the total heat is \(Q=\sum_{k=1}^{N-1}Q_{k}\) and the total work is \(W=\sum_{k=1}^{N-1}W_{k}\), where \(x_{0}\) is defined as a buffer state such that \(p(x_{1}|x_{0},\pi)=1\) for any \(\pi\). This is done to impose consistency, as will become apparent later on. Using the above definitions, one can easily show that \(\Delta E_{k}=Q_{k}+W_{k}\), which is the first law of thermodynamics. Now let us define the quantity \(p_{B}(x_{k}|x_{k+1},\pi_{k})\) as the backward transition probability. In the absence of any non-conservative forces, detailed balance (Seifert, 2005) is satisfied, which gives \[\frac{p(x_{k+1}|x_{k},\pi_{k})}{p_{B}(x_{k}|x_{k+1},\pi_{k})}=e^{-\beta Q_{k}}.\] Now, I define the stochastic entropy of the system as \(h(x_{k})=-\ln p(x_{k})\). The _entropy production_ is then defined as the sum of the stochastic entropy changes in the system and the bath.
The stochastic entropy change in the system at step \(k\) is given by \[\Delta h_{k}^{M}=h(x_{k+1})-h(x_{k}).\] The total stochastic entropy change in the system is therefore \[\Delta h^{M}=\ln\frac{p(x_{1})}{p(x_{N})}. \tag{8}\] The stochastic entropy change in the heat bath is given by the heat dissipated into the bath, \[\Delta h_{k}^{bath}=-\beta Q_{k}.\] The total entropy change in the bath is given by \[\Delta h^{bath}=\ln\frac{p(x_{N}|x_{N-1},\pi_{N-1})p(x_{N-1}|x_{N-2},\pi_{N-2})\dots p(x_{2}|x_{1},\pi_{1})}{p_{B}(x_{1}|x_{2},\pi_{1})p_{B}(x_{2}|x_{3},\pi_{2})\dots p_{B}(x_{N-1}|x_{N},\pi_{N-1})}. \tag{9}\] Therefore the entropy production \(\sigma\) is \[\sigma=\ln\frac{p(x_{N}|x_{N-1},\pi_{N-1})p(x_{N-1}|x_{N-2},\pi_{N-2})\dots p(x_{2}|x_{1},\pi_{1})\,p(x_{1})}{p_{B}(x_{1}|x_{2},\pi_{1})p_{B}(x_{2}|x_{3},\pi_{2})\dots p_{B}(x_{N-1}|x_{N},\pi_{N-1})\,p(x_{N})}.\] For brevity I define the trajectory of the system as \(O=\{x_{1},x_{2},\dots,x_{N}\}\). The total entropy production then becomes \[\sigma=\ln\frac{p(O)}{p_{B}(O)}.\] The entropy production is therefore determined by the ratio of the probabilities of a trajectory and its time-reversal. Since \(E[\sigma]=\mathbb{KL}(p(O)\|p_{B}(O))\), the non-negativity of the KL divergence yields the second law of thermodynamics, which states that \[E[\sigma]\geq 0.\] The equivalent stochastic energetics statement takes the form of Equation (6), \[W\geq\Delta\mathcal{F},\] where \(\mathcal{F}(\pi_{k})=-\beta^{-1}\ln\sum_{x}\exp(-\beta E(x,\pi_{k}))\). This result can be derived using the integral fluctuation theorem and the arguments presented in Seifert [2005].

### 5.2 Second law of thermodynamics for a Markovian system in connection with an external entity

Here I consider the Markovian system \(M\) in contact with an external system \(D\) in addition to the heat bath. This external system can, for instance, be the decision maker in the context of an MDP (more on this in later sections). In particular, I state the generalized second law of thermodynamics, which says that the entropy production is bounded by the initial and final mutual information between \(M\) and \(D\), and the transfer entropy from \(M\) to \(D\). Let the state of the system \(D\) at time \(t_{k}\) be \(d_{k}\). The joint time evolution of the system \(M\cup D\) is then \(\{(x_{1},d_{0}),(x_{2},d_{1}),\cdots,(x_{N},d_{N-1})\}\). For brevity, I define \(pa(x_{k+1})\) as the parent of state \(x_{k+1}\), which is the set of all states that have a non-zero transition probability to \(x_{k+1}\); therefore \(pa(x_{k+1})=\{x_{k},d_{k-1}\}\), such that \(p(x_{k+1}|x_{k},d_{k-1})>0\). At the initial state I assume that \(pa(x_{1})\subseteq D\). The initial correlation between systems \(M\) and \(D\) is then characterized by the mutual information between \(x_{1}\) and \(pa(x_{1})\). The corresponding stochastic mutual information is given by \[I_{ini}=I(x_{1};pa(x_{1})).\] Now, let us define \(an(x_{k+1})\) as the ancestors of \(x_{k+1}\) in the order in which they were observed, so that \(an(x_{k+1})=\{(x_{1},d_{0}),(x_{2},d_{1}),\cdots,(x_{k},d_{k-1})\}\). The final correlation between systems \(M\) and \(D\) is then characterized by the mutual information between \(x_{N}\) and \(an(x_{N})\cap D\).
\[I_{fin}=I(x_{N};\{d_{0},d_{1},\cdots,d_{N-1}\}).\] Let us define another quantity \(pa(d_{k})\) as the parent of \(d_{k}\), which corresponds to \(pa(d_{k})=\{x_{k-1},d_{k-1}\}\). Finally, I define the transfer entropy from \(M\) to \(D\) as \[I_{tr}^{k}=I(d_{k};pa(d_{k})\cap M|d_{1},d_{2},\cdots,d_{k-1}).\] The total transfer entropy for the entire dynamics is therefore given by \[I_{tr}=\sum_{k=1}^{N}I_{tr}^{k}.\] By combining all of the above informational content in the combined system, I define the total informational exchange as \[\Theta=I_{fin}-I_{tr}-I_{ini}.\] Now, as in the simple case in Section 5.1, I define the entropy production in system \(M\) and the heat bath in the presence of system \(D\). Let \(\mathcal{B}_{k+1}\subseteq D\) denote the set of states in \(D\) that affect \(x_{k+1}\); therefore \(\mathcal{B}_{k+1}=\{d_{k-1}\}\). Now \(p(x_{k+1}|x_{k},\mathcal{B}_{k+1})\) describes the transition probability from \(x_{k}\) to \(x_{k+1}\) under the condition that the states of \(D\) that affect \(M\) are given by \(\mathcal{B}_{k+1}\). We then define the backward transition probability as \(p_{B}(x_{k}|x_{k+1},\mathcal{B}_{k+1})\). Following the definition of the entropy change in the heat bath from time \(k\) to \(k+1\) in Equation (9), the total entropy change in the bath is given by: \[\Delta s^{bath} =\sum_{k}\Delta s_{k}^{bath}\] \[=\sum_{k}\ln\frac{p(x_{k+1}|x_{k},\mathcal{B}_{k+1})}{p_{B}(x_{k}|x_{k+1},\mathcal{B}_{k+1})}.\] The total entropy change in the system \(M\) is, as in Equation (8), \[\Delta s^{sys}=\ln\frac{p(x_{1})}{p(x_{N})}.\] The total entropy production is therefore \[\sigma=\ln\frac{p(x_{1})}{p(x_{N})}\Pi_{k}\frac{p(x_{k+1}|x_{k},\mathcal{B}_{k+1})}{p_{B}(x_{k}|x_{k+1},\mathcal{B}_{k+1})}.\] We can now write the refined second law of thermodynamics: through some algebraic manipulation it can be shown that \[E[\sigma]\geq\Theta.\] Using the integral fluctuation theorem and the theory of stochastic energetics, this result can be restated in the form of Equation (7): \[W-\Delta\mathcal{F}\geq-\beta^{-1}\Theta. \tag{10}\]

## 6 MDP with uncertainty: a stochastic thermodynamics perspective

The framework developed in the previous section provides a way to model the effect of information gain in MDPs with uncertainty, with the objective of maximizing the work that can be extracted from the system. The system \(M\) considered in the previous section is the system acting in the real environment; the system \(D\) is the decision maker, who changes some parameter of the system \(M\) in order to achieve the required objective. Both systems are suspended in a "heat bath" to account for the part of the work that is dissipated and cannot be used for any useful purpose. The thermodynamic framework allows us to define the objective of the optimization program when the MDP has model uncertainty. To be consistent, the uncertainty in the MDP is assumed to be completely reflected in the uncertainty of the transition probabilities. In this section, I propose two different perspectives on how the system \(D\) interacts with system \(M\): a) in the first perspective, system \(D\) directly maintains a distribution over policies and updates this distribution based on feedback; b) in the second perspective, system \(D\) maintains a distribution over a parameter of the transition distribution and adapts it based on feedback in order to find a good policy.

### 6.1 MDP with distribution over policies

Consider an MDP \(M=\{S,A,T,R,N\}\)1 and the decision maker \(D=\{\pi\}\).
The decision maker maintains a probability distribution \(\nu_{k}(\pi|s_{k})\) over policies \(\pi\) at every time step \(t_{k}\) in state \(s_{k}\). The probability distribution is updated based on feedback. This setup is analogous to the thermodynamic setup described in Section 5.2. In a standard MDP, the objective is to minimize the expected cumulative cost \[V^{\pi}(s_{t})=\sum_{k=t}^{N-1}E[c(s_{k},\pi(s_{k}))],\] Footnote 1: Please note that for the purpose of this discussion I will consider \(R\) as the cost function (rather than the reward function). where the expectation is taken over \(\{s_{k},\pi(s_{k})\}\); in terms of the classical discrete MDP, \(c(s_{k},\pi(s_{k}))=E_{s_{k+1}\sim T(\cdot|s_{k},\pi(s_{k}))}[R(s_{k+1}|s_{k},\pi(s_{k}))]\). As in Section 3, the MDP problem of finding a policy that achieves the maximum _performance_ can be formulated either as a maximum entropy optimization program or as the classical expected cost optimization. I will start by formulating an expected cost optimization program using the second law of thermodynamics. Equation (10) can be written as \[W+\beta^{-1}\Theta\geq\Delta\mathcal{F}.\] Note that the free energy \(\mathcal{F}\) is the amount of useful energy, and minimizing the left hand side gives the greatest amount of net work that can be extracted from the system. Therefore, the optimization program becomes \[\min_{\nu_{t}:t=1:N}W+\beta^{-1}\Theta,\] where \(W=\sum_{k=1}^{N-1}E[c(s_{k},\pi(s_{k}))]\), \(\Theta=I_{fin}-I_{tr}-I_{ini}\), and \(\nu_{t}=p(\pi_{t}|s_{t},\pi_{t-1})\). From the previous section, \(x_{k}=\{s_{k}\}\) and \(d_{k}=\pi_{k}\). For the classical MDP \(p(s_{1})=\delta(s_{1}-s_{init})\), and \(pa(s_{1})=\emptyset\). Therefore, \[I_{ini}=I(s_{1};pa(s_{1}))=0.\] The final informational correlation is given by \[I_{fin}=I(s_{N};\pi_{1},\cdots,\pi_{N-1}),\] noting that \(p(\pi_{1},\cdots,\pi_{N-1})=\Pi_{i=2}^{N-1}p(\pi_{i}|\pi_{i-1})\). The transfer entropy terms are \[I_{tr}^{k}=I(\pi_{k};s_{k-1}|\pi_{1},\cdots,\pi_{k-1})=I(\pi_{k};s_{k-1}|\pi_{k-1}).\] Therefore the optimization program becomes \[\min_{\nu_{t}:t=1,\cdots,N}\left(\sum_{k=1}^{N-1}\left(E[c(s_{k},\pi(s_{k}))]-\beta^{-1}I(\pi_{k+1};s_{k}|\pi_{k})\right)+\beta^{-1}I(s_{N};\pi_{1},\cdots,\pi_{N-1})\right).\] For the case \(I_{fin}=0\), the solution to the resulting optimization program is discussed in Tanaka et al. (2017). The above problem can be reformulated under a maximum entropy principle, which translates to \[\max_{\nu_{t}:t=1,\cdots,N}\sum_{k=1}^{N-1}I(\pi_{k+1};s_{k}|\pi_{k})-I(s_{N};\pi_{1},\cdots,\pi_{N-1})\] subject to \[\sum_{\pi_{k}}\nu_{k}(\pi_{k})=1\ \forall k,\] \[\sum_{k=1}^{N-1}E[c(s_{k},\pi_{k})]=K,\] where \(K\) is the required performance. When \(K=V^{*}\), the resulting policy is the optimal policy with respect to the cost-based optimization program.

### 6.2 MDP with parametric uncertainty

In this case the decision maker \(D\) maintains a distribution over a parameter of the system. The state of the system \(D\) is denoted by \(\lambda_{k}\) at time \(t_{k}\).
Again, the specific informational correlations are given by \[I_{ini}=I(s_{1};pa(s_{1}))=0,\] \[I_{fin}=I(s_{N};\lambda_{1},\cdots,\lambda_{N-1}),\] \[I_{tr}^{k}=I(\lambda_{k};s_{k-1}|\lambda_{1},\cdots,\lambda_{k-1})=I(\lambda_{k};s_{k-1}|\lambda_{k-1}).\] The optimization program becomes \[\min_{\nu_{t}:t=1,\cdots,N}\left(\sum_{k=1}^{N-1}\left(E[c(s_{k},\pi(s_{k}))]-\beta^{-1}I(\lambda_{k+1};s_{k}|\lambda_{k})\right)+\beta^{-1}I(s_{N};\lambda_{1},\cdots,\lambda_{N-1})\right),\] where \(\nu_{t}=p(\pi_{t}|s_{t})\), and the distribution over \(\lambda\) is updated using Bayesian learning. As in the previous section, this can also be formulated in a maximum entropy framework.

## 7 Discussion and future work

This article provides a framework, built from fundamental principles of system dynamics and information theory, for formulating an optimization program for solving uncertain MDPs. The exact formulation of the optimization program depends on the specific nature of the interaction between the decision maker and the system to be controlled. Sections 6.1 and 6.2 provide optimization programs for two different scenarios. Given these formulations, many standard optimization techniques (including Bellman's principle) can be used to compute a solution; this is left for future work. An important discussion point is the quantity \(\beta\) in the above equations. Thermodynamically, \(\beta\) captures the inverse temperature (up to a scaling constant). The temperature is a property of the heat bath and is assumed to be constant throughout the dynamic process. In the context of a decision process, the temperature is a property of the decision process and can be estimated. A good way to estimate the temperature would be to find an _equilibrium_ solution and invert it to recover the temperature. For instance, given an MDP \(M=\{S,A,T,R,N\}\), one can choose a starting state for which a solution is known a priori, and use it to estimate the temperature of the decision process. In the event that we do not have access to such knowledge, the temperature can be treated as a pseudo-state and a new MDP can be defined, \(M^{\prime}=\{\{S,\beta\},A,T^{\prime},R^{\prime},N\}\). Another important point is that the above framework holds only when certain conditions on the underlying Markovian process are satisfied. One sufficient condition, as discussed in Section 5, is the detailed balance equation, which implies reversibility of the Markovian system. This is not a necessary condition; in fact, it can be shown that the results still hold for non-reversible Langevin dynamics. Additional research is required to state and prove necessary and sufficient conditions for this framework to hold. In conclusion, this work opens up avenues for further research in employing information theoretic arguments for learning in MDPs with model uncertainty. This work explicitly models information content and system dynamics for MDPs, and provides a framework to formulate the optimality criterion for MDPs with model uncertainty. Hopefully, future work can extend the rich theory of MDPs to learn and make good decisions in situations of information uncertainty.

## Appendix A Maximum Entropy Principle: A Control Theoretic Approach

A classic paper by Saridis [1988] derives a maximum entropy framework for control systems. I present it here because it is interesting to see the connections without relying on definitions from thermodynamics. Consider a generic decision system formulated in a classic control theoretic framework.
Assume for simplicity that the dynamics of the system are deterministic. Then the dynamics are given by \[\dot{x}(t)=f(x,u,t),\quad x(t_{0})=x_{0}\] and the associated cost function is \[V^{*}(x_{0},u,t_{0})=\int_{t_{0}}^{T}L(x,u,t)dt;\quad L(x,u,t)>0.\] Here, \(x(t)\in X\) is an \(n\)-dimensional state vector and \(u(t):X\times T\) is the \(m\)-dimensional feedback control law. The goal is to find a control law \(u_{K}(x,t)\) such that the value function \(V\) takes a value \(K\) with \(V_{min}\leq K<\infty\): \[V^{*}(x_{0},t_{0}|u_{K}(x(t),t))=K.\] This satisfies the Hamilton-Jacobi-Bellman equation \[\frac{\partial V}{\partial t}+\frac{\partial V^{T}}{\partial x}f(x,u_{K},t)+L(x,u_{K},t)=0.\] In order to formulate the problem in entropy terms, we consider the decision-maker's uncertainty in selecting the proper control from the set of admissible controls so that the value function equals \(K\) (\(V_{min}\) in the case of optimal control). This may be expressed as the condition that the expected value of \(V\) equals \(K\): \[E_{u\sim p(u)}[V^{*}(x_{0},t_{0};u(x,t))]=K.\] The expected value of \(V\) is taken over the set of admissible controls \(U\), over which a probability density \(p(u)\) is assumed in order to express the uncertainty of selecting the proper control. The corresponding entropy can then be expressed as \[H(u,p)=-\int_{U}p(u)\ln p(u)du.\] According to Jaynes' principle [Jaynes, 1957], the least biased estimate possible on the given information is given by the probability distribution \(p(u)\) that maximizes the above entropy \(H(u,p)\). Following the method of Lagrange, define \[I=H(u)-\mu(E[V]-K)-\lambda\left(\int p(u)du-1\right).\] Using the calculus of variations to maximize \(I\) with respect to the distribution \(p(u)\) yields \[\ln(p)+1+\mu V+\lambda=0.\] Therefore, \[p(u)=e^{-1-\lambda-\mu V^{*}(x_{0},u(t),t)},\] and the entropy with maximum information is given by \[H(u)=1+\lambda+\mu E[V^{*}(x_{0},u(x),t)].\] For optimality, a control policy \(u\) is computed that minimizes the above entropy; this is, therefore, a max-min problem. Saridis (1988) generalizes this analysis to the presence of dynamical uncertainty. Consider that \(y\in Y\) is the observation of the state \(x\). It is shown that the entropy \(H(u)\) can be decomposed into three parts as \[H(u)=H(u|y)+H(y)-H(y|u),\] where the associated probabilities are given by \[p(u|y)=e^{-1-\lambda-\mu W(u(y),t)} \tag{11}\] \[p(y)=e^{-\rho-\nu\int_{0}^{T}\|y-x\|^{2}dt} \tag{12}\] \[p(y|u)=p(u|y)p(y)/p(u). \tag{13}\] Here, \(W(u(y),t)=E_{x_{0},w(t)}\{V^{*}(x_{0},u(x,t),w,\nu,t_{0})\}\), and \(\rho,\nu\) are appropriate constants for the entropy estimation of \(H(y)\) based on Jaynes' principle. In the case of parametric uncertainty, when \[\dot{x}=f(x,u,\lambda,w,t)\] and \(y\) are the observations, \[H(u)=H(u|y,\lambda)+H(y|\lambda)+H(\lambda)-H(y,\lambda|u).\] An interesting observation is that the entropy of a stochastic control system decouples into four parts which can be individually computed.
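As a small numerical illustration of this appendix (not from Saridis' paper; the control set and costs below are invented), the maximum entropy distribution over a finite set of admissible controls subject to the performance constraint \(E_{p}[V]=K\) is a Gibbs distribution \(p(u)\propto e^{-\mu V(u)}\), which mirrors \(p(u)=e^{-1-\lambda-\mu V^{*}}\) once the constants are absorbed into the normalization. The multiplier \(\mu\) plays the role of an inverse temperature and can be found numerically from the constraint:

```python
import numpy as np

# Invented costs V(u) for a finite set of admissible controls u.
V = np.array([1.0, 2.0, 4.0, 7.0])
K = 2.5                                   # required performance (strictly between V.min() and V.max())

def gibbs(mu):
    w = np.exp(-mu * V)
    return w / w.sum()

def expected_cost(mu):
    return float(gibbs(mu) @ V)

# Bisection on mu so that E_p[V] = K (the expected cost decreases monotonically in mu).
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if expected_cost(mid) > K:
        lo = mid
    else:
        hi = mid
mu = 0.5 * (lo + hi)
p = gibbs(mu)
H = -np.sum(p * np.log(p))                # entropy of the least-biased distribution

print(f"mu = {mu:.3f}, E_p[V] = {expected_cost(mu):.3f} (target {K}), H(u) = {H:.3f}")
```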
2306.00750
End-to-End Document Classification and Key Information Extraction using Assignment Optimization
We propose end-to-end document classification and key information extraction (KIE) for automating document processing in forms. Through accurate document classification we harness known information from templates to enhance KIE from forms. We use text and layout encoding with a cosine similarity measure to classify visually-similar documents. We then demonstrate a novel application of mixed integer programming by using assignment optimization to extract key information from documents. Our approach is validated on an in-house dataset of noisy scanned forms. The best performing document classification approach achieved 0.97 f1 score. A mean f1 score of 0.94 for the KIE task suggests there is significant potential in applying optimization techniques. Ablation results show that the method relies on document preprocessing techniques to mitigate Type II errors and achieve optimal performance.
Ciaran Cooney, Joana Cavadas, Liam Madigan, Bradley Savage, Rachel Heyburn, Mairead O'Cuinn
2023-06-01T14:45:28Z
http://arxiv.org/abs/2306.00750v1
# End-to-End Document Classification and Key Information Extraction using Assignment Optimization ###### Abstract We propose end-to-end document classification and key information extraction (KIE) for automating document processing in forms. Through accurate document classification we harness known information from templates to enhance KIE from forms. We use text and layout encoding with a cosine similarity measure to classify visually-similar documents. We then demonstrate a novel application of mixed integer programming by using assignment optimization to extract key information from documents. Our approach is validated on an in-house dataset of noisy scanned forms. The best performing document classification approach achieved 0.97 f1 score. A mean f1 score of 0.94 for the KIE task suggests there is significant potential in applying optimization techniques. Abation results show that the method relies on document preprocessing techniques to mitigate Type II errors and achieve optimal performance. ## 1 Introduction For many organizations, the ubiquity of personal computers and smart devices has led to the digitization of processes that previously required human involvement. In particular, paper documents are being digitized more frequently to enable electronic processing. Among those being digitized, documents such as invoices or insurance forms are common in daily workflows but often require tedious and costly manual processing or suffer from brittle automation systems Majumder et al. (2020). Automating workflows through rule-based systems or machine learning techniques can reduce the requirement for employees to engage in time-consuming data-entry and archiving tasks while cutting costs for employers Audebert et al. (2019); Mandivarapu et al. (2021). Document classification and key information extraction (KIE) are important tasks in automated document processing, and the two are not mutually exclusive. Classification of document images is often an important first stage in enterprise document processing upon which more granular downstream processing rests Zhang and Zhang (2020); Appalaraju et al. (2021); Audebert et al. (2019); Bakkali et al. (2020); Kumar et al. (2014); Harley et al. (2015); Noce et al. (2016). KIE is one of those downstream tasks with the goal of automating retrieval of key information from documents Majumder et al. (2020); Gao et al. (2021); Schuster et al. (2013); Chiticariu et al. (2013); Palm et al. (2019); Garncarek et al. (2020). Document image classification has been studied for decades Peng et al. (2003); Sarkar (2010); Shin et al. (2001); Taylor et al. (1995). Early approaches considered functional landmarks and visual features of the spatial layout as discriminating factors between documents Taylor et al. (1995); Hu et al. (1999); Shin et al. (2001). This strategy treats document images holistically and has been successful in classifying broad categories of documents Taylor et al. (1995); Shin et al. (2001); Shin and Doermann (2006), but is less discriminating when applied to structurally- and visually-similar form-like documents Harley et al. (2015). For these, templating techniques have sometimes been employed Peng et al. (2001); Park et al. (2003); Sarkar (2010). Templating can be defined as finding the highest similarity between any document and a predefined set of document templates Peng et al. (2003). However, following computer vision-inspired developments in deep learning resulting from AlexNet Krizhevsky et al. (2012) and ImageNet Deng et al. 
(2009), many approaches essentially treated document classification as a deep learning image classification problem Afzal et al. (2015); Kumar et al. (2012); Kang et al. (2014); Tensmeyer and Martinez (2017); Kolsch et al. (2017). Document images enable extraction of features such as graphics, typeface and colour, not possible with text-only approaches. Convolutional Neural Networks (CNN) were the primary driver of this development, with one of the initial applications being a 4-layer CNN used to classify tax forms and the Small Tobacco Dataset Kang et al. (2014). Other works used ImageNet pre-training as a starting-point for CNN-based classifiers Harley et al. (2015); Afzal et al. (2015). However, Tensmeyer and Martinez (2017) argue that domain differences between natural images and documents limit the efficacy of this approach. In scenarios where broad document categories with distinct visual styles need to be classified, general features of each group positively assist classification (e.g., scientific papers differ greatly from magazine articles). However, within-group documents (e.g., forms from within a company - the category being investigated in this work) can exhibit high visual similarity and therefore require more granularity for accurate classification. Increased granularity has been provided by a recent upsurge in multimodal approaches to document classification. Methods include combining textual representations with visual features Noce et al. (2016); Bakkali et al. (2020, 2022); Audebert et al. (2019), combining textual representations with positional embeddings Xu et al. (2020); Wang et al. (2022); Li et al. (2021) or combinations of all three Appalaraju et al. (2021); Xu et al. (2021); Huang et al. (2022); Gu et al. (2021); Li et al. (2021); Powalski et al. (2021). Audebert et al. (2019) demonstrated that their multimodal approach improves upon two unimodal baselines, while a dual-stream document classification technique combined word embeddings and visual features, with late fusion used to learn joint representations of documents Bakkali et al. (2020). LayoutLM, a large pre-trained model for document analysis, models interactions between text and layout information, with and without visual representations Xu et al. (2020). At the time of publication, this approach achieved SOTA results in document image classification when combining all three modalities. Inspired by this, StructuralLM uses cell level 2d positional embeddings and token embeddings and a novel cell position training objective to achieve 96.08% on RVL-CDIP Li et al. (2021). Another text and layout approach, LiLT, uses bi-directional attention to model cross-modal interactions to achieve similar performance (95.68%) Wang et al. (2022). Other general document analysis approaches, such as LayoutLMv2 and LayoutLMv3 have adapted the way visual features are incorporated in multimodal approaches, including using masked image modelling and word-patch alignment training objectives to ensure cross modal alignment Xu et al. (2021); Huang et al. (2022). Despite having proven that visual features such as text font and visual layout are useful for extracting general differences between documents, it is not clear that they perform as well when differentiating between very similar documents, a common problem in industry applications. Additionally, these large pre-trained models require significant computing resources and large training datasets which are not always available. 
We address some of these issues by using a classification approach that relies on document templates rather than large-scale training. Often downstream of document classification, KIE, including value retrieval, involves the extraction of key information from structured documents such as forms or invoices. Extracted information can be used for many tasks including customer enrollment and insurance claim adjudication. Several early approaches to KIE relied on templates constructed to enable cross-referencing of new documents against a bank of existing templates Rusinol et al. (2013); Chiticariu et al. (2013). Rusinol et al. (2013) developed user-generated training samples which they used to build a document model based on structural templates. Other approaches relied on pre-registered templates within the system to perform KIE Chiticariu et al. (2013); Schuster et al. (2013). However, Chiticariu et al. (2013) showed a disconnect between industry and academia w.r.t. the utility of rules-based approaches to IE. Although industry applications often require some form of rules-based intervention in production systems, template and rule-based methods are constrained to specific layouts and may not generalize well to unseen documents. In the era of deep learning it is common to formulate KIE as a sequence-labelling task Huang et al. (2015); Lample et al. (2016). However, this approach does not handle complex spatial relationships and is not ideal for highly structured documents Hwang et al. (2021). Due to this there have been several developments in deep learning approaches that have gone beyond the sequence-labelling formulation. Palm et al. (2019) proposed a CNN approach to KIE that rejected the need for word-level labels, and therefore is useful in real-world scenarios where labelled data is not always available. Another novel approach, Doc2Dict is a T5 transformer trained on database records to produce a document-to-data-structure model Townsend et al. (2021). Zhang and Zhang (2020) proposed an end-to-end text reading and KIE method that extracts entity values through a multimodal fusion of text and visual features, and Majumder et al. (2020) presented a field-value pairing framework that utilizes knowledge about the data-types of fields to select candidate entities. In the latter, a set of candidates is identified as possibly corresponding to a field in the target schema. A neural network is used to learn representations of each candidate based on neighbouring words, before selecting a correct value. This approach is useful in scenarios with high volumes of unseen documents. As the structure of forms is often more sophisticated than a simple linear ordering of tokens, relative token position, paragraph spacing, and question-answer relationships all contain information that can be leveraged for KIE tasks (Garncarek et al., 2020). Chargrid (Katti et al., 2018), CUTIE (Zhao et al., 2019) and BERTgrid (Denk and Reisswig, 2019) were among the first models to integrate 2D representations of word tokens alongside text for KIE, and each outperformed their respective baselines. Utilizing the 2D positions of text to assist KIE has helped architectures such as LAMBERT (Garncarek et al., 2020) and the LayoutLM family (Xu et al., 2020, 2021; Huang et al., 2022) achieve SOTA performance while exhibiting less sensitivity to serialization. Whereas LayoutLM uses tokens, layout and visual information, LAMBERT relies only on tokens and bounding boxes. 
This work introduces an end-to-end document classification and KIE pipeline based on a templating approach that does not require any model training.We eschew the recent trend towards deep learning focused approaches to document classification and KIE while retaining the important consideration of 2D document structural layout.We demonstrate that different text encoding strategies work remarkably well for document classification when computing cosine similarity between a document and a set of document templates. A novel assignment optimization technique for KIE is presented which assigns values in a form to a corresponding template key based on global geometric positions and specified constraints. This approach is insensitive to serialization and word tokenization and does not require large training data while being easily updated. We report a series of processing steps that are required to make assignment optimization feasible for noisy scanned documents, and test these with ablation experiments. Finally, we detail some limitations of our approach and suggest future directions for refining the technique further. This work makes the following contributions: * We present a novel assignment optimization approach to key information extraction. * We present a granular document classification strategy by finding the cosine similarity between a vectorized document and a matrix of document templates. * We demonstrate the importance of document rotation and entitiy scaling processing steps in maximising the potential of our optimization approach. ## 2 Related Work ### Document Classification Features can be extracted from document images to represent either text content or visual properties. Image templates have often been favoured in industrial document analysis (Sarkar, 2010). Sarkar (2010) use image anchor templates for document classification and propose a method for learning templates from few training examples. Combining both text and visual content is a popular approach to document classification. Noce et al. (2016) reported performance improvement when supplementing visual features with text, especially in cases where different classes share visual characteristics. StructuralLM (Li et al., 2021) and LiLT (Wang et al., 2022) are deep learning approaches that demonstrated the value of combining text and layout information for document classification. With the proliferation of text encoding techniques such as Word2Vec (Mikolov et al., 2013) and ELMO (Sarzynska-Wawer et al., 2021) several studies have used off-the-shelf algorithms to generate representations of document images for classification (Bakkali et al., 2020; Audebert et al., 2019). ### Key Information Extraction KIE has received focused attention, with several early approaches relying on templates constructed to enable cross-referencing of new documents against a bank of existing templates (Rusinol et al., 2013; Chiticariu et al., 2013; Schuster et al., 2013). Recently, deep learning approaches have dominated the literature. Palm et al. (2019) present a deep neural network that bypasses the need for word-level training labels, with the assumption that the same structured information must be extracted from each document. Another approach seeks to predict target values from arbitrary queries Gao et al. (2021). The technique utilises a novel pre-training strategy which makes it more flexible for learning local geometric relations between words. Gao et al. 
(2021) also propose a novel framework for using unlabelled documents for training with known form types. Attempting to overcome expensive annotation efforts, the approach uses a bootstrapped training process to develop a rule-based approach for getting pseudo-labels from unlabelled forms. Pseudo-labels are then used for supervised learning. Related to this approach for reducing annotation effort, findings that bounding boxes alone can be effective in VrDU tasks (Cooney et al., 2023), and work formulating KIE as a constrained optimization problem through partial graph matching (Yao et al., 2021), we propose an assignment optimization approach to KIE. ### Assignment Optimization An assignment problem has a number of _agents_ and a number of _tasks_. Agents are assigned to perform tasks, with a cost depending on agent-task assignment. Gong et al. (2021) optimize passenger-route assignment by formulating it as a nonlinear mixed integer optimization model. Other problems such as resource allocation can be formulated as assignment problems to assign a number of tasks (e.g. jobs), to a number of workers (agents), to maximise or minimise some utility function (Lee et al., 2018). In our formulation, agents are template value-positions corresponding to a specific key. Tasks are form entities. The cost we seek to minimize is the Euclidian distance between template value-positions and form entities. ## 3 Dataset Datasets used in document classification studies typically consist of broad categories of documents (e.g. email, letter, scientific report) (Harley et al., 2015; Kumar et al., 2014). Our data consists of 395 scanned document images, each corresponding to one of six categories of health insurance claim forms and exhibiting a degree of structural homogeneity common within organizations (Figure 1). Each form contains a set of _keys_ adjacent to an associated text-box or white-space into which information (_values_) can be entered. Each key in a form asks a claimant to enter a specific piece of information (e.g. first name, last name, policy number), but not all value spaces have been filled-in. Forms in the dataset contain both printed and handwritten responses to key requests for information. The scanned documents are of fixed dimensions (1700 x 2200 pixels) and exhibit significant rotation variance and noise. The number of key-value pairs per document class ranges from 217 to 2484. Dataset statistics are reported in Table 1. We use Amazon Textract1 Optical Character Recognition (OCR) engine to extract text and bounding boxes from the scanned documents and the six document templates. Footnote 1: [https://aws.amazon.com/textract/](https://aws.amazon.com/textract/) ### Consolidating OCR Output Previous studies have noted the negative impact that OCR mistakes can have on KIE tasks (Audebert et al., 2019; Palm et al., 2019). Due to the noisy nature of many of our scanned documents, OCR output can be degraded in terms of identifying characters that form part of an entity string (Figure 2 - top). To deal with this, we implement a simple yet effective method for consolidating character strings that are likely to form part of a single entity. First, we iterate through the set of strings identified by OCR from top-left to bottom-right of the document. For each source string (i.e., the string we may wish to append to), we search potential candidate strings to determine whether they are vertically aligned with the source. 
Here, we use a threshold of \(\pm\)15 pixels for both top and bottom of the bounding boxes. If bounding boxes are inline, we then compute the distance along the horizontal axis between the trailing character in the source string and the leading character in the candidate string. If this distance is below a specified threshold, the character strings are consolidated, and their \begin{table} \begin{tabular}{l l l} \hline \hline **Form** & **N documents** & **N key-values** \\ \hline aicf\_pg1 & 169 & 2484 \\ aicf\_pg2 & 144 & 1390 \\ hicf\_pg1 & 30 & 422 \\ hicf\_pg2 & 24 & 352 \\ aicf\_v1 & 14 & 217 \\ pvbcf & 14 & 279 \\ \hline \hline \end{tabular} \end{table} Table 1: Dataset statistics for each form type. bounding boxes merged (Figure 2 - bottom). If the difference between the start of a source and start of a target is <60 pixels we append as a single word. If >60 we append as if separate words within an entity (e.g. parts of an address). ### Document Image Alignment KIE from document images can suffer from geometrical distortions caused by scanning processes [20]. To mitigate the effects of rotation variance, we implement document image alignment to rotate scanned documents to align with their corresponding template images. First, we use Oriented FAST and rotated BRIEF (ORB) to detect keypoints within a document and to extract local invariant descriptors [17]. Then, applying Hamming distance to compute distances between template and scanned features, we determine a set of best matches. Next, random sample consensus (RANSAC) matches keypoints between the template and the scanned document [13], before computing a homography matrix between the two documents. The homography matrix is a perspective transformation between the template and scanned document, facilitating alignment of the scanned document. This method was implemented in OpenCV [1]. ## 4 Methods ### Global Representations for Document Classification As stated in the introduction, documents from different classes that exhibit similar features require fine-grained analysis rather than broad visual features to enable differentiation. Here, document-level templates are constructed from vectorized document text and layout embeddings generated from bounding box information returned from the Figure 1: Two of the six forms in our template dataset. (a) Aflac Accidental Injury Claim Form (page 1). (b) Aflac Accidental Injury Claim Form (page 2). Forms consist of keys and value spaces to be filled-in by claimants. Figure 2: Example of original character-by-character output from OCR (top), and consolidated entity (bottom). OCR. We implemented three different methods for vectorizing the document text: two versions of the Universal Sentence Encoder (USE) Cer et al. (2018) and Term Frequency Inverse Document Frequency2 (TF-IDF). Footnote 2: [https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidFVectorizer.html](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidFVectorizer.html) The USE approaches are designed to be general purpose text embedding models. Both USE models receive input English strings and return a fixed 512 element vector representation of the string as a sentence embedding, which here represents the entire document. The first USE method is a transformer-based encoding model which generates text embeddings by using attention to compute context aware representations of words Vaswani et al. (2017). 
The second is a deep averaging network which computes average input embeddings for words and bi-grams before being passed through a feedforward deep neural network to obtain a 512 element sentence embedding. TF-IDF computes the relative frequency of words in one document compared to the inverse frequency of that word in the entire corpus. In this case the document corpus consists of the combined text from the six forms (Section 3). The TF-IDF algorithm is fitted to the data to learn its vocabulary and inverse document frequency. Each of the six forms is then transformed into a document-term matrix using the fitted vectorizer. The fitted vectorizer is retained to transform new unseen forms. Forms often exhibit similar structure and when taken from a single domain can often contain duplication of text (e.g., institutional logos, requests for similar information). Figure 3 depicts the correlation between USE vector representations of the template documents, indicating the difficulty in distinguishing between forms using this approach. In particular, forms _aicf_pg1_ and _hicf_pg1_ (0.943) and forms _aicf_pg2_ and _hicf_pg2_ (0.934) are highly correlated. Here we supplement our text embeddings by also generating layout embeddings for each form. These embeddings represent the 2D positions of the text and are generated using the pretrained Layoutlmv2 model Xu et al. (2021). ### Document Classification using Cosine Similarity Document class is predicted by measuring the cosine similarity between a new document and a set of document templates. A template matrix \(u\) is constructed as a \(n\times m\) matrix where \(n\) is the number of templates in the template bank and \(m\) is the length of the vector representation of a single document. Each new form representation to be measured against the template matrix is represented as a \(1\times m\) vector \(v\). Cosine distance between vectors \(u\) and \(v\) is formulated as: \[s(u,v)=1-\frac{u\cdot v}{||u||_{2}||v||_{2}} \tag{1}\] _where_, \(||*||_{2}\) is the 2-norm of argument *, and \(u\cdot v\) is the dot product of \(u\) and \(v\). This returns a \(n\times 1\) matrix containing similarity between the candidate form \(v\) and \(n\) templates in matrix \(u\). We then take the _argmax_ of the similarity matrix to obtain a document classification. ### Problem Formulation for KIE Document classification is followed by KIE, where templates are constructed for each of the six document classes in our dataset. The goal of KIE is to identify entity values in a document that are associated with a specific key. Given a document template: \[DT=\left\{\left(kw^{(i)},vx^{(i)}_{min},vy^{(i)}_{min},vx^{(i)}_{ max},vx^{(i)}_{max}\right)\right.\\ \left.\mid i\in\{1,...,n\}\right\}\] consisting of \(n\) key words _kw\({}^{(i)}\)_ and a bounding box associated with their corresponding value positions \(vx^{(i)}_{min}\), \(vy^{(i)}_{min}\), \(vx^{(i)}_{max}\), \(vy^{(i)}_{max}\), and a document: Figure 3: Correlation between embedded text representations of document templates is high. \[D=\Big{\{}\Big{(}w^{(j)},x^{(j)}_{min},y^{(j)}_{min},x^{(j)}_{max},x^{ (j)}_{max}\Big{)}\] \[\mid j\in\{1,...,m\}\Big{\}}\] consisting of \(m\) words _w(j)_ and their associated bounding box \(x^{(j)}_{min}\), \(y^{(j)}_{min}\), \(x^{(j)}_{max}\), \(y^{(j)}_{max}\), the objective is to correctly assign \(n\) document bounding boxes to each of the template bounding boxes. 
In doing so, we assign associated words (values) to the set of key words in the template, thus producing key-value pairs. Each KIE template consists of a set of key-value pairs. Template keys are text (e.g. 'Name', 'D.O.B.'), and values are a bounding box corresponding to a location on the page where information can be entered. With our method of applying assignment optimization to the KIE task, each bounding box is represented by a single coordinate point. In initial experiments, the bounding box centroid was selected as the 2D coordinate to represent the position of each entity. However, this approach proved unstable during optimization as differences in length between unfilled template values and unseen form values routinely resulted in unexpected mis-assignments. Our reported results are based on selection of the upper-left point of each bounding box representing an entity position, as used by Katti et al. (2018). The reason for this is that while the length of form value entries vary significantly and therefore skew the centroid position, the starting position of an entity value is relatively fixed, providing a more stable representation. ### Scaling Document Entities A virtue of using accurate document classification to select templates for KIE is that prior information on the specific form being processed can be used to enhance optimization. An issue with scanned documents is significant variation in scale, rotation, and aspect ratio in comparison with the original template versions of these forms (Ahmad et al., 2021). Here, we use prior knowledge from form templates to scale unseen form entities to approximate dimensions of the template (Figure 4). As a first step, we compute coordinates for twenty rectangular segments of equal area. This segmentation is required because scanned documents are not always warped linearly e.g., entities at the top of a page can be offset by a different factor to those at the bottom. Then, within each segment we search for text strings extracted by OCR. Next, we check whether a word within a given segment is present among the set of keys in our entity template. For this, we implement fuzzy matching with a minimum confidence level of 0.9 to account for errors in OCR outputs. To avoid mis-scaling due to duplicate terms within a document, we apply a maximum distance threshold. Within each segment, Manhattan Distance is measured between each text string and a matching template keyword: \[d(x,y)=|x_{1}-x_{2}|+|y_{1}-y_{2}| \tag{2}\] _where_, (\(x_{1},y_{1}\)) and (\(x_{2},y_{2}\)) are 2d coordinates of the top-left of template and new form entity bounding boxes, respectively. Mean scalar values for each axis are calculated for each segment independently, resulting in distinct vertical and horizontal scalars for scaling each segment. Using these scalars, we re-scale the unseen form entities to approximate positions in the template (Figure 4). ### Assignment Optimization To assign entities from a new form to keys in a template, we developed a binary optimization model. This approach eliminates any requirement for data-hungry neural network training. The optimization objective is to minimise the distance between entities in a new form and spaces in a form template where values are entered i.e., when a text string in a form has been entered into one of the template value positions, we expect to see a close-to-zero distance between the two bounding boxes. 
Of course, text in a form that does not correspond to some filled-in value is expected to have a relatively high distance to our template bounding boxes. Let \(i\in T_{k}\) and \(j\in F_{l}\) be a value-position within a template \(T_{k}\in\mathcal{T}\) and an entity within the form \(F_{l}\in\mathcal{F}\), respectively. The standard euclidean distance \(D_{ij}\) between \(i\) and \(j\) is defined as (3), with \(\mathcal{T}^{k}_{i}\) and \(\mathcal{F}^{l}_{j}\) being the coordinates of value-position and entity bounding boxes, respectively. \[D_{ij}=\mathcal{T}_{i}-\mathcal{F}_{j},i\in T,j\in F \tag{3}\] The optimization model used to assign form entities to template value-positions comprises the following features: * \(\mathcal{T}\): form template. * \(\mathcal{T}_{i}=(t^{x}_{i},t^{y}_{i})\): coordinates of value-position \(i\) within template (top-left corner). * \(\mathcal{F}\): form. * \(\mathcal{F}_{j}=(f_{j}^{x},f_{j}^{y})\): coordinates of entity \(j\) within form (top-left corner). * \(\mathcal{D}_{ij}\) : distance between value-position \(i\) from template value-position \(T_{i}\) and entity \(j\) from form entity \(F_{j}\). * \(\mathcal{M}_{ij}\) : set of form entities matching template keys. \[\min \sum_{i\in\mathcal{T}}\sum_{j\in F}\mathcal{D}_{ij}\cdot x_{ij}\] (4) s.to \[\sum_{j\in\mathcal{F}}x_{ij}=1,\ \ i\in\mathcal{T}\] (5) \[\sum_{i\in\mathcal{T}}x_{ij}\leq 1,\ \ j\in F\] (6) \[x_{ij}=M_{ij},\ \ i\in\mathcal{T},j\in F,\] (7) \[x_{ij}\in\{0,1\}\text{ for all $t$ in $T$ and all $f$ in $F$}\] (8) where (4) tries to minimize global distance between template value bounding boxes and new form bounding boxes by assigning a maximum of one form bounding box to one template bounding box. The final output is a set of pairs \(x_{ij}\), where \(i\) is a template bounding box and \(j\) is a form bounding box, such that \(x_{ij}=1\). Figure 5 depicts the implementation of assignment optimization for the KIE task. Bounding-boxes for template key-value positions (e.g. 'Policy Number', 'Last Name') are designated rows, while form entity (e.g. 'Doe', '0123456789') bounding boxes are designated columns. Our optimization approach assigns entities to keys by minimizing overall distance between form entities and template value-positions, with constraints. Green squares indicate that a form entity has been assigned to a template key. Red indicates a constraint on assigning certain entities to certain keys. White squares indicate that an assignment is possible but has not been made for this solution. ## 5 Results The transformer-based USE using text and layout encodings achieved a weighted average f1 score of Figure 4: Entities in a scanned document form are scaled to more closely approximate the assigned template. Red bounding boxes indicate scanned document bounding box positions pre-scaling. Green boxes indicate positions post-scaling. Figure 5: Assignment optimization for form entities to template keys. Rows are template positions, columns are new form positions. Red squares are constraints indicating a columns can’t be assigned to a row. Green squares indicate assignment. 0.97 for the document classification task. In total, 383 out of 395 forms were assigned to the correct template label. The deep random network and TF-IDF approaches also performed well with f1 scores of 0.95 and 0.94, respectively. Form _hicf_pg2_ accounts for the majority of misclassified instances, conforming with similarity scores between templates (Figure 3). 
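For concreteness, the assignment step formulated in (4)-(8) can be sketched as follows. This is not the implementation used in this work: it substitutes SciPy's generic linear-sum-assignment solver for the full mixed integer program, handles prohibited pairings (the red squares of Figure 5) with a large penalty cost, and uses invented template keys and coordinates purely for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Invented template value-positions (top-left corners) keyed by field name.
template = {"Last Name": (420, 310), "Policy Number": (420, 520), "D.O.B.": (980, 310)}
# Invented form entities after OCR consolidation, rotation and scaling.
entities = [("Doe", (430, 305)), ("0123456789", (418, 528)),
            ("01/02/1990", (985, 315)), ("Please print clearly", (200, 60))]

keys = list(template)
T = np.array([template[k] for k in keys], dtype=float)          # |T| x 2
F = np.array([xy for _, xy in entities], dtype=float)           # |F| x 2

# D_ij: distance between template value-position i and form entity j (cf. Eq. 3).
D = np.linalg.norm(T[:, None, :] - F[None, :, :], axis=-1)

# Prohibitions (cf. constraint (7) / red squares in Figure 5), e.g. boilerplate text
# that fuzzy-matches a template key cannot be assigned as a value; use a large cost.
BIG = 1e9
prohibited = {(0, 3), (1, 3), (2, 3)}      # invented prohibitions for illustration
for i, j in prohibited:
    D[i, j] = BIG

# Each template slot receives exactly one entity (Eq. 5), each entity is used at
# most once (Eq. 6), and the total distance is minimized (Eq. 4).
rows, cols = linear_sum_assignment(D)
for i, j in zip(rows, cols):
    if D[i, j] < BIG:
        print(f"{keys[i]!r} <- {entities[j][0]!r}  (distance {D[i, j]:.1f})")
```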
Precision, recall, and f1 scores are used to evaluate the performance of our assignment optimization approach to KIE (Table 2). With a mean f1 score of 0.941, our method exhibits strong performance for the KIE task despite the noise and rotation variance present in our dataset. Mean precision and recall scores of 0.954 and 0.928 suggest that the current implementation of the approach is slightly more likely to misclassify entities in the form than assign values to spaces that were actually unfilled. This means we can have a high degree of confidence that values are assigned to the correct keys, but less confidence that all entities have been assigned. A striking result from Table 2 is the difference in precision (0.962) and recall (0.872) for hicf_pg2. The reason for this relatively poor recall score appears to be the density of text surrounding the key-value pairs in that particular form. The close proximity of extraneous text to values to be extracted, and the imperfect document scaling technique combine to produce a relatively large number of misassignments. Another notable result is the performance of our method at extracting information from the aicf_v1 form. The proposition that KIE algorithms perform more or less effectively depending on the type of document they face is not new (Zhao et al., 2019; Cheng et al., 2022). Results are comparable with other KIE techniques. On the SROIE dataset, LayoutLM, StructText, LayoutLMv2, TILT, LAMBERT achieved mean KIE f1 scores ranging from 0.952 - 0.982 (Xu et al., 2020; Li et al., 2021; Xu et al., 2021; Powalski et al., 2021; Garcarek et al., 2020), and TRIE++ recently reported 0.984 (Cheng et al., 2022). Although f1 scores presented here are slightly inferior, our approach offers the benefit of very specific key labelling that can be important for industrial applications of IE. ### Ablation Study To evaluate the impact of preprocessing steps on the overall performance of our KIE method we performed an ablation study. In separate experiments, we remove document image alignment (Section 3.2) and entity scaling (Section 4.4) from the pipeline. Results in Table 3 demonstrate an almost 10% drop in mean F1 score when documents are not aligned with template images. A similar decrease in performance is observed when entity scaling is removed from the pipeline (Table 4). The degraded performance is largely due to an increase in Type II errors (false negatives), as witnessed by a sharp decline in recall scores. This vindicates the use of these preprocessing steps to augment our assignment optimization algorithm that would otherwise struggle with noisy and skewed scanned documents. Without rotation and scaling our algorithm fails to classify approximately 20% of key entities we should extract from our forms. ## 6 Limitations A limitation of this work is that it does not fully consider how to integrate extraction of information from tables using our optimization approach. Several recent studies have reported entity extraction \begin{table} \begin{tabular}{l l l l} \hline \hline **Document** & **Precision** & **Recall** & **F1** \\ \hline aicf\_pg1 & 0.923 & 0.745 & 0.825 \\ aicf\_pg2 & 0.958 & 0.824 & 0.886 \\ aicf\_v1 & 0.918 & 0.774 & 0.840 \\ hicf\_pg1 & 0.922 & 0.839 & 0.879 \\ hicf\_pg2 & 0.983 & 0.841 & 0.906 \\ pvbcf & 0.909 & 0.860 & 0.883 \\ \hline **Mean** & 0.936 & 0.788 & 0.856 \\ \hline \hline \end{tabular} \end{table} Table 4: Precision, Recall and F1 when document scaling is not applied. 
\begin{table} \begin{tabular}{l l l l} \hline \hline **Document** & **Precision** & **Recall** & **F1** \\ \hline aicf\_pg1 & 0.962 (321/2404) & 0.931 (2312/2484) & 0.946 \\ aicf\_pg2 & 0.959 (130/1372) & 0.938 (1304/1390) & 0.944 \\ aicf\_v1 & 0.887 (182/205) & 0.839 (182/217) & 0.862 \\ hicf\_pg1 & 0.954 (393/412) & 0.931 (393/422) & 0.942 \\ hicf\_pg2 & 0.962 (307/319) & 0.872 (307/352) & 0.915 \\ pvbcf & 0.939 (275/293) & 0.986 (275/279) & 0.962 \\ \hline **Mean** & 0.954 & 0.928 & 0.941 \\ \hline \hline \end{tabular} \end{table} Table 2: Precision (TP/(TP+FP)), Recall (TP/(TP+FN)) and F1 scores for each document type. \begin{table} \begin{tabular}{l l l l} \hline \hline **Document** & **Precision** & **Recall** & **F1** \\ \hline aicf\_pg1 & 0.921 & 0.747 & 0.825 \\ aicf\_pg2 & 0.952 & 0.791 & 0.864 \\ aicf\_v1 & 0.894 & 0.734 & 0.806 \\ hicf\_pg1 & 0.925 & 0.874 & 0.899 \\ hicf\_pg2 & 0.948 & 0.786 & 0.859 \\ pvbcf & 0.857 & 0.768 & 0.810 \\ \hline **Mean** & 0.916 & 0.783 & 0.844 \\ \hline \hline \end{tabular} \end{table} Table 3: Precision, Recall and F1 when realignment is not applied. from tables (Paliwal et al., 2019; Yang et al., 2022; Nazir et al., 2021; Agarwal et al., 2021), suggesting feasible approaches to this task. For example, Katti et al. (2018) and Denk and Reisswig (2019) tackle tabular data extraction by predicting the coordinates of the table rows bounding boxes to identify the invoiced products. Another limitation is that skew detection and document dewarping are not currently a fully automated process in our pipeline. Methods for skew detection and correction, such as those reported in Ahmad et al. (2021) could be incorporated in future work to aid processing. Our method has been tested on a limited number of form classes, and currently it does appear to be weaker when faced with densely-populated forms and documents. This is unsurprising, but methods for dealing with this are required to make this method generalise to a more diverse set of forms. Another potential limitation of the system is that the Transformer-based USE is more expensive than the deep averaging network in terms of compute time (Cer et al., 2018). The deep averaging network compute time is linear with length of string, but classification accuracy was not as strong as the transformer approach. Document classification based on similarity assigns a class even when a document is not one of the classes. In a production system, we would consider an 'other' category for such cases. ## 7 Future Work Results indicate that our optimization approach to KIE is not entirely robust to all document structures. It will be important to experiment with techniques to improve the generalizability of this method to a more diverse range of documents. Reducing sensitivity to closely located text will help reduce some of the Type II errors. Additionally, it is possible that setting distance constraints between templates and new forms along two dimensions may enhance our assignment operation. Another area of future work will seek to improve the input to our optimizer. OCR engines can have difficulty dealing with obscure fonts, diacrits, or other relatively uncommon artefacts (Audebert et al., 2019), and we know that the method proposed here relies on strong OCR performance to achieve reliable results. One of the areas to improve our approach is to enhance the fidelity of the initial text recognition phase. 
This can take the form of using domain-specific knowledge to make more accurate corrections and using an OCR engine optimized for handwriting recognition. Finally, we think there is scope within this approach to add semantic information to the assignment of values to keys. Currently, the approach relies solely on positions to make assignments. However, we have seen from many multimodal approaches to document understanding that text is highly informative (Xu et al., 2020, 2021; Garcarek et al., 2020; Cooney et al., 2023; Li et al., 2021, 2021). LAMBERT uses word tokens alongside bounding boxes for KIE (Garncarek et al., 2020), and Majumder et al. (2020) use the data-type of fields to enhance value retrieval. Further work is required to investigate ways in which text can be integrated into our KIE approach. ## 8 Conclusions In this paper, we present an end-to-end document classification and KIE technique that uses a novel application of assignment optimization to extract key information from insurance claim forms. The system is end-to-end in that accurate document classification is required to select a specific template to be applied downstream to the KIE task. We experiment with several encoding methods for documents, and use cosine similarity to measure the distance between a form and a bank of templates. Key values are extracted from forms and assigned to a corresponding value position in a template using an optimization algorithm. The noisy scanned documents used to validate the approach require substantial preprocessing through realignment and scaling. A mean f1 score of 0.94 indicates that this is a promising new approach to KIE from structured forms. An ablation study indicated preprocessing stages are essential to optimize performance of this approach. Our analysis of results obtained from different documents has suggested potential areas where this approach can be enhanced with further development. The approach is particularly suited to industrial applications in which large volumes of identical forms with different information require extraction of key information.
2302.03976
Parma: Confidential Containers via Attested Execution Policies
Container-based technologies empower cloud tenants to develop highly portable software and deploy services in the cloud at a rapid pace. Cloud privacy, meanwhile, is important as a large number of container deployments operate on privacy-sensitive data, but challenging due to the increasing frequency and sophistication of attacks. State-of-the-art confidential container-based designs leverage process-based trusted execution environments (TEEs), but face security and compatibility issues that limit their practical deployment. We propose Parma, an architecture that provides lift-and-shift deployment of unmodified containers while providing strong security protection against a powerful attacker who controls the untrusted host and hypervisor. Parma leverages VM-level isolation to execute a container group within a unique VM-based TEE. Besides container integrity and user data confidentiality and integrity, Parma also offers container attestation and execution integrity based on an attested execution policy. Parma execution policies provide an inductive proof over all future states of the container group. This proof, which is established during initialization, forms a root of trust that can be used for secure operations within the container group without requiring any modifications of the containerized workflow itself (aside from the inclusion of the execution policy). We evaluate Parma on AMD SEV-SNP processors by running a diverse set of workloads demonstrating that workflows exhibit 0-26% additional overhead in performance over running outside the enclave, with a mean 13% overhead on SPEC2017, while requiring no modifications to their program code. Adding execution policies introduces less than 1% additional overhead. Furthermore, we have deployed Parma as the underlying technology driving Confidential Containers on Azure Container Instances.
Matthew A. Johnson, Stavros Volos, Ken Gordon, Sean T. Allen, Christoph M. Wintersteiger, Sylvan Clebsch, John Starks, Manuel Costa
2023-02-08T10:15:07Z
http://arxiv.org/abs/2302.03976v3
# Parma: Confidential Containers via Attested Execution Policies ###### Abstract Container-based technologies empower cloud tenants to develop highly portable software and deploy services in the cloud at a rapid pace. Cloud privacy, meanwhile, is important as a large number of container deployments operate on privacy-sensitive data, but challenging due to the increasing frequency and sophistication of attacks. State-of-the-art confidential container-based designs leverage process-based trusted execution environments (TEEs), but face security and compatibility issues that limits their practical deployment. We propose Parma, an architecture that provides lift-and-shift deployment of unmodified containers while providing strong security protection against a powerful attacker who controls the untrusted host and hypervisor. Parma leverages VM-level isolation to execute a container group within a unique VM-based TEE. Besides container integrity and user data confidentiality and integrity, Parma also offers container attestation and execution integrity based on an attested execution policy. Parma execution policies provide an inductive proof over all future states of the container group. This proof, which is established during initialization, forms a root of trust that can be used for secure operations within the container group without requiring any modifications of the containerized workflow itself (aside from the inclusion of the execution policy.) We evaluate Parma on AMD SEV-SNP processors by running a diverse set of workloads demonstrating that workflows exhibit 0-26% additional overhead in performance over running outside the enclave, with a mean 13% overhead on SPEC2017, while requiring no modifications to their program code. Adding execution policies introduces less than 1% additional overhead. Furthermore, we have deployed Parma as the underlying technology driving Confidential Containers on Azure Container Instances. ## 1 Introduction Since the launch of the large-scale Infrastructure-as-a-Service (IaaS) offerings from Amazon (AWS in 2006), Microsoft (Azure in 2008), and Google (GCP in 2008), there has been a continuous trend towards cloud computing, which allows customers to leverage capability and cost advantages through economies of scale. This was made possible through virtualization [9], whereby virtual machines (VMs) allow the efficient use of large bare-metal compute architectures (hosts) by using a hypervisor to coordinate sharing between multiple tenants according to their expressed usage requirements. However, while VMs provide a way for users to quickly obtain additional compute capacity and maximize the utilization of existing hardware (and/or avoid the cost of maintaining peak capacity by utilizing a public cloud), it is still necessary to configure, deploy, manage, and maintain VMs using traditional techniques. In recent years, container-based technologies, such as Docker [19] and Kubernetes [25] have arisen to address this orthogonal need, providing a lightweight solution for creating a set of machine configurations, called containers, which can be deployed onto hardware (virtualized or physical) as a group via an automated process. Container technology provides multiple separated user-space instances which are isolated from one another via kernel software. Unlike VMs, containers run directly on the host system (sharing its kernel) and as such do not need to emulate devices or maintain large disk files. 
Further, containers defined according to the OCI Distribution Specification [30] specify dependencies as _layers_ which can be shared between different containers, making them amenable to caching and thus speeding up deployment while reducing storage costs for multiple containers. The success of containerization technology for on-premises systems has led to major cloud providers developing their own Container-as-a-Service (CaaS) offerings [1, 3, 22] which provide customers the ability to maintain and deploy containers in the public cloud. In CaaS offerings, containers run in a per-group utility VM (UVM) which provides hypervisor-level isolation between containers running from different tenants on the same host. While the container manager and container shim running on the host are responsible for pulling images from the container registry, bringing up the utility VM, and orchestrating container execution, an agent running within the utility VM (the guest agent) coordinates the container workflow as directed by the host-side container shim. Cloud computing poses unique risks. Although VMs and VM-isolated container groups provide strong isolation between tenants, they are deployed by the cloud service provider (CSP) and coordinated by the CSP's hypervisor. As such, the host operating system (including the container manager and container shim) and the hypervisor all lie within the Trusted Computing Base (TCB). Research into confidential cloud computing attempts to reduce the attack surface by leveraging specialized hardware-enforced Trusted Execution Environments (TEEs) [13, 15, 29, 27, 37, 29, 38, 11], which enable user workloads to be protected inside _enclaves_ even if the host's software is compromised or controlled by a malicious entity. TEEs available from major CPU vendors can be either _process-based_, such as Intel SGX [23] and ARM TrustZone [6], or _VM-based_, such as AMD SEV-SNP [26, 4], Intel TDX [24] and ARM CCA [5, 28]. VM-based TEEs offer hardware-level isolation of the VM, preventing the host operating system and the hypervisor from having access to the VM's memory and registers. With CaaS, container execution is orchestrated by a host-side shim that communicates with the guest agent, which coordinates the activity of the container group within the UVM. The UVM can be hardware-isolated within a TEE enclave, but the container images are controlled by the host, as is the order in which they are mounted, the container environment variables, the commands that are sent to the containers via the bridge between the container shim and the guest agent, and so forth. This means that a compromised host can overcome the hardware isolation of the VM by injecting malicious containers. This risk of attack, be it from malicious or compromised employees of the CSP or external threats, limits the extent to which containerization can be used in the cloud for sensitive workloads in industries like finance and healthcare. The naive solution to this problem is to run the guest agent and container shim within the same VM-based TEE. This removes the CSP from the TCB, but it also removes the CSP's ability to orchestrate and automate the container workflow. In addition, the container owner is then in the TCB. The container images are controlled by the container owner, as is the order in which they are mounted, the container environment variables, and the commands that are sent to the containers.
The end-user of the confidential container (_e.g.,_ a customer of a bank, a patient providing data to a doctor) must trust that the container owner has and will run only the expected commands. This also leaves image integrity and data confidentiality and integrity unsolved. Our work.We present Parma, an architecture that implements the _confidential containers_ abstraction on a state-of-the-art containerd [14] stack running on processors with VM-based TEE support (_i.e.,_ AMD SEV-SNP processors). Parma provides a lift-and-shift experience and the ability to run unmodified containers pulled from (unmodified) container registries while providing strong security guarantees: _container attestation and integrity_, meaning that only customer-specified containers can run within the TCB and any means of container tampering is detected by the TCB; and _user data confidentiality and integrity_, meaning that only the TCB has access to the user's data and any means of data tampering is detected by the TCB. Parma provides strong protection for the container's root filesystem (comprised of the container image layers and writeable scratch space) and the user's data. For container image layers (pulled in plaintext by the untrusted container manager and stored in a host-side block device), Parma mounts the device as an integrity-protected read-only filesystem (using dm-verity) and relies on the filesystem driver to enforce integrity checking upon an access to the filesystem. For confidentiality and integrity of privacy-sensitive data stored in a block device (_e.g.,_ writeable scratch space of the container's root filesystem) or blob storage (_e.g.,_ remote blobs holding user data), Figure 1: **Execution Policy**. The execution policy is a component of the utility VM that is attested at initialization time. It describes all of the actions the user has explicitly allowed the guest agent to take within the container group. In (a) we see an example of a successful mount action, in which a layer of a container image has a dm-verity root hash which matches a hash enumerated in the policy. When the hash does not match, as in (b), this action is denied. Parma relies on block-level encryption and integrity (using dm-crypt + dm-integrity) to decrypt memory-mapped blocks, guaranteeing that data appears in plaintext only within the VM's hardware-protected memory. Finally, Parma provides container attestation rooted in a hardware-issued attestation by enforcing attested user-specified _execution policies_. We have augmented the guest agent to enforce the execution policy such that it only executes commands (submitted by the untrusted container shim) which are explicitly allowed by the user, as seen in Figure 1. The policy is attested by encoding its measurement in the attestation report as an immutable field at UVM initialization. As a result of including the execution policy, the hardware-issued attestation forms an inductive proof over the future state of the container group. The attestation can then be used downstream for operations needed by secure computation. For example, remote verifiers may release keys (governing the user's encrypted data) to only those container groups which can present an attestation report encoding the expected execution policy and measurement of the utility VM. Contributions:The main contributions of our work are: * Parma, a novel security architecture for confidential containerized workloads. 
Parma establishes security guarantees rooted in an inductive proof over all future states of the container group provided by the introduction of an attested execution policy. * an implementation of Parma which forms the basis for Confidential Containers on Azure Container Instances [2] and is publicly available on GitHub 1. Footnote 1: [https://github.com/microsoft/hcsshim/tree/main/pkg/securitypolicy](https://github.com/microsoft/hcsshim/tree/main/pkg/securitypolicy) * neither requiring changes to existing containers, nor container image signing. Instead, the execution policy ensures that only the actions the user has explicitly expressed are allowed to take place within the container group, maintaining support for existing CaaS deployment practices. * an evaluation of our implementation with standard benchmarks for computation, network, and database activity. We compare a base container system to containers running within a TEE enclave with and without Parma. We demonstrate that Parma introduces 0-26% additional overhead in performance over running outside the enclave, with a mean 13% overhead on SPEC2017, and that adding execution policies introduces less than 1% additional overhead. There were also significant implementation challenges, including SEV-SNP enablement in the hypervisor, bounce buffers for I/O, Linux enlightenment for SEV-SNP including attestation report fetching, and hardening the hypervisor interface. These are not claimed as contributions. ## 2 Background We will begin by introducing the technological dependencies of Parma, namely Trusted Execution Environments (TEEs) and the AMD SEV-SNP architecture. ### Trusted Execution Environments There have been many proposed security architectures which aim to provide a TEE [8, 11, 15, 23, 26, 27]. The goal of a TEE is to isolate workloads from the host system in order to protect them against manipulation whilst running. Most architectures provide secure environments, typically called _enclaves_, which run in parallel with the underlying host operating system. As such, the Trusted Computing Base (TCB) contains the required hardware which provides the needed security capabilities and the software to utilize it to maintain the guarantees of the TEE. While each TEE architecture has its own idiosyncrasies, there are desirable qualities which increase their utility: **Small TCB**: Minimizing the TCB is essential to reduce the attack surface of the TEE. **Strong Isolation**: The enclaves must be isolated from the host at all times, including the register state and memory. **Attestable State**: The boot-up process and state of the TEE must be verifiable using attestation. **Minimal Overhead**: High performance costs incurred by using the TEE greatly minimize utility. **Minimal Adoption Cost**: While perhaps not a goal shared by all TEEs, greater utility is achieved if running code in the TEE does not require significant reworking of a workflow (_e.g.,_ rewriting software to target an enclave-specific subset of a language, requiring custom tool-chains). ### AMD Secure Encrypted Virtualization-Secure Nested Paging The commercially available TEE offering from AMD is called Secure Encrypted Virtualization (SEV) [26] and targets cloud servers. As indicated in the name, it is focused on protecting Virtual Machines (VMs) from a malicious host or hypervisor. We use a specialization of SEV called SEV-SNP [4] (Secure Nested Paging). 
AMD SEV-SNP is available in AMD's EPYC Milan processors and extends the SEV and SEV-ES (Encrypted State) technologies, which offer isolation of a VM by providing encrypted memory and CPU register state. AMD SEV-SNP adds memory integrity protection to ensure that a VM is able to read the most recent values written to an encrypted memory page. In doing so, it provides protection against data replay, corruption, remapping- and aliasing-based attacks. #### 2.2.1 Platform Security Processor The Platform Security Processor firmware (PSP) implements the security environment for hardware-isolated VMs. The PSP provides a unique identity to the CPU by deriving the Versioned Chip Endorsement Key (VCEK) from chip-unique secrets and the current TCB version. The PSP also provides ABI functions for managing the platform, the life-cycle of a guest VM, and data structures utilized by the PSP to maintain integrity of memory pages. #### 2.2.2 Memory Encryption AMD Secure Memory Encryption (SME) [26] is a general-purpose mechanism for main memory encryption that is flexible and integrated into the CPU architecture. It is provided via dedicated hardware in the on-die memory controllers that provides an Advanced Encryption Standard (AES) engine. This encrypts data when it is written to DRAM, and then decrypts it when read, providing protection against physical attacks on the memory bus and/or modules. The key used by the AES engine is randomly generated on each system reset and is not visible to any process running on the CPU cores. Instead, the key is managed entirely by the PSP. Each VM has memory encrypted with its own key, and can choose which data memory pages they would like to be private. Private memory is encrypted with a guest-specific key, whereas shared memory may be encrypted with a hypervisor key. #### 2.2.3 Secure Nested Paging The memory encryption provided by AMD-SEV is necessary but not sufficient to protect against runtime manipulation. In particular, it does not protect against _integrity attacks_ such as: * The attacker writes a valid past block of data to a memory page. This is of particular concern if the attacker knows the unencrypted data. * If the attacker can write to a page then even if it is encrypted they can write random bytes, corrupting the memory. * A malicious hypervisor maps two or more guest pages to the same physical page, such that the guest corrupts its own memory. * A malicious hypervisor can also map one guest page to multiple physical pages, so that the guest has an inconsistent view of memory where only a subset of the data it wrote appears. #### 2.2.4 Reverse Map Table The relationship between guest pages and physical pages is maintained by a structure called a Reverse Map Table (RMP). It is shared across the system and contains one entry for every 4k page of DRAM that may be used by VMs. The purpose of the RMP is to track the owner for each page of memory, and control access to memory so that only the owner of the page can write it. The RMP is used in conjunction with standard x86 page tables to enforce memory restrictions and page access rights. When running in an AMD SEV-SNP VM, the RMP check is slightly more complex. AMD-V 2-level paging (also called Nested Paging) is used to translate a Guest Virtual Address (GVA) to a Guest Physical Address (GPA), and then finally to a System Physical Address (SPA). The SPA is used to index the RMP and the entry is checked [4]. 
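As a rough illustration of the checks just described, the following model walks a guest write through nested translation and the RMP lookup. It is purely schematic: the `RmpEntry` fields and function names are our own simplifications and do not reflect the real RMP entry layout or the hardware's microarchitectural behaviour.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class RmpEntry:
    owner: str                    # which VM (or the hypervisor) owns this 4K physical page
    assigned_gpa: Optional[int]   # guest-physical address the page is recorded against
    validated: bool               # set only by the guest, via PVALIDATE

def rmp_write_check(rmp: Dict[int, RmpEntry], nested_pt: Dict[int, int],
                    guest_id: str, gpa: int) -> bool:
    """Schematic model of the checks on a guest write after GVA->GPA translation.

    nested_pt maps GPA -> SPA (the hypervisor-controlled second-level tables).
    A write is allowed only if the RMP says the target physical page is owned by
    this guest, is recorded at this GPA, and has been validated by the guest.
    """
    spa = nested_pt.get(gpa)
    if spa is None:
        return False                      # not mapped at all
    entry = rmp.get(spa)
    if entry is None or entry.owner != guest_id:
        return False                      # page belongs to the host or another VM
    if entry.assigned_gpa != gpa:
        return False                      # remapping/aliasing attempt detected
    return entry.validated                # guest must have PVALIDATEd the page once
```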
#### 2.2.5 Page Validation Each RMP entry contains the GPA at which a particular page of DRAM should be mapped. While the nested page tables ensure that each GPA can only map to one SPA, the hypervisor may change these tables at any time. Thus, inside each RMP entry is a Validated bit, which is automatically cleared to zero by the CPU when a new RMP entry is created for a guest. Pages which have the validated bit cleared are not usable by the hypervisor or as a private guest page. The guest can only use the page after it sets the Validated bit via a new instruction, PVALIDATE. Only the guest is able to use PVALIDATE, and each guest VM can only validate its own memory. If the guest VM only validates the memory corresponding to a GPA once, then the injective mapping between GPAs and SPAs is guaranteed. #### 2.2.6 Attestation The PSP can issue hardware attestation reports capturing various security-related attributes, constructed or specified during initialization and runtime. Among other information, the resulting attestation report contains the _guest launch measurement_, the _host data_, and the _report data_. The attestation report is signed by the VCEK. Initialization.During VM launch, the PSP initializes a cryptographic digest context used to construct the measurement of the guest. The hypervisor can insert data into the the guest's memory address space at the granularity of a page, during which the cryptographic digest context is updated with the data, thereby binding the measurement of the guest with all operations that the hypervisor took on the guest's memory contents. A special page is added by the hypervisor to the guest memory, which is populated by the PSP with an encryption key that establishes a secure communication channel between the PSP and the guest. Once the VM launch completes, the PSP finalizes the cryptographic digest which is encoded as the _guest launch measurement_ in the attestation report. The hypervisor may provide 256-bits of arbitrary data to be encoded as _host data_ in the attestation report. Runtime.The PSP generates attestation reports on behalf of a guest VM. The request and response are submitted via the secure channel established during the guest launch, ensuring that a malicious host cannot impersonate the guest VM. Upon requesting a report, the guest may supply 512-bits of arbitrary data to be encoded in the report as _report data_. ## 3 Parma Architecture In this section, we present Parma, an architecture that implements the _confidential containers_ abstraction using attested execution policies. We first describe the container platform which forms the basis of the Parma design and implementation (3.1) and then provide a detailed description of the threat model (3.2). Finally, we present the security guarantees under the threat model and how Parma provides these guarantees via a collection of design principles (3.3). The guiding principle of Parma is to provide an inductive proof over the state of a container group, rooted in the attestation report produced by the PSP. The standard components and lifecycle for the container platform (the CPLAT) are largely unchanged, with the exception of the guest agent, whose actions become circumscribed by the execution policy (3.4). Thus constrained, the future state of the system can be rooted in the measurement performed during guest initialization. ### Container Platform The container platform (the CPLAT) is a group of components built around the capabilities of containerd [14]. 
containerd is a daemon which manages the complete container life-cycle, from image pull and storage to container execution and supervision to low-level storage and network attachments. containerd supports the Open Container Initiative (OCI) [30] image and runtime specifications, and provides the substrate for multiple container offerings, such as Docker [19], Kubernetes [25], and various cloud offerings [1, 22, 3]. Clients interact with containerd via a client interface, such as ctr, nerdctl, or crictl. containerd supports both running bare metal containers (_i.e.,_ those that run directly on the host) and also containers that run within a utility VM (UVM). Our work focuses on VM-isolated containers. The CPLAT interfaces with a custom container shim running in the host operating system. The container shim interacts with (i) host services to bring up a UVM required for launching a new pod and (ii) the guest agent running in the UVM. The guest agent is responsible for creating containers and spawning a runc instance for starting the container. In essence, the container shim-guest agent path allows the CPLAT components running in the host operating system to execute containers in isolated guest VMs. The execution of a VM-isolated container on Linux using CPLAT involves three high-level steps (as seen in Figure 2): (i) pull the container's image, (ii) launch a pod for hosting the container, (iii) start a container. **Image pull**: Pulling images entails downloading them from a container registry (unless they are already cached on the machine). Once the pull is done, the image is unpacked and the image for each layer of the container is stored as a virtual hard drive. **Pod launch**: Launching a pod entails creating and launching a UVM along with the guest agent. Thereafter, the container shim interacts with the guest agent to create and start containers. At the end of the pod launch, the container shim creates and starts a sandbox/pause container that holds the Linux namespaces for future containers. **Container start**: Starting a container requires that the guest agent mounts the container's root filesystem into the UVM; the root filesystem comprises the container layers and a writeable scratch layer. In doing so, the container shim (i) attaches each container layer (in the OCI specification) to the UVM and (ii) creates and attaches to the UVM a writeable sandbox virtual hard drive. The container layer and sandbox devices are then mounted by the guest agent into the UVM as an overlay root filesystem. Finally, the guest agent creates a runtime bundle that contains the overlay filesystem path and configuration data (compiled using the OCI runtime specification.) The runtime bundle is passed to the runc instance, which subsequently starts the container. ### Threat Model We consider a strong adversary who controls the entire host system software, including the hypervisor and the host operating system along with all services running within it. However, we trust the CPU package, including the platform security processor (PSP) and the AMD SEV-SNP implementation, which provides hardware-based isolation of the guest's address space from the system software. We also trust the firmware running on the PSP and its measurements of the guest VM, including the guest agent. 
Such an adversary can: * tamper with the container's OCI runtime specification; * tamper with block devices storing the read-only container image layers and the writeable scratch layer; * tamper with container definitions, including the overlay filesystem (_i.e.,_ changing the order or injecting rogue layers), adding, altering, or removing environment variables, altering the user command, and the mount sources and destinations from the UVM; * add, delete, and make arbitrary modifications to network messages, _i.e.,_ fully control the network; * request execution of arbitrary commands in the UVM and in individual containers; * request debugging information from the UVM and running containers, such as access to I/O, the stack, or container properties. These capabilities provide the adversary with the ability to gain access to the address space of the guest operating system. **Out of Scope.** Anything not mentioned here, _e.g.,_ side-channel attacks, is outside the scope of our threat model. ### Security Guarantees Under the threat model presented in Section 3.2, we wish to provide strong confidentiality and integrity guarantees for the container and for customer data. The provided security guarantees are based on the following principles: **Hardware-based Isolation of the UVM.** The memory address space and disks of the VM must be protected from the host and other VMs by hardware-level isolation. Parma relies on the SEV-SNP hardware guarantee that the memory address space of the UVM cannot be accessed by host system software. **Integrity-protected Filesystems.** Any block device or blob storage is mounted in the UVM as an integrity-protected file system. The file system driver enforces integrity checking upon an access to the file system, ensuring that the host system software cannot tamper with the data and container images. In Parma, a container file system is expressed as an ordered sequence of layers, where each layer is mounted as a separate device and then assembled into an overlay filesystem [31]. First, Parma verifies, as each layer is mounted, that the dm-verity root hash [18] for the device matches a layer that is enumerated in the policy. Second, when the container shim requests the mounting of an overlay filesystem that assembles multiple layer devices, Parma verifies that the specific ordering of layers is explicitly laid out in the execution policy for one or more containers. **Encrypted Filesystems.** Any block device or blob storage that holds privacy-sensitive data is mounted as an encrypted filesystem. The filesystem driver decrypts the memory-mapped block upon an access to the filesystem. The decrypted block is stored in hardware-isolated memory space, ensuring that host system software cannot access the plaintext data. The writable scratch space of the container is mounted with dm-crypt [16] and dm-integrity [17], and this is enforced by the execution policy. Figure 2: **Container Flow**. The sequence diagram on the left shows the process that results in a VM-isolated container. The pentagons correspond to the workflow steps from the text: (i) pull the image, (ii) launch a pod, (iii) start a container. The circles in this figure outline multiple points of attack within this workflow: The container shim may pass a compromised (1) UVM image or (2) guest agent during UVM creation. The container manager can alter or fabricate malicious layer VHDs (3) and/or mount any combination of layers onto the UVM (4).
The container shim can pass any set of layers to use for creating a container file system (5), as well as any combination of environment variables or commands (6). A compromised host OS can tamper with local storage (7), attack the memory of the UVM (8), or manipulate remote communications (9). This list is not comprehensive. The encryption key for the writeable scratch space is ephemeral and is provisioned initially in hardware-protected memory and erased once the device is mounted. **UVM Measurement.** The UVM, its operating system and the guest agent are cryptographically measured during initialization by the TEE and this measurement can be requested over a secure channel at any time by user containers. The AMD SEV-SNP hardware performs the measurement and encodes it in the signed attestation report as discussed in Section 2.2.6. **Verifiable Execution Policy.** The user must be provided with a mechanism to verify that the active execution policy (see Section 3.4 below) in a container group is what they expect it to be. The execution policy is defined and measured independently by the user and it is then provided to the CaaS deployment system. The host measures the policy (_e.g.,_ using SHA-512) and places this measurement in the immutable _host data_ of the report as described in Section 2.2.6. The policy itself is passed to the UVM by the container shim, where it is measured again to ensure that its measurement matches the one encoded as _host data_ in the report. **Remote Attestation.** Remote verifiers (_i.e.,_ tenants, external services, attestation services) need to verify an attestation report so that they can establish trust in a secure communication channel with the container group running within the UVM. In particular, remote verifiers need to verify that the UVM has booted the expected operating system, the correct guest agent, and further that the guest agent is configured with the expected execution policy. In Parma, the UVM (including privileged containers) can request an attestation report using the secure channel established between the PSP and the UVM, as detailed in Section 2.2.6. The requester generates an ephemeral token (_e.g.,_ a TLS public key pair or a sealing/wrapping public key) which is presented as a runtime claim in the report; the token's cryptographic digest is encoded as _report data_ in the report. A remote verifier can then verify that (i) the report has been signed by a genuine AMD processor using a key rooted to AMD's root certificate authority, (ii) the _guest launch measurement_ and _host data_ match the expected VM measurement and the digest of the expected execution policy, and (iii) the _report data_ matches the hash digest of the runtime claim presented as additional evidence. Once the verification completes, the remote verifier that trusts the UVM (including the guest OS, guest agent and the execution policy) trusts that the UVM and the container group running within it will not reveal the private keys from which the public tokens have been generated, _e.g.,_ the TLS private key or sealing/wrapping private key. The remote verifier can utilize the runtime claim accordingly. For instance, * a TLS public key can be used for establishing a TLS connection with the attested container group.
As such, the remote verifier can trust there is no replay or man-in-the-middle attack; * a sealing public key can be used to seal (via encryption) a request or response intended only for the attested containers; Figure 3: **Attestation workflow. Here we present a typical attestation workflow. A container (key in circle) attempts to obtain and decrypt the user’s data for use by other containers in the group. The key has been previously provisioned into a key management service with a defined key release policy. The container within the UVM requests that the PSP issues an attestation report (1) including an RSA wrapping public key as a runtime claim. The report and additional attestation evidence are provided to the attestation service (2), which verifies that it the report is valid and then provides an attestation token that represents platform, init, and runtime claims. Finally, the attestation token is provided to the key management service (3) which returns the customer’s key to the container wrapped using a RSA public key as long as the token’s claims satisfy the key release policy statement.** * a wrapping public key can be used by a key management service to wrap and release encryption keys required by the VM's container group for decrypting encrypted remote blob storage. As such, the remote verifier can trust that only trustworthy and attested container groups can unwrap the encryption keys. Figure 3 illustrates this process. ### Execution Policy As discussed in our threat model, the container shim is not trusted as it could be under the control of an attacker. This implies that any action which the container shim requests the guest agent undertake inside the UVM is suspect (see Section 3.2 for a list of malicious host actions). Even if the current state of the container group is valid, there is no guarantee that the host will not compromise it in the future, and thus no way for the attestation report to be used as a gate on access to secure customer data. The attestation report on its own simply records the UVM OS, the guest agent, and the container runtime versions in use. It is not able to make any claims about the container group the host will subsequently orchestrate. For example, the host can start the user container group in a manner which is expected by an attestation service until such time as it acquires some desired secure information, and then load a series of containers which open the container group to a remote code execution attack. The attestation report, obtained during initialization, cannot protect against this. Even updating it via providing additional runtime data to the PSP (as described in Section 2.2.6) does not help, because the vulnerability is added by the host after the attestation report has been consumed by the external service. To address this vulnerability, we introduce the concept of an _execution policy_. Authored by the customer, it describes what actions the guest agent is allowed to take throughout the lifecycle of the container group. The guest agent is altered to consult this policy before taking any of the actions in Table 1, providing information to the policy that is used to make decisions. These actions each have a corresponding _enforcement point_ in the execution policy which will either allow or deny the action. In our implementation the policy is defined using the Rego policy language [34]. A sample enforcement point can be seen in Listing 1. 
```
default mount_device := {"allowed": false}

device_mounted(target) {
    data.metadata.devices[target]
}

mount_device := {"metadata": [addDevice], "allowed": true} {
    not device_mounted(input.target)
    some container in data.policy.containers
    some layer in container.layers
    input.deviceHash == layer

    addDevice := {
        "name": "devices",
        "action": "add",
        "key": input.target,
        "value": input.deviceHash
    }
}
```
Listing 1: **Sample enforcement point.** Here, as in our implementation, the policy is expressed in Rego [34]. \begin{table} \begin{tabular}{l l} \hline \hline Action & \multicolumn{1}{c}{Policy Information} \\ \hline \hline Mount a device & device hash, target path \\ Unmount a device & target path \\ Mount overlay & ID, path list, target path \\ Unmount overlay & target path \\ Create container & ID, command, environment, working directory, mounts \\ Execute process & ID, command, environment, working directory \\ Execute process (in UVM) & command, environment, working directory \\ Shutdown container & ID \\ Signal process & ID, signal, command \\ Mount host device & target path \\ Unmount host device & target path \\ Mount scratch & target path, encryption flag \\ Unmount scratch & target path \\ Get properties & — \\ Dump stacks & — \\ Logging (in the UVM) & — \\ Logging (containers) & — \\ \hline \hline \end{tabular} \end{table} Table 1: **Policy Actions**. These are the actions we propose to be under the control of the execution policy. The list is specific to our implementation, but given standardization around containerd it should be applicable to most scenarios. First we have actions which pertain to the creation of containers. By ensuring that any device mounted by the guest has a dm-verity root hash [18] that is listed in the policy, and that they are combined into overlay filesystems [31] in layer orders that coincide with specific containers, we first establish that the container file systems are correct. We can then start the container, further ensuring that the environment variables and start command comply with policy and that mounts from the UVM to the container are as expected (along with other container-specific properties). Other actions proceed in a similar manner, constraining the control which the container shim has over the guest agent. A novel feature of our implementation is the ability for a policy to manipulate its own metadata state (maintained by the guest agent). This provides an attested mechanism for the execution policy to build a representation of the state of the container group, allowing for more complex interactions. For example, in the rule shown in Listing 1, the enforcement point for mounting a device creates a metadata entry for the device which will be used to prevent other devices from being mounted to the same target path. The result of making this small change to the guest agent is that the state space of the container group is bounded by a state machine, in which transitions between states correspond to the actions described above. Each transition is executed atomically and comes with an enforcement point. **Induction.** The state machine starts as a system that is fully measured and attested, including the execution policy with all its enforcement points, with the root of trust being the PSP hardware (\(n=1\)). All possible transitions are described by the execution policy.
Regardless of which (\(n\)) transitions have been taken after that, each of the actions listed in Table 1 cannot break integrity or confidentiality without deliberate modification of the respective enforcement point, which would have had to happen before the initial measurement (\(n+1\)). Any sequence of such transitions therefore maintains integrity and confidentiality. Our enforcement points are carefully designed to maintain these properties. Note that confidentiality is _modulo_ acceptance of the UVM and the execution policy. That is, the end-user must verify that the attestation report they receive from Parma is bound to a UVM and an execution policy that uses the end-user's data in a manner they accept. ## 4 Evaluation We used benchmarking tools to evaluate several typical containerized workloads for reductions in computation throughput, network throughput, and database transaction rates to ensure that Parma does not introduce significant computational overhead. In all cases we demonstrate that Parma provides confidentiality for containerized workloads with minimal costs to performance (typically less than 1% additional overhead over running in an enclave). Each benchmarking experiment is conducted using two machines: (1) a DELL PowerEdge R7515 with an AMD EPYC 7543P 32-Core Processor and 128GB of memory for hosting the container runtime and (2) a benchmarking client (to avoid impact of any benchmarking software upon the evaluation) with the same configuration connected to (1) on the same subnet via 10GBit Ethernet. (1) is running Windows Server 2022 Datacenter (22H2) and an offline version of the Azure Container Instances CPLAT (_i.e.,_ containerd, cri, and hcsshim). The UVM runs a patched version of 5.15 Linux which includes AMD and Microsoft patches to provide AMD SEV-SNP enlightenment. (2) is running Ubuntu 20.04 with Linux kernel 5.15. ### nginx Web services are a common use case for containerization, and so we benchmark the popular nginx webserver using the wrk2 [41] benchmarking tool. Each test is run for 60 seconds on 10 threads, simulating 100 concurrent users making 200 requests per second (for a total of 12000 requests per test). We repeat the tests 20 times for each of three configurations: **Base**: A baseline nginx container running outside the SEV-SNP enclave, **SEV-SNP**: The same container running within the SEV-SNP enclave, **Parma**: The same container again within the enclave and with an attested execution policy, and measure the latency. The results are shown in Figure 4. The median curves are computed over all experiments per configuration, and the histograms are composed of all latency samples which were gathered. We observe an increase in latency by the introduction of SEV-SNP, as expected, and also a very minor increase in latency when adding the execution policy. However, it is worth noting that these effects are only reliably observed in aggregate, _i.e.,_ all median curves are within the first quartiles of each other. ### redis The in-memory key/value database redis provides another useful benchmark for containerized compute. It supports a diverse set of data structures, such as hashtables, sets, and lists. We perform our benchmarking using the provided redis-benchmark with 25 parallel clients and a subset of tests, the results of which can be seen in Table 2. Looking at the geometric mean over all actions, we see a performance overhead of 18% added by operating within the AMD SEV-SNP enclave, and a further 1% when using Parma. 
The performance overhead is attributed to increased TLB pressure arising from large working sets which exhibit poor temporal reuse in TLBs and trigger page table walks. In SEV-SNP-enabled systems, page table walks incur additional metadata checks (in the Reverse Map Table) to ensure that the page is indeed owned by the VM. ### Spec2017 We also evaluate Parma by measuring the computation performance overhead using the SPEC2017 intspeed benchmarks [36]. The benchmark programs are compiled and run on the bare metal hardware. When containerized, they are provided with 32 cores and 32 GB of memory. As can be seen in Table 3, AMD SEV-SNP adds a performance overhead of 13% on average, and Parma adds less than 1% on top of this. By looking at the individual benchmarks in Figure 5, SEV-SNP introduces a wide range (1-38%) of performance overhead in SPECint benchmarks. The overheads are down to (i) the increased TLB pressure (further exacerbated in SEV-SNP setups, as discussed for the redis benchmark): 631.deepsjeng, the most memory-intensive benchmark in SPECint, introduces the second-highest overhead; and (ii) the additional overhead for accessing the encrypted scratch space: 620.omnetpp, the most I/O-intensive benchmark (due to large test inputs), introduces the highest overhead (38%). ### NVidia Triton Inference Server Finally, we evaluate Parma by running a machine learning (ML) inference workload based on NVidia's Triton Inference Server [39]. Models (trained offline) and their parameters are used by the server to serve requests via a REST API. The confidential ML inference server is deployed via a container group that comprises two containers: an (unmodified) Triton inference container, and a sidecar container that mounts an encrypted remote blob (holding the ML model) using dm-crypt and dm-integrity. (The sidecar container also implements the attestation workflow described in Figure 3 to release the encryption key.) The filesystem (and the contained ML model) is made available to the Triton inference container. We evaluate the inference servers using NVidia's perf-analyzer system, allowing us to measure the overhead introduced by SEV-SNP and Parma, as shown in Figure 6. For each of the three configurations (Base, SNP, and Parma) we run four different experiments with 1 to 4 concurrent clients making continuous inference requests. In Figure 6 we report the median throughput for these experiments over 3 trials and observe a performance overhead of 26% when running in the AMD SEV-SNP enclave. As before, the additional overhead from Parma is 1%. The overheads share the same root cause in the increased TLB pressure as was previously described for the redis benchmark. ## 5 Related Work A number of TEE container runtimes have been proposed to enable running applications or containers on Intel SGX [33, 35, 10, 40, 7]. A common feature across these proposals is the use of a library OS running in-enclave. By design, a library OS provides a subset of OS features, and so can only run some containers without modification.
In addition, the application's network interface, the interface between the library OS, and the actual out-of-enclave OS are security boundaries. This has a performance impact, as the library OS must provide a secure communication mechanism to the out-of-enclave OS and validate all data that crosses the boundary. In contrast, Parma provides an actual OS in-enclave, can run unmodified containers, and only the application's network interface is a security boundary. Additionally, Parma provides the inductive proof of all future states via the attested execution policy. Hecate [21] uses AMD VM privilege levels (VMPLs) to run a nested hypervisor and a guest OS within the same AMD SEV-SNP isolated VM. This allows an unmodified guest OS to be run as a confidential VM, modulo kernel code integrity. However, Hecate does not address attestation, file system integrity and confidentiality, or execution policy. In addition, Hecate allows guest OS administrators full access to the VM. Brasser _et al._ have concurrently proposed TCX, a collection of trusted container extensions for running containers securely within hardware-isolated utility VMs on AMD SEV processors [12]. TCX relies (a) on a root VM (akin to the SGX quoting enclave) for bootstrapping the utility VM, thus increasing the TCB, and (b) on a secure channel established between the container owner and the utility VM for preventing untrusted entities from submitting container commands to the VM. The latter design choice does not support the CaaS deployment model, wherein the container owner and cloud service provider personas are different, thus requiring that the CSP submit container commands to the utility VM. In addition, the container owner is in the TCB. The end-user of the confidential container (_e.g.,_ a bank customer, a patient submitting medical data to their doctor) has no attestation over what container commands have been or will be run. SEVGuard explores running user-mode applications on guest SEV-protected VMs without a guest-side kernel component [32]. SEVGuard relies on an existing kernel virtualization API for interaction with host kernel features and provides Figure 5: **SPEC2017 results**. This plot shows the per-benchmark runtimes (in seconds) for SPEC2017 broken out by Base (container run without SEV-SNP), SNP (with SEV-SNP), and Parma (SEV-SNP + execution policy). Lower is better. 631.deepsjeng, the most memory-intensive benchmark in SPECint introduces the second-highest overhead. 620.omnetpp, the most IO-intensive benchmark (due to large test inputs) introduces the highest overhead. Discussions of these outliers can be found in the text. Figure 6: **Nvidia Triton Results**. This plot shows the inference rate for the Base (container run without SEV-SNP), SNP (with SEV-SNP), and Parma (SEV-SNP + execution policy) configurations. The median values gathered over three experiments are shown for each number of concurrent clients. support for calling shared libraries on the host. While SEV-Guard offers a low TCB, it is vulnerable to attacks on the kernel virtualization API and shared libraries and does not provide secure persistent storage. ## 6 Future Work While Parma provides a solid foundation for confidential containers, there are some limitations to this technique which invite future investigation. ### Trusted Computing Base Parma reduces the TCB by removing the need to trust the host, the hypervisor, and the CSP, but it could be smaller. 
In particular, by trusting the UVM we necessarily import the UVM OS (_e.g.,_ Linux) into the TCB, as well as the standard libraries needed to implement other elements like the guest agent. However, much of that code is entirely vestigial in the context of providing a container runtime. One potential line of inquiry would be to explore ways of reducing this aspect of the UVM to the smallest possible kernel and the barest necessities needed by the guest agent, runc, and other tools, to further reduce the attack surface.

### Policy Flexibility

One downside of having an execution policy which is measured during initialization and then subsequently used for attestation-based security operations is that the release policies will necessarily be tied to a fixed version of the execution policy. If container images need to change, _e.g.,_ due to necessary security updates upon discovery of a vulnerability, this requires not only an update to the execution policy but also an update to all release policies. In many scenarios this is a desirable property, but users may want the option to loosen how the policy defines which actions it allows. A promising area of future research would be to find a manner in which to provide this flexibility without sacrificing the post-verifiable inductive proof over the state of the container group which Parma provides.

### Writeable Filesystems Freshness

Parma relies on dm-integrity for block-level integrity of writeable filesystems. While dm-integrity provides integrity protection based on authentication tags, the latter are vulnerable to replay attacks and do not provide any freshness guarantees. Freshness could be provided using updatable integrity trees (_e.g.,_ Merkle trees), but at a large latency and bandwidth overhead: when a data block (_i.e.,_ a leaf block in the tree) is updated, a chain of updates must be applied to all intermediate blocks lying on the root-to-leaf path. A promising area of future research would be to explore security-performance trade-offs for writeable filesystems with freshness guarantees.

## 7 Conclusion

In this paper we have introduced Parma, a novel method for providing confidential computation for containerized workflows via the introduction of an attested execution policy. Further, we have demonstrated that Parma adds less than 1% additional performance overhead beyond that added by the underlying TEE (_i.e.,_ AMD SEV-SNP). We also outline how the security properties of the system provide an inductive proof over the future state of the container group rooted in the attestation report. This provides the ability (via remote attestation) for external third-parties to securely communicate with containers, enabling a wide range of containerized workflows which require confidential access to secure data.

## Availability

The open source implementation of Parma is available as part of the hcsshim system ([https://github.com/microsoft/hcsshim/tree/main/pkg/securitypolicy](https://github.com/microsoft/hcsshim/tree/main/pkg/securitypolicy)) and is the technology which enables Confidential Azure Container Instances.

## Acknowledgements

Thanks to Istvan Haller for help with the SPEC2017 benchmark and helpful conversations.
2307.00479
Domain Transfer Through Image-to-Image Translation for Uncertainty-Aware Prostate Cancer Classification
Prostate Cancer (PCa) is a prevalent disease among men, and multi-parametric MRIs offer a non-invasive method for its detection. While MRI-based deep learning solutions have shown promise in supporting PCa diagnosis, acquiring sufficient training data, particularly in local clinics remains challenging. One potential solution is to take advantage of publicly available datasets to pre-train deep models and fine-tune them on the local data, but multi-source MRIs can pose challenges due to cross-domain distribution differences. These limitations hinder the adoption of explainable and reliable deep-learning solutions in local clinics for PCa diagnosis. In this work, we present a novel approach for unpaired image-to-image translation of prostate multi-parametric MRIs and an uncertainty-aware training approach for classifying clinically significant PCa, to be applied in data-constrained settings such as local and small clinics. Our approach involves a novel pipeline for translating unpaired 3.0T multi-parametric prostate MRIs to 1.5T, thereby augmenting the available training data. Additionally, we introduce an evidential deep learning approach to estimate model uncertainty and employ dataset filtering techniques during training. Furthermore, we propose a simple, yet efficient Evidential Focal Loss, combining focal loss with evidential uncertainty, to train our model effectively. Our experiments demonstrate that the proposed method significantly improves the Area Under ROC Curve (AUC) by over 20% compared to the previous work. Our code is available at https://github.com/med-i-lab/DT_UE_PCa
Meng Zhou, Amoon Jamzad, Jason Izard, Alexandre Menard, Robert Siemens, Parvin Mousavi
2023-07-02T05:26:54Z
http://arxiv.org/abs/2307.00479v2
Domain Transfer Through Image-to-Image Translation for Uncertainty-Aware Prostate Cancer Classification ###### Abstract Prostate Cancer (PCa) is often diagnosed using High-resolution 3.0 Tesla(T) MRI, which has been widely established in clinics. However, there are still many medical centers that use 1.5T MRI units in the actual diagnostic process of PCa. In the past few years, deep learning-based models have been proven to be efficient on the PCa classification task and can be successfully used to support radiologists during the diagnostic process. However, training such models often requires a vast amount of data, and sometimes it is unobtainable in practice. Additionally, multi-source MRIs can pose challenges due to cross-domain distribution differences. In this paper, we have presented a novel approach for unpaired image-to-image translation of prostate mp-MRI for classifying clinically significant PCa, to be applied in data-constrained settings. First, we introduce domain transfer, a novel pipeline to translate unpaired 3.0T multi-parametric prostate MRIs to 1.5T, to increase the number of training data. Second, we estimate the uncertainty of our models through an evidential deep learning approach; and leverage the dataset filtering technique during the training process. Furthermore, we introduce a simple, yet efficient _Evidential Focal Loss_ that incorporates the focal loss with evidential uncertainty to train our model. Our experiments demonstrate that the proposed method significantly improves the Area Under ROC Curve (AUC) by over 20% compared to the previous work (98.4% vs. 76.2%). We envision that providing prediction uncertainty to radiologists may help them focus more on uncertain cases and thus expedite the diagnostic process effectively. Our code is available at [https://github.com/med-i-lab/DT_UE_PCa](https://github.com/med-i-lab/DT_UE_PCa). Deep Learning, Image Translation, Uncertainty Estimation, Prostate Cancer ## 1 Introduction Prostate Cancer (PCa) is a prevalent form of cancer among men (Reda et al., 2018), and the clinically significant PCa is defined by the Gleason score \(>6\) or the histopathology ISUP grade \(\geq 2\)(Smith et al., 2004; Arif et al., 2020). The current PCa diagnosis procedure involves a combination of the prostate-specific antigen test and the histopathology analysis of the Transrectal Ultrasound-guided biopsy (TRUS) taken from 10-12 regions on the prostate gland (Fletcher, 2019; Grebenisan et al., 2021). However, the histopathology analysis on TRUS can miss up to 20% of clinically significant PCa due to the limited number of biopsy samples (Grebenisan et al., 2021; Reda et al., 2018). Multi-parametric Magnetic Resonance Imaging (mp-MRI) has emerged as an effective alternative to TRUS for the early detection of PCa. mp-MRI uses a combination of anatomical and functional sequences of MRI that can further highlight the differences between normal and abnormal (cancer) cells. 3.0T MRI generally has higher image quality and spatial resolution (Ladd et al., 2018) than 1.5T MRI, but the latter is widely used in local, small clinical centers due to its lower price (Mushlin et al., 1997). The evaluation and reporting guideline of prostate mp-MRI was first introduced in the Prostate Imaging Reporting and Data System (PI-RADS) (Barentsz et al., 2012; Weinreb et al., 2016; Barentsz et al., 2016). The guideline provides a comprehensive scoring schema for suspicious prostate lesions and mp-MRI sequences. 
An extensive Prostate MRI imaging study (PROMIS) (Bosaily et al., 2015) reported that targeted biopsy using mp-MRI has higher sensitivity and negative-predictive value (NPV) but lower specificity compared to TRUS biopsy (Bosaily et al., 2015; Stabile et al., 2020; Ahmed et al., 2017). The study also showed that 27% of the patients did not need to undergo biopsy, had mp-MRI been used for screening. Although PROMIS provides strong practical implications for mp-MRI in PCa diagnosis, the low specificity indicates that mp-MRI can be plausibly improved by advanced analyses. In recent years, deep learning methods have emerged as a powerful tool for image classification tasks, and have provided promising performance in detecting and segmenting PCa on multi-parametric Prostate MRIs (Saha et al., 2021; Le et al., 2017; Yoo et al., 2019; Iqbal et al., 2021; Pellicer-Valero et al., 2022). A more recent grand challenge, ProstateX (Armato et al., 2018), has further shown the ability of deep learning approaches in detecting clinically significant PCa on 3.0T mp-MRI data. Several groups have developed Convolutional Neural Network (CNN)-based models that achieve high performance for PCa classification (Litjens et al., 2014; Liu et al., 2017; Mehrtash et al., 2017; Armato et al., 2018; Grebenisan et al., 2020, 2021). These methods have a great potential for clinical translations by highlighting abnormal lesions for radiologists during the PCa diagnostic process. While deep learning has shown promising results in detecting PCa on mp-MRI, there are several challenges in deploying deep models in local clinics with limited data and patient throughput. Training deep models typically requires a large amount of data, making them difficult to deploy in small clinics. Moreover, MRI data may be acquired under different magnetic strengths, vendors, and protocols, which can affect the performance of deep models. For example, prostate MRI typically with high magnetic strength of 3.0T (Ullrich et al., 2017) is preferred because it produces high-resolution images and provides detailed information. In contrast, the low magnetic strength MRI (1.5T) may result in fuzzy boundaries (Ladd et al., 2018) and not be able to offer detailed information. The performance of deep models will be significantly affected if there is a difference between training and test distribution. There are efforts in the literature on solving related problems in federated learning (Li et al., 2020; Adnan et al., 2022) which is not a focus on our study. Furthermore, classical deep models are designed to predict a label when inferring data from a test set, regardless of whether or not the test image is in or out of the training set distribution. These models are not able to identify data samples that belong to unrelated distributions (Sensoy et al., 2018), or indicate how confident they are in their prediction. These limitations make models hard to interpret and hence, there are concerns about the reliability of such models. Hence, reusing and deploying models for local PCa detection is challenging. It is essential to address the above limitations and drawbacks when deploying deep learning models to real clinical routines. Thus, two main questions arise in this context: 1. For small local clinical centers, can they take advantage of the large high-resolution 3.0T public MRI data and enhance the classification performance on their limited low-resolution local 1.5T MRI data? 2. 
When deploying models in clinical centers, could we offer additional information regarding the confidence of the model's predictions, in addition to the final result, to enhance the reliability of the models? In this work, we aim to answer the two questions above. We propose a novel 2-stage learning framework for clinically significant PCa classification using multi-parametric, multi-center MRI data that can simultaneously provide an estimate of the predictive confidence and the corresponding predicted label to improve classification performance. In the first stage, we introduce a data preprocessing pipeline that translates prostate mp-MRI data from 3.0T to 1.5T via a Generative Adversarial Network (GAN) approach in order to increase the number of training samples. This step addresses the challenge of limited data in local clinics with low patient throughput (see Section 4.1). In the second stage, we propose an uncertainty-aware PCa classification approach. Specifically, we design three different model architectures and leverage the _co-teaching_ framework (Han et al., 2018) to address the noisy label problem (see Section 4.2.1). During the training phase, we incorporate dataset filtering using _evidential uncertainty estimation_(Sensoy et al., 2018) to eliminate the training data samples with high prediction uncertainty to improve the robustness of our models. Finally, we extend the work of Sensoy et al. (2018) to design a novel _Evidential Focal Loss_ to optimize our classification models during training (see Section 4.2.2). Experiments demonstrate the effectiveness of the proposed framework in significantly improving the classification performance compared to previous work. **Contributions:** In summary, our work makes three main contributions: 1. We develop a GAN-based framework to translate unpaired prostate mp-MRIs from 3.0T to 1.5T, which we termed as domain transfer. This framework would align different data distributions and increase the number of training data for deep classification models. 2. We incorporate the Theory of Evidence (Yager and Liu, 2008) into our model, enabling it to identify and filter out highly uncertain training data and making the model more robust. We propose a novel loss function termed _Evidential Focal Loss_ that combines the original Focal Loss (Lin et al., 2017) and the evidential uncertainty (Sensoy et al., 2018) for the binary PCa classification task. 3. Using the uncertainty and filtering on the training set, our results outperform the state-of-the-art and improve the interpretability of model predictions. By providing confidence estimates for the predictions, radiologists can make informed decisions during the PCa diagnostic process and effectively expedite the process. ## 2 Related Work ### Domain Adaptation Machine learning algorithms usually perform well when training and test data share the same distribution and feature space. However, in real-world applications, the distribution of test data often shifts, leading to biased or inaccurate predictions. In addition, it is time-consuming or infeasible to acquire new training data and fully repeat training steps. Domain Adaptation (DA) is an approach that addresses this issue by mitigating the dataset bias or domain shift problem caused by different distributions. 
There has been a lot of work on this topic in the past few years, which can be grouped into the following three general tasks (Cui et al., 2020): (1) unsupervised DA tasks (Ganin and Lempitsky, 2015; Ganin et al., 2016; Long et al., 2016; Saito et al., 2018; Long et al., 2014) focus on addressing the domain shift problem without requiring labeled target domain data; (2) semi-supervised DA tasks(Yao et al., 2015; Saito et al., 2019; Li et al., 2021) aim to explore the partially labeled target domain data to further enhance the performance of domain adaptation algorithms; and (3) multi-source DA tasks (Hoffman et al., 2012; Xu et al., 2018; Peng et al., 2019) deal with scenarios where multiple source domains are available for adaptation. DA methods are often used to extract domain-invariant features for transferring knowledge between source and target domains. These methods incorporate various learning objectives with deep neural networks Wang and Deng (2018) for distribution matching: (1). **Discrepancy Measurement-based** methods aim to align feature distributions between two domains by fine-tuning deep models, e.g., using statistic criterion like Maximum Mean Discrepancy (Long et al., 2015; Yan et al., 2017; Kumagai and Iwata, 2019), and class criterion (Tzeng et al., 2015; Hinton et al., 2015; Motiian et al., 2017). Some of these methods often require large labeled target domain data to diminish the domain shift problem, which is sometimes infeasible to get such medical data in the real-life scenario. (2). **Adversarial-based** methods aim to confuse domain discriminators from Generative Adversarial Networks (GANs) to enhance the invariant feature extraction (Ganin et al., 2016; Bousmalis et al., 2017; Hong et al., 2018). One common scenario involves utilizing noise vectors, either with or without source images, to generate realistic target images while preserving the source features. However, training GANs are hard and sometimes results in generator degradation, e.g., mode collapse (Karras et al., 2020). (3). **Reconstruction-based** methods, in addition to the general GANs approach from the above category, aim to reconstruct source-like images as an auxiliary task to preserve domain invariant features through an adversarial reconstruction paradigm (Hoffman et al., 2018; Zhu et al., 2017). These methods usually have superior performance over the conventional GANs approach because they have an explicit reconstruction task to supervise the entire pipeline and make the training process more stable. CycleGAN (Zhu et al., 2017) is one of the state-of-the-art unsupervised adversarial reconstruction-based methods that is widely used for unpaired image-to-image translation. Its cycle consistency loss ensures the pixel-level similarity between two images through a reconstruction task, i.e., the source image \(s\) is translated to the target domain \(\hat{s}\) and then translated back \(\tilde{s}\), where it should be identical to the original image (\(s=\tilde{s}\)). However, one drawback of cycle consistency loss is the harsh constraint on the pixel-level, which will degrade the performance of GANs in some tasks (Zhao et al., 2020). To address this limitation, Zhao et al. (2020) purpose the adversarial consistency loss GAN (ACL-GAN) that replaces the pixel-level similarity with the distance between distributions, i.e., instead of forcing \(s=\tilde{s}\), we let the distribution of \(\tilde{s}\) to be similar to the distribution of \(s\). 
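To make the contrast concrete, the following minimal sketch (illustrative code, not taken from any of the cited implementations; `G_src2tgt`, `G_tgt2src`, and `D_hat` are assumed to be standard PyTorch image-to-image networks) shows the pixel-level cycle-consistency penalty described above next to a distribution-level relaxation in the spirit of ACL-GAN.

```python
# Illustrative sketch of the two consistency objectives discussed above.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G_src2tgt, G_tgt2src, s):
    """CycleGAN-style pixel-level constraint: || G_tgt2src(G_src2tgt(s)) - s ||_1."""
    s_hat = G_src2tgt(s)        # source -> target translation
    s_tilde = G_tgt2src(s_hat)  # target -> source reconstruction
    return F.l1_loss(s_tilde, s)

def acl_generator_term(D_hat, s_tilde):
    """ACL-style relaxation (schematic): instead of forcing s_tilde == s pixel-wise,
    a consistency discriminator only asks that s_tilde look like it was drawn from
    the source distribution (least-squares GAN objective, generator side)."""
    return ((D_hat(s_tilde) - 1.0) ** 2).mean()
```

The first term is the "harsh" pixel-level constraint of CycleGAN; the second keeps only a distribution-level constraint, which is the relaxation that motivates building on ACL-GAN.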
The ACL-GAN can retain important features from the source images and overcomes the disadvantage of the cycle-consistency constraint. Therefore, we adapt the ACL-GAN model and build our framework based on it. In medical imaging, domain shift problems usually fall into two variations: subject-related variation (age, gender, etc.), and acquisition-related variation (MRI vendor, field strength, imaging protocol, etc.) (Kouw et al., 2017). To solve the problem, one intuitive approach is to fine-tune a model that is pre-trained on the source domain with the new data from the target domain. Khan et al. (2019) propose to use the pre-trained VGG model on the ImageNet dataset (Deng et al., 2009) to learn robust high-level features of natural images, and then fine-tune it on the labeled MR images for the Alzheimer's Disease (AD) classification task to achieve state-of-the-art performance. Similarly, Ghafoorian et al. (2017) study the impact of the fine-tuning techniques on the brain lesion segmentation task, demonstrating that fine-tuning with only a small number of target domain training samples can outperform models trained from scratch. Another approach is to use domain adaptation as an intermediate step to reduce variance in image acquisition parameters from both domains and then use it for downstream tasks. Researchers have attempted to address the problem of acquisition variation in MRI data for several years. Kouw et al. (2017) propose a feature-level representation learning method to either extract acquisition-invariant features or remove acquisition-variant features from paired 1.5T and 3.0T brain MRIs. The learned features are then used for a downstream classification task. However, obtaining paired 1.5T and 3.0T MRI data in real-life scenarios is impractical. Another way to align acquisition-invariant features is to synthesize images from different types of acquisition parameters using GAN-based adversarial reconstruction methods. GANs have been applied to perform cross-modality image translation between different medical images or generate synthetic images from random noise. The objective of such translation tasks is to retain the underlying structure while changing the appearance of the image (Armanious et al., 2020). Researchers have attempted to estimate images in the target modality from the source modality, such as MRI-CT translation (Hiasa et al., 2018; Nie et al., 2017; Oulbacha and Kadoury, 2020; Armanious et al., 2019), and X-ray to CT translation (Ying et al., 2019; Ge et al., 2022). Other areas that have been explored include intra-modality translation, such as 3.0T-7.0T MRI translation (Nie et al., 2018), T1/T2-FLAIR translation (Hu, 2021; Uzunova et al., 2020) and pure data augmentation by generating synthetic images from random noise vectors (Radford et al., 2015; Frid-Adar et al., 2018; Huang and Jafari, 2021; Kwon et al., 2019). However, most of the works do not consider the real clinical practicality, for example, for 3.0T-7.0T MRI translation in Nie et al. (2018), the training data is paired, which is not feasible in real clinical settings. Generating synthetic images from noise does not take advantage of the publicly available data and ignores _a-priori_ information. The current limitations provide great potential for unpaired image translation for medical images, which we employ in this work. 
### Deep Learning for PCa Classification The use of 3D-CNN models has gained widespread popularity for classifying PCa based on volumetric image data due to their excellent performance. Mehrtash et al. (2017) propose a feature fusion 3D-CNN to classify clinically significant PCa using mp-MRI data. They use ADC maps, DWI, and K\({}^{trans}\) 3.0T MR data to enable the model to learn multi-modal information. Inspired by the VGG architecture (Simonyan and Zisserman, 2014), the model has three VGG-like feature extractors for each image modality, followed by the concatenation between outputs of each extractor and a vector represents the zonal information of the suspicious region. On the test set, the proposed model achieves the area under the receiver operating characteristic (AUC) curve of 0.80 on 140 unseen patients. Liu et al. (2017) propose a similar VGG-like 3D-CNN architecture for the same PCa classification task. Different from Mehrtash et al. (2017), they only have one model for feature extraction. To obtain the multi-modal information, they stack three images from each of the ADC maps, DWI, K\({}^{trans}\) into one 3-channel image as the input. The model achieves the AUC of 0.84 on the test set. In Yoo et al. (2019), a probabilistic approach using mp-MRI data is employed for PCa classification. The authors develop an automated pipeline for the classification of clinically significant PCa using 3.0T DWI images from 427 patients. The pipeline consists of three parts: classification of each DWI slice using the pre-activated ResNet model (He et al., 2016), extraction and selection of first-order statistics from the CNN outputs, and final class label prediction using a random forest classifier. On the test set, the model achieves an AUC of 0.87. While the aforementioned studies may yield favorable AUCs, the reproducibility of the model might be challenging clinics with limited patient (data) throughput. Recently, Grebenisan et al. (2021, 2020) address the data-hungry problem by introducing a disentangled representation learning approach (SDNet) to synthesize public 3.0T MRI images into 1.5T MRI images to increase the training data size for centres with limited 1.5T data. Their approach aims to separate the anatomy- and modality-specific features present in images, subsequently merging the 1.5T modality features with the 3.0T anatomical features to generate MRI images resembling those acquired at 1.5T. Finally, a simple 3D-CNN classifier is used for the binary classification of clinically significant PCa. The model outperforms the state-of-the-art performance in PCa classification through domain alignment between different data sources. Although current methods for PCa classification can achieve good performance, they do not provide a confidence score for their prediction, making them less interpretable in clinical practice. ### Uncertainty Estimation Recent studies in medical imaging have highlighted the detrimental impact of label noise on the performance of modern deep learning models (Karimi et al., 2020). Conventional regularization techniques such as dropout, batch normalization, weight decay, etc. can not properly address the problem (Arpit et al., 2017; Zhang et al., 2021). 
Methods proposed to mitigate such problem can be summarized as those that use (Song et al., 2022): (1) Robust loss functions and loss adjustments (Van Rooyen et al., 2015; Charoenphakdee et al., 2019; Zhang and Sabuncu, 2018) aiming to stabilize the model performance when optimizing its parameters; (2) Sample selection (Jiang et al., 2018; Malach and Shalev-Shwartz, 2017; Wang et al., 2020) aiming to select a subset of "clean" data from a batch of samples to compute the loss; and (3) Robust architectures (Han et al., 2018; Yu et al., 2019) aiming to learn the same data by training multiple models with different initialization assess output stability. While these methods inherently handle the noisy label problem, they can not provide explicit uncertainty estimation in terms of confidence in their output. Moreover, the capability of deep learning models to effectively identify irrelevant samples is still limited. For instance, when a model trained on prostate MRIs is presented with a CT scan of the prostate at the time of inference, it is unclear whether the model can provide meaningful predictions or simply indicate a lack of in-domain knowledge and perform a human-in-the-loop analysis instead. In recent years, research has been conducted on uncertainty estimation for deep learning models. Gal and Ghahramani (2016, 2015) develop the _dropout neural networks_ framework to represent the prediction uncertainty of deep learning models, where the dropout layers in the model are formed by Bernoulli distributed random variables. During the test phase, the predictive uncertainty can be determined by enabling dropout layers and averaging the results of multiple runs. An alternative approach to modeling uncertainty involves the use of _evidential neural networks_(Sensoy et al., 2018), which formulate uncertainty by fitting a Dirichlet distribution - acting as the conjugate prior of the categorical distribution -- to the class probabilities acquired from neural networks (Sensoy et al., 2018). This method considers model predictions as multinomial subjective opinions (Josang, 2016) or beliefs (Dempster, 1968), which can be further modeled explicitly using subjective logic. The "evidential" approach emphasizes the ability of the model to deliver certain predictions and exhibits superiority compared to the dropout approach (Gal and Ghahramani, 2016). In clinical practice, uncertainty estimation is crucial. By integrating uncertainty information into prediction outcomes, misclassification rates can be significantly reduced. For instance, in radiograph classification task (Ghesu et al., 2019), the authors employ the Dempster-Shafer Theory of Evidence (Dempster, 1968) and the principles of subjective logic (Josang, 2016) to develop a framework that jointly estimates per-class probabilities and provides predictive uncertainty. Later, this approach has been extended to abdominal ultrasound and brain MR images (Ghesu et al., 2021). In the context of breast cancer classification, Tardy et al. (2019) apply the evidential neural networks approach (Sensoy et al., 2018) to effectively diagnose breast cancer. A similar approach is used for the same task by Yuan et al. (2020) through the evidence adjustment technique, which focuses on the difference in the risks of uncertain samples from different classes. Consequently, we build upon the work from Sensoy et al. (2018) by adding uncertainty estimation to improve the robustness of the model and the interpretability of predictions. 
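As a point of reference for the dropout-based estimator mentioned above, the following minimal sketch (assuming a generic PyTorch classifier; it is not code from the cited works) keeps the dropout layers active at test time and uses the spread of several stochastic forward passes as an uncertainty score. Our own method instead relies on the evidential formulation described in Section 4.2.2.

```python
# Minimal Monte-Carlo dropout sketch: dropout stays enabled at inference, and
# repeated stochastic forward passes yield a mean prediction plus a simple
# variance-based uncertainty score.
import torch
import torch.nn as nn

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    model.eval()
    for m in model.modules():            # re-enable only the dropout layers
        if isinstance(m, nn.Dropout):
            m.train()
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_prob = probs.mean(dim=0)                  # averaged predictive distribution
    uncertainty = probs.var(dim=0).sum(dim=-1)     # spread across the runs
    return mean_prob, uncertainty
```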
## 3 Materials ### Data In this work, we use both large publicly available ProstateX data and small private local clinical data. A visualization of sample images from both datasets is presented in Figure 1. **ProstateX Grand Challenge Data (3.0T)**. The 3.0T data is provided by the International Society of Optics and Photonics in the "ProstateX" challenge (Litjens et al., 2017). The dataset contains T2-weighted (T2), maximum b-value diffusion, diffusion-weighted imaging (DWI) with apparent diffusion coefficient (ADC) maps, and \(K^{trans}\) images of 346 patients undergoing prostate biopsies. T2 images show the anatomical structure of the prostate, and both the ADC maps and K\({}^{trans}\) could further highlight the differences between normal and abnormal (cancer) cells in the MRI scans (Kasivisvanathan et al., 2018; Kasson et al., 2018). We only use 204 of the total 346 patients in this work since these are reserved as training data, and hence they are provided with the spatial location of the suspicious finding, and a binary label indicating whether or not there is cancer. The remaining 142 patients are reserved as the test set and no labels are provided, hence, we exclude those from our work. **Kingston Health Science Center Data (1.5T)**. The local 1.5T data is obtained from the Kingston Health Science Center (KHSC), which contains 104 patients with the corresponding biopsy-confirmed cancer and the Gleason Score. For the local data, only T2, ADC, and b-value images are available. All patients MRI have the spatial location of the suspicious finding(s), the Gleason Score, and the binary label indicating whether it is a cancer lesion or not. Since all patients in both datasets have complete T2 and ADC data, our focus in this work is solely on these two types of images. Each MRI data in our study is associated with a single patient. Both datasets are processed similarly unless stated. ### Pre-processing T2 and ADC sequences from both datasets are \(160\times 160\times C\), where \(C\) is the total number of slices in the MRI. We resample all 3D data to have the same voxel spacing. To reduce aliasing artifacts, the most common voxel spacing (\(0.5\times 0.5\times 3\ mm^{3}\)) is used across all data, and the consine-windowed interpolation is utilized during sampling. We normalize pixel intensities to \([-1,1]\) for all data. For the translation purpose from 3.0T to 1.5T, we further resample all 3D data to \(256\times 256\times C\) and split into \(C\) 2D gray-scale slices. **Augmentation:** For each patient, the MRI volume undergoes rotation ranging from 0 to 100 degrees in 5-degree increments, hence expanding the data size 20-fold. Figure 1: Visualization of sample data. 1a and 1b are the 1.5T T2 and ADC images from KHSC, respectively. Similarly, 1c and 1d are the 3.0T T2 and ADC images from the “ProstateX” Challenge, respectively. **Cropped Patches:** To reduce the computational cost, cropped patches of the MRI volume were employed. The process involves identifying the suspicious slice (\(i_{s}\)) based on the provided spatial location. Recognizing that PCa lesions can span multiple slices, two neighboring slices (\(i_{s-1}\) and \(i_{s+1}\)) are selected as well and cropped around the biopsy location to generate a patch of size \(64\times 64\times 3\). ## 4 Methods Figure 2 summarizes an overview of our proposed approach. The domain transfer framework aims to reduce the distribution-level discrepancy between two prostate MRI datasets. 
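A rough sketch of the preprocessing steps described in Section 3.2 above is given below. It is not the released pipeline: the library choices (NumPy/SciPy), interpolation settings, and function names are ours, and the biopsy coordinates are assumed to already be expressed as voxel indices.

```python
# Sketch of the Section 3.2 preprocessing: resample to a common voxel spacing,
# normalize intensities to [-1, 1], generate 20 rotated copies, and crop a
# 64x64x3 patch around the suspicious slice i_s and its two neighbours.
import numpy as np
from scipy import ndimage

TARGET_SPACING = (0.5, 0.5, 3.0)  # mm

def resample(volume: np.ndarray, spacing) -> np.ndarray:
    zoom = [s / t for s, t in zip(spacing, TARGET_SPACING)]
    return ndimage.zoom(volume, zoom, order=3)            # smooth interpolation

def normalize(volume: np.ndarray) -> np.ndarray:
    v = volume.astype(np.float32)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)         # -> [0, 1]
    return v * 2.0 - 1.0                                   # -> [-1, 1]

def rotations(volume: np.ndarray):
    """20-fold augmentation via in-plane rotations in 5-degree steps."""
    for angle in range(0, 100, 5):
        yield ndimage.rotate(volume, angle, axes=(0, 1), reshape=False, order=1)

def crop_patch(volume: np.ndarray, x: int, y: int, slice_idx: int, size: int = 64):
    """64x64x3 patch centred on the finding, spanning slices i_s-1, i_s, i_s+1."""
    half = size // 2
    return volume[x - half:x + half, y - half:y + half, slice_idx - 1:slice_idx + 2]
```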
The framework matches the acquisition parameters of publicly available, large 3.0T prostate mpn-MRI data with local, small 1.5T prostate mp-MRI data. Once all the data from 3.0T are translated to 1.5T, a subsequent classifier is trained to classify clinically significant PCa. Furthermore, during the training process, the uncertainty is calculated along with the class output. We also introduce a novel evidential focal loss for the PCa classification task. Lastly, we utilize dataset filtering to improve robustness and accuracy by eliminating uncertain data samples from the training set. ### The Domain Transfer Framework We adapt the ACL-GAN model (Zhao et al., 2020) discussed in Section 2.1 to perform unpaired MR image translation from 3.0T to 1.5T. There are two generators in this model namely \(G_{T->S}\) and \(G_{S->T}\), \(G_{T->S}\) translates the images from the target domain to the source domain given the input \(x\) and a noise vector \(z\) sampled from \(\mathcal{N}(0,1)\). \(G_{S->T}\) is the reverse process of \(G_{T->S}\) which translates the image from the source domain to the target domain. There are three discriminators, \(D_{S},D_{T}\), and \(\hat{D}\) in this model. The first two ensure that translated images are in the correct domain by optimizing adversarial losses, and \(\hat{D}\) ensures that translated images retain anatomical features in 3.0T by distinguishing the pair (Source, Trans. Source1) and (Source, Trans. Source2), as shown in the bottom of Figure 2. The loss function of ACL-GAN (Zhao et al., 2020) is defined as in equation (1): \[\mathcal{L}_{total}=\mathcal{L}_{adv}+\lambda_{acl}\mathcal{L}_{acl}+\lambda_ {idt}\mathcal{L}_{idt}+\lambda_{mask}\mathcal{L}_{mask} \tag{1}\] Where \(\mathcal{L}_{adv}\) is the traditional adversarial loss for both source domain \(S\) and target domain \(T\), i.e., \(\mathcal{L}_{adv}=\mathcal{L}_{adv}^{S}+\mathcal{L}_{adv}^{T}\), to ensure the translated image is in the correct domain. \(\mathcal{L}_{acl}\) is the adversarial consistency loss that is used to preserve important features of the source image in the translated image, \(\mathcal{L}_{idt}\) is the identity loss, which encourages the generators to perform approximately identity mapping when images in the corresponding domain are provided, and \(\mathcal{L}_{mask}\) is used to force both generators to only modify certain regions of the source image and keep the rest of the areas unchanged. Readers are encouraged to refer to the original paper (Zhao et al., 2020) for more details. ### Uncertainty-aware PCa Classification #### 4.2.1 Classifier architectures The traditional CNN approach is used for the clinically significant PCa binary classification task. Specifically, we explore three different model architectures for combinations of T2 and ADC patches. The first architecture, called the multi-stream CNN ("M.S. MpMRI"), treats T2 and ADC patches as separate inputs, as shown in Figure 3. The model takes 3D patches of T2 and ADC as parallel inputs, which are then processed by the same feature extractor to extract deep semantic representations. The output representations of T2 and ADC are then concatenated channel-wise and fed into another convolutional layer followed by a fully-connected layer to produce the class probabilities. In the second architecture, we combine ADC and T2 patches as a single input to the network. We stack cropped 3D patches of T2 and ADC along the channel axis and obtain the input data size of \(64\times 64\times 6\). 
Another way to combine them is to consider only the suspicious slice \(i_{s}\) for both T2 and ADC and stack the two slices along the channel axis, obtaining an input of size \(64\times 64\times 2\). The model architecture for both combinations is similar to Figure 3, where there is only one branch and no concatenation afterward. We name the model with input size \(64\times 64\times 6\) (resp. input size \(64\times 64\times 2\)) as "Vol. MpMRI" (resp. "MpMRI"). Lastly, we use only 3D T2 patches as input to match the previous work (Grebenisan et al., 2021). The model architecture is the same as the one for MpMRI, and we call this model "T2-only".

Figure 2: Detailed schematic of the proposed method. The overall framework of our proposed method contains two stages: 1) domain translation to map public 3.0T MRI with local 1.5T MRI; 2) uncertainty-aware clinically significant PCa classification. The bottom figure is the training schema for domain transfer. The upper right portion of the figure illustrates the PCa classification training process, which involves training the classifier using the Evidential Focal loss, filtering the training set based on uncertainty, and retraining the classifier on the filtered data to obtain the final classifier.

Figure 3: Detailed architecture of the "M.S. MpMRI" model. The first sequence of CNN layers contains 1 \(\times\) 3D convolution layer and 4 \(\times\) 2D convolution layers, 2 \(\times\) Max Pooling layers with window size \(2\times 2\). Both extracted feature maps of T2 and ADC are concatenated channel-wise. After that, another set of convolution-max pooling layers is utilized. Finally, the extracted 2D features are reshaped to 1D and fed into a fully connected layer followed by a softmax layer with 2 outputs representing the probabilities of which class the input data belongs to.

To combat the potential noisy label problem, the _co-teaching_ framework (Han et al., 2018) is also utilized in this work. In co-teaching, two models with the same architecture and configuration are trained simultaneously, as shown in Figure 4. In every mini-batch, the two models are trained in parallel. Each model first feeds forward all data in the current batch and selects the data that are likely to have clean labels; then, the two models decide which data in the current batch should be used for training; finally, each model uses the data selected by its peer model to update itself. Let \(f\) denote the first model and \(g\) the second model. The number of instances selected by both models is controlled by a non-increasing function \(R(T)\) defined in equation (2); at epoch \(T\), each model only computes its loss on the \(R(T)\) portion of the batch instances.

\[R(T)=(1-\tau\cdot\min(T/T_{k},1))\times 100\% \tag{2}\]

where \(T\) is the current training epoch, \(\tau\) is the forget rate (set equal to the assumed noise rate), and \(T_{k}\) is the number of epochs over which the keep rate drops linearly. We use "MpMRI" as the backbone model (models A and B in Figure 4) in the co-teaching framework.

#### 4.2.2 Evidential Focal Loss

Dataset filtering during the training phase can reduce the effect of noisy labels on the deep model. Following Ghesu et al. (2019), the process of uncertainty-based filtering is shown at the top of Figure 2: first, we calculate the uncertainty value for each sample in the training set; we then remove a portion of the training samples that exhibit high predictive uncertainty; finally, we retrain the model using the remaining "clean" training data.
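A condensed sketch of this filter-and-retrain loop is shown below. It is an illustrative rendering (the function and variable names are ours, not the released code); `predict_uncertainty` is assumed to return the evidential uncertainty \(u\) defined below for each training sample.

```python
# Uncertainty-based dataset filtering: score every training sample, drop the
# most uncertain fraction, and retrain the classifier on the remaining data.
import numpy as np

def filter_and_retrain(model_factory, train_model, predict_uncertainty,
                       train_samples, drop_fraction=0.2):
    # 1) Train an initial model on the full training set.
    model = train_model(model_factory(), train_samples)
    # 2) Compute the predictive uncertainty u for every training sample.
    u = np.asarray([predict_uncertainty(model, s) for s in train_samples])
    # 3) Keep the (1 - drop_fraction) least uncertain samples.
    keep = np.argsort(u)[: int(len(u) * (1.0 - drop_fraction))]
    clean_subset = [train_samples[i] for i in keep]
    # 4) Retrain on the filtered ("clean") subset.
    return train_model(model_factory(), clean_subset)
```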
Following the work of Sensoy et al. (2018), we extend and combine the idea of subjective logic (Josang, 2016) with the focal loss (Lin et al., 2017) for the clinically significant PCa binary classification task. In the context of the Theory of Evidence, a belief mass is assigned to individual attributes, e.g., the possible class label of a specific data sample. The belief mass is generally calculated from the evidence collected from the observed data (Dempster, 1968). Let \(K\) be the number of classes, \(b_{k}\geq 0,k\in[1,K]\) be the belief mass for class \(k\), and \(u\geq 0\) be the overall uncertainty measure. Let \(e_{k}\geq 0\) be the evidence computed for the \(k^{th}\) class; then, following Sensoy et al. (2018), the belief \(b_{k}\) and the uncertainty \(u\) are computed as follows:

\[b_{k}=\frac{e_{k}}{S}\quad\text{and}\quad u=\frac{K}{S} \tag{3}\]

where \(S=\sum_{i=1}^{K}(e_{i}+1)\). For our binary task (\(K=2\)), we can further simplify Equation (3) to \(b_{0}=\frac{e_{0}}{e_{0}+e_{1}+2},b_{1}=\frac{e_{1}}{e_{0}+e_{1}+2}\), and \(u=\frac{2}{e_{0}+e_{1}+2}\). The belief mass assignment, i.e., the subjective opinion, corresponds to a Dirichlet distribution with parameters \(\alpha_{k}=e_{k}+1\), and we can rewrite \(S=\sum_{k=1}^{K}\alpha_{k}\) as the Dirichlet strength. The formal definition of the Dirichlet distribution can be found in Sensoy et al. (2018). The expected probability for the \(k^{th}\) class is given by the mean of the associated Dirichlet distribution, \(\hat{p}_{k}=\frac{\alpha_{k}}{S},k\in[1,...,K]\) (Sensoy et al., 2018).

Figure 4: The co-teaching framework, where A and B are two models with the same architecture that are trained in parallel and simultaneously.

Given that the training set contains \(N\) data samples, \(D:=\{x_{i},y_{i}\}_{i=1}^{N}\), where \(x_{i}\) is the \(i^{th}\) data sample and \(y_{i}\in\{0,1\}\) is the corresponding label; \(0\) denotes a negative sample and \(1\) a positive sample. We further denote \(\mathbf{y_{i}}\) as the one-hot encoded label for sample \(i\), e.g., \(\mathbf{y_{i}}=[1,0]\) for class \(0\) and \(\mathbf{y_{i}}=[0,1]\) for class \(1\). The original focal loss was designed by Lin et al. (2017) to address the class imbalance problem and to down-weight well-classified samples. The focal loss for binary classification is defined by \(FL(p_{t})=-\alpha_{t}(1-p_{t})^{\gamma}\log(p_{t})\), where \(p_{t}=p\) if \(y_{i}=1\) for the \(i^{th}\) sample and \(p_{t}=1-p\) otherwise, with probability output \(p\) from the model. Let \(\mathbf{P}_{i}\) be a vector that contains the probabilities of the \(i^{th}\) sample for both classes from our model output; \(p_{i,j}\) is the probability of the \(i^{th}\) sample belonging to the \(j^{th}\) class; \(K\) is the number of classes, and \(\beta_{j}\) is the class weight of the \(j^{th}\) class. \(\gamma\) is the focusing parameter that reduces the loss for well-classified samples, and we fix \(\gamma=2\) in this task.
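For the binary case used here, the quantities in Equation (3) can be computed directly from the per-class evidence values. The short sketch below assumes, as in Sensoy et al. (2018), that non-negative evidence is obtained from the network logits with a ReLU activation; it is illustrative only.

```python
# Subjective-logic quantities of Equation (3) from raw network outputs.
import torch
import torch.nn.functional as F

def dirichlet_quantities(logits: torch.Tensor):
    """logits: (batch, K) raw outputs; returns belief, uncertainty, expected prob."""
    evidence = F.relu(logits)                      # e_k >= 0
    alpha = evidence + 1.0                         # alpha_k = e_k + 1
    strength = alpha.sum(dim=-1, keepdim=True)     # S = sum_k alpha_k
    belief = evidence / strength                   # b_k = e_k / S
    uncertainty = logits.shape[-1] / strength.squeeze(-1)   # u = K / S
    expected_prob = alpha / strength               # p_hat_k = alpha_k / S
    return belief, uncertainty, expected_prob

# Example for K = 2: evidence (3, 1) gives S = 6, b = (0.5, 1/6), u = 1/3.
b, u, p = dirichlet_quantities(torch.tensor([[3.0, 1.0]]))
```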
We could define Evidential Focal Loss as the following: \[\mathcal{L}_{i}^{cls}(\theta)=\int\sum_{j=1}^{K}-\beta_{j}(1-p_{t})^{\gamma} \log(p_{ij})\frac{1}{\mathcal{B}(\alpha_{i})}\prod_{j=1}^{K}p_{ij}^{\alpha_{ ij}-1}d\mathbf{P}_{i} \tag{4}\] Rewriting class probabilities in vector form, the equation (4) can be simplified to (5) by the definition of expectations: \[\mathcal{L}_{i}^{cls}(\theta)=-\sum_{j=1}^{K}\beta_{j}\mathbf{E}[(1-\mathbf{P }_{i})^{2}\log(p_{ij})] \tag{5}\] Following the idea of focal loss, we replace the constant term 1 in the original focal loss function with \(\mathbf{y_{i}}\), with the goal to tackle the hard-to-classified samples and reduce the loss of well-classified samples for both classes. Recall that expected probability \(\hat{p}_{k}\) for the \(k^{th}\) class is \(\alpha_{k}/S\), then by the linearity of expectations and the definition of expectations of Dirichlet distribution, we could simply to: \[\mathcal{L}_{i}^{cls}(\theta)=\sum_{j=1}^{K}\boldsymbol{\beta}(y_{ij}-(\alpha _{j}/S))^{2}(\psi(S_{i})-\psi(\alpha_{ij})) \tag{6}\] where \(\psi(\cdot)\) is the digamma function, \(y_{ij}\) is the \(j^{th}\) class label in the one hot encoding representation \(\mathbf{y_{i}}\) and \(\boldsymbol{\beta}\) is the class weight vector of length \(K\). To ensure that highly uncertain data samples, referred to as "I do not know" decisions, do not impact the overall data fit and to minimize their associated evidence, we adopt the approach presented in Sensoy et al. (2018). This involves utilizing the Kullback-Leibler (KL) divergence as a regularization term to penalize the unknown predictive distributions, effectively shrinking their influence towards zero. The KL divergence is as same as it is defined in Sensoy et al. (2018). Finally, our total loss is defined as: \[\mathcal{L}^{total}(\theta)=\sum_{i=1}^{N}\mathcal{L}_{i}^{cls}(\theta)+ \lambda_{t}\sum_{i=1}^{N}KL[D(\mathbf{P}_{i}|\alpha_{i})||D(\mathbf{P}_{i}| \mathbf{1})] \tag{7}\] where \(\mathbf{1}\) is an one-vector, \(\lambda_{t}\) is the balancing factor between \(\mathcal{L}^{cls}\) and the KL divergence loss, and is defined as \(\lambda_{t}=min(1.0,t/10)\in[0,1]\), where \(t\) is the current number of epochs of training. Finally, we introduce two proposed methods for filtering training samples based on the calculated uncertainty. **Patch-driven filtering:** Given the uncertainty for each training patch, we simply eliminate \(x\%,x\in[10,20]\) of the _patches_ with highest uncertainty and retrain the model on the rest of the samples on the training set. **Patient-driven filtering:** Similar to Patch-driven filtering, we first calculate the uncertainty of each training patch. To determine the uncertainty of each patient, we calculate the average uncertainty value across their corresponding patches (20 patches per patient as mentioned in Section 3.2). We then eliminate \(x\%,x\in[10,20]\) of the training _patients_ with high uncertainty value and retrain the model on the rest on the training set. ## 5 Experiments & Setup **Data Split:** For the domain transfer task, as mentioned in Section 3.2 and 4, the resampled T2 and ADC "images" with size \(256\times 256\) from both ProstateX and local datasets is used for training the ACL-GAN model. Particularly, we allocate 90% of images in both datasets as training and keep 10% as validation set to avoid overfitting the ACL-GAN model. 
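An illustrative rendering of Equations (6) and (7) is given below (our own sketch, not the released code). The KL regularizer is written in the form used by Sensoy et al. (2018), i.e., applied to the Dirichlet parameters after the evidence supporting the true class has been removed, and the annealing factor follows \(\lambda_{t}=\min(1,t/10)\).

```python
# Sketch of the evidential focal loss, Equations (6)-(7).
import torch

def kl_to_uniform_dirichlet(alpha: torch.Tensor) -> torch.Tensor:
    """KL( Dir(alpha) || Dir(1) ), computed per sample."""
    k = alpha.shape[-1]
    s = alpha.sum(dim=-1)
    log_norm = (torch.lgamma(s) - torch.lgamma(torch.tensor(float(k)))
                - torch.lgamma(alpha).sum(dim=-1))
    digamma_term = ((alpha - 1.0) *
                    (torch.digamma(alpha) - torch.digamma(s.unsqueeze(-1)))).sum(dim=-1)
    return log_norm + digamma_term

def evidential_focal_loss(alpha: torch.Tensor, y_onehot: torch.Tensor,
                          class_weights: torch.Tensor, epoch: int) -> torch.Tensor:
    """alpha: (batch, K) Dirichlet parameters; y_onehot: (batch, K); class_weights: (K,)."""
    s = alpha.sum(dim=-1, keepdim=True)                      # Dirichlet strength S
    p_hat = alpha / s                                        # expected probabilities
    # Equation (6): sum_j beta_j (y_ij - alpha_ij / S_i)^2 (psi(S_i) - psi(alpha_ij))
    cls = (class_weights * (y_onehot - p_hat) ** 2 *
           (torch.digamma(s) - torch.digamma(alpha))).sum(dim=-1)
    # Equation (7): KL term on the evidence not supported by the label,
    # annealed with lambda_t = min(1, t / 10).
    alpha_tilde = y_onehot + (1.0 - y_onehot) * alpha
    lam = min(1.0, epoch / 10.0)
    return (cls + lam * kl_to_uniform_dirichlet(alpha_tilde)).mean()
```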
Importantly, we ensure that the images corresponding to each patient are exclusively present in either the training or validation set, but not both. To improve the robustness and enhance the ability of the ACL-GAN to capture feature-level representations of 1.5T images, we use all data from our local hospital. However, it is important to note that this approach does not yield any additional impacts on the subsequent classification task. The model only modifies image regions that have visual differences caused by the acquisition parameters of various MRI machines, but it does not alter the context of the prostate itself.

For the PCa classification task, we use cropped and augmented T2 and ADC "patches" from both datasets. As mentioned before, this includes 204 ProstateX patients translated to 1.5T, as well as 104 patients from our local hospital captured in 1.5T. Regarding the data split for the classification, we keep patches of 34 patients from our local center as the test set. From the remaining patches (70 local patients and all ProstateX patients), we allocate 80% for training and 20% for validation, ensuring that patches from the same patient are not included in both of these sets.

**Domain Transfer:** The first experiment is translating the ProstateX MRI data from 3.0T to 1.5T using our proposed ACL-GAN model through the conversion process mentioned previously. We then evaluate the effectiveness of this approach by using the translated MRI data in a downstream binary classification task for clinically significant PCa.

**Classification:** We divide our classification experiments into two categories. In the first category, we use the conventional training paradigm without any filtering or uncertainty estimation. We use different model architectures for these experiments as discussed in Section 4.2.1. In the second category, we use the dataset filtering method and evidential focal loss proposed in Section 4.2.2 for training our models. We select the two methods (MpMRI and M.S. MpMRI) that achieve the best performance from the first category to conduct experiments in the second category. Additionally, we conduct several ablation studies and report the results. Finally, we focus on dataset filtering during deployment and examine how this technique affects the classification performance on the test data.

### Experimental Details

We train two ACL-GAN models separately for T2 and ADC images as part of our domain transfer framework. The optimizer used for both models is Stochastic Gradient Descent with the Adam update rule (Kingma and Ba, 2014), with an initial learning rate of 0.0001 and weight decay of 0.0001 to prevent overfitting. The batch size is 3, and both models are trained for 30,000 epochs. Moreover, when training the model for T2 images, we set \(\lambda_{mask}=0.0025,\lambda_{idt}=1,\lambda_{acl}=0.2\) in Equation (1) and the lower and upper mask thresholds to 0.005 and 0.1, respectively. When training the model for ADC images, the values of \(\lambda_{mask},\lambda_{idt},\lambda_{acl}\) are the same as in the T2 model, with the lower and upper mask thresholds set to 0.001 and 0.005, respectively. The Least-Square (LS) loss (Mao et al., 2017) is utilized to calculate \(\mathcal{L}_{adv}\) and \(\mathcal{L}_{acl}\) in Equation (1).

**Converting 3.0T to 1.5T:** Once we have obtained two ACL-GAN models, we need to standardize the acquisition parameters of 3.0T prostate MRIs to match those of the 1.5T data in our local dataset. To achieve this, we divide the original MRI into multiple 2D grayscale slices.
For each 2D slice, we use the generator \(G_{T}\) and a noise vector \(z\) randomly sampled from \(\mathcal{N}(0,1)\) to translate the slice to 1.5T, i.e., \(I_{1.5T}=G_{T}(I_{3.0T},z)\) as in Section 4.1. We repeat this process for all 2D slices and then stack them back together to reconstruct the 3D MRI for each patient. The voxel spacing remains unchanged before and after the translation process. The above process is applied to both T2 and ADC data.

All classification models are trained with Stochastic Gradient Descent with Adam, and batch normalization is used to speed up convergence. In the first category of classification experiments, the traditional focal loss (Lin et al., 2017) with \(\gamma=2\) is used to train the model. Specifically, all models except co-teaching are trained for 300 epochs with a learning rate of 0.0001, weight decay of 0.01, and a batch size of 10. To train the co-teaching model, we set the noise rate to 0.1; the forget rate \(\tau=0.1\); and the number of epochs for the linear drop rate \(T_{k}=10\) in Equation (2). The model is trained for 300 epochs; the batch size is set to 10, and the learning rate is 0.00001.

For experiments in the second category, to train the "MpMRI" model for patch-driven filtering, we set the learning rate to 0.0001; weight decay to 0.01; total training epochs to 300; and batch size to 10. The class weights \(\boldsymbol{\beta}\) in Equation (6) are set to [0.25, 0.75] for filtering 10%, and [0.25, 1.25] for filtering 20% of the training data. For patient-driven filtering, all parameters are the same except the class weights \(\boldsymbol{\beta}\) are set to [0.25, 1] for filtering 10% of the training data. Last but not least, we set the initial learning rate to 0.0001; total training epochs to 300; batch size to 10; the learning rate is decayed by a factor of 0.1 every 200 epochs, and the class weights \(\boldsymbol{\beta}\) are set to [0.25, 1] for filtering 20% of the training data.

To train the "M.S. MpMRI" model for patch-driven filtering, we set the initial learning rate to 0.0001; weight decay to 0.01; and total training epochs to 300. The class weights \(\boldsymbol{\beta}\) in Equation (6) are set to [0.25, 1] for both filtering 10% and 20% of the training data. For patient-driven filtering, all parameters are the same except the class weights \(\boldsymbol{\beta}\) are set to [0.25, 1] for filtering 10% and [0.25, 1.25] for filtering 20% of the training data.

### Evaluation

The traditional classification metrics, e.g., accuracy, sensitivity, specificity, and AUC, are used for this task. Reporting the patient-level performance is more relevant to the real clinical setting. However, since we use patches as input to the model, we need to aggregate the individual patch results to calculate the performance metrics at the patient level. To achieve this, we first use the classifier to predict all test patches, group the predictions by patient (20 probabilities per patient due to the augmentation mentioned in Section 3.2), and then compute the _median_ probability \(\tilde{p}_{i}\) as the aggregated probability of that patient. Finally, a threshold of 0.5 is used to determine whether the patient has PCa or not, i.e., assigning label 1 if \(\tilde{p}_{i}>0.5\), and 0 otherwise.

## 6 Results and Discussion

In this section, we report **patient-based** classification results; the performance of our methods on patches is reported in Appendix A.
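The patient-level aggregation rule described above amounts to taking the median over the 20 augmented patch probabilities of each patient and thresholding it at 0.5; a small sketch (illustrative only) is:

```python
# Patient-level aggregation: median of per-patch probabilities, then threshold.
import numpy as np
from collections import defaultdict

def aggregate_by_patient(patch_probs, patch_patient_ids, threshold=0.5):
    per_patient = defaultdict(list)
    for prob, pid in zip(patch_probs, patch_patient_ids):
        per_patient[pid].append(prob)
    predictions = {}
    for pid, probs in per_patient.items():
        p_tilde = float(np.median(probs))            # aggregated probability
        predictions[pid] = int(p_tilde > threshold)  # 1 = clinically significant PCa
    return predictions
```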
### Translated samples from Domain Transfer Figure 5 shows the visualization of the difference between original 3.0T T2 and ADC images in the ProstateX Challenge and the corresponding translated 1.5T T2 and ADC images using our proposed domain transfer framework. As shown, domain transfer reduces the image contrast and results in loss of minor details in the original 3.0T images. However, to evaluate the effectiveness of the domain transfer framework, a user study involving radiologists' interpretation is necessary. Our proposed domain transfer framework shows potential improvements over the baseline model SDNet for the following reason. In SDNet (Grebenisan et al., 2021), modality features from randomly selected 1.5T images are merged with anatomical features from 3.0T rather than from the whole distribution of all 1.5T images. In contrast, our method learns the overall data distribution in an adversarial manner, capturing the entire distribution of 1.5T images and performing the translation. Moreover, our method ensures the translated image contains the crucial features of the original image, and the generator only modifies certain parts of the image. Our image translation method is thus more suitable for this task and can be further validated by the classification performance presented in the following sections. Figure 5: Translation from 3.0T T2 and ADC images to 1.5T-like images of a random patient in the “ProstateX” Challenge dataset. _left two figures:_ the original 3.0T T2 image, and the same image that is translated to 1.5T. _right two figures:_ the original 3.0T ADC image, and the same image that is translated to 1.5T. ### PCa classification without filtering Table 1 summarizes the main results of this study, which contains the PCa classification performance of all experiments we conducted. The table is divided into three sections. The first section corresponds to experiments conducted without using either the dataset filtering or uncertainty estimation, as described in the first category of Section 5. The second and third sections represent experiments with dataset filtering and uncertainty estimation described in the second category of Section 5. In the first section of Table 1, we observe that the AUC of using the co-teaching framework with "MpMRI" architecture as the base model achieves the best AUC and outperforms the baseline. We also noticed that the sensitivity increases by approximately 50% while the specificity only decreases 10% for our co-teaching model compared to the baseline model, indicating our model has better learning abilities for classifying both positive and negative data samples. In the training process, we adopt a greedy approach of assuming 10% of the samples to be noisy. Consequently, both models need to designate a portion of the data in each batch as "clean" to update the parameters. This strategy allows our model to prioritize learning from the clean data, leading to enhanced robustness. **Ablation Study:** We embed the results of the ablation study in the first section of Table 1. Our ablation here is two-fold: alteration of the number of input modalities, and alteration of the architecture of the model. To examine the effect of data modalities on the classification performance, we compare the T2-only model with (Vol.)MpMRI and M.S. MpMRI models, both use T2 and ADC patches as input. 
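For reference, one co-teaching update as used above can be sketched as follows (an illustrative rendering, not the released implementation; the cross-entropy call stands in for the focal loss used in our experiments):

```python
# One co-teaching mini-batch step: each model keeps the R(T) fraction of the
# batch with the smallest loss, and its peer is updated on that subset.
import torch
import torch.nn.functional as F

def keep_rate(epoch: int, tau: float = 0.1, t_k: int = 10) -> float:
    """Equation (2): R(T) = 1 - tau * min(T / T_k, 1)."""
    return 1.0 - tau * min(epoch / t_k, 1.0)

def coteaching_step(model_f, model_g, opt_f, opt_g, x, y, epoch):
    n_keep = max(1, int(keep_rate(epoch) * len(y)))

    with torch.no_grad():
        loss_f = F.cross_entropy(model_f(x), y, reduction="none")
        loss_g = F.cross_entropy(model_g(x), y, reduction="none")
    idx_f = torch.argsort(loss_f)[:n_keep]   # samples model f trusts as clean
    idx_g = torch.argsort(loss_g)[:n_keep]   # samples model g trusts as clean

    opt_f.zero_grad()                        # f learns from g's selection
    F.cross_entropy(model_f(x[idx_g]), y[idx_g]).backward()
    opt_f.step()

    opt_g.zero_grad()                        # g learns from f's selection
    F.cross_entropy(model_g(x[idx_f]), y[idx_f]).backward()
    opt_g.step()
```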
We observe a significant improvement in classification performance with the addition of the ADC modality, suggesting that multi-modal information is useful in guiding the model to classify clinically significant PCa. To examine the effect of model architecture on classification performance, we compare the MpMRI and M.S. MpMRI models, which have different architectures, and the co-teaching model. We find that the model with simpler inputs, MpMRI, performs better, and the results can be further improved by using the co-teaching framework. ### PCa classification with filtration In this section, we conduct experiments using two different architectures (MpMRI and M.S. MpMRI) and with training set filtering at various rates. The evidential focal loss described in Section 4.2.2 is used to optimize the models. The co-teaching framework is excluded from this section for the following reason: while co-teaching _implicitly_ handles noisy labels or samples in the training set, the training set filtering in Section 4.2.2 is an explicit alternative way of dealing with them. The co-teaching framework will first update its model parameters with simpler and cleaner samples during training. However, through the filtering process, data samples with high uncertainty values are considered potentially noisy and do not involve in the training process. We argue that it would be a duplicate procedure if we use co-teaching and filtering the training data simultaneously. Our hypothesis for training set filtering is by explicitly eliminating those highly uncertain data samples from the set and optimizing only on the rest of the "confident" samples using the evidential focal loss (Section 4.2.2), we could produce a more robust model. Therefore, to coalesce our proposed loss function with training set filtering, we do not use co-teaching and instead, we select MpMRI and M.S. MpMRI for experiments in this section. We use the MpMRI (resp. M.S. MpMRI) model to compute the uncertainty for all training data first, and then the filtering process can be either done in patch-driven or patient-driven on the training set, as we discussed in Section 4.2.2. In the second part of Table 1, we present the **patient-based results** on filtering 10% and 20% of training _patches_ in the training set for the two selected models, while the third part of the table reports the results of filtering 10% and 20% of training _patients_. The results from both sections demonstrate that the binary classification performance improves when filtering more uncertain data for both models. Comparing these results with those from the first section of Table 1, we can conclude that the dataset filtering method applied to the training set, together with the evidential focal loss we proposed, can effectively improve the classification performance. Moreover, an interesting observation is that the performance gradually deteriorates in _patient-driven_ filtering. The reason behind this may be that in patch-driven filtering, we simply exclude some training patches with a high uncertainty value in the training process, no matter which patient the patches belong to. However, in the case of patient-driven filtering, we have to consider the average uncertainty of the 20 patches for each patient. If the average uncertainty of a patient is below the threshold, all the patches would be used for training, regardless of whether a specific patch has a very high uncertainty value. 
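The difference between the two filtering strategies compared above comes down to the unit over which uncertainty is ranked; the following NumPy-based sketch (illustrative only) makes it explicit:

```python
# Patch-driven vs. patient-driven filtering of the training set.
import numpy as np

def patch_driven_keep(uncertainties, drop_fraction):
    """Keep indices of the least uncertain patches, dropping the top drop_fraction."""
    order = np.argsort(uncertainties)                       # ascending uncertainty
    keep = int(len(uncertainties) * (1.0 - drop_fraction))
    return np.sort(order[:keep])

def patient_driven_keep(uncertainties, patient_ids, drop_fraction):
    """Drop every patch of the most uncertain patients, where a patient's score
    is the mean uncertainty over their 20 patches."""
    ids = np.asarray(patient_ids)
    u = np.asarray(uncertainties)
    patients = np.unique(ids)
    mean_u = np.array([u[ids == p].mean() for p in patients])
    n_drop = int(len(patients) * drop_fraction)
    dropped = patients[np.argsort(mean_u)[::-1][:n_drop]]
    return np.where(~np.isin(ids, dropped))[0]
```

As the sketch shows, a single very uncertain patch can survive patient-driven filtering as long as its patient's average uncertainty stays below the cut-off, which is the failure mode discussed above.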
Therefore, there is a risk that we may falsely retain high uncertain patches because the corresponding patient has relatively low uncertainty on average, which can affect the model's performance. This is also the reason why patch-driven filtration results are better. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} & Data & F.R. & F.M. & Acc. & Sen. & Spec. & AUC \\ \hline SDNet(baseline) & T2 & 0\% & N/A & 79.4 & 28.6 & **92.6** & 76.2\(\pm\)17.5 \\ T2-only & T2 & 0\% & N/A & 64.7 & 71.4 & 63.0 & 77.8\(\pm\)18.0 \\ MpMRI & T2+ADC & 0\% & N/A & 79.4 & **85.7** & 77.8 & 84.7\(\pm\)15.5 \\ Vol. MpMRI & T2+ADC & 0\% & N/A & 67.6 & 71.4 & 66.7 & 68.9\(\pm\)26.1 \\ M.S. MpMRI & T2+ADC & 0\% & N/A & 73.5 & 71.4 & 74.1 & 82.5\(\pm\)14.3 \\ MpMRI+co-teaching & T2+ADC & 0\% & N/A & **82.3** & **85.7** & 81.2 & **88.4\(\pm\)10.6** \\ \hline MpMRI & T2+ADC & 10\% & patch & 82.4 & 85.7 & 81.5 & 85.7\(\pm\)9.9 \\ M.S. MpMRI & T2+ADC & 10\% & patch & 82.4 & 71.4 & 85.2 & 83.6\(\pm\)13.5 \\ MpMRI & T2+ADC & 20\% & patch & **85.3** & **100** & 81.5 & **98.4\(\pm\)1.6** \\ M.S. MpMRI & T2+ADC & 20\% & patch & **85.3** & 71.4 & **88.9** & 92.6\(\pm\)7.4 \\ \hline MpMRI & T2+ADC & 10\% & patient & **88.2** & **100** & **85.2** & **92.6\(\pm\)7.4** \\ M.S. MpMRI & T2+ADC & 10\% & patient & 85.3 & **100** & 81.5 & 86.2\(\pm\)9.0 \\ MpMRI & T2+ADC & 20\% & patient & 73.5 & 85.7 & 70.4 & 86.8\(\pm\)12.6 \\ M.S. MpMRI & T2+ADC & 20\% & patient & 73.5 & 71.4 & 74.1 & 84.6\(\pm\)14.2 \\ \hline \end{tabular} \end{table} Table 1: **Patient-based results** of all performed experiments. Acc., Sen., Spec., and AUC are the shorts for Accuracy, Sensitivity, Specificity, and AUC, respectively. Standard deviations are computed from the 95% bootstrap confidence interval with \(n=3000\) samples. The “F.R.” and “F.M.” represents the filtering rate and the filtering method, respectively. The best results for each section are in black **bold**; blue **bold**, and red **bold** separately. All units of the numeric values are in %. **Ablation Study:** As previously mentioned, the first section of Table 1 corresponds to experiments conducted without using evidential focal loss or filtering. On the other hand, the second and third sections encompass experiments that incorporate both these elements. In order to solely examine the influence of our proposed loss, we conducted an experiment where the evidential focal loss was employed without any filtering (0%). This results are summarized in Table 2 in comparison with 20% patch-based filtering approach. As can be seen, even without any data filtering during the training, we could correctly classify all patients with clinically significant PCa (sensitivity = 100%), which demonstrates a significant improvement compared to the baseline result in Grebenisan et al. (2021). As expected, the addition of data filtering further improves the results. The original results based on image patches of Table 2 can be found in Appendix A. ### Filtering during deployment So far, we explored the effect of the data filtering strategy during training to improve the model robustness. It is also possible to apply filtering on the test set, i.e. when deploying the model to real clinical routines. This is equivalent to refraining from making decisions on the test samples that are identified as highly uncertain. We use the pre-trained MpMRI and M.S. MpMRI models, each with 0% and 20% training filtering rate as final models and evaluate their performance on the test set. 
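To make the filtering procedure concrete, the sketch below illustrates the patch-driven and patient-driven variants described above; the quantile-based cutoff, the NumPy implementation, and all function and variable names are our own illustrative assumptions rather than the exact code used in our experiments.

```python
import numpy as np

def uncertainty_filter(uncertainty, patient_ids, rate=0.2, mode="patch"):
    """Sketch of uncertainty-driven dataset filtering (illustrative, not the exact implementation).

    uncertainty: per-patch uncertainty from the evidential head, shape (num_patches,).
    patient_ids: patient identifier of each patch, shape (num_patches,).
    Returns a boolean mask over patches: True = keep, False = filtered out.
    """
    if mode == "patch":
        # Patch-driven: drop the `rate` fraction of individual patches with the
        # highest uncertainty, regardless of which patient they belong to.
        cutoff = np.quantile(uncertainty, 1.0 - rate)
        return uncertainty <= cutoff
    # Patient-driven: average the uncertainty over each patient's patches
    # (20 per patient in our setting) and drop entire patients above the cutoff.
    patients = np.unique(patient_ids)
    mean_unc = np.array([uncertainty[patient_ids == p].mean() for p in patients])
    cutoff = np.quantile(mean_unc, 1.0 - rate)
    kept = set(patients[mean_unc <= cutoff])
    return np.array([p in kept for p in patient_ids])
```

The same patient-level mask can be reused at deployment time: instead of being discarded, patients flagged as highly uncertain are deferred to the radiologist, which is the setting evaluated below.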
Figure 6 shows the experiment on filtering 0% to 40% of the test data when deploying the pre-trained models with 20% filtering during training. The performance of the other two models (0% filtering) can be found in Appendix B. We observe that the model improves its performance when filtering out highly uncertain patients from the test set, and eventually classified all patients correctly, as shown in Figure 5(a). This approach has practical applications under real clinical settings, as it can help radiologists save much time by focusing on patients that have been filtered out (with high uncertainty value) during the diagnostic process, rather than well-classified patients. ## 7 Conclusion In this work, we presented a novel approach for unpaired image-to-image translation of prostate mp-MRI and developed a robust deep-learning model for classifying clinically sig \begin{table} \begin{tabular}{c|c c c c|c c c c} & \multicolumn{4}{c|}{filter 0\%} & \multicolumn{4}{c}{filter 20\%} \\ \hline & Acc. & Sen. & Spec. & AUC & Acc. & Sen. & Spec. & AUC \\ \hline MpMRI & 82.4 & **100** & 77.8 & 89.4\(\pm\)9.1 & **85.3** & **100** & **81.5** & **98.4\(\pm\)1.6** \\ \hline M.S. MpMRI & 76.5 & **100** & 70.4 & 82.0\(\pm\)12.7 & **85.3** & 71.4 & **88.9** & **92.6\(\pm\)7.4** \\ \hline \end{tabular} \end{table} Table 2: Ablation on employing proposed evidential focal loss with and without data filtering for the two selected architectures. The **Patient-based** results are reported. Acc., Sen., Spec., and AUC are the shorts for Accuracy, Sensitivity, Specificity, and Area Under (ROC) Curve, respectively. Standard deviations are computed from the 95% bootstrap confidence interval with \(n=3000\) samples. Best results are in **bold**. All units of the numeric values are in %. nificant PCa using evidential focal loss. We demonstrated the effectiveness of our method on our local dataset, reinforced by a publicly available one, and showed that uncertainty-aware filtering during both training and deployment can significantly improve the PCa classification performance. Out method has the potential to assist with and expedite the diagnostic process by suggesting highly uncertain patients where clinicians can focus on precise diagnosis and fast track those with high prediction certainty. While our approach has shown promising results, there are still opportunities for improvement. One potential area for future work is to consider the spatial dependency between slices in volumetric MRI. Currently, our domain transfer framework only accepts 2D images as input and output, and we reshape the volumetric MRI into several 2D slices. However, explicitly splitting 3D images into 2D slices may eliminate the spatial dependency within each MRI data and affect the classification results. Therefore, one possible solution is to translate the 3D MRI as a whole from 3.0T to 1.5T instead of translating a single slice at a time. Lastly, there is great potential for further improving the classification performance by combining more images from different MRI functional sequences, such as b-value and K\({}^{trans}\). We have already demonstrated that incorporating additional ADC images significantly enhances classification performance. We believe that if we successfully translate other images from b-value or K\({}^{trans}\) acquired at 3.0T to 1.5T and incorporate them into the classification, the results could be further improved. However, the additional MRI sequences may not be available in the local 1.5T dataset. 
The conversion process may become feasible if we acquire those sequences from local hospitals.

Figure 6: Performance when filtering 0% to 40% of the test data with the selected models. 6a shows the test performance of the “MpMRI” model with 20% filtration on the training set, and 6b shows the test performance of the “M.S. MpMRI” model with 20% filtration on the training set.

## Acknowledgments

This work was supported by the Natural Sciences and Engineering Research Council of Canada, the Canadian Institutes of Health Research, and Queen's University. Parvin Mousavi is supported by a Canada CIFAR AI Chair and the Vector AI Institute.

## Ethical Standards

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

## Conflicts of Interest

We declare that we have no conflicts of interest.
2304.01237
A Guide for Practical Use of ADMG Causal Data Augmentation
Data augmentation is essential when applying Machine Learning in small-data regimes. It generates new samples following the observed data distribution while increasing their diversity and variability to help researchers and practitioners improve their models' robustness and, thus, deploy them in the real world. Nevertheless, its usage in tabular data still needs to be improved, as prior knowledge about the underlying data mechanism is seldom considered, limiting the fidelity and diversity of the generated data. Causal data augmentation strategies have been pointed out as a solution to handle these challenges by relying on conditional independence encoded in a causal graph. In this context, this paper experimentally analyzed the ADMG causal augmentation method considering different settings to support researchers and practitioners in understanding under which conditions prior knowledge helps generate new data points and, consequently, enhances the robustness of their models. The results highlighted that the studied method (a) is independent of the underlying model mechanism, (b) requires a minimal number of observations that may be challenging in a small-data regime to improve an ML model's accuracy, (c) propagates outliers to the augmented set degrading the performance of the model, and (d) is sensitive to its hyperparameter's value.
Audrey Poinsot, Alessandro Leite
2023-04-03T09:31:13Z
http://arxiv.org/abs/2304.01237v1
# A Guide for Practical Use of ADMG Causal Data Augmentation ###### Abstract Data augmentation is essential when applying machine learning (ML) in small-data regimes. It generates new samples following the observed data distribution while increasing their diversity and variability to help researchers and practitioners improve their models' robustness and, thus, deploy them in the real world. Nevertheless, its usage in tabular data still needs to be improved, as prior knowledge about the underlying data mechanism is seldom considered, limiting the fidelity and diversity of the generated data. Causal data augmentation strategies have been pointed out as a solution to handle these challenges by relying on conditional independence encoded in a causal graph. In this context, this paper experimentally analyzed the acyclic-directed mixed graph (ADMG) causal augmentation method considering different settings to support researchers and practitioners in understanding under which conditions prior knowledge helps generate new data points and, consequently, enhances the robustness of their models. The results highlighted that the studied method (a) is independent of the underlying model mechanism, (b) requires a minimal number of observations that may be challenging in a small-data regime to improve an ML model's accuracy, (c) propagates outliers to the augmented set degrading the performance of the model, and (d) is sensitive to its hyperparameter's value. ## 1 Introduction Machine learning (ML) models require quality data to be able to discover helpful information, perform well on unseen data, and be robust to environmental changes. Although some models can handle noisy and high-dimensional datasets, their usage in high-stable small-data regimes is usually challenging. In this case, one can use data augmentation techniques to deal with the lack of training data to improve models' performance and limit overfitting by artificially increasing the number of samples and the diversity of the training set (Van Dyk and Meng, 2001). They have been successfully used in computer vision (CV) (Zhong et al., 2020; Hendrycks et al., 2021) and natural language processing (NLP) (Xie et al., 2020; Hao et al., 2023) tasks, by providing model regularization during training and consequently, helping reducing overfitting. Nonetheless, these techniques cannot be easily extended to tabular or time series data (Talavera et al., 2022). Likewise, they usually focus on increasing samples' diversity or variability (Wen et al., 2021) and rarely both. Knowing the underlying causal mechanism may help data augmentation techniques handle these issues by taking advantage of partial knowledge encoded in a causal graph (CG). Thus, once one has been built, we can use it to infer the conditional independence relations that a data distribution should satisfy. As a result, one can combine data from an interventional distribution with augmented and observed ones (Ilse et al., 2021) to improve both the diversity and variability of a dataset, hoping to improve the robustness of an ML model. Such a strategy can be implemented by following a causal boosting procedure (Little and Badawy, 2019) or exploring prior knowledge of conditional independence encoded in a causal graph (Teshima and Sugiyama, 2021). The former generates new samples by weighting the data coming from intervention distributions. 
In contrast, the latter generates new data samples by simultaneously considering all possible resampling paths from the conditional empiric distribution of each variable assuming the existence of an acyclic-directed mixed graph (ADMG) (Richardson, 2003). In this context, this paper experimentally1 assesses the characteristics of the ADMG data augmentation method (Teshima and Sugiyama, 2021) (Section 2) under the fidelity, diversity, and generalization (Alaa et al., 2022) perspective in a small-data regime configuration, considering different problem's properties, and with the presence of noisy data (Section 3). The goal is to understand under which conditions this method can help practitioners increase the robustness and deal with overfitting of their ML models by augmenting their datasets using prior knowledge encoded in a causal graph. Another objective comprises understanding under which conditions an inadequate parametrization setting can lead to unexpected results; i.e., performance degradation. Footnote 1: The code is available at github.com/audreypoinsot/admg_data_augmentation ## 2 ADMG Data Augmentation In this section, the ADMG causal data augmentation method (Teshima and Sugiyama, 2021) is presented. From now on, we refer to this method as CausalDA. Let us assume we want to train a model \(f\) using the loss \(L\) on the dataset \(D=(D_{train},D_{test})\) composed of a training and a testing set with \(d\) dimensions. Let us assume a known ADMG causal graph \(\mathcal{G}\) linking the \(d\) variables ordered according to the topological order induced by the graph \(\mathcal{G}\). Let us use the following notations: * \(n=|D_{train}|\) the number of training data * \(X_{k}\) the \(k^{th}\) data point of the training set \(D_{train}=\{X_{i}\}_{i\in[1,n]}\) * \(X_{k}^{j}\) the value taken by the \(j^{th}\) variable of the \(k^{th}\) training point, \(X_{k}=\{X_{k}^{1},...,X_{k}^{d}\}\) * \(X_{k}^{J}\) with \(J\) a set of variables, the value taken by the \(J\) variables of the \(k^{th}\) training point * \(D_{aug}\) the augmented dataset using \(D_{train}\) * \(Z_{i}\) the \(i^{th}\) augmented data point from \(D_{aug}\) * \(a(j)\) the ancestors of the variable \(j\) in the causal graph \(\mathcal{G}\) \(D_{aug}\) is built as the cartesian product of all the observed variables in the training set: \[D_{aug}=\{Z_{i}\}_{i\in[1,n^{d}]},\quad Z_{i}=\{X_{i_{1}}^{1},...,X_{i_{j}}^{ j},...,X_{i_{d}}^{d}\}\] with \(X_{i_{j}}\) the data point used to copy its value of the \(j^{th}\) variable to use for the augmented point \(Z_{i}\). Each \(Z_{i}\) is associated with a weight \(w_{i}\) which could be interpreted as a probability of existence for the augmented point \(Z_{i}\). Indeed, \(w_{i}\) measures the probability of the variables values of the augmented point \(Z_{i}\) given variables ancestor values. Probabilities are estimated with Kernels, \(K^{j}\) denoting the kernel used to estimate the probability of the \(j^{th}\) variable given its ancestors. \[w_{i}=\prod_{j=1}^{d}w_{i}^{j}=\prod_{j=1}^{d}\frac{K^{j}(Z_{i}^{a(j)}-X_{i_{ j}}^{a(j)})}{\sum_{k=1}^{n}K^{j}(Z_{i}^{a(j)}-X_{k}^{a(j)})} \tag{1}\] Finally, a model \(f\) is trained on the augmented set using a weighted loss: \[L_{aug}(f)=\sum_{i\in[1,n^{d}]}w_{i}L(f,Z_{i}) \tag{2}\] In practice, the weights are computed recursively through Algorithm 1. 
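Before turning to the recursive procedure of Algorithm 1 below, the following sketch computes the weights of Eq. (1) directly by enumerating the Cartesian product of observed values; the Gaussian kernels, the bandwidth, and the in-loop pruning are illustrative assumptions, and the \(n^{d}\) enumeration is only practical for toy problem sizes.

```python
import numpy as np
from itertools import product

def gaussian_kernel(diff, bandwidth=1.0):
    # Product Gaussian kernel over the ancestor coordinates (an illustrative choice of K^j).
    diff = np.atleast_2d(diff)
    return np.exp(-0.5 * np.sum((diff / bandwidth) ** 2, axis=-1))

def causal_da(X, ancestors, theta=0.0, bandwidth=1.0):
    """Direct (non-recursive) sketch of Eq. (1).

    X: (n, d) training set with columns in the topological order of the ADMG.
    ancestors: ancestors[j] is the list a(j) of ancestor column indices of variable j.
    Returns the augmented points Z and their weights w; points with w <= theta are pruned.
    """
    n, d = X.shape
    Z_aug, w_aug = [], []
    for donors in product(range(n), repeat=d):       # donors[j] = i_j in the notation above
        z = np.array([X[donors[j], j] for j in range(d)])
        w = 1.0
        for j in range(d):
            a = ancestors[j]
            if not a:                                # source variable: every kernel term is K(0), so w_i^j = 1/n
                w *= 1.0 / n
                continue
            num = gaussian_kernel(z[a] - X[donors[j], a], bandwidth)[0]
            den = gaussian_kernel(z[a][None, :] - X[:, a], bandwidth).sum()
            w *= num / den
            if w <= theta:                           # early pruning, as in Algorithm 1
                break
        if w > theta:
            Z_aug.append(z)
            w_aug.append(w)
    return np.array(Z_aug), np.array(w_aug)
```

The resulting pairs \((Z_{i},w_{i})\) can then be used with any learner that accepts per-sample weights when minimizing Eq. (2), for instance by passing `sample_weight=w_aug` to the `fit` method of an XGBoost model, as done in the experiments of Section 3.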
In order to reduce memory and computational cost, the method enables us to choose a probability threshold \(\theta\in[0,1[\) to early stop the computation of a weight (and the associated augmented point) as soon as its current value is lower than \(\theta\). ``` Input:\(D_{train}=\{X_{k}\}_{k\in[1,n]},\ \mathcal{G},\ \theta,\ L,\ \{K^{j}\}_{j\in[1,d]}\)\(\triangleright\) assuming that the variables in the training set and kernel functions are ordered according to the topological order of the graph \(\mathcal{G}\) \(W_{aug}\leftarrow\{\frac{1}{n}\}^{n}\) \(Z_{aug}\leftarrow\{X_{k}^{j}\}_{k\in[1,n]}\) for\(j\in[2,d]\)do \(Z_{aug}^{new}\leftarrow\{\}\) \(W_{aug}^{new}\leftarrow\{\}\) for\(Z_{i},w_{i}\in Z_{aug},W_{aug}\)do for\(i_{j}\in[1,n]\)do \(w_{i}^{new}\gets w_{i}\cdot\frac{K^{j}(Z_{i}^{s(j)}-X_{i_{j}}^{s(j)})}{ \sum_{k=1}^{n}K^{j}(Z_{i}^{s(j)}-X_{k}^{s(j)})}\) \(Z_{i}^{new}\leftarrow\{Z_{i};X_{j}^{j}\}\) if\(w_{i}^{new}>\theta\)then \(Z_{aug}^{new}\gets Z_{aug}^{new}\cup Z_{i}^{new}\) \(W_{aug}^{new}\gets W_{aug}^{new}\cup w_{i}^{new}\) \(Z_{aug}\gets Z_{aug}^{new}\) \(W_{aug}\gets W_{aug}^{new}\) ``` **Output:\(\hat{f}\in\text{arg}\ \underset{f}{\min}\ \sum_{(v_{i},Z_{i})_{i\in(W_{aug},Z_{aug})}}w_{i}L(f,Z_{i}),\ \ D_{aug}=(W_{aug},Z_{aug})\)** ``` **Algorithm 1** CausalDA algorithm ## 3 Experimental Design ### Dataset We relied on synthetic data to perform all the experiments. It enabled us to have full control of the problem represented by the data. Moreover, the simulated data were all sampled from structural causal models (SCMs), since CausalDA makes the assumption that the data are generated through a causal model. See (Pearl, 2009) for a detailed definition of a SCM. We used the Causal Discovery Toolbox (Kalainathan et al., 2020, 2022) to generate each SCM. The directed acyclic graph (DAG) of each SCM was generated using the Erdos-Renyi model (Erdos & Renyi, 1959) given a number of nodes and an expected degree. After each new edges' samples, we checked if it does not lead to cycle in the DAG. The mechanism functions were generated from a set of parametric functions (e.g., linear or polynomial) whose parameters were randomly sampled from some given probability distributions, see Appendix A.4. The source variables (i.e., vertices without parents in the causal graph) were generated using gaussian mixture models (GMMs) with four components and a spherical covariance. Finally, additive noise variables were introduced into the causal mechanisms. They were all i.i.d. and created according to a normal distribution. Once a SCM was built, the data were generated by sampling the realizations of the source and the noise variables. Finally, the mechanism functions computed the realizations of the variables following the topological order of the causal graph. ### Evaluation methodology We considered different scenarios to assess the characteristics of CausalDA to provide some insights to practitioners about CausalDA's response to the various properties their problem might have. The scenarios, whose defaults parameters are detailed in Appendix A.5, included: * **Non-linear data generation setting**: by varying the family functions of the mechanism included linear, polynomial, sigmoid, Gaussian process, and neural networks. 
* **Small-data regime**: by varying the number of observations from a few samples to a hundred samples (i.e., \([30,40,60,80,100,300,500,700]\)) * **High-dimension scenario**: by varying the number of variables in a dataset from seven to twenty-five (i.e., \([7,8,9,10,15,20,25]\)) * **Highly dependent input variables setting**: by varying the expected degree of the causal graph in \([0,1,2,3,4,5,6,7]\) * **High aleatoric uncertainty setting**: by varying the additive noise amplitude in \([0.1,0.2,0.4,0.6,0.8,1]\) * **Noisy acquisition procedure** (i.e., outliers): by varying the fraction of outliers in \([0.01,0.02,0.03,0.04,0.05,0.1,0.15]\) * **Inadequate parametrization scenario**: by varying the probability threshold \(\theta\) defined in Section 2. \(\theta\in[10^{-1},10^{-2},10^{-3},10^{-4},10^{-5}]\) For each of these scenarios, we compared the distributions of the original dataset and the augmented one by measuring the Kullback Leibler divergence, the Wasserstein distance, and the average relative difference in variance among the variables. We also benchmarked CausalDA against a baseline. In this case, we split the original dataset (\(\mathcal{D}\)) into train and test sets following a \(70\%,30\%\) split strategy. Then, we trained two eXtreme Gradient Boosting (XGBoost) models on the weighted augmented set (\(D_{aug}:=(Z_{i},w_{i}),\ i\in[1,n^{d}]\)) and on the original training set to be our baseline. We measured their mean absolute percentage error (MAPE) and R2 scores on the test set of the original dataset to predict each variable of the problem. Each XGBoost model was trained taking into account the data weights for the augmented set and uniform weights for the original training set) using a threefold cross-validation process to search from the best parameters set among the \(n\_\)estimators \(\in[10,50,200]\) and reg_lambda \(\in[1,10,100]\). For the outlier scenario, we additionally compare the distributions of the altered augmented data (i.e., with outliers) and the normally augmented data (i.e., without outliers) by using the same metrics. Finding the causal graph based on the observed variables is an NP-hard combinatorial optimization problem, which limits the scalability of existing approaches to a few dozen variables (Chickering, 1996; Chickering et al., 2004). This is why we opted to start exploring this limitation in this work. Nevertheless, we will leave for future work the study in which the number of features is higher than the number of observations. ## 4 Results This section describes the results of the scenarios described in Section 3. Appendices A.1 to A.3 show complementary results, where one can see that CausalDA and the baseline have similar performance when the input variables are highly correlated and that CausalDA is independent of the aleatoric uncertainty of the data and the mechanisms of the underlying generation model. **Inadequate parametrization.** CausalDA relies on its probability threshold parameter \(\theta\in[0,1]\) (Section 2) to prune the augmented data, which affects their distribution. While a probability threshold close to one accentuates the correlations of the observed data, a threshold close to zero relaxes them according to the causal graph, thus, generating more data points, as illustrated in Figs. 0(a), 0(b) and 0(d). The fact that the variance decreases with the fraction of newly generated data,Fig. 0(c) vs. Fig. 
0(a), shows that CausalDA does not tend to increase diversity in the dataset but changes its distribution in dense areas of the observed set. Hence, for an appropriate choice of probability threshold, we expect CausalDA to improve an ML model predictions on the data support by providing a refined data distribution. Figure 0(e) illustrates this finding. The probability threshold parameter seems to have a very narrow value range to improve the performance of the XGBoost models. Thus, it must be carefully defined by the practitioners. Indeed, this threshold can be interpreted as the minimal probability of accepting a new value for a variable given its parents. From Eq. (1), one can easily see that \(w_{i}>\theta\implies w_{i}^{j}>\theta,\ \forall j\in[1,d]\). Hence, we encourage practitioners to analyze the distribution of each variable given its parents in order to make an informed choice on the probability threshold to use. Small-data regime.Figure 2 shows that CausalDA requires at least 300 observations to improve XGBoost's performance. This quantity can be considered "relatively high" given the study scenario: (a) no outliers, (b) use of the correct causal graph, (c) in-distribution, (d) data generated from GMM, and (e) neural networks without high discontinuity or divergence. Hence, improving some ML model predictions with CausalDA in a small-data regime may be challenging. One can explain it by observing in Eq. (1) that the kernel density estimator overfits when there are only a few data points. Likewise, because each new data point is generated conditioned on the values taken by its parents, CausalDA needs several observed points with the same parents' realizations to generate new ones. Hence, we recommend considering CausalDA not as a solution to compensate for the lack of data but rather as a method to refine the estimation of the data distribution via weighted data augmentation. High-dimension scenario.Figure 3 shows that increasing dimension favors CausalDA. Indeed, CausalDA takes advantage of the prior knowledge about the conditional independence encoded in the causal graph to improve ML models' generalization. The literature has shown that such a problem is a challenge in low-dimensional data (Shah and Peters (2020)) because high-dimensional settings with a known causal graph and expected degree lead to more conditional independence. Figure 3 also emphasizes that increasing the dimension increases the probability for the whole dataset to be filtered. Indeed, the probability threshold can be interpreted as follows: For a given \(Z_{i}\), under the hypothesis that all the \(w_{i}^{j}\) are equal to \(c\), \(\sqrt[4]{\theta}\) is the minimum value of \(c\) for \(Z_{i}\) not to be pruned. As \(\sqrt[4]{\theta}\) increases with \(d\), the higher the dimension, the higher the probability of the weights not Figure 1: CausalDA output characteristics depending on the probability threshold Figure 2: XGBoost median performance depending on the size of the dataset to be pruned for a fixed probability threshold. As a result, practitioners also have to consider the dimension of the problem when choosing an appropriate probability threshold value. Discussion.The results presented in this section enabled us to understand under which conditions CausalDA can help practitioners improve their ML models. First, we observed that CausalDA performance is independent of the underlying causal generation process. Nevertheless, it depends on the acquisition procedure because it is sensitive to outliers. 
Second, based on our experiments, the method requires at least 300 samples, making it unsuitable for small-data regimes. It can instead be used to improve the generalization of a ML model by providing a more refined data distribution, faithful to the observed one without increasing the diversity, using the prior knowledge encoded in the causal graph. Third, CausalDA highly relies on the probability threshold parameter whose choice might be complex for practitioners. Indeed, up to now, no procedure has been developed to ensure a more guided choice for this parameter. We estimate that this last point is the most critical for practitioners to use this method in real-world use cases. That is why we would like to focus our future work on automating the choice of the probability threshold. Possible solutions include using the ML model to be trained to adjust this parameter automatically or employing Monte Carlo Tree Search (MCTS) (Kocsis and Szepesvari, 2006) when computing the weights instead of the pruning procedure. Also, let us highlight that the experiments of this paper are not self-sufficient. Thus, we would like to deepen our evaluation by, first, using conditional independence tests to check if CausalDA indeed increases the conditional independences encoded in the causal graph, second, evaluating the method on real data, and third, analyzing the sensitivity to an erroneous causal graph. Indeed, building a causal graph might be challenging and, as far as we know, there is no general procedure to validate its truthful. ## 5 Conclusion and Further Works Data scarcity is a significant challenge when applying ML in high-stake domains such as healthcare and finance. Over the last few years, various approaches have been developed to enable researchers and practitioners to increase the size of their datasets artificially and, consequently, the robustness and generalization of their ML models. Causal data augmentation strategies aim to handle these endeavors by relying on conditional independence encoded in a causal graph. This paper experimentally analyzed the acyclic-directed mixed graph data augmentation method (Teshima and Sugiyama, 2021) considering several scenarios. The goal was to help researchers and practitioners understand under which conditions their prior knowledge help in generating new data that enhance the performance of their models, as well as the influence of the parameters of the data augmentation strategy underneath the presence of outliers, error measures (i.e., aleatoric uncertainty), and the minimal number of samples of the observed data. Experimental results showed that the sample size is essential when employing the method. Likewise, it propagates the outliers when presented in the data. Furthermore, its hyperparameters must be carefully defined for each dataset. In future work, we plan first to carry out further experiments using, notably, conditional independence tests and real data and, secondly, to automatize the hyperparameters optimization process. ## Acknowledgments This research was partially funded by the European Commission within the HORIZON program (TRUST-Al Project, Contract No. 952060).
2307.13363
3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding
3D visual grounding aims to localize the target object in a 3D point cloud by a free-form language description. Typically, the sentences describing the target object tend to provide information about its relative relation between other objects and its position within the whole scene. In this work, we propose a relation-aware one-stage framework, named 3D Relative Position-aware Network (3DRP-Net), which can effectively capture the relative spatial relationships between objects and enhance object attributes. Specifically, 1) we propose a 3D Relative Position Multi-head Attention (3DRP-MA) module to analyze relative relations from different directions in the context of object pairs, which helps the model to focus on the specific object relations mentioned in the sentence. 2) We designed a soft-labeling strategy to alleviate the spatial ambiguity caused by redundant points, which further stabilizes and enhances the learning process through a constant and discriminative distribution. Extensive experiments conducted on three benchmarks (i.e., ScanRefer and Nr3D/Sr3D) demonstrate that our method outperforms all the state-of-the-art methods in general. The source code will be released on GitHub.
Zehan Wang, Haifeng Huang, Yang Zhao, Linjun Li, Xize Cheng, Yichen Zhu, Aoxiong Yin, Zhou Zhao
2023-07-25T09:33:25Z
http://arxiv.org/abs/2307.13363v1
# 3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding ###### Abstract 3D visual grounding aims to localize the target object in a 3D point cloud by a free-form language description. Typically, the sentences describing the target object tend to provide information about its relative relation between other objects and its position within the whole scene. In this work, we propose a relation-aware one-stage framework, named **3D** Relative Position-aware **N**etwork (3DRP-Net), which can effectively capture the relative spatial relationships between objects and enhance object attributes. Specifically, 1) we propose a **3D** Relative **P**osition **M**ulti-head **A**ttention (3DRP-MA) module to analyze relative relations from different directions in the context of object pairs, which helps the model to focus on the specific object relations mentioned in the sentence. 2) We designed a soft-labeling strategy to alleviate the spatial ambiguity caused by redundant points, which further stabilizes and enhances the learning process through a constant and discriminative distribution. Extensive experiments conducted on three benchmarks (i.e., ScanRefer and Nr3D/Sr3D) demonstrate that our method outperforms all the state-of-the-art methods in general. The source code will be released on GitHub. ## 1 Introduction Visual grounding aims to localize the desired objects based on the given natural language description. With the rapid development and wide applications of 3D vision [22, 23, 24] in recent years, 3D visual grounding task has received more and more attention. Compared to the well-studied 2D visual grounding [23, 22, 21, 20, 26, 25, 27, 28, 29], the input sparse point clouds in the 3D visual grounding task are more irregular and more complex in terms of spatial positional relationships, which makes it much more challenging to locate the target object. In the field of 3D visual grounding, the previous methods can be mainly categorized into two groups: the two-stage approaches [23, 24, 25, 26, 27, 28, 29, 30] and the one-stage approaches [24]. The former ones follow the detection-and-rank paradigm, and thanks to the flexibility of this architecture, they mainly explore the benefits of different object relation modeling methods for discriminating the target object. The latter fuse visual-text features to predict the bounding boxes of the target objects directly, and enhance the object attribute representation by removing the unreliable proposal generation phase. However, these two methods still have limitations. For two-stage methods, the model performance is highly dependent on the quality of the object proposals. However, due to the sparsity and irregularity of the input 3D point cloud, sparse proposals may leave out the target object, while dense proposals will bring redundant com Figure 1: 3D visual grounding is the task of grounding a description in a 3D scene. In the sentences, all the words indicating the relative positions of the target object are bolded. Notice that relative position relations between objects are crucial for distinguishing the target object, and the relative position-related descriptions in 3D space are complex (e.g., “above”, “on the left”, “in front of”, and “next to”, etc.) putational costs and make the matching stage too complicated to distinguish the target object. As for the one-stage methods, although the existing approach Luo et al. 
(2022) achieves better performance, they can not capture the relative spatial relationships between objects, which makes it often fail in samples that rely on relative relation reasoning. As shown in Fig.1, the majority of sentences in 3D visual grounding contain relative spatial relation descriptions. Furthermore, due to the spatial complexity of the 3D scene, there are various relative position-related descriptions from different orientations. To further illustrate that relative position is a general and fundamental issue in 3D visual grounding tasks, we analyze the frequency of relative position words in ScanRefer and Nr3D/Sr3D, and the results show that at least \(90\%\) of the sentences describe the relative position of objects, and most of them contain multiple spatial relations. Detailed statistics can be found in supplementary materials. To alleviate above problems, we propose a one-stage 3D visual grounding framework, named **3D** **R**elative **P**osition-aware **N**etwork (3DRP-Net). Our 3DRP-Net combines and enhances the advantages of the two-stage approaches for relations modeling and the one-stage approaches for proposal-free detection while avoiding the shortcomings of both methods. For the relations modeling, we devise a novel **3D** **R**elative **P**osition **M**ulti-head **A**ttention (3DRP-MA) module, which can capture object relations along multiple directions and fully consider the interaction between the relative position and object pairs which is ignored in previous two-stage methods Yuan et al. (2021); Zhao et al. (2021); Huang et al. (2021). Specifically, we first extract features from the point cloud and description, and select key points. Then, the language and visual features interacted while considering the relative relations between objects. For the relation modeling, We introduce learnable relative position encoding in different heads of the multi-head attention to capture object pair relations from different orientations. Moreover, in sentences, the relative relations between objects are usually described as _"Object 1-Relation-Object 2"_, such as "tv is on the tv cabinet" and "curtain is hanging on the window" in Fig.1. The relation is meaningful only in the context of object pairs, thus our relative position encoding would interact with the object pairs' feature, to better capture and focus on the mentioned relations. Besides, as discussed in Qi et al. (2019), point clouds only capture surface of object, and the 3D object centers are likely to be far away from any point. To accurately reflect the location of objects and learn comprehensive object relation knowledge, we sample multiple key points of each object. However, redundant key points may lead to ambiguity. To achieve disambiguation while promoting a more stable and discriminative learning process, we propose a soft-labeling strategy that uses a constant and discriminative distribution as the target label instead of relying on unstable and polarized hard-label or IoU scores. Our main contributions can be summarized as follows: * We propose a novel single-stage 3D visual grounding model, called **3D** **R**elative **P**osition-aware **N**etwork (3DRP-Net), which for the first time captures relative position relationships in the context of object pairs for better spatial relation reasoning. * We design a **3D** **R**elative **P**osition **M**ulti-head **A**ttention (3DRP-MA) module for simultaneously modeling spatial relations from different orientations of 3D space. 
Besides, we devise a soft-labeling strategy to alleviate the ambiguity while further enhancing the discriminative ability of the optimal key point and stabilizing the learning process. * Extensive experiments demonstrate the effectiveness of our method. Our 3DRP-Net achieves state-of-the-art performance on three mainstream benchmark datasets ScanRefer, Nr3D, and Sr3D. ## 2 Related Work ### 3D Visual Grounding Recent works in 3D visual grounding can be summarized in two categories: two-stage and one-stage methods. We briefly review them in the following. **Two-stage Methods.** Two-stage approaches follow the detection-and-rank scheme. In the first stage, 3D object proposals are generated by a pre-trained 3D object detector Chen et al. (2020) or with the ground truth Achlioptas et al. (2020). In the second stage, the best matching proposals would be selected by leveraging the language description. Advanced two-stage methods achieve good performance by better modeling the relationships among objects. Referit3D Achlioptas et al. (2020) and TGNN Huang et al. (2021) make use of the graph neural network Scarselli et al. (2008) to model the relationships between objects. 3DVG-Transformer Zhao et al. (2021) utilize attention mechanisms Vaswani et al. (2017) to enable interactions between proposals, and the similarity matrix can be adjusted based on the relative Euclidean distances between each pair of proposals. **One-stage Methods.** One-stage approaches avoid the unstable and time-consuming object proposals generation stage under the detection-and-rank paradigm. The visual features extracted by the backbone are directly and densely fused with the language features, and the fused features are leveraged to predict the bounding boxes and referring scores. 3D-SPS Luo et al. (2022) first addresses the 3D visual grounding problem by one-stage strategy. It firstly filters out the key points of language-relevant objects and processes inter-model interaction to progressively down-sample the key points. Our work utilizes the advanced one-stage framework and introduces a novel relative relation module to effectively capture the intricate relations between objects, enabling our model to achieve superior performance. ### Position Encoding in Attention The attention mechanism is the primary component of transformer Vaswani et al. (2017). Since the attention mechanism is order-independent, information about the position should be injected for each token. In general, there are two mainstream encoding methods: absolute and relative position encoding. **Absolute Position Encoding.** The original transformer Vaswani et al. (2017) considers the absolute positions, and the encodings are generated based on the sinusoids of varying frequency. Recent 3D object detection studies also use absolute position encodings. In Group-free Liu et al. (2021), the encodings are learned by the center and size of the predicted bounding box, while the Fourier function is used in 3DETR Misra et al. (2021). **Relative Position Encoding.** Recently, some advanced works in natural language processing He et al. (2020); Raffel et al. (2020); Shaw et al. (2018) and image understanding Liu et al. (2021); Hu et al. (2019, 2018) generate position encoding based on the relative distance between tokens. Relative relation representations are important for tasks where the relative ordering or distance matters. Our method extends relative position encoding to 3D Euclidean space and enhances relative relation reasoning ability in 3D visual grounding. 
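To make the mechanism described above concrete, the snippet below sketches the basic 1D form of learned relative position encoding: a learnable bias indexed by the clipped token offset is added to the attention logits before the softmax (the scalar-bias variant adopted by several of the works above). The clipping range and all names are illustrative; Section 3 extends this idea to pair-aware encodings over several 3D distance metrics.

```python
import torch
import torch.nn as nn

class RelPosBias1D(nn.Module):
    """Minimal 1D relative position bias: one learned scalar per clipped offset."""

    def __init__(self, max_offset=8):
        super().__init__()
        self.max_offset = max_offset
        self.bias = nn.Embedding(2 * max_offset + 1, 1)

    def forward(self, logits):
        # logits: (n, n) raw attention scores between n tokens.
        n = logits.size(0)
        offset = torch.arange(n)[None, :] - torch.arange(n)[:, None]   # j - i
        offset = offset.clamp(-self.max_offset, self.max_offset) + self.max_offset
        return logits + self.bias(offset).squeeze(-1)
```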
Figure 2: 3DRP-Net is a transformer-based one-stage 3D VG model which takes a 3D point cloud and a description as inputs and outputs the bounding box of the object most relevant to the input expression. In the stacked transformer layer, the 3DRP-MA captures the relative relations between points in the 3D perspective. Specifically, the two self-attentions based on 3DRP-MA capture the relative relations between objects, while the cross-attention between key points and seed points enhances the global position information. Method This section introduces the proposed 3D Relative Position-aware Network (3DRP-Net) for 3D visual grounding. In Sec.3.1, we present an overview of our method. In Sec.3.2, we dive into the technical details of the 3D Relative Position Multi-head Attention (3DRP-MA) module and how to comprehensively and efficiently exploit the spatial position relations in the context of object pairs. In Sec.3.3 and Sec.3.4, we introduce our soft-labeling strategy and the training objective function of our method. ### Overview The 3D visual grounding task aims to find the object most relevant to a given textual query. So there are two inputs in the 3D visual grounding task. One is the 3D point cloud which is represented by the 3D coordinates and auxiliary features (RGB values and normal vectors in our setting) of \(N\) points. Another input is a free-form natural language description with \(L\) words. The overall architecture of our 3DRP-Net is illustrated in Fig.2. _Firstly_, we adopt the pre-trained PointNet++ (Qi et al., 2017) to sample \(S\) seed points and \(K\) key points from the input 3d point cloud and extract the \(C\)-dimensional enriched points feature. For the language description input, by using a pre-trained language encoder (Radford et al., 2021), we encode the \(L\)-length sentences to \(D\)-dimensional word features. _Secondly_, a stack of transformer layers are applied for multimodal fusion. The features of key points are accordingly interacted with language and seed points to group the scene and language information for detection and localization. Our new 3D relative position multi-head attention in each layer enables the model to understand vital relative relations among objects in the context of each object pair. _Eventually_, we use two standard multi-layer perceptrons to regress the bounding box and predict the referring confidence score based on the feature of each key point. As shown in Fig.2, in the training phase, we generate the target labels of referring scores based on the IoUs of the predicted boxes. During inference, we only select the key point with the highest referring score to regress the target bounding box. ### 3D Relative Position Multi-head Attention When describing an object in 3D space, relations between objects are essential to distinguish objects in the same class. Given the spatial complexity of 3D space and the potentially misleading similar relative positions between different object pairs, a precise and thorough comprehension of the relative position relationships is crucial for 3D visual grounding. However, existing 3D visual grounding methods fail to effectively address complex spatial reasoning challenges, thereby compromising their performance. To address this limitation, we propose a novel 3D relative position multi-head attention to model object relations in the context of corresponding object pairs within an advanced one-stage framework. 
#### 3.2.1 Relative Position Attention Before detailing our relative position attention, we briefly review the original attention mechanism in (Vaswani et al., 2017). Given an input sequence \(x=\{x_{1},...,x_{n}\}\) of \(n\) elements where \(x_{i}\in\mathbb{R}^{d_{x}}\), and the output sequence \(z=\{z_{1},...,z_{n}\}\) with the same length where \(z_{i}\in\mathbb{R}^{d_{z}}\). Taking single-head attention, the output can be formulated as: \[q_{i}=x_{i}W^{Q},\;k_{j}=x_{j}W^{K},\;v_{i}=x_{i}W^{V} \tag{1}\] \[a_{i,j}=\frac{q_{i}{k_{j}}^{T}}{\sqrt{d}},\;z_{i}=\sum_{j=1}^{n}\frac{exp(a_{i,j})}{\sum_{k=1}^{n}exp(a_{i,k})}v_{j} \tag{2}\] where \(W_{Q},W_{K},W_{V}\in\mathbb{R}^{d_{x}\times d_{x}}\) represents the projection matrices, \(a_{i,j}\) is the attention weight from element \(i\) to \(j\). Based on the original attention mechanism, we propose a novel relative position attention that incorporates relative position encoding between elements. Since the semantic meaning of a relative relation _"Object 1-Relation-Object 2"_ is also highly dependent on the object pairs involved, it is essential for the position encoding to fully interact with object features in order to accurately capture the specific relative relations mentioned in the description. To this end, the attention weight \(a_{i,j}\) in our proposed relative position attention is calculated as follows: \[a_{i,j}=\frac{q_{i}k_{j}^{T}+q_{i}{r_{p(d_{ij})}^{k}}^{T}+r_{p(d_{ji})}^{q}k_{ j}^{T}}{\sqrt{3d}} \tag{3}\] where \(d_{ij}\) represents the relative distance from element \(i\) to element \(j\), while \(d_{ji}\) is the opposite. \(p(d)\in[0,2k)\) is an index function that maps continuous distance to discrete value, as detailed in Eq.4. \(r_{p(.)}^{k},r_{p(.)}^{q}\in\mathbb{R}^{(2k+1)\times d_{x}}\) is the learnable relative position encoding. Considering a typical object relation expression _"Object 1-Relation-Object 2"_, our attention weight can be understood as a sum of three attention scores on object pairs and relation: _Object 1-to-Object 2_, _Object 1-to-Relation_, and _Relation-to-Object 2_. #### 3.2.2 Piecewise Index Function The points in the 3D point cloud are unevenly distributed in a Euclidean space, and the relative distances are continuous. To enhance the relative spatial information and reduce computation costs, we propose to map the continuous 3D relative distances into discrete integers in a finite set. Inspired by Wu et al. (2021), we use the following piecewise index function: \[p(d)=\begin{cases}[d],&|d|\leqq\alpha\\ sign(d)\times min(k,[\alpha+\frac{ln(|d|/a)}{ln(\beta/a)}(k-\alpha)]),&|d|> \alpha\end{cases} \tag{4}\] where \([\cdot]\) is a round operation, \(sign(\cdot)\) represents the sign of a number, i.e., returning 1 for positive input, -1 for negative, and 0 for otherwise. Eq.4 performs a fine mapping in the \(\alpha\) range. The further over \(\alpha\), the coarser it is, and distances beyond \(\beta\) would be mapped to the same value. In the 3D understanding field, many studies Zhao et al. (2021); Misra et al. (2021) have demonstrated that neighboring points are much more important than the further ones. Therefore, mapping from continuous space to discrete values by Eq.4 would not lead to much semantic information loss while significantly reducing computational costs. #### 3.2.3 Multi-head Attention for 3D Position Till now, our relative position attention module can handle the interaction between object features and relative position information in continuous space. 
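To make Eqs. (3)-(4) concrete before extending them to multiple heads, the sketch below implements a single head of the pair-aware relative position attention along one distance metric; the bucket parameters (\(\alpha\), \(\beta\), \(k\)), the shift of \(p(\cdot)\) into a non-negative index range, and all module names are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

def piecewise_index(d, alpha=2.0, beta=8.0, num_buckets=16):
    """Piecewise index function of Eq. (4): exact rounding within +-alpha and
    logarithmic bucketing up to +-num_buckets (k in the paper) beyond it."""
    near = torch.round(d)
    far = torch.sign(d) * torch.minimum(
        torch.full_like(d, float(num_buckets)),
        torch.round(alpha + torch.log(d.abs().clamp(min=1e-6) / alpha)
                    / torch.log(torch.tensor(beta / alpha)) * (num_buckets - alpha)))
    idx = torch.where(d.abs() <= alpha, near, far)
    return (idx + num_buckets).long()            # shift [-k, k] to [0, 2k] for table lookup


class RelativePositionAttention(nn.Module):
    """Single head of the pair-aware attention of Eq. (3) for one distance metric."""

    def __init__(self, dim, num_buckets=16):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.r_q = nn.Embedding(2 * num_buckets + 1, dim)   # "Object 1-to-Relation" table
        self.r_k = nn.Embedding(2 * num_buckets + 1, dim)   # "Relation-to-Object 2" table
        self.dim, self.num_buckets = dim, num_buckets

    def forward(self, x, d_ij):
        # x: (n, dim) key-point features; d_ij: (n, n) relative distances from i to j
        # along one direction (or the pairwise Euclidean distances for the D_e head).
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        rk = self.r_k(piecewise_index(d_ij, num_buckets=self.num_buckets))    # r^k_{p(d_ij)}
        rq = self.r_q(piecewise_index(d_ij.T, num_buckets=self.num_buckets))  # r^q_{p(d_ji)}
        logits = (q @ k.T
                  + torch.einsum('id,ijd->ij', q, rk)        # q_i . r^k_{p(d_ij)}
                  + torch.einsum('ijd,jd->ij', rq, k)        # r^q_{p(d_ji)} . k_j
                  ) / (3 * self.dim) ** 0.5
        return torch.softmax(logits, dim=-1) @ v
```

In the full 3DRP-MA, several such heads run in parallel, each with its own encoding table for one of the metrics \(D_{x}\), \(D_{y}\), \(D_{z}\), or \(D_{e}\), and their outputs are concatenated as in standard multi-head attention, which is the extension discussed next.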
However, points in 3D space have much more complicated spatial relations than pixels in 2D images or words in 1D sentences. As shown in Table 4, relying on a single relative distance metric leads to insufficient and partial capture of inter-object relations. This makes it difficult to distinguish the target object when multiple spatial relations are described in the language expression. Therefore, we attempt to capture object relations from multiple directions. Specifically, we encode the relative distances under x, y, z coordinates, and the Euclidean metric, denoted as \(D_{x}\), \(D_{y}\), \(D_{z}\), and \(D_{e}\), respectively. These four relative position metrics represent most of object relations in the language description (e.g., \(D_{x}\) for "left, right", \(D_{y}\) for "front, behind", \(D_{z}\) for "top, bottom", \(D_{e}\) for "near, far"). Based on the architecture of multi-head attention, each relative position encoding is injected into the relative position attention module of each head. Such a 3DRP-MA allows the model to jointly attend to information from different relative relations in 3D space. ### Soft-labeling Strategy Due to the object center are often not contained in the given point clouds, we select multiple key points for each object to better reflect its location. Therefore, as shown in Fig.3, there will be lots of accurately predicted boxes achieving high Intersection over Union (IoU) of target object. Previous methods Chen et al. (2020); Zhao et al. (2021); Luo et al. (2022) use one-hot or multi-hot labels to supervise the referring score. The key points whose predicted box has the top \(N_{s}\) highest IoU are set to 1, and others are set to 0, which can encourage the model to select the most high-IoU proposals. However, the simple hard-labeling strategy results in two problems: Firstly, proposals with similar and high IoUs may be labeled differently as 1 and 0, which can cause an unstable training phase. Secondly, it becomes difficult to distinguish between optimal and sub-optimal proposals, affecting the model's ability to accurately identify the most accurate proposal. To tackle these issues, we introduce a soft-labeling strategy to smooth the label distribution and encourage the model to effectively distinguish the optimal proposal. To be specific, the soft-labeling function can be calculated as follow: \[\hat{s}_{i}=exp(-\frac{i^{2}}{2\sigma^{2}}+1) \tag{5}\] where \(i\in\{0,...,N_{s}\}\) represents the \(i\)-th highest IoU. We set \(\sigma\) as \([N_{s}/3]\) to control the smoothness of the distributions. The target label of the keypoint whose predicted box's IoU is \(i\)-th highest and greater than 0.25 is set to \(\hat{s}_{i}\), and others are set to 0. Figure 3: Comparison of various labeling strategies. Although this strategy is simple, its role is to do more as one stroke, and the insight it provides is non-trivial. _For discriminative ability_, the soft-labels enhance the difference between the optimal and sub-optimal proposals, which enforces the model to accurately identify the best key point for regressing detection box. In contrast, when hard-labels or IoU scores are used as the target labels, there is little difference between optimal and sub-optimal proposals from the perspective of learning objectives. _For stability_, compared to hard-labels, our soft-labels can cover a broader range of accurate proposals with a smoother label distribution, and excluding the proposals with low IoU further stabilizes the learning process. 
Additionally, compared to directly using IoU scores, the constant distribution in soft-labels provides a more stable loss across different samples. For example, if we have two samples with vastly different target objects, such as a large bed and a small chair, the bed sample would have significantly more key points selected, resulting in more proposals of the target object. Using IoU scores as labels would ultimately lead to a much larger loss for the bed sample than the chair sample, which is clearly unreasonable. ### Training and Inference We apply a multi-task loss function to train our 3DRP-Net in an end-to-end manner. **Referring Loss.** The Referring loss \(L_{ref}\) is calculated between the target labels \(\hat{S}\) discussed in Sec.3.3 and predicted referring scores \(S\) of \(K\) keypoints with focal loss Lin et al. (2017). **Keypoints Sampling Loss.** Following the loss used in Luo et al. (2022), we apply the key points sampling loss \(L_{ks}\) to make sure the selected key points are relevant to any object whose category is mentioned in the description. **Detection Loss.** To supervise the predicted bounding boxes, we use the detection loss \(L_{det}\) as an auxiliary loss. Following Luo et al. (2022), the \(L_{det}\) consists of semantic classification loss, objectness binary classification loss, center offset regression loss and bounding box regression loss. **Language Classification Loss.** Similar to Chen et al. (2020), We introduce the language classification loss \(L_{text}\) to enhance language encoder. Finally, the overall loss function in the training process can be summarized as \[L=\alpha_{1}L_{ref}+\alpha_{2}L_{ks}+\alpha_{3}L_{det}+\alpha_{4}L_{text} \tag{6}\] where the balancing factors \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\), \(\alpha_{4}\) are set default as 0.05, 0.8, 5, 0.1, respectively, and the \(L_{ref}\) and \(L_{det}\) are applied on all decoder stages following the setting in Qi et al. (2019). ## 4 Experiment ### Datasets and Metrics **ScanRefer.** The ScanRefer dataset Chen et al. (2020) annotates 800 scenes with 51,583 language descriptions based on ScanNet dataset Dai et al. (2017). Following the ScanRefer benchmark, we split the train/val/test set with 36,655, 9,508, and 5,410 samples, respectively. **Nr3D/Sr3D.** The Nr3D and Sr3D are two sub-datasets in ReferIt3D Achlioptas et al. (2020). They are also annotated on the indoor 3D scene dataset Scannet Dai et al. (2017). Nr3D contains 41,503 human utterances collected by ReferItGame, and Sr3D contains 83,572 synthetic descriptions generated based on a "target-spatial relationship-anchor object" template. **Evaluation Metric.** For ScanRefer Chen et al. (2020), following previous work, we use Acc@_m_IoU as the evaluation metric, where \(m\in\{0.25,0.5\}\). This metric represents the ratio of the predicted bounding boxes whose Intersection over Union (IoU) with the ground-truth (GT) bounding boxes is larger than \(m\). For Sr3D and Nr3D Achlioptas et al. (2020), the ground truth bounding boxes are available, and the model only needs to identify the described object from all the bounding boxes. Therefore, the evaluation metric of these two datasets is accuracy, _i.e._, the percentage of the correctly selected target object. ### Quantitative Comparison We compare our 3DRP-Net with other state-of-the-art methods on these three 3D visual grounding benchmarks. **ScanRefer.** Table 1 shows the performance on ScanRefer. 
3DRP-Net outperforms the best two-stage method by \(+4.20\) at [email protected] and \(+4.40\) at [email protected] and exceeds the best one-stage method by \(+2.45\) at [email protected] and \(+2.47\) at [email protected]. Even when compared to 3DJCG, which utilizes an extra Scan2Cap Chen et al. (2021) dataset to assist its training, our 3DRP-Net still shows superiority in all metrics. Specifically, for the "Multiple" subset, 3DRP-Net achieves \(+2.66\) and \(+2.34\) gains when compared with the advanced one-stage model in terms of [email protected] and [email protected], which vali dates the proposed 3DRP-MA module is powerful for modeling complex relative position relations in 3D space and significantly contributes to distinguishing the described target object from multiple interfering objects. **Nr3D/Sr3D.** Note that the task of Nr3D/Sr3D is different from ScanRefer, which aims to identify the described target object from all the given ground-truth bounding boxes. Therefore, the soft-labeling strategy and the keypoint sampling module are removed. We only verify the effectiveness of 3DRP-MA on these two datasets. Besides, the data augmentation methods in ViL3DRel Chen et al. (2022) are also used in our training phase for a fair comparison. The accuracy of our method, together with other state-of-the-art methods, is reported in Table 2. 3DRP-Net achieves the overall accuracy of \(65.9\%\) and \(74.1\%\) on Nr3D and Sr3D, respectively, which outperforms all existing methods by a large margin. In the more challenging "Hard" subset, 3DRP-Net significantly improves the accuracy by \(+2.3\%\) in Nr3D and \(+1.6\%\) in Sr3D, again demonstrating our method is beneficial for distinguishing objects by capturing the relative spatial relations. ### Ablation Study We conduct ablation studies to investigate the contribution of each component. All the ablation study results are reported on the ScanRefer validation set. **Relation Modeling Module.** We compared our proposed 3DRP-MA with the relation modules in other 3D visual grounding methods. For fair comparisons, we also introduce distances in x, y, z coordinates and Euclidean space to other relation modules. The results are provided in Table 3, comparing rows 1, 2 and 6, our 3DRP-MA is far superior to the relation modules in 3DVG-Trans and 3DJCG, and the performance improvement mainly comes from the subsets that rely on relative relationship reasoning for localization, namely the "One-Rel" and "Multi-Rel" subsets. **Relative Position Encoding.** In Sec.3.2.3, we discuss the complexity of relative relations in 3D space and propose four relative position encodings based on relative distance in x,y,z coordinates (\(D_{xyz}\)), and the Euclidean metric (\(D_{e}\)), respectively. From Table 3, both \(D_{xyz}\) and \(D_{e}\) can bring significant improvement for subsets that require relative relation reasoning. 
Row 6 demonstrates \begin{table} \begin{tabular}{c c|c c c|c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Extra} & \multicolumn{2}{c|}{Unique} & \multicolumn{2}{c|}{Multiple} & \multicolumn{2}{c}{Overall} \\ & & [email protected] & [email protected] & [email protected] & [email protected] & [email protected] & [email protected] \\ \hline \hline \multirow{8}{*}{_Two-stage:_} & ScanRefer & - & 67.64 & 46.19 & 32.06 & 21.26 & 38.97 & 26.10 \\ & TGNN & - & 68.61 & 56.80 & 29.84 & 23.18 & 37.37 & 29.70 \\ & InstanceRefer & - & 77.45 & 66.83 & 31.27 & 24.77 & 40.23 & 32.93 \\ & SAT & 2D assist & 73.21 & 50.83 & 37.64 & 25.16 & 44.54 & 30.14 \\ & 3DVG-Transformer & - & 77.16 & 58.47 & 38.38 & 28.70 & 45.90 & 34.47 \\ & MVT & - & 77.67 & 66.45 & 31.92 & 25.26 & 40.80 & 33.26 \\ & 3DJCG & Scan2Cap & 78.75 & 61.30 & 40.13 & 30.08 & 47.62 & 36.14 \\ & ViL3DRel & - & 81.58 & **68.62** & 40.30 & 30.71 & 47.94 & 37.73 \\ \hline \multirow{2}{*}{_One-stage:_} & 3D-SPS & - & 81.63 & 64.77 & 39.48 & 29.61 & 47.65 & 36.43 \\ & **3DRP-Net (Ours)** & - & **83.13** & 67.74 & **42.14** & **31.95** & **50.10** & **38.90** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparisons with state-of-the-art methods on _ScanRefer_. We highlight the best performance in **bold**. \begin{table} \begin{tabular}{c|c c c c c|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c|}{Nr3D} & \multicolumn{5}{c}{Sr3D} \\ \cline{2-10} & Easy & Hard & \begin{tabular}{c} View \\ Dep \\ \end{tabular} & \begin{tabular}{c} View \\ Indep \\ \end{tabular} & Overall & Easy & Hard & \begin{tabular}{c} View \\ Dep \\ \end{tabular} & \begin{tabular}{c} View \\ Indep \\ \end{tabular} & Overall \\ \hline \hline ReferIt3DNet & 43.6 & 27.9 & 32.5 & 37.1 & 35.6 & 44.7 & 31.5 & 39.2 & 40.8 & 40.8 \\ InstanceRefer & 46.0 & 31.8 & 34.5 & 41.9 & 38.8 & 51.1 & 40.5 & 45.4 & 48.1 & 48.0 \\ 3DVG-Transformer & 48.5 & 34.8 & 34.8 & 43.7 & 40.8 & 54.2 & 44.9 & 44.6 & 51.7 & 51.4 \\ LanguageRefer & 51.0 & 36.6 & 41.7 & 45.0 & 43.9 & 58.9 & 49.3 & 49.2 & 56.3 & 56.0 \\ SAT & 56.3 & 42.4 & 46.9 & 50.4 & 49.2 & 61.2 & 50.0 & 49.2 & 58.3 & 57.9 \\ 3D-SPS & 58.1 & 45.1 & 48.0 & 53.2 & 51.5 & 65.4 & 56.2 & 49.2 & 63.2 & 62.6 \\ MVT & 61.3 & 49.1 & 54.3 & 55.4 & 55.1 & 66.9 & 58.8 & 58.4 & 64.7 & 64.5 \\ ViL3DRel & 70.2 & 57.4 & 62.0 & 64.5 & 64.4 & 74.9 & 67.9 & 63.8 & 73.2 & 72.8 \\ \hline **3DRP-Net(Ours)** & **71.4** & **59.7** & **64.2** & **65.2** & **65.9** & **75.6** & **69.5** & **65.5** & **74.9** & **74.1** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparisons with state-of-the-art methods on _Nr3D_ and _Sr3D_. We highlight the best performance in **bold**. that considering relative relations from multiple directions further helps capture comprehensive and sufficient object relations and distinguish the target object from multiple distractors. **Pair-aware relation attention.** The typical description of a spatial relation can be expressed as _"Object 1-Relation-Object 2"_. Our pair-aware relation attention can be considered as the sum of two scores: _Object 1-to-Relation_ (O1-R) and _Relation-to-Object 2_ (R-O2). To further verify the superiority of capturing the relation in the context of an object pair, we ablate the two scores, and the results are illustrated in Table 4. 
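For intuition, one common way to realize such a decomposition in dot-product attention is to add relation-conditioned terms to the attention logits, so that Object 1 attends to the relation embedding (O1-R) and the relation embedding attends to Object 2 (R-O2). The sketch below is a generic illustration under that assumption and is not necessarily the exact 3DRP-MA formulation.

```python
import torch

def pair_aware_logits(q, k, rel):
    """q, k: (N, d) object queries/keys; rel: (N, N, d) embedded relative positions."""
    content = q @ k.t()                          # standard content-content score
    o1_r = torch.einsum('id,ijd->ij', q, rel)    # Object 1 -> Relation
    r_o2 = torch.einsum('ijd,jd->ij', rel, k)    # Relation -> Object 2
    return content + o1_r + r_o2
```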
From rows 1, 2 and 5, both the O1-R and R-O2 terms benefit the 3D visual grounding task by capturing relative relations, and the joint use of O1-R and R-O2 provides a more comprehensive understanding of spatial relation descriptions and leads to the best performance.

**3DRP-MA in each layer.** We study the effect of each 3DRP-MA module in the transformer layer. \(SA_{1}\), \(CA\) and \(SA_{2}\) denote whether 3DRP-MA replaces, respectively, the self-attention before interacting with seed points, the cross-attention between key points and seed points, and the self-attention before interacting with language. Rows 3 to 5 in Table 4 add each 3DRP-MA in turn, and the performance gradually improves to 50.10% and 38.90%.

**Soft-labeling Strategy.** Table 5 presents the performance of different labeling strategies. In hard-labeling, \(N_{s}\) represents the number of key points whose IoU is in the top \(N_{s}\) and greater than 0.25, which are labeled as 1. In soft-labeling, \(N_{s}\) is a hyperparameter in Eq.5, which controls the number of soft labels. To further demonstrate that our proposed strategy improves stability and discrimination, we also use IoU scores as labels. The "original" setting directly uses the IoU scores as labels, while the "linear" setting stretches them linearly to the range 0 to 1 to enhance discrimination. Compared to hard-labeling and the IoU-based methods, our soft-labeling strategy improves both discrimination and stability. The "original" IoU method lacks discrimination power and stability due to the unbalanced loss on different samples, and even with linear scaling to enhance discrimination power, this instability cannot be eliminated. Our method alleviates these problems with a discriminative constant distribution and shows comprehensive superiority in Table 5.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Row & O1-R & R-O2 & \(SA_{1}\) & \(CA\) & \(SA_{2}\) & Acc@0.25 & Acc@0.5 \\ \hline 1 & \(\times\) & ✓ & ✓ & ✓ & ✓ & 48.83 & 38.46 \\ 2 & ✓ & \(\times\) & ✓ & ✓ & ✓ & 48.30 & 37.56 \\ \hline 3 & ✓ & ✓ & ✓ & \(\times\) & \(\times\) & 46.70 & 36.10 \\ 4 & ✓ & ✓ & ✓ & ✓ & \(\times\) & 48.72 & 37.59 \\ 5 & ✓ & ✓ & ✓ & ✓ & ✓ & **50.10** & **38.90** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation studies on 3DRP-MA in each transformer layer and pair-aware relation attention.

## 5 Conclusion

In this paper, we propose a relation-aware one-stage model for 3D visual grounding, referred to as the 3D Relative Position-aware Network (3DRP-Net). 3DRP-Net contains novel 3DRP-MA modules to exploit complex 3D relative relations within point clouds. Besides, we devise a soft-labeling strategy
to achieve disambiguation while promoting a stable and discriminative learning process. Comprehensive experiments reveal that our 3DRP-Net outperforms other methods.

## 6 Limitations

The datasets for the 3D visual grounding task all stem from the original ScanNet dataset, which calls generalization to other scene types into question. More diverse benchmarks are important for the further development of the field of 3D visual grounding.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Row & \(D_{e}\) & \(D_{xyz}\) & Rel Module & Overall & Multiple & None-Rel & One-Rel & Multi-Rel \\ \hline 1 & ✓ & ✓ & 3DVG-Transformer & 36.85 & 30.16 & 34.89(+2.95\%) & 32.51(+5.51\%) & 28.03(+6.60\%) \\ 2 & ✓ & ✓ & 3DJCG & 36.43 & 29.62 & 35.51(+1.15\%) & 31.87(+7.62\%) & 27.35(+9.25\%) \\ \hline 3 & \(\times\) & \(\times\) & 3DRP-MA & 32.74 & 26.39 & 34.18(+5.09\%) & 28.39(+20.82\%) & 23.94(+24.81\%) \\ 4 & ✓ & \(\times\) & 3DRP-MA & 36.43 & 30.26 & 35.47(+1.27\%) & 32.54(+5.41\%) & 28.10(+6.33\%) \\ 5 & \(\times\) & ✓ & 3DRP-MA & 37.13 & 30.56 & 35.30(+1.76\%) & 32.87(+4.35\%) & 28.46(+4.99\%) \\ \hline 6 & ✓ & ✓ & 3DRP-MA & **38.90** & **31.91** & **35.92** & **34.30** & **29.88** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation studies on relation position encoding and different relation modeling modules. None-Rel/One-Rel/Multi-Rel represent subsets that contain zero/one/multiple relation descriptions in the original Multiple set of ScanRefer, and the relative percentage improvements compared to the different settings are marked in green.

\begin{table} \begin{tabular}{c c c c} \hline \hline Strategy & \(N_{s}\) & Acc@0.25 & Acc@0.5 \\ \hline \multirow{2}{*}{IoUs} & Original & 48.20 & 38.06 \\ & Linear & 48.82 & 37.50 \\ \hline \multirow{3}{*}{Hard} & 1 & 47.36 & 37.25 \\ & 4 & 47.29 & 37.68 \\ & 8 & 47.30 & 37.26 \\ \hline \multirow{3}{*}{Soft} & 12 & 49.13 & 38.46 \\ & 24 & **50.10** & **38.90** \\ \cline{1-1} & 36 & 49.64 & 38.55 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation studies on the labeling strategies.
2306.04536
JADES: The production and escape of ionizing photons from faint Lyman-alpha emitters in the epoch of reionization
We present the properties of 17 faint Ly$\alpha$ emitting galaxies (LAEs) at $z>5.8$ from the JWST Advanced Deep Extragalactic Survey (JADES) in the Hubble Ultra Deep Field/GOODS-S. These LAEs span a redshift range $z\approx5.8-8.0$ and a UV magnitude range $M_{UV}\approx-17$ to $-20.6$, with the Ly$\alpha$ equivalent width (EW) in the range $\approx 25-350$ \AA. The detection of other rest-optical emission lines in the spectra of these LAEs enables the determination of accurate systemic redshifts and Ly$\alpha$ velocity offsets, as well as the physical and chemical composition of their stars and interstellar media. These faint LAEs are consistent with metal-poor systems with high ionization parameters, similar to the general galaxy population at $z>6$. We measured an average ionizing photon production efficiency, log($\xi_{\rm ion}$/erg$^{-1}$ Hz) $\approx 25.57$ across our LAEs, which does not evolve strongly with redshift. We report an anti-correlation between the Ly$\alpha$ escape fraction ($f_{\rm esc}$) and the velocity offset from systemic redshift, consistent with model expectations. We further find that the strength and velocity offset of Ly$\alpha$ are neither correlated with galaxy spectroscopic properties nor with $\xi_{\rm ion}$. We find a decrease in $f_{\rm esc}$(Ly$\alpha$) with redshift, indicative of decreasing sizes of ionized bubbles around LAEs at high redshifts. We used a range of galaxy properties to predict Lyman continuum $f_{\rm esc}$ for our LAEs, finding that the ionizing photon output into the intergalactic medium remains roughly constant across the observed Ly$\alpha$ EW, showing a mild increase at fainter $M_{UV}$ and at higher redshifts. We derived correlations between the ionizing photon output from LAEs and $M_{UV}$, Ly$\alpha$ EW and redshifts, which can be used to constrain the ionizing photon contribution of LAEs at $z > 6$ towards cosmic reionization.
Aayush Saxena, Andrew J. Bunker, Gareth C. Jones, Daniel P. Stark, Alex J. Cameron, Joris Witstok, Santiago Arribas, William M. Baker, Stefi Baum, Rachana Bhatawdekar, Rebecca Bowler, Kristan Boyett, Stefano Carniani, Stephane Charlot, Jacopo Chevallard, Mirko Curti, Emma Curtis-Lake, Daniel J. Eisenstein, Ryan Endsley, Kevin Hainline, Jakob M. Helton, Benjamin D. Johnson, Nimisha Kumari, Tobias J. Looser, Roberto Maiolino, Marcia Rieke, Hans-Walter Rix, Brant E. Robertson, Lester Sandles, Charlotte Simmonds, Renske Smit, Sandro Tacchella, Christina C. Williams, Christopher N. A. Willmer, Chris Willott
2023-06-07T15:42:29Z
http://arxiv.org/abs/2306.04536v2
JADES: The production and escape of ionizing photons from faint Lyman-alpha emitters in the epoch of reionization ###### Abstract Context: We present the properties of 16 faint Lyman-\(\alpha\) emitting galaxies (LAEs) at \(z>5.8\) from the _JWST_ Advanced Deep Extragalactic Survey (JADES) spectroscopic data in the _Hubble_ Ultra Deep Field/GOODS-S. These LAEs span a redshift range \(z\approx 5.8-8.0\) and UV magnitude range \(M_{\rm UV}\approx-17\) to \(-20.6\), with Ly\(\alpha\) equivalent width (EW) in the range \(\approx 25-350\) A. The detection of other rest-optical emission lines in the spectra of these LAEs enables the determination of accurate systemic redshifts and Ly\(\alpha\) velocity offsets, as well as the physical and chemical composition of their stars and interstellar media. These faint LAEs are consistent with metal-poor systems with high ionization parameters, similar to the general galaxy population at \(z>6\). We measure an average ionizing photon production efficiency, \(\log(\xi_{\rm ion}/{\rm erg}^{-1}\,{\rm Hz})\approx 25.56\) across our LAEs, which does not evolve strongly with redshift. We report an anti-correlation between Ly\(\alpha\) escape fraction and velocity offset from systemic, consistent with model expectations. We further find that the strength and velocity offset of Ly\(\alpha\) are not correlated with galaxy spectroscopic properties nor with \(\xi_{\rm ion}\). We find a decrease in Ly\(\alpha\) escape fractions with redshift, indicative of decreasing sizes of ionized bubbles around LAEs at high redshifts. We use a range of galaxy properties to predict Lyman continuum escape fractions for our LAEs, finding that the ionizing photon output into the intergalactic medium from our LAEs remains roughly constant across the observed UV magnitude and Ly\(\alpha\) equivalent width, showing a mild increase with redshift. We derive correlations between the ionizing photon output from LAEs and UV magnitude Ly\(\alpha\) strengths and redshift, which can be used to build realistic, observationally-driven reionization models. ## 1 Introduction Cosmic reionization is a crucial phase transition in the Universe's history, the understanding of which is an important challenge in observational astronomy (see recent review by Robertson, 2022). The emergence of ionizing UV photons from the first structures to form in the Universe began interacting with the neutral intergalactic medium (IGM), gradually ionizing it to near completion by \(z\sim 6\) (e.g. Fan et al., 2006), although certain studies have favoured a later and to reionization (e.g. Weinberger et al., 2019; Keating et al., 2020; Bosman et al., 2022). To quantify the contribution towards the cosmic reionization budget from ionizing photon sources in the early Universe, a good understanding is needed of the space density of sources, the efficiency of hydrogen ionizing Lyman continuum (LyC; \(\lambda_{0}<912\) A) photon production, and crucially, the fraction of LyC photons that manage to escape into the IGM (e.g. Dayal & Ferrara, 2018). 
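Schematically, these three ingredients are commonly combined into a comoving ionizing emissivity, \(\dot{n}_{\rm ion}\simeq f_{\rm esc}\,\xi_{\rm ion}\,\rho_{\rm UV}\), as in Robertson et al. (2015)-style reionization calculations. The minimal sketch below only encodes this bookkeeping; the three inputs must come from observations or models.

```python
def ionizing_emissivity(rho_uv, xi_ion, f_esc):
    """Comoving ionizing emissivity [photons s^-1 Mpc^-3].

    rho_uv : UV luminosity density [erg s^-1 Hz^-1 Mpc^-3]
    xi_ion : ionizing photon production efficiency [erg^-1 Hz]
    f_esc  : Lyman continuum escape fraction (dimensionless)
    """
    return f_esc * xi_ion * rho_uv
```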
_JWST_ spectroscopy has offered ground-breaking insights into the state of the interstellar medium (ISM), chemical enrichment of the gas and stars as well as ionizing photon production in galaxies at \(z>6\), pushing towards fainter UV magnitudes than were previously possible from the ground (Arellano-Cordova et al., 2022; Endsley et al., 2022; Tacchella et al., 2022; Trump et al., 2022; Sun et al., 2022; Cameron et al., 2023; Curtis-Lake et al., 2023; Curti et al., 2023; Fujimoto et al., 2023; Katz et al., 2023; Robertson et al., 2023; Sanders et al., 2023). However, accurately measuring the escape fraction of LyC photons (\(f_{\rm esc}\)) becomes hard already at \(z>4\) mainly due to the increasing neutrality of the IGM (e.g. Inoue et al., 2014), which efficiently absorbs LyC photons along the line of sight. A further complication is introduced by the fact that no clear dependence between \(f_{\rm esc}\)(LyC) and galaxy properties has been observationally established (e.g. Naidu et al., 2018; Fletcher et al., 2019; Nakajima et al., 2020; Pahl et al., 2021; Saxena et al., 2022) Therefore, in order to constrain the all-important escape fraction of ionizing photons from reionization era galaxies, it is of utmost importance to find reliable indirect indicators of \(f_{\rm esc}\)(LyC). The presence of young, actively forming stars as well as relatively gas and dust-free environments is thought to enable significant \(f_{\rm esc}\)(LyC) from galaxies (e.g. Zackrisson et al., 2013), and spectroscopic and/or photometric indicators probing such conditions can be explored as indirect indicators of LyC photon escape (e.g. Flury et al., 2022, 2022; Topping et al., 2022; Masscia et al., 2023). Important insights can also be gained from high-resolution simulations of reionization-era galaxies, where a good handle on the escaping LyC radiation can be correlated with prevalent galaxy conditions (e.g. Barrow et al., 2020; Maji et al., 2022) that can then be converted into observables (e.g. Choustikov et al., 2023) and used to predict \(f_{\rm esc}\) from galaxies. Uniquely, galaxies at \(z\gtrsim 6\) that show strong Ly\(\alpha\) emission in their spectra (typically EW(Ly\(\alpha\)) \(>20\,\)A; e.g. Ajiki et al., 2003), also known as Ly\(\alpha\) emitters (or LAEs) can be excellent probes of studying how reionization unfolds over redshifts. The presence of strong Ly\(\alpha\) emission at \(z\gtrsim 6\) often traces the existence of large ionized bubbles in an otherwise neutral IGM (Miralda-Escude, 1998; Furlanetto et al., 2006; Roberts-Borsani et al., 2016; Castellano et al., 2022; Trapp et al., 2022; Tang et al., 2023; Jung et al., 2023; Saxena et al., 2023), c.f. Bunker et al. 2023), offering direct observational insights into reionized regions of the early Universe. Further, as intrinsic Ly\(\alpha\) luminosities are expected to increase with star-formation rates, the fraction of galaxies that appear to be strong LAEs can be an important diagnostic of the ionizing photon production capabilities of reionization era galaxies (e.g. Smith et al., 2019; Garel et al., 2021; Matthee et al., 2022) as well as the evolving state of IGM neutral fraction (e.g. Caruana et al., 2012, 2014; Stark, 2016; Pentericci et al., 2018; Hoag et al., 2019; Kusakabe et al., 2020; Fuller et al., 2020; Jones et al., 2023). Considerable information about the neutral gas and dust content within a galaxy can be gained by the observed strength and emission line profile of Ly\(\alpha\) (e.g. Hayes et al., 2023). 
The separation between the blue and red peaks in the emission as well as the offset from systemic redshift in the absence of a double-peaked profile can be used to infer neutral gas densities (Verhamme et al., 2015; Orlivo et al., 2018) and dust (Hayes et al., 2013), although it has been shown that the neutral gas distribution may play the more dominant role in controlling Ly\(\alpha\) escape (e.g. Atek et al., 2008). At \(z>6\), both the number density of LAEs (Haiman, 2002; Malhotra and Rhoads, 2006) and the shape of the Ly\(\alpha\) line originating from star-forming galaxies residing within ionized bubbles can further be used to estimate the size of those bubbles (e.g. Mason and Gronke, 2020; Hayes and Scarlata, 2023; Wristok et al., 2020). With a plethora of models available to link the observed Ly\(\alpha\) properties to both galaxy properties and the state of the IGM at \(z>6\), it is imperative to expand samples of observed LAEs in the reionization era, pushing to fainter magnitudes. Probing Ly\(\alpha\) emission from UV-faint galaxies has the added advantage of providing much tighter constraints on both bubble sizes as well as the ionized fraction of the IGM (e.g. Mason et al., 2018; Bolan et al., 2022). Importantly, Ly\(\alpha\) emission from fainter galaxies can provide additional sightlines from which the impact of galaxy associations on the production efficiency of ionizing photons (e.g. Witten et al., 2023) and their transmission through the IGM (e.g. Trapp et al., 2022) can be studied in detail. Perhaps most importantly, detailed studies of faint LAEs can inform our understanding of the key drivers of cosmic reionization, particularly testing whether compact star-forming galaxies are indeed contributing the bulk of ionizing photons towards the reionization budget (e.g. Robertson et al., 2015), which are often expected to produce large intrinsic Ly\(\alpha\) luminosities (e.g. Matthee et al., 2022). LAEs that have their Ly\(\alpha\) emission peaking close to systemic redshifts are also expected to have high LyC escape fractions (Verhamme et al., 2015; Dijkstra, 2014; Naidu et al., 2022). With signatures of hard radiation fields (Stark et al., 2015; Mainali et al., 2017; Feltre et al., 2020; Saxena et al., 2022; Roy et al., 2023) and elevated ionizing photon production efficiencies (e.g. Matthee et al., 2017; Harikane et al., 2018; Ning et al., 2023; Simmonds et al., 2023) measured from LAEs across redshifts, Ly\(\alpha\) emitting galaxies in the reionization era are exciting laboratories to both test and constrain reionization models. With access to stellar and ISM properties of LAEs at high redshifts thanks to _JWST_, it is now finally possible to study the potential role of LAEs in driving cosmic reionization. In an attempt to quantify the production and escape of both Ly\(\alpha\) and LyC photons from LAEs in the reionization era, in this study we dramatically increase the number of faint LAEs detected at \(z\gtrsim 6\) using exquistively deep spectra from the _JWST_ Advanced Deep Extragalactic Survey (JADES; Eisenstein et al., 2023). The main aim of this work is to explore the physical properties of faint LAEs in the reionization era, while also investigating the physical mechanisms within the galaxy that control the visibility of Ly\(\alpha\) emission. We further assess the impact of an increasingly neutral IGM on the emergent Ly\(\alpha\) emission at the highest redshifts. 
Finally, using all available spectroscopic and photometric information about our faint LAEs, we estimate their ionizing photon contribution towards the global reionization budget. In companion papers, we also measure the LAE fraction (Jones et al., 2023) as well as the size of ionized bubbles around our LAEs and their clustering (Wristok et al. submitted). The layout of this paper is as follows: Section 2 describes the _JWST_ data used in this study as well as the measurement of key spectroscopic quantities that are used in this study. Section 3 presents the chemical enrichment and ionization state inferred from the spectra of our LAEs compared with other reionization era galaxies in the literature. Section 4 explores the mechanisms within galaxies that control the escape of Ly\(\alpha\) photons along the line of sight. Section 5 discusses the implications for the reionization of the Universe from these new LAE observations and presents quantities that would help build realistic reionization models. The main conclusions of this study are presented in Section 6. Throughout this paper, we use the Planck Collaboration et al. (2020) cosmology. Magnitudes are in the AB system (Oke and Gunn, 1983) and all distances used are proper distances, unless otherwise stated. ## 2 Data and measurements ### NIRSpec data The _JWST_ observations used in this study are part of JADES, which is a collaboration between the Near-Infrared Camera (NIRCam; Rieke et al., 2022) and Near-Infrared Spectrograph (NIRSpec; Ferruit et al., 2022; Jakobsen et al., 2022) Instrument Science teams with an aim of using over 750 hours of guaranteed time observations (GTO) to study the evolution of galaxies in the Great Observatories Origins Deep Survey (GOODS)-South and GOODS-North fields (Giavalisco et al., 2004). We describe the NIRSpec and NIRCam observations and data reduction steps below. Spectroscopic data presented in this work was obtained using the Micro-Shutter Assembly (MSA; Ferruit et al., 2022) on the NIRSpec instrument on board _JWST_. Two 'Tiers' of JADES data was utilized in this study: the Deep Tier NIRSpec observations are part of the GTO program ID: 1210 (PI: Lutzgendorf) and in GOODS-S centred near the Hubble Ultra Deep Field (HUDF), obtained between 22 October and 25 October 2022 over 3 visits, and the Medium Tier observations are part of GTO program 1180 (PI: Eisenstein) obtained over a larger area in GOODS-S (see Eisenstein et al., 2023, for an overview of the field layout). For Deep observations, the PRISM/CLEAR setup, which gives wavelength coverage in the range \(0.6-5.3~{}\mu\)m with a spectral resolution of \(R\sim 100\)(Boker et al., 2023), and G140M/F070LP, G235M/F170LP, G395M/F290LP, and G395H/F290LP filter/grating setups were used, whereas for Medium observations all of the above but the G395H/F290LP filter/grating setup were used. For the Deep Tier, three sub-pointings were planned in the same field (although each sub-pointing had minor pointing differences), with each visit having a total of 33.613 ks of exposure in PRISM/CLEAR and 8.4 ks of exposure in each of the gratings. The Medium Tier observations were carried out in parallel to NIRCam observations, and therefore, consisted of several single pointings covering a larger sky area, with 3.8 ks of exposure time in the PRISM/CLEAR and 3.1 ks of exposure time in the gratings per pointing. 
We note that as the sources targeted were generally high-priority targets owing to their possible high redshift nature, it was possible for one target to be covered over multiple Medium tier pointings. We refer the readers to Bunker et al. (2023) and Eisenstein et al. (2023) for further details about the observational setup, strategy and challenges. The targets for spectroscopy were selected from existing deep _HST_-based catalogues as well as JADES NIRCam catalogues (Rieke & the JADES Collaboration, 2023). Candidate high redshift galaxies with photometric redshifts \(z>5.7\), identified via the classic photometric 'drop-out' technique (e.g. Steidel et al., 1996), whereby the Lyman break in the spectrum of a galaxy is captured in adjacent broad-band filters, were assigned higher priorities. Full details of the target selection and priority classes can be found in the accompanying paper by Bunker et al. (2023c). The data reduction was carried out using pipelines developed by the ESA NIRSpec Science Operations Team (SOT) and the NIRSpec GTO Team (Ferruit et al., 2022, Carniani et al. in prep). Some of the main data reduction steps implemented by the pipeline are pixel-level background subtraction, pixel-to-pixel flat-field correction, absolute flux calibration, slit-loss correction, and eventually 2-dimensional (2D) and 1-dimensional (1D) spectra extraction and co-addition. In this version of the reduction, the final 1D spectra are not extracted from the 2D spectra, but result from the weighted averaging of 1D spectra from all integrations (see Curtis-Lake et al., 2023). Due to the compact size of our LAEs, slit-loss corrections were applied by modelling it as point-like source. A nominal 3-pixel extraction aperture was used to produce the co-added 1D spectra. A detailed description of the data reduction and spectral extraction methods is given in Bunker et al. (2023) (but see also Curtis-Lake et al., 2023 and Cameron et al., 2023)). ### Identification of Lyman-alpha emitters Ly\(\alpha\) emission in the spectra of galaxies in the parent sample was identified through a combination of template fitting (Jones et al., 2023) of the R100 spectra as well as visual inspection of both the R100 and R1000 (G140M) spectra of all confirmed high-redshift galaxies in the parent sample. Using both these methods, we identified 9 candidate LAEs in Deep and 7 candidate LAEs in Medium at \(z>5.8\). We then measure the Ly\(\alpha\) line flux by fitting a single Gaussian function to the emission in both R100 and R1000 spectra. The Ly\(\alpha\) line in one of the 16 LAEs at \(z>5.8\) presented in this work fell in the detector gap in R1000. With the exception of this galaxy, all visually identified LAEs encouragingly showed clear Ly\(\alpha\) emission both in the PRISM and in G140M spectra. In Figure 1 we show the full 1D spectrum from PRISM (R100) as well as a zoom-in on the Ly\(\alpha\) emission identified in the G140M grating (R1000) from a selection of LAEs in our sample and the spectra of all LAEs are shown in Appendix A. In Table 1 we list the exposure times for the LAEs identified in this work and the references where the possible high redshift nature of these objects was first suggested. Overall, we find that the line fluxes we measure from the medium resolution grating are systematically higher than the ones measured from PRISM, as can also be seen in Figure 1, which is not surprising given the degradation in spectral resolution that PRISM spectra suffer from at shorter wavelengths. 
Therefore, going forward we use Ly\(\alpha\) measurements from the G140M grating, with the exception of one source for which Ly\(\alpha\) was in the detector gap. ### Systemic redshifts Accurate'systemic' redshifts were measured by identifying strong emission lines in the higher resolution Grating spectra, which generally consisted of [O ii], H\(\beta\), [O iii] and H\(\alpha\). The redshift was derived by fitting single Gaussian functions to the strongest emission lines and using a S/N-weighted combination of the centroids of the fits to obtain the best redshift solution. For the redshift range of our sources, the H\(\beta\), [O iii] and H\(\alpha\) lines fell in the G395M grating and the [O ii] line fell in the G235M grating spectra. Vacuum wavelengths for all of these strong rest-frame optical lines were used for redshift determination. We found that on average, the difference between the redshifts derived from R100 and R1000 spectra were of the order \(\Delta z\sim 0.004\), but the redshifts derived from lines in the medium dispersion gratings were found to be consistent. Therefore, the redshifts that we derive and use further in the study are from the medium dispersion gratings, which also have a much narrower line-spread function (LSF) and are more sensitive to narrow emission lines. The source IDs, JADES source names, redshifts and exposure times are given in Table 1. From here on, we use the IDs to refer to the objects presented in this paper. The references for the discovery papers of these targets can be found in Bunker et al. (2023). ### UV magnitudes and slopes UV magnitudes at rest-frame 1500 A (\(M_{\rm UV}\)) were measured directly from the R100 PRISM spectra. To do this, the spectra were shifted from observed to rest-frame using the spectroscopic redshifts and a 50 A-wide boxcar filter centred on 1500 A to measure the median flux and error. The measured fluxes and errors were then used to calculate absolute magnitudes and errors. The distribution of the UV magnitudes and Ly\(\alpha\) equivalent widths from our sample of LAEs is shown in Figure 2. The UV-faint galaxies in our sample show systematically high EW(Ly\(\alpha\)), which is likely due to the flux-limited nature of spectroscopic observations, only enabling high EW LAEs to be identified at fainter UV magnitudes. To put our sample into perspective, we also show measurements from other LAEs at \(z\gtrsim 6\) identified using _JWST_(Tang et al., 2023; Jung et al., 2023) or ground-based observations (Ning et al., 2023; Endsley et al., 2022b) in the Figure. Very clearly, the LAEs presented in this work have much fainter UV magnitudes compared to other LAEs at \(z\gtrsim 6\) in the literature. UV slopes (\(\beta\), where \(f_{\lambda}\propto\lambda^{\beta}\)) are also measured directly from the R100 spectra by fitting an exponential function using chi-squared minimization to the flux density in the wavelength range 1340 A to 2600 A, using the Calzetti et al. (1994) spectral windows to avoid strong emission and/or absorption features at rest-UV wavelengths. The redshifts, UV magnitudes at 1500 A and observed UV slopes are given in Table 2. ### Lyman-alpha velocity offsets Using accurate systemic redshifts from the medium resolution grating spectra, we then use the peak of the Ly\(\alpha\) line detected in the G140M grating spectra of our LAEs to calculate velocity offsets from the expected Ly\(\alpha\) emission (vacuum wavelength) at systemic redshift. 
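In practice, the velocity offset follows directly from the observed wavelength of the Ly\(\alpha\) peak and the systemic redshift. A minimal sketch (illustrative only, not the reduction pipeline) is:

```python
C_KMS = 299_792.458     # speed of light [km s^-1]
LYA_VAC = 1215.67       # vacuum Ly-alpha wavelength [Angstrom]

def lya_velocity_offset(lam_obs_peak, z_sys):
    """Velocity offset [km s^-1] of the observed Ly-alpha peak from systemic."""
    lam_sys = LYA_VAC * (1.0 + z_sys)   # expected Ly-alpha wavelength at z_sys
    return C_KMS * (lam_obs_peak - lam_sys) / lam_sys
```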
As mentioned earlier, the wavelength calibrations of the different gratings, when compared against the lower resolution PRISM spectra, were noted to be slightly inconsistent, but the wavelengths across the grating spectra were all consistent with each other (see Bunker et al. 2023). Therefore, inferring the observed velocity offset of Ly\(\alpha\) from G140M spectra should not be affected by systematic offsets. We trialled two methods to estimate the Ly\(\alpha\) velocity shifts and errors. The first method involved obtaining the centroid of the line emission by fitting a single Gaussian function, with the error on the centroid (which takes into account pixel-by-pixel uncertainties) giving the error on the velocity offset. However, as Ly\(\alpha\) emission from high redshift galaxies often appears to be asymmetric due to absorption of the blue wing by both the ISM and the neutral IGM, the single Gaussian fit did not always accurately coincide with the observed peak of Ly\(\alpha\) emission. Therefore, the second method we employed involved calculating the Ly\(\alpha\) offset from the peak pixel of the Ly\(\alpha\) emission line, with the uncertainty on this measurement being the width of the peak pixel. If the S/N of the emission line is high enough, the peak pixel should always trace the peak of the LSF-deconvolved emission line as well, thereby returning the most accurate empirical measurement of the Ly\(\alpha\) velocity offset.

### Other emission line measurements

The rest-frame optical emission line fluxes for all LAEs are measured from the higher spectral resolution grating spectra, unless the lines are not clearly detected in the grating. In that case, we measure and report the line fluxes from the PRISM spectra. The main emission lines that we measure for our sample of LAEs are [O ii] \(\lambda\lambda 3726\), 3729 (which appear to be blended), H\(\beta\), [O iii] \(\lambda\lambda 4959,5007\) and H\(\alpha\). We once again fit single Gaussian functions to all of these lines, measuring the local continuum from a wavelength region adjacent to the emission line. Using these line fluxes we also calculate line ratios such as [O iii] \(\lambda 5007\)/[O ii] \(\lambda\lambda 3726\), 3729 (O32) and ([O ii] \(\lambda\lambda 3726\), 3729 + [O iii] \(\lambda\lambda 4959,5007\))/H\(\beta\) (R23).

Figure 1: Example spectra showing the PRISM (left) and G140M grating zoom-in on Ly\(\alpha\) emission (right) of faint LAEs selected from the Deep Tier spectra in JADES, with 1\(\sigma\) noise shown as green shaded region. The Ly\(\alpha\) line sensitivity in the G140M spectra is considerably higher compared to PRISM, where the decreasing spectral resolution at bluer wavelengths results in diminished emission line sensitivity. Therefore, a combination of both PRISM and Grating spectra is key to identifying Ly\(\alpha\) emission.

### Dust measurements from Balmer decrements

Here we use the Balmer emission line decrements calculated from H\(\alpha\)/H\(\beta\) (or H\(\beta\)/H\(\gamma\) when H\(\alpha\) is not within the spectral coverage). We calculate the intrinsic ratios of these line fluxes using pyneb (Luridiana et al., 2015), assuming an electron temperature of \(10^{4}\) K and an electron density of 100 cm\({}^{-3}\). We assume the dust attenuation curve for the Small Magellanic Cloud (SMC; Gordon et al., 2003), which has been shown to be the most appropriate for high redshift galaxies (e.g. Shivaei et al., 2020).
Dust attenuation, \(E(B-V)\) is then calculated by comparing the observed Balmer line ratios with the intrinsic. We note that \begin{table} \begin{tabular}{l l r r r} \hline \hline ID & JADES Source name & \(z_{\rm spec}\) & \(T_{\rm exp}^{\rm PRISM}\) & \(T_{\rm exp}^{\rm Grating}\) \\ & & & (ks) & (ks) \\ \hline _Deep Tier_ & & & & \\ 21842 & JADES-GS+53.15682\(-\)27.76716 & 7.982 & 100.0 & 25.0 \\ 10013682* & JADES-GS+53.16746\(-\)27.77201 & 7.276 & 66.6 & 16.7 \\ 16625 & JADES-GS+53.16904\(-\)27.77884 & 6.631 & 100.0 & 25.0 \\ 18846 & JADES-GS+53.13492\(-\)27.77271 & 6.336 & 100.0 & 25.0 \\ 19342 & JADES-GS+53.16062\(-\)27.77161 & 5.974 & 100.0 & 25.0 \\ 9422 & JADES-GS+53.12175\(-\)27.79763 & 5.937 & 100.0 & 25.0 \\ 6002 & JADES-GS+53.11041\(-\)27.80892 & 5.937 & 100.0 & 25.0 \\ 19606 & JADES-GS+53.17655\(-\)27.77111 & 5.889 & 33.3 & 8.3 \\ 10056849 & JADES-GS+53.11351\(-\)27.77284 & 5.814 & 100.0 & 8.3 \\ _Medium Tier_ & & & & \\ 12637 & JADES-GS+53.13347\(-\)27.76037 & 7.660 & 19.0 & 15.5 \\ 15362 & JADES-GS+53.11634\(-\)27.76194 & 6.794 & 15.2 & 12.4 \\ 13607 & JADES-GS+53.13743\(-\)27.76519 & 6.622 & 7.6 & 6.2 \\ 14123 & JADES-GS+53.17836\(-\)27.80098 & 6.327 & 7.6 & 6.2 \\ 58850 & JADES-GS+53.09517\(-\)27.76061 & 6.263 & 3.8 & 3.1 \\ 17138 & JADES-GS+53.08604\(-\)27.74760 & 6.204 & 3.8 & 3.1 \\ 9365 & JADES-GS+53.16280\(-\)27.76084 & 5.917 & 7.6 & 6.2 \\ \hline \end{tabular} \end{table} Table 1: IDs, redshifts and exposure times for the Ly\(\alpha\) emitting galaxies identified in this study. Figure 2: Distribution of EW(Ly\(\alpha\)) and \(M_{\rm UV}\) (left) and redshift (right) of galaxies from our Deep and Medium observations. Also shown for comparison are measurements from the literature of LAEs at \(z\gtrsim 6\) from Tang et al. (2023), Jung et al. (2023), Ning et al. (2023) and Endsley et al. (2022b), as well as from GNz11 at \(z=10.603\)(Bunker et al., 2023) (see Section 2.10 for a brief explanation about each of these data sets). The LAEs presented in this work are much fainter in the UV than those that have been previously analysed in the literature. Our sample also includes the extremely high EW LAE with \(M_{\rm UV}\sim-17.0\) that was recently reported by Saxena et al. (2023). the H\(\alpha\) line is detected for all but one galaxy in our sample, and therefore, we primarily use the observed H\(\alpha\)/H\(\beta\) ratios to calculate the ionizing photon production efficiency, or \(\xi_{\rm ion}\), given by where H\(\alpha\) moves out of NIRSpec coverage we use H\(\beta\)/H\(\gamma\). ### Ionizing photon production efficiency We use the H\(\alpha\) flux (or H\(\beta\) when H\(\alpha\) is not within the spectral coverage) and the monochromatic luminosity at 1500 A to calculate the ionizing photon production efficiency, or \(\xi_{\rm ion}\), given by \[\left(\frac{\xi_{\rm ion}}{\rm erg^{-1}}\;{\rm Hz}\right)=\frac{N(H^{0})}{L_{ 1500,\rm int}} \tag{1}\] where \(N(H^{0})\) is the intrinsic hydrogen ionizing photon production rate in units of s\({}^{-1}\) and \(L_{1500,\rm int}\) is the intrinsic (dust-corrected) luminosity density at rest-frame 1500 A in units of erg s\({}^{-1}\) Hz\({}^{-1}\). The H\(\alpha\) line (or other Balmer lines) luminosity can be used to calculate the intrinsic ionizing photon production rate. Assuming \(T_{e}=10^{4}\) K and \(n_{e}=100\) cm\({}^{-3}\), \[N(H^{0})\times(1-f_{\rm esc})=7.3\times 10^{11}L({\rm H}\alpha) \tag{2}\] where \(f_{\rm esc}\) is the escape fraction of LyC photons out of the galaxy (e.g. 
Maseda et al., 2020; Simmonds et al., 2023). To calculate \(\xi_{\rm ion}\) for our LAEs, we assume Case-B recombination, i.e. \(f_{\rm esc}\)(LyC) \(=0\) (see Section 5, however, for a discussion about non-zero \(f_{\rm esc}\)(LyC)). ### Lyman-alpha escape fractions We now use the strength of Balmer emission lines seen in the spectrum together with the inferred dust attenuation to derive a Ly\(\alpha\) escape fraction for all LAEs in our sample. Assuming Case-B recombination, \(n_{e}=100\) cm\({}^{-3}\) and \(T_{e}=10,000\) K, the intrinsic Ly\(\alpha\)/H\(\alpha\) ratio is 8.2 (e.g. Osterbrock, 1989). We then calculate \(f_{\rm esc}\)(Ly\(\alpha\)) as the ratio of the observed (dust-corrected) Ly\(\alpha\) to Balmer line emission to the intrinsic ratio, which for the H\(\alpha\) emission line looks like: \(f_{\rm esc}\)(Ly\(\alpha\)) = L(Ly\(\alpha\))/(8.2\(\times\) L(H\(\alpha\))). The observed Ly\(\alpha\) properties, which include line fluxes, equivalent widths, velocity offset from systemic redshift and the Ly\(\alpha\) escape fraction (\(f_{\rm esc}\)) are given in Table 3. ### Comparison samples from the literature To put our results into a more global context while also increasing the baseline of several physical parameters that were also measured from our sample of faint LAEs, we describe here a selection of literature samples of LAEs at \(z\gtrsim 6\) with which we compare our results. Perhaps the most immediate comparison is offered by LAEs identified by Tang et al. (2023) using _JWST_ spectroscopy through the CEERS survey (see also Fujimoto et al., 2023). We also include CEERS results from Jung et al. (2023) in this study. Since CEERS is shallower and wider than JADES, it more efficiently selects the rare UV-bright galaxies by probing a much larger volume at high redshifts. We also use the compilation of \(z\gtrsim 6\) LAEs from Endsley et al. (2022) that have ALMA emission line measurements, enabling robust measurements of the Ly\(\alpha\) velocity offsets. This compilation includes LAEs from the ALMA REBELS survey (Bouwens et al., 2022) as well as other LAEs at \(z>6\): CLM1 (Cuby et al., 2003; Willott et al., 2015), WMt5 (Willott et al., 2015), B14-6566 (Furusawa et al., 2016; Hashimoto et al., 2019), EGS-zs8-1 (Oesch et al., 2015; Stark et al., 2017), COS-z7-1 (Pentericci et al., 2016; Laporte et al., 2017; Stark et al., 2017), COSMOS24108 (Pentericci et al., 2016, 2018b), NTTDF6345 (Pentericci et al., 2011, 2016), UDS16291 (Pentericci et al., 2016, 2018), BDF-3299 (Vanzella et al., 2011; Maiolino et al., 2015; Carniani et al., 2017), RXJ2248-ID3 (Mainali et al., 2017), A383-5.2 (Stark et al., 2015; Knudsen et al., 2016), VR7 (Matthee et al., 2019, 2020), CR7 (Sobral et al., 2015; Matthee et al., 2017) and Himiko (Ouchi et al., 2013; Carniani et al., 2018). We also include Ly\(\alpha\) and H\(\alpha\) based measurements from Ning et al. (2023) as well as Simmonds et al. (2023), which use narrow/medium band photometry to infer H\(\alpha\) strengths in spectroscopically confirmed LAEs at \(z\sim 6\) identified from MUSE data, enabling the determination of \(f_{\rm esc}\)(Ly\(\alpha\)) and \(\xi_{\rm ion}\). Finally, we also use the Ly\(\alpha\) emission measurements from GNz-11, spectroscopically confirmed to lie at \(z=10.60\) with weak Ly\(\alpha\) emission detected in the medium band NIRSpec gratings (Bunker et al., 2023). 
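To make the measurement chain of this section concrete, the sketch below strings together the Balmer-decrement dust correction, the \(\xi_{\rm ion}\) definition (Eqs. 1 and 2, with \(f_{\rm esc}({\rm LyC})=0\)) and the Case-B Ly\(\alpha\) escape fraction. It is an illustration rather than the survey pipeline; the attenuation-curve coefficients \(k({\rm H}\beta)\) and \(k({\rm H}\alpha)\) are left as inputs so that any adopted law (SMC here) can be plugged in, and the intrinsic Balmer ratio of 2.86 is the standard Case-B value at the temperature and density quoted above.

```python
import numpy as np

def ebv_from_balmer(f_ha_obs, f_hb_obs, k_hb, k_ha, intrinsic_ratio=2.86):
    """E(B-V) from the observed Halpha/Hbeta ratio and an attenuation curve k(lambda)."""
    return 2.5 / (k_hb - k_ha) * np.log10((f_ha_obs / f_hb_obs) / intrinsic_ratio)

def xi_ion(l_halpha_int, l_1500_int):
    """xi_ion [erg^-1 Hz] from dust-corrected Halpha [erg/s] and L_1500 [erg/s/Hz]."""
    n_h0 = 7.3e11 * l_halpha_int   # intrinsic ionizing photon rate [s^-1] (Eq. 2)
    return n_h0 / l_1500_int       # Eq. 1

def fesc_lya(l_lya_obs, l_halpha_int):
    """Ly-alpha escape fraction for an intrinsic Lya/Halpha ratio of 8.2 (Case B)."""
    return l_lya_obs / (8.2 * l_halpha_int)
```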
## 3 Spectroscopic properties of Lyman-alpha emitters at \(z\gtrsim 5.8\) In this section we explore the general spectroscopic properties of LAEs identified in the JADES Deep and Medium Tier surveys, with the aim of comparing the ionization and chemical enrichment of LAEs with the general galaxy population at \(z\gtrsim 6\) as well as evaluating the ionizing photon production efficiencies of LAEs across cosmic time. ### Chemical enrichment and dust In Figure 3, we show R23 vs. O32 line ratios for our Deep- and Medium-tier samples of LAEs. These line ratios are widely used tracers of metallicity and ionisation parameter respectively, with the former forming a two-valued relation with metallicity. For \begin{table} \begin{tabular}{l c c c} \hline \hline ID & \(z_{\rm spec}\) & \(M_{\rm UV}\) & \(\beta\) \\ \hline _Deep Tier_ & & & \\ 21842 & 7.982 & \(-18.75^{+0.05}_{-0.05}\) & \(-2.52\pm 0.03\) \\ 10013682* & 7.276 & \(-17.00^{+0.66}_{-1.93}\) & \(-2.17\pm 0.60\) \\ 16625 & 6.631 & \(-18.79^{+0.10}_{-0.10}\) & \(-2.59\pm 0.02\) \\ 18846 & 6.336 & \(-20.15^{+0.08}_{-0.05}\) & \(-2.43\pm 0.01\) \\ 19342 & 5.974 & \(-18.67^{+0.03}_{-0.03}\) & \(-2.75\pm 0.04\) \\ 9422 & 5.937 & \(-19.72^{+0.04}_{-0.04}\) & \(-2.33\pm 0.04\) \\ 6002 & 5.937 & \(-18.84^{+0.07}_{-0.07}\) & \(-2.59\pm 0.01\) \\ 19606 & 5.889 & \(-18.61^{+0.17}_{-0.13}\) & \(-2.70\pm 0.06\) \\ 10056849 & 5.814 & \(-17.95^{+0.07}_{-0.06}\) & \(-2.49\pm 0.04\) \\ _Medium Tier_ & & & \\ 12637 & 7.660 & \(-20.59^{+0.07}_{-0.07}\) & \(-2.20\pm 0.02\) \\ 15362 & 6.794 & \(-18.86^{+0.23}_{-0.29}\) & \(-2.14\pm 0.15\) \\ 13607 & 6.622 & \(-18.77^{+0.08}_{-0.08}\) & \(-1.79\pm 0.29\) \\ 14123 & 6.327 & \(-18.83^{+0.14}_{-0.12}\) & \(-2.26\pm 0.21\) \\ 58850 & 6.263 & \(-19.96^{+0.24}_{-0.30}\) & \(-1.93\pm 0.06\) \\ 17138 & 6.204 & \(-18.97^{+0.34}_{-0.33}\) & \(-2.26\pm 0.54\) \\ 9365 & 5.917 & \(-19.76^{+0.24}_{-0.19}\) & \(-2.52\pm 0.09\) \\ \hline \hline \end{tabular} \end{table} Table 2: Observed rest-frame UV magnitude (\(M_{\rm UV}\)) and slope (\(\beta\)) measured for LAEs in this study. comparison, we show \(z<0.1\) galaxies from the SDSS MPA-JHU catalogs (Aihara et al., 2011)1, as well as non-Lyman-\(\alpha\)-emitting galaxies at \(z>5.5\) from JADES Deep (Cameron et al., 2023), and measurements from individual and stacked galaxies at \(z\gtrsim 5\) not selected on presence or otherwise of Ly\(\alpha\)(Mascia et al., 2023; Nakajima et al., 2023; Sanders et al., 2023; Tang et al., 2023). Our LAEs on average appear to be metal poor with high ionization parameters, lying away from the locus of typical star-forming galaxies at \(z<0.1\) from SDSS toward high O32 and R23. Instead, they are more similar to what has been reported for the general galaxy population at \(z>6\). Footnote 1: [https://www.sdss3.org/dr10/spectro/galaxy_mpajhu.php](https://www.sdss3.org/dr10/spectro/galaxy_mpajhu.php) We do note that LAEs from our Deep Tier survey, which tend to have fainter UV magnitudes (\(\lesssim-20.1\)), show slightly higher O32 ratios and lower R23 ratios compared to LAEs found in the Medium Tier survey as well as other brighter LAEs at \(z>6\). This is consistent with the finding in Cameron et al. (2023) that \(z\sim 6\) galaxies from deep JADES observations show much higher O32, and lower R23 than those measured from stacks of CEERS galaxies (Sanders et al., 2023), which are typically brighter. This is indicative of higher ionization parameters and lower chemical enrichment in fainter, less massive galaxies at \(z>6\). 
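For reference, the two diagnostics plotted in Figure 3 are simple ratios of the (dust-corrected) line fluxes defined earlier; a minimal sketch, with the flux arguments as placeholders:

```python
def o32(f_oiii_5007, f_oii_3727):
    """[O III] 5007 / [O II] 3726,3729 (blended doublet)."""
    return f_oiii_5007 / f_oii_3727

def r23(f_oii_3727, f_oiii_4959, f_oiii_5007, f_hbeta):
    """([O II] 3726,3729 + [O III] 4959,5007) / Hbeta."""
    return (f_oii_3727 + f_oiii_4959 + f_oiii_5007) / f_hbeta
```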
Overall we find that the parameter space on this plot occupied by LAEs at \(z\gtrsim 6\) is roughly the same as that of the general galaxy population at these redshifts. This suggests that the detection of Ly\(\alpha\) emission from a galaxy in the EoR may not necessarily depend on the chemical or ionization state that it is in, but may be more driven by opportune sight-lines probing sufficiently ionized regions of the Universe. We do not measure any presence of dust from the Balmer decrements derived from H\(\alpha\)/H\(\beta\) ratios for our LAEs (in agreement with Sandles et al. in prep), suggesting that such systems are relatively dust-free, which is also a prerequisite for the leakage of significant fractions of Lyman continuum photons from a galaxy into the IGM.

### Ionizing photon production

The average ionizing photon production efficiency across our sample of faint LAEs is \(\log(\xi_{\rm ion}/{\rm erg^{-1}\,Hz})=25.56\), which is shown as a dashed line in Figure 4. When comparing with other measurements for LAEs at the highest redshifts in the literature, we do not see significant evolution in \(\xi_{\rm ion}\) as a function of redshift at \(z>6\). Interestingly, there does not seem to be any strong dependence of \(\xi_{\rm ion}\) on the equivalent width of Ly\(\alpha\) emission either. Assuming Case B recombination and \(f_{\rm esc}({\rm Ly}\alpha)\) of unity, \(\xi_{\rm ion}\) may be expected to increase linearly with EW(Ly\(\alpha\)). The fact that there is no clear correlation between these two quantities across a wider sample of known LAEs at \(z>6\) spanning orders of magnitude in brightness suggests that the mechanisms that are responsible for the production of ionizing photons in a galaxy are not the ones that also control the escape of Ly\(\alpha\) photons from the galaxy. In other words, the neutral gas and dust content, which preferentially affects the transmission of Ly\(\alpha\) photons, does not seem to closely depend on properties such as the stellar metallicities or ages that control the production of ionizing photons. Combined with the lack of redshift evolution in \(\xi_{\rm ion}\), the picture that emerges is that the ionizing photon production is not closely linked to the strength of the emergent Ly\(\alpha\) line emission, as it is likely more dependent on the physical and chemical properties of star-forming regions that do not seem to evolve strongly between \(z=6-8.5\). This may have important consequences for modelling the production and escape of ionizing photons from galaxies within the reionization epoch, which we will revisit in Section 5. The chemical enrichment and ionizing properties of the LAEs in this study are given in Table 4.

## 4 What governs the escape of Lyman-alpha photons?
In this section we explore which galaxy property best traces the Ly\(\alpha\) velocity offset from the systemic redshift as well as the escape fraction of Ly\(\alpha\) photons, \(f_{\rm esc}({\rm Ly}\alpha)\), which are widely regarded to be tracing escape channels for hydrogen ionizing LyC \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline ID & \(z_{\rm spec}\) & \(F_{\rm R100}^{\rm Ly\alpha}\) & EW\({}_{\rm R100}^{\rm Ly\alpha}\) & \(F_{\rm R1000}^{\rm Ly\alpha}\) & EW\({}_{\rm R1000}^{\rm Ly\alpha}\) & \(\Delta v_{\rm sys}^{\rm Ly\alpha}\) & \(f_{\rm esc}({\rm Ly}\alpha)\) \\ & & (\(\times 10^{-18}\) cgs) & (Å) & (\(\times 10^{-18}\) cgs) & (Å) & (km s\({}^{-1}\)) & \\ \hline _Deep Tier_ & & & & & & & \\ 21842 & 7.982 & \(0.3\pm 0.2\) & \(14.1\pm 5.7\) & \(0.7\pm 0.1\) & \(29.2\pm 3.3\) & \(166.5\pm 81.3\) & \(0.09\pm 0.01\) \\ 10013682* & 7.276 & \(1.5\pm 0.2\) & \(258.5\pm 43.0\) & \(2.2\pm 0.5\) & \(337.2\pm 175.5\) & \(178.4\pm 48.9\) & \(0.93\pm 0.12\) \\ 16625 & 6.631 & \(13.7\pm 2.0\) & \(32.0\pm 4.8\) & \(21.8\pm 4.8\) & \(51.0\pm 7.4\) & \(244.2\pm 50.9\) & \(0.14\pm 0.02\) \\ 18846 & 6.336 & \(4.2\pm 2.1\) & \(24.1\pm 0.9\) & \(7.7\pm 1.3\) & \(44.5\pm 1.7\) & \(139.4\pm 13.6\) & \(0.31\pm 0.01\) \\ 19342 & 5.974 & \(2.4\pm 0.2\) & \(55.1\pm 6.3\) & \(2.2\pm 0.5\) & \(49.9\pm 9.6\) & \(257.0\pm 37.1\) & \(0.24\pm 0.04\) \\ 9422 & 5.937 & \(9.3\pm 2.7\) & \(109.2\pm 14.7\) & \(10.6\pm 0.9\) & \(124.4\pm 17.2\) & \(147.6\pm 32.1\) & \(0.26\pm 0.01\) \\ 6002 & 5.937 & \(2.0\pm 0.2\) & \(35.3\pm 2.8\) & \(2.9\pm 0.6\) & \(50.5\pm 5.8\) & \(181.0\pm 81.5\) & \(0.43\pm 0.04\) \\ 19606 & 5.889 & 6.3 \(0.4\) & \(111.2\pm 26.3\) & \(-\) & \(-\) & \(-\) & \(0.50\pm 0.03\) \\ 10056849 & 5.814 & \(4.9\pm 0.3\) & \(127.0\pm 10.5\) & \(3.8\pm 0.5\) & \(97.2\pm 15.2\) & \(233.0\pm 75.5\) & \(0.42\pm 0.06\) \\ _Medium Tier_ & & & & & & & \\ 12637 & 7.660 & \(0.6\pm 0.2\) & \(3.5\pm 0.9\) & \(4.2\pm 0.3\) & \(24.0\pm 1.9\) & \(277.2\pm 48.4\) & \(0.13\pm 0.01\) \\ 15362 & 6.794 & \(-\) & \(-\) & \(1.5\pm 0.7\) & \(50.0\pm 28.2\) & \(27.0\pm 98.6\) & \(0.20\pm 0.07\) \\ 13607 & 6.622 & \(0.2\pm 0.2\) & \(2.9\pm 2.1\) & \(2.4\pm 0.9\) & \(29.4\pm 9.0\) & \(116.8\pm 141.1\) & \(0.26\pm 0.08\) \\ 14123 & 6.327 & \(7.0\pm 2.0\) & \(241.2\pm 160.8\) & \(4.3\pm 1.2\) & \(150.1\pm 99.6\) & \(194.2\pm 134.7\) & \(0.35\pm 0.07\) \\ 58850 & 6.263 & \(9.1\pm 0.8\) & \(4.8\pm 2.9\) & \(3.1\pm 0.8\) & \(16.3\pm 3.9\) & \(254.6\pm 145.3\) & \(0.06\pm 0.01\) \\ 17138 & 6.204 & \(3.0\pm 1.1\) & \(65.0\pm 22.8\) & \(4.6\pm 2.1\) & \(93.6\pm 40.9\) & \(0.0\pm 128.9\) & \(0.40\pm 0.10\) \\ 9365 & 5.917 & \(12.2\pm 1.4\) & \(109.5\pm 16.6\) & \(11.2\pm 1.9\) & \(118.4\pm 27.6\) & \(256.7\pm 139.6\) & \(0.28\pm 0.04\) \\ \hline \end{tabular} \end{table} Table 3: Observed Ly\(\alpha\) emission properties of LAEs in this study. The flux units are erg s\({}^{-1}\) cm\({}^{-2}\) (cgs). photons. The goal of this section is to determine the best tracer for LyC leakage when Ly\(\alpha\) emission may not be visible from galaxies in the reionization era. ### Low Lyman-alpha velocity offset leads to high Lyman-alpha escape fraction We begin by demonstrating that the escape fraction of Ly\(\alpha\) photons is strongly anti-correlated with the velocity offset of the Ly\(\alpha\) emission compared to the systemic redshift of a galaxy, as shown in Figure 5. 
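Statements of this kind about (anti-)correlations can be quantified with a simple rank statistic; the minimal illustration below uses a Spearman coefficient, which is our choice for illustration rather than necessarily the statistic adopted in the paper.

```python
from scipy.stats import spearmanr

def rank_correlation(x, y):
    """Spearman rank correlation and p-value between two measured quantities,
    e.g. f_esc(Lya) versus the Ly-alpha velocity offset."""
    rho, p_value = spearmanr(x, y)
    return rho, p_value
```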
This anti-correlation can be explained using neutral gas column densities - a low column density of neutral gas will lead to less scattering of Ly\(\alpha\) photons out of the line of sight, thereby resulting in both a high observed \(f_{\rm esc}({\rm Ly}\alpha)\) as well as low velocity offsets from systemic. Low neutral gas density environments are also though to be conducive to the escape of LyC photons from a galaxy (at least along the same line of sight as Ly\(\alpha\)). Therefore, both Ly\(\alpha\) velocity offsets and/or escape fractions can be important to ascertain the escape of ionizing photons from galaxies that drive reionization. In the following sections we explore correlations between various galaxy properties and each of Ly\(\alpha\) velocity offset and \(f_{\rm esc}({\rm Ly}\alpha)\) to establish dependencies and/or observational biases that impact the Ly\(\alpha\) strength and line profile in LAEs at \(z\gtrsim 6\). ### Insights from Lyman-alpha velocity offsets In Figure 6 we compare the Ly\(\alpha\) velocity offsets observed for our sample of galaxies with other observables that trace both the underlying stellar populations as well as the state of the ISM. We also include measurements of brighter LAEs in the EoR from the literature to increase the baseline of any trends that may become apparent. We find that the Ly\(\alpha\) velocity offsets anti-correlate with EW(Ly\(\alpha\)) as shown in Figure 6 (top-left), which has previously been reported in the literature across redshifts (e.g. Izotov et al., 2021). This anti-correlation mainly stems from the resonant scattering of Ly\(\alpha\) photons by the neutral gas within the galaxies - a higher velocity offset compared to the systemic redshift is indicative of more resonant scattering of the emergent Ly\(\alpha\) photons, which results in decreased Ly\(\alpha\) flux observed along the line-of-sight. Therefore, the same scattering mechanism is responsible for increased offset from systemic velocity as well as the reduction of EW(Ly\(\alpha\)) across galaxies. With high EW Ly\(\alpha\) emission that peaks close to the systemic redshift likely tracing low covering fractions of neutral gas, galaxies that exhibit such Ly\(\alpha\) profiles and strengths are also likely to be leaking significant amounts of LyC photons (Verhamme et al., 2015, 2017). For the sample of UV-faint LAEs probed by the JADES sample presented in this paper, we find the equivalent widths to be higher and the velocity offsets to be lower compared to UV-bright LAEs in the literature (Figure 6, top-right), which may indicate that UV-fainter LAEs could be more likely to host conditions required for efficient Ly\(\alpha\) as well as LyC escape, which we will explore in detail in the later sections. Particularly within the context of ionized bubbles within which LAEs in the EoR must reside to be able to freely transmit Ly\(\alpha\) photons along the line of sight, smaller velocity offsets are also expected to trace large ionized bubble sizes (see Mason and Gronke, 2020; Saxena et al., 2023, for example), which would lead to considerably less attenuation by the intervening IGM and therefore, higher transmission of Ly\(\alpha\) leading to high equivalent width measurements. Comparing the Ly\(\alpha\) offset with spectroscopic indicators of the ionization parameter (i.e. the ratio of ionizing photons to the hydrogen density) probed by indicators like the O32 ratio, we do not find any strong correlations between the two quantities (Figure 6, bottom-left). 
A high O32 ratio has been proposed as an indicator for both high Ly\(\alpha\) and LyC escape (e.g. Izotov et al. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline ID & \(z_{\rm spec}\) & \(F^{\rm H\beta}\) & \(F^{\rm H\alpha}\) & \(\log(\xi_{\rm ion}/{\rm Hz\ erg^{-1}})\) & [O iii]/[O ii] & ([O ii]+[O iii])/H\(\beta\) & \(f_{\rm esc}({\rm LyC})\) \\ & & (\(\times 10^{-19}\) cgs) & (\(\times 10^{-19}\) cgs) & & (O32) & (R23) & \\ \hline \multicolumn{7}{l}{_Deep Tier_} \\ 21842 & 7.982 & \(3.2\pm 0.5\) & \(-\) & \(25.59^{+0.05}_{-0.14}\) & \(11.5\pm 1.6\) & \(8.8\pm 1.1\) & \(0.08\pm 0.01\) \\ 10013682 & 7.276 & \(0.8\pm 0.2\) & \(-\) & \(25.66^{+0.07}_{-0.14}\) & \(12.0\pm 2.6\) & \(9.5\pm 1.7\) & \(0.03\pm 0.01\) \\ 16625 & 6.631 & \(5.9\pm 0.3\) & \(16.5\pm 0.2\) & \(25.69^{+0.03}_{-0.01}\) & \(24.5\pm 3.0\) & \(4.9\pm 0.5\) & \(0.04\pm 0.01\) \\ 18846 & 6.336 & \(11.4\pm 0.2\) & \(30.3\pm 0.3\) & \(25.32^{+0.01}_{-0.01}\) & \(25.8\pm 1.3\) & \(6.3\pm 0.2\) & \(0.07\pm 0.01\) \\ 19342 & 5.974 & \(4.1\pm 0.2\) & \(10.8\pm 0.3\) & \(25.42^{+0.01}_{-0.01}\) & \(29.5\pm 1.8\) & \(6.4\pm 0.4\) & \(0.06\pm 0.01\) \\ 9422 & 5.937 & \(1.8\pm 0.1\) & \(5.0\pm 0.7\) & \(25.65^{+0.01}_{-0.01}\) & \(70.6\pm 4.1\) & \(7.7\pm 0.3\) & \(0.01\pm 0.001\) \\ 6002 & 5.937 & \(2.8\pm 0.1\) & \(8.1\pm 0.2\) & \(25.19^{+0.02}_{-0.03}\) & \(10.9\pm 1.1\) & \(7.6\pm 0.6\) & \(0.06\pm 0.01\) \\ 19606 & 5.889 & \(4.6\pm 0.4\) & \(13.5\pm 0.5\) & \(25.51^{+0.04}_{-0.02}\) & \(21.9\pm 2.5\) & \(7.5\pm 0.7\) & \(0.06\pm 0.01\) \\ 10056849 & 5.814 & \(4.0\pm 0.4\) & \(10.9\pm 0.3\) & \(25.65^{+0.02}_{-0.02}\) & \(17.22\pm 1.3\) & \(4.6\pm 0.3\) & \(0.03\pm 0.01\) \\ \multicolumn{7}{l}{_Medium Tier_} \\ 12637 & 7.660 & \(13.9\pm 0.5\) & \(-\) & \(25.45^{+0.02}_{-0.02}\) & \(9.4\pm 0.6\) & \(9.7\pm 0.5\) & \(0.10\pm 0.01\) \\ 15362 & 6.794 & \(4.3\pm 0.4\) & \(11.8\pm 1.2\) & \(25.49^{+0.07}_{-0.02}\) & \(2.7\pm 0.3\) & \(6.2\pm 0.6\) & \(0.16\pm 0.02\) \\ 13607 & 6.622 & \(3.6\pm 0.5\) & \(11.9\pm 0.9\) & \(25.51^{+0.14}_{-0.02}\) & \(1.6\pm 0.1\) & \(11.5\pm 1.1\) & \(0.80\pm 0.11\) \\ 14123 & 6.327 & \(5.6\pm 0.9\) & \(15.1\pm 1.2\) & \(25.61^{+0.16}_{-0.25}\) & \(12.2\pm 2.6\) & \(8.9\pm 2.4\) & \(0.04\pm 0.01\) \\ 58850 & 6.263 & \(26.0\pm 1.7\) & \(65.9\pm 2.0\) & \(25.83^{+0.09}_{-0.12}\) & \(23.6\pm 4.7\) & \(11.2\pm 1.6\) & \(0.02\pm 0.01\) \\ 17138 & 6.204 & \(3.6\pm 3.0^{\dagger}\) & \(10.4\pm 1.9\) & \(25.31^{+0.10}_{-0.13}\) & \(1.8\pm 0.3\) & \(8.1\pm 1.2\) & \(0.38\pm 0.10\) \\ 9365 & 5.917 & \(11.1\pm 1.8\) & \(30.8\pm 1.3\) & \(25.40^{+0.13}_{-0.08}\) & \(12.3\pm 1.7\) & \(10.1\pm 1.2\) & \(0.14\pm 0.02\) \\ \hline \end{tabular} \({}^{\dagger}\) The H\(\beta\) line was affected by a possible cosmic ray in the spectrum, and therefore, H\(\alpha\) flux was used to estimate H\(\beta\) flux assuming no dust. \end{table} Table 4: Chemical enrichment, ionization properties and Lyman continuum escape fractions of Ly\(\alpha\) emitters presented in this study. The flux units are \(\rm erg\,s^{-1}\,cm^{-2}\) (cgs). 2021) and we do find that our UV-faint LAEs with small velocity offsets on average show high O32 ratios. However, given a lack of strong correlation between O32 and Ly\(\alpha\) velocity offset we conclude that high O32 ratios are a necessary but not a sufficient condition for efficient ionizing photon escape (see also Choustikov et al. 2023). 
We similarly do not find any strong correlation between Ly\(\alpha\) velocity offset and EW(H\(\beta\)), which is a good tracer for star-formation rates and consequently the production rates of hydrogen ionizing photons (Figure 6, bottom-right) and has been proposed as a robust indicator of \(f_{\rm esc}\)(LyC) (e.g. Zackrisson et al. 2013; Flury et al. 2022b). The requirement of a relatively high EW(H\(\beta\)) is perhaps similar in nature to the requirement of high O32 from LyC leaking galaxies, necessary but insufficient on its own to enable efficient LyC escape.

### Insights from the Lyman-alpha escape fraction

We now explore the dependence of \(f_{\rm esc}\)(Ly\(\alpha\)) measured directly from the spectra on other galaxy properties, and show the dependence of \(f_{\rm esc}\)(Ly\(\alpha\)) on EW(Ly\(\alpha\)), \(M_{\rm UV}\), O32 and EW(H\(\beta\)) in Figure 7. We find a strong correlation between \(f_{\rm esc}\)(Ly\(\alpha\)) and EW(Ly\(\alpha\)) (Figure 7, top-left). Since \(f_{\rm esc}\)(Ly\(\alpha\)) is calculated using the observed ratio of Ly\(\alpha\) to H\(\alpha\) (or H\(\beta\)) emission, this strong correlation suggests that the H\(\alpha\) (or H\(\beta\)) line fluxes do not scale in proportion with the Ly\(\alpha\) emission in LAEs with higher EW Ly\(\alpha\) emission. We note that the observed \(f_{\rm esc}\)(Ly\(\alpha\)) (as well as EW) increases consistently with decreasing UV magnitudes (top-right), which may be indicative of decreasing neutral gas covering fractions that potentially play a more important role in dictating the strength of the observed Ly\(\alpha\) emission than the intrinsic production of ionizing photons. However, we note that the lack of low \(f_{\rm esc}\)(Ly\(\alpha\)) (or low EW(Ly\(\alpha\))) detections from the faintest galaxies may also be a consequence of the flux limited nature of the spectroscopic data used in this study. It is also worth noting that the lack of high \(f_{\rm esc}\)(Ly\(\alpha\)) observations from the UV-brightest galaxies is again indicative of increasing neutral gas fractions in more luminous/massive systems, which likely attenuates and/or scatters the Ly\(\alpha\) flux along the line-of-sight. Comparing with O32 and EW(H\(\beta\)), we find that neither of these quantities correlates strongly with \(f_{\rm esc}\)(Ly\(\alpha\)) across our sample. We note that high O32 ratios and/or EW(H\(\beta\)) are perhaps needed to have a higher chance of observing high \(f_{\rm esc}\)(Ly\(\alpha\)), but there is considerable scatter in the O32 ratios that we measure for our LAEs, with some LAEs showing O32 \(<\) 3. Therefore, the O32 ratio may not be a good predictor of the expected \(f_{\rm esc}\)(Ly\(\alpha\)) (and consequently LyC) from galaxies in the reionization era.

### Role of the IGM in attenuating Lyman-alpha emission at \(z>6\)

Finally in this section, we look at the evolution of \(f_{\rm esc}\)(Ly\(\alpha\)) with redshift, focusing particularly on \(z\gtrsim 6\) where the IGM is expected to play a dominant role in attenuating Ly\(\alpha\) emission, unless the LAEs live in large ionized bubbles. In Figure 8 we show \(f_{\rm esc}\)(Ly\(\alpha\)) as a function of redshift, colour-coded by \(M_{\rm UV}\) for our LAEs along with others known at \(z\gtrsim 5.8\). A sharp decrease in \(f_{\rm esc}\)(Ly\(\alpha\)) with redshift is clearly apparent. It is also interesting to note that at any given redshift, UV-fainter galaxies exhibit higher \(f_{\rm esc}\)(Ly\(\alpha\)), as we had previously noted.
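The escape fractions discussed throughout this section are the ones derived earlier from the observed Ly\(\alpha\)-to-Balmer-line ratios. As a purely illustrative aside, the short sketch below shows that estimate under the assumption of standard Case B recombination intrinsic ratios (Ly\(\alpha\)/H\(\alpha\) \(\approx 8.7\), H\(\alpha\)/H\(\beta\) \(\approx 2.86\)) and no dust correction; the exact ratios and corrections adopted in this work may differ, and the fluxes used in the example are hypothetical.

```python
# Minimal sketch: Ly-alpha escape fraction from observed line fluxes,
# assuming Case B recombination with the commonly adopted intrinsic ratios
# Lya/Ha ~ 8.7 and Ha/Hb ~ 2.86. These ratios are assumptions for this sketch,
# not necessarily the values adopted in the paper; dust correction is ignored.

CASE_B_LYA_HA = 8.7   # intrinsic Ly-alpha / H-alpha (assumed)
CASE_B_HA_HB = 2.86   # intrinsic H-alpha / H-beta (assumed)

def lya_escape_fraction(f_lya, f_ha=None, f_hb=None):
    """Return f_esc(Lya) from an observed Ly-alpha flux and a Balmer line flux.

    Uses H-alpha when available, otherwise H-beta (all fluxes in the same units).
    """
    if f_ha is not None:
        intrinsic_lya = CASE_B_LYA_HA * f_ha
    elif f_hb is not None:
        intrinsic_lya = CASE_B_LYA_HA * CASE_B_HA_HB * f_hb
    else:
        raise ValueError("need an H-alpha or H-beta flux")
    return f_lya / intrinsic_lya

# Example with made-up fluxes (units of 1e-19 erg/s/cm^2):
print(lya_escape_fraction(f_lya=120.0, f_ha=30.3))  # ~0.46
```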
The question that arises from this is, what is driving the decrease of \(f_{\rm esc}\)(Ly\(\alpha\)) with redshift? Is the increasing neutrality of the IGM with redshift more dominant than the observational biases associated with being able to only observe UV-bright galaxies at high redshifts from flux limited studies? To explore this effect, in Figure 9 we show the \(f_{\rm esc}\)(Ly\(\alpha\)) as a function of redshift for only the UV-brightest galaxies with \(M_{\rm UV}<-19.5\) (arbitrarily chosen) to study the evolution in a more flux complete sample of LAEs across a large redshift baseline. This time, we colour-code the data points by Ly\(\alpha\) velocity offset, which is a good proxy for the size of the ionized bubble around the LAE. Figure 9 clearly shows that for UV-bright galaxies at \(z>6\), the decrease in \(f_{\rm esc}\)(Ly\(\alpha\)) is accompanied by an increasing Ly\(\alpha\) velocity offset at the highest redshifts, indicating that the decline of EW(Ly\(\alpha\)) seen with redshift in UV-bright LAEs is driven by the reduction in the sizes of the ionized bubbles traced by the Ly\(\alpha\) velocity offset from systemic redshift. This demonstrates that increased IGM attenuation at the highest redshifts is playing an important role in the observed evolution of EW(Ly\(\alpha\)), at least within the UV-brightest sample, by attenuating the emergent Ly\(\alpha\) flux and the measured \(f_{\rm esc}\). Therefore, a combination of selection effects as well as an increasingly neutral IGM, which manifests itself as smaller ionized bubbles around LAEs at the highest redshifts, plays an important role in regulating \(f_{\rm esc}\)(Ly\(\alpha\)). Estimates on the sizes of ionized regions around JADES LAEs have been presented in a companion paper (Witstok et al. submitted) and offer a powerful probe of the spatial as well as temporal evolution of the IGM neutral fraction in this field.

Figure 3: Comparison of R23 and O32 ratios of LAEs with the general galaxy populations inferred from NIRSpec observations at \(z\gtrsim 6\). Orange circles show JADES Deep galaxies from Cameron et al. (2023) which are _not_ LAEs. Diamonds and pentagons show measurements from individual lensed galaxies at \(z\gtrsim 5\) (Masci et al. 2023; Nakajima et al. 2023). Measurements from stacked galaxies at \(z\sim 5.6-7.7\) are shown as hollow purple markers (Sanders et al. 2023; Tang et al. 2023). Also shown are measurements from SDSS for \(z<0.1\) star-forming galaxies (Aihara et al. 2011). Overall, we find that our faint LAEs occupy a similar parameter space to that occupied by the general star-forming galaxy population at \(z\gtrsim 6\). This also highlights that the observed presence of Ly\(\alpha\) emission in reionization era galaxies is driven by external factors, and not necessarily by the ISM/stellar properties of the galaxies. Interestingly, UV-faint LAEs from our sample tend to show higher O32 and lower R23 ratios than the UV-bright ones.

The comparisons we have presented in this section demonstrate that to use Ly\(\alpha\) emission to infer significant LyC photon leakage from galaxies in the reionization era, both high \(f_{\rm esc}({\rm Ly}\alpha)\) and low Ly\(\alpha\) velocity offsets compared to systemic redshift are required. We have found that the dependences of both quantities on other spectroscopic and photometric galaxy properties are complex and are impacted by observational biases (most importantly the flux limited nature of spectroscopic observations in a field).
However, the detection of Ly\(\alpha\) emission from a \(z>6\) galaxy is nonetheless a powerful probe for identifying LyC leakage, as has also been noted at lower redshifts (Verhamme et al., 2017; Izotov et al., 2021; Saxena et al., 2022). In the next section we attempt to move beyond a simple \(f_{\rm esc}({\rm Ly}\alpha)\) and use all of the available spectroscopic and photometric indicators to estimate \(f_{\rm esc}({\rm Ly}{\rm C})\), which is the quantity that is needed to capture the contribution of galaxies to the reionization budget of the Universe at \(z\gtrsim 6\).

## 5 Implications for LyC photon production, escape and reionization

Although the presence of strong Ly\(\alpha\) emission peaking close to the systemic velocity has been used to infer high LyC escape fractions (Verhamme et al., 2015; Izotov et al., 2021; Naidu et al., 2022), the physics that control the escape of LyC photons from star-forming galaxies are much more complicated (e.g. Dijkstra, 2014; Barrow et al., 2020; Katz et al., 2020; Garel et al., 2021; Maji et al., 2022; Choustikov et al., 2023). The neutral gas content within a galaxy, in particular, can affect the Ly\(\alpha\) and LyC photons differently, which combined with the line-of-sight dependence of both Ly\(\alpha\) and LyC photon escape can often complicate the inference of LyC photon escape from Ly\(\alpha\) alone. For example, one of the most well-studied LyC leakers, _Ion1_ at \(z\approx 3.8\) (Vanzella et al., 2012; Ji et al., 2020), actually does not show any Ly\(\alpha\) emission, which demonstrates the complex relationship between Ly\(\alpha\) and LyC photons. Therefore, in this section we fold in other photometric and spectroscopic properties of our faint LAEs to make a more informed inference on the LyC escape fractions.

Figure 4: _Left:_ Ionizing photon production efficiency (\(\xi_{\rm ion}\)) from LAEs at \(z\gtrsim 6\) as a function of redshift. The dashed line indicates the average \(\xi_{\rm ion}\) value (\(\log(\xi_{\rm ion}/{\rm erg}^{-1}\,{\rm Hz})=25.56\)) measured across the new LAEs reported in this study. The shaded region marks the canonical value of \(\log(\xi_{\rm ion}/{\rm erg}^{-1}\,{\rm Hz})=25.2-25.3\) from Kuhlen & Faucher-Giguère (2012); Robertson et al. (2013, 2015). Overall, we do not find a significant evolution in the \(\xi_{\rm ion}\) of LAEs across redshifts particularly at \(z>6\), indicating that the ionizing properties do not seem to evolve strongly between \(z=6-8.5\). _Right:_ \(\log(\xi_{\rm ion})\) as a function of EW(Ly\(\alpha\)). Once again, there is no strong correlation between \(\xi_{\rm ion}\) and the strength of Ly\(\alpha\) emission across LAEs at \(z>6\), which one would naively expect from simple Case B recombination in the absence of significant absorption/scattering of Ly\(\alpha\) photons. This lack of correlation indicates that the processes that control the escape of Ly\(\alpha\) photons may not necessarily be dependent on the processes that produce ionizing photons.

Figure 5: Ly\(\alpha\) velocity offset vs. \(f_{\rm esc}({\rm Ly}\alpha)\), where a strong anti-correlation between these quantities is observed. High \(f_{\rm esc}({\rm Ly}\alpha)\) and small velocity offsets likely trace relatively dust and gas-free conditions, which neither lead to considerable resonant scattering of Ly\(\alpha\) photons as they travel along a sight line, nor attenuate Ly\(\alpha\) emission via absorption and scattering.
Several observational studies as well as simulations have attempted to connect the leakage of LyC photons to spectroscopic properties. Some of the most exciting observational results linking LyC leakage to galaxy properties are being delivered by the Low-z Lyman Continuum Survey (LzLCS; Flury et al. 2022a,b). State-of-the-art high-resolution cosmological simulations like SPHINX\({}^{20}\) are also now being used to study the dependence of LyC photon escape on galaxy properties (e.g. Rosdahl et al. 2022; Katz et al. 2023b; Choustikov et al. 2023), which offers much more control on the sample sizes and selection functions when attempting to use observations of galaxy properties to predict \(f_{\rm esc}\)(LyC). Here we use the relationship between \(f_{\rm esc}\)(LyC) and galaxy properties derived by Choustikov et al. (2023) to predict \(f_{\rm esc}\)(LyC) for our sample of LAEs. Choustikov et al. (2023) specifically focused on observables that trace conditions within galaxies that enable both the production as well as escape of LyC photons. Briefly, these conditions mainly require the galaxies to have (i) relatively high star-formation rates (sSFR \(>10^{-9}\) yr\({}^{-1}\)); (ii) stellar ages in the range \(3.5-10\) Myr, a time long enough for the first generation of supernovae to have cleared out channels in the ISM for LyC escape, while short enough that UV photons are still being produced in abundance by the stellar population, and (iii) low dust and neutral gas content. Using these criteria, Choustikov et al. (2023) report a six-parameter equation to predict the angle-averaged (and not sight-line dependent) \(f_{\rm esc}\)(LyC) based on observed galaxy properties. These parameters include the UV slope, \(\beta\), dust attenuation \(E(B-V)\) (typically measured from the Balmer line decrement), H\(\beta\) line luminosity, \(M_{\rm UV}\), R23 and O32. All of these parameters have been observed for our faint LAEs from JADES, which makes predicting \(f_{\rm esc}\)(LyC) using Equation (4) from Choustikov et al. (2023) relatively straightforward. We note that to predict the angle-averaged \(f_{\rm esc}\)(LyC), the properties of the Ly\(\alpha\) emission line, which can often be highly sight-line dependent, are not taken into account by Choustikov et al. (2023) when estimating \(f_{\rm esc}\)(LyC) (cf. Maji et al., 2022). The calculated \(f_{\rm esc}\)(LyC) values for our LAEs are given in Table 4.

Figure 6: Dependence of the Ly\(\alpha\) velocity offset on EW(Ly\(\alpha\)) (top-left), \(M_{\rm UV}\) (top-right), [O iii]/[O ii] ratio (O32) (bottom-left) and EW(H\(\beta\)) (bottom-right). We find an anti-correlation between Ly\(\alpha\) velocity offset and EW(Ly\(\alpha\)), which is likely driven by the neutral gas content in the galaxy, whereby a larger reservoir of neutral gas both attenuates Ly\(\alpha\) flux close to the systemic velocity as well as moves the apparent peak of the emission line away from systemic velocity due to resonant scattering of Ly\(\alpha\) photons. Ly\(\alpha\) emission from UV-fainter LAEs also peaks closer to the systemic redshift, which is indicative of decreasing neutral gas content at fainter luminosities/galaxy masses. No strong correlations exist between Ly\(\alpha\) offset and O32 ratios or EW(H\(\beta\)), interestingly with three low Ly\(\alpha\) velocity offset sources showing low (\(<3\)) O32 ratios.
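As an illustration of how the six-parameter relation described above is applied in practice, the skeleton below combines the listed observables into a predicted escape fraction. The linear-in-parameters, log-space form and the coefficient handling used here are placeholders only; the actual functional form and coefficients are those of Equation (4) of Choustikov et al. (2023) and are not reproduced here.

```python
import numpy as np

# Skeleton of the multi-parameter f_esc(LyC) prediction step described above.
# The functional form below (a linear combination in log space) and the
# coefficient names are illustrative assumptions; the published fit and its
# coefficients should be taken from Choustikov et al. (2023), Eq. (4).

def predict_fesc_lyc(beta, ebv, log_L_Hbeta, M_UV, R23, O32, coeffs):
    """Return a predicted angle-averaged f_esc(LyC) from six observables.

    `coeffs` maps each observable name (plus "intercept") to a fit coefficient
    taken from the published relation. Purely illustrative structure.
    """
    x = {"intercept": 1.0, "beta": beta, "ebv": ebv,
         "log_L_Hbeta": log_L_Hbeta, "M_UV": M_UV, "R23": R23, "O32": O32}
    log_fesc = sum(coeffs[name] * value for name, value in x.items())
    # Clip to the physically allowed range [0, 1].
    return float(np.clip(10.0 ** log_fesc, 0.0, 1.0))
```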
In Figure 10 we show \(f_{\rm esc}\)(LyC) calculated using observed galaxy properties compared with \(f_{\rm esc}\)(Ly\(\alpha\)) measured directly from the spectra (left) and the Ly\(\alpha\) velocity offset from the systemic (right) from our faint LAEs as well as those that we calculate using observed quantities from Tang et al. (2023). We find that the predicted \(f_{\rm esc}\)(LyC) remains below \(f_{\rm esc}\)(Ly\(\alpha\)) for all our LAEs, but two LAEs from Tang et al. (2023) have higher \(f_{\rm esc}\)(LyC) compared to \(f_{\rm esc}\)(Ly\(\alpha\)). In general we do not find any strong correlation between \(f_{\rm esc}\)(LyC) and \(f_{\rm esc}\)(Ly\(\alpha\)). We do find, qualitatively, that \(f_{\rm esc}\)(LyC) anti-correlates with Ly\(\alpha\) velocity offset, but we note that a low Ly\(\alpha\) velocity offset does not guarantee a high \(f_{\rm esc}\)(LyC) and that there is considerable scatter in the plot. From a sample of low redshift LyC leakers, Izotov et al. (2021) also found that \(f_{\rm esc}\)(LyC) tends to always be lower than \(f_{\rm esc}\)(Ly\(\alpha\)), and that \(f_{\rm esc}\)(LyC) weakly anti-correlates with Ly\(\alpha\) velocity offset from systemic, consistent with our findings. We do, however, note the lack of any LAE with high velocity offsets showing high \(f_{\rm esc}\)(LyC), which seems to suggest that low Ly\(\alpha\) velocity offsets are a necessary but not a sufficient condition to enable high LyC photon escape, mainly tracing the absence of high column density neutral gas, and that galaxies that show large Ly\(\alpha\) velocity offsets compared to systemic likely trace highly dense neutral gas conditions, which may not be conducive for significant LyC escape fractions. Figure 7: Dependence of the Ly\(\alpha\) escape fraction on EW(Ly\(\alpha\)) (top-left), \(M_{\rm UV}\) (top-right), [O iii]/[O ii] ratio (O32) (bottom-left) and EW(H\(\beta\)) (bottom-right). Unsurprisingly, we find that \(f_{\rm esc}\)(Ly\(\alpha\)) correlates strongly with EW(Ly\(\alpha\)). We also note that UV-faint galaxies show higher \(f_{\rm esc}\)(Ly\(\alpha\)), although this could be attributed purely to the flux limited nature of our spectroscopic survey. We do not find strong correlations between \(f_{\rm esc}\)(Ly\(\alpha\)) and O32 or H\(\beta\) strength, although as previously noted we do find that our faint LAEs on average show high O32 ratios and H\(\beta\) line strengths. Before assessing the co-dependence of ionizing photon production and escape from our faint LAEs, we note that when calculating \(\xi_{\rm ion}\) for our LAEs using Equation (2) in Section 2.8 we assumed \(f_{\rm esc}({\rm LyC})\) to be zero. However, with \(f_{\rm esc}({\rm LyC})\) predictions for our LAEs, we now calculate \(\xi_{\rm ion}^{\rm corr}\), which is corrected for the fraction of ionizing photons that escape out of the galaxy, thereby not contributing towards line emission. In this section going forward, we use the corrected \(\xi_{\rm ion}\) value, \(\xi_{\rm ion}^{\rm corr}\). We now explore the dependence of \(f_{\rm esc}({\rm LyC})\) on the corrected ionizing photon production efficiencies in Figure 11. Interestingly, we find that sources with the highest \(f_{\rm esc}({\rm LyC})\) do not necessarily show high values of \(\xi_{\rm ion}^{\rm corr}\). 
Interestingly there is a mild anti-correlation between the two quantities, which may not be entirely unexpected: when non-negligible fractions of ionizing photons begin escaping from the galaxy, there are fewer photons available to produce the Balmer line (as well as strong nebular line) emission (e.g. Topping et al. 2022). This would lead to low \(\xi_{\rm ion}^{\rm corr}\) values inferred when simply assuming Case-B recombination, which can consistently explain this observed mild anti-correlation. Another important effect that may be driving the scatter between \(f_{\rm esc}({\rm LyC})\) and \(\xi_{\rm ion}^{\rm corr}\) could be the expected time delays between significant production of ionizing photons and the emergence of escape channels that facilitate the escape of those photons. As noted in Choustikov et al. (2023) (but see also Barrow et al. 2020; Katz et al. 2020), for a burst of star formation it is not until \(\sim 3.5\) Myr after the starburst is triggered that supernovae begin to clear channels in the ISM to allow significant LyC photon escape. Very early on in the starburst, there is a very high production rate of ionizing photons, but these photons are unable to escape out of the H ii regions. Therefore, the age of the starburst and the time delay between the peak of ionizing photon production and the emergence of escape channels may lead to the observed scatter. Finally, we explore the dependence of the product of the ionizing photon production efficiency and the ionizing photon escape fraction (\(f_{\rm esc}({\rm LyC})\times\xi_{\rm ion}^{\rm corr}\)), which is an important quantity needed to assess the contribution of individual star-forming galaxies to the reionization budget of the Universe. Since this study has been limited only to strong LAEs at \(z\geq 6\), here we explore the dependence of the ionizing photon output into the IGM as a function of UV magnitude, EW(Ly\(\alpha\)) and redshift, which we show in Figure 12.

Figure 8: Evolution of \(f_{\rm esc}({\rm Ly\alpha})\) with redshift, colour-coded by the UV magnitude. We see a gradual decrease of \(f_{\rm esc}({\rm Ly\alpha})\) at increasing redshifts, and at the same redshifts we see a decreasing \(f_{\rm esc}({\rm Ly\alpha})\) with increasing UV luminosity as was previously noted. The decrease of \(f_{\rm esc}({\rm Ly\alpha})\) with redshift could be driven by both selection biases as well as increasing IGM neutral fraction, and we explore the redshift evolution of only the UV-bright LAEs in Figure 9.

Figure 9: \(f_{\rm esc}({\rm Ly\alpha})\) as a function of redshift for galaxies brighter than \(M_{\rm UV}<-19.5\), colour-coded by \({\rm Ly\alpha}\) velocity offsets. The symbols are the same as in Figure 8. We find that the \({\rm Ly\alpha}\) escape fraction also seems to decrease with redshift for brighter galaxies, which is likely driven by the size of the ionized bubbles surrounding each galaxy indicated by the increasing \(\Delta v_{\rm Ly\alpha}\) at the highest redshifts, indicating more neutral bubbles around the LAEs.
We begin by assessing the dependence of \(\log(f_{\rm esc}({\rm LyC})\times\xi_{\rm ion}^{\rm corr})\) on UV magnitude, finding that it increases very mildly (with a large scatter) with decreasing \(M_{\rm UV}\) as shown in Figure 12 (top-left), following the linear relation: \[\log\left(\frac{f_{\rm esc}\times\xi_{\rm ion}^{\rm corr}}{\rm erg^{-1}\,Hz}\right)=0.03\,(\pm 0.01)\,M_{\rm UV}+25.04\,(\pm 1.87) \tag{3}\] This lack of strong correlation for LAEs clearly disfavours enhanced ionizing photon output from UV-fainter LAEs, and may have important consequences for models of reionization. We further find that the ionizing photon output from LAEs also remains roughly constant over a range of EW(Ly\(\alpha\)), best fit with the linear relation: \[\log\left(\frac{f_{\rm esc}\times\xi_{\rm ion}^{\rm corr}}{\rm erg^{-1}\,Hz}\right)=3.1\times 10^{-4}\,(\pm 1.7\times 10^{-6})\,\left(\frac{\rm EW(Ly\alpha)}{\rm\AA}\right)+24.36\,(\pm 0.02) \tag{4}\] The little to no dependence of \(\log(f_{\rm esc}\times\xi_{\rm ion})\) on the equivalent width of Ly\(\alpha\) emission is shown in Figure 12 (top-right; note that the x-axis is in log scale), implying that purely the observed Ly\(\alpha\) line strength is not a good independent indicator of the ionizing photon output from LAEs in the epoch of reionization. Finally, since our LAEs (combined with those from Tang et al. 2023) cover a large redshift baseline, we also derive the dependence of the ionizing photon output of LAEs with redshift, finding the best-fitting relation: \[\log\left(\frac{f_{\rm esc}\times\xi_{\rm ion}}{\rm erg^{-1}\,Hz}\right)=0.04\,(\pm 0.01)\,z+24.08\,(\pm 0.53) \tag{5}\] Once again, we find that the ionizing photon output of LAEs remains roughly constant at \(z\geq 6\) as shown in Figure 12 (bottom). This is consistent with a picture whereby the production and escape of ionizing photons may be dictated by physical processes that operate on shorter timescales (i.e. intense star-formation or SNe activity). The best-fitting relations of ionizing photon output with UV magnitude, Ly\(\alpha\) EW and redshift can be used to estimate the total number of ionizing photons contributed by LAEs at a given redshift, depending on the space density of LAEs. Since the increasing neutrality of the IGM makes it impossible to obtain a complete Ly\(\alpha\) luminosity function at \(z>6\), certain assumptions about the evolution of the Ly\(\alpha\) luminosity function may need to be made (see Matthee et al. 2022, for example). Alternatively, if a good handle on the Ly\(\alpha\) emitter fraction (unaffected by the IGM attenuation) can be obtained at \(z>6\) by extrapolating from fractions measured at lower redshifts (e.g. Santos et al. 2020), then our best-fitting relations can also be used to estimate the total ionizing photon contribution towards the reionization budget from LAEs at \(z>6\). Any dependence of the LAE fraction on UV magnitude must also be taken into account for this calculation (see Jones et al. 2023).

## 6 Conclusions

In this study we have presented detailed properties of 16 faint Ly\(\alpha\) emitting galaxies at \(z>5.8\) from the _JWST_ Advanced Deep Extragalactic Survey (JADES) Deep and Medium Tier NIRSpec MSA surveys.
These new Ly\(\alpha\) emitters, spanning absolute UV magnitudes of \(-17.0\) to \(-20.6\), are generally fainter compared to LAEs that were previously known in or near the epoch of reionization, opening up a new window into studying the properties of faint galaxies that reside within ionized bubbles in the reionization era. Using measurements directly from the low resolution (R100) PRISM as well as medium resolution (R1000) grating spectra, we report the detection of other rest-frame optical emission lines such as [O ii], H\(\beta\), [O iii] and H\(\alpha\). The detection of these lines enables a reliable measure of their spectroscopic redshift against which the velocity offsets of Ly\(\alpha\) emission can be accurately measured.

Figure 10: Comparison of the calculated \(f_{\rm esc}({\rm LyC})\) using the relation from Choustikov et al. (2023) with the observed \(f_{\rm esc}({\rm Ly\alpha})\) (left, with the one-to-one relation shown as the dashed line) and Ly\(\alpha\) velocity offset (right) for LAEs. With the exception of two LAEs from Tang et al. (2023), the \(f_{\rm esc}({\rm LyC})\) we infer is always lower than \(f_{\rm esc}({\rm Ly\alpha})\), consistent with model predictions (e.g. Maji et al. 2022). Interestingly, one of our faint LAEs has \(f_{\rm esc}({\rm LyC})>0.2\). We find that qualitatively, \(f_{\rm esc}({\rm LyC})\) and Ly\(\alpha\) velocity offset anti-correlate, but a low Ly\(\alpha\) velocity offset does not necessarily guarantee high \(f_{\rm esc}({\rm LyC})\). Similar trends were reported for low redshift LyC leaking galaxies by Izotov et al. (2021).

In general, these LAEs have blue rest-UV spectral slopes (\(-2.1\) to \(-2.7\)) and little to no dust measured from Balmer decrements. Using rest-optical line ratios, we find that our LAEs appear to be metal poor with high ionization parameters, properties that are typical of _JWST_-detected faint star-forming galaxies at \(z>6\). These properties combined with steep UV slopes and no dust indicate that all of our LAEs are young, star-forming systems. We further measure the ionizing photon production efficiencies (\(\xi_{\rm ion}\)) directly from Balmer line emission and find that our LAEs on average have log(\(\xi_{\rm ion}\)/Hz erg\({}^{-1}\)) \(\approx 25.56\), which does not seem to evolve strongly with redshift. We also do not find a strong dependence of \(\xi_{\rm ion}\) on the strength of Ly\(\alpha\) emission. Using the Ly\(\alpha\) escape fraction (calculated using Balmer line emission) and the velocity offset of the peak of the Ly\(\alpha\) line compared to systemic redshift, we study the galaxy properties that govern the escape of Ly\(\alpha\) photons from reionization era galaxies. We note that the escape fraction of Ly\(\alpha\) photons is anti-correlated with the Ly\(\alpha\) velocity offset and correlated with the Ly\(\alpha\) equivalent width, consistent with expectations from Ly\(\alpha\) emission models as well as high-resolution galaxy simulations employing radiative transfer to track the escape of Ly\(\alpha\) emission. We also find that LAEs that are fainter in the UV show higher Ly\(\alpha\) escape fractions, although this could be attributed to the flux limited nature of our spectroscopic surveys. We do not find strong correlations between Ly\(\alpha\) escape fraction or velocity offset with key ISM indicators such as [O iii]/[O ii] ratios or H\(\beta\) equivalent widths.
We conclude that the escape of Ly\(\alpha\) emission is a complicated process, and may not necessarily depend strongly on the state of the ISM and stellar populations at any given time, especially at \(z>6\) when the IGM attenuation also plays an important role. We find a gradual decrease in Ly\(\alpha\) escape fractions with redshift, indicative of the growing role of IGM attenuation in diminishing Ly\(\alpha\) strengths at the highest redshifts. By making a UV cut to remove selection effects, we find that the Ly\(\alpha\) escape fraction still evolves weakly with redshift, but the escaping Ly\(\alpha\) emission is considerably more offset compared to systemic velocity at the highest redshifts, indicative of decreasing ionized fractions and sizes of the bubbles that must surround these LAEs. Making use of several photometric and spectroscopic indicators for our LAEs, we then predict escape fractions of hydrogen ionizing Lyman continuum photons. We find that with the exception of one LAE in our sample, the LyC escape fraction is always lower than the Ly\(\alpha\) escape fraction, with no significant correlations between the two. We also do not find any significant correlation between LyC escape fraction and \(\xi_{\rm ion}\), which can likely be explained by the reduced Balmer line emission in the presence of significant ionizing photon escape or by time delays between the production and escape of ionizing photons. By combining the production and escape of LyC photons (i.e. \(f_{\rm esc}\times\xi_{\rm ion}\)), we find that the quantity that is actually responsible for delivering ionizing photons from the galaxies to the IGM remains relatively consistent across UV magnitudes and EW(Ly\(\alpha\)), but increases gradually with redshift. Using these dependencies and assumptions about the Ly\(\alpha\) emitter fraction at any given redshift, more realistic models of reionization can be constructed. Deeper and wider spectroscopic surveys in the future will help expand the samples of known LAEs in the reionization era. The availability of other spectroscopic indicators tracing the nature of stellar populations, ISM ionization and chemical conditions would be key to assess the role of Ly\(\alpha\) emitting galaxies in driving cosmic reionization, helping build more realistic models charting the reionization history of the Universe.

###### Acknowledgements.

AS thanks Harley Katz, Richard Ellis, Jorryt Matthee and Anne Verhamme for insightful discussions. AS, ABJ, GOCI, AJC and JC acknowledge funding from the "FirstGalaxies" Advanced Grant from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 789056). JW, RM, MC, TL, LS & JS acknowledge support by the Science and Technology Facilities Council (STFC), ERC Advanced Grant 695671 "QUENCH". JW also acknowledges support from the Foundation (METAO). SA acknowledges support from the research project PID2021-2717818-2 10-0 of the Spanish Ministry of Science and Innovation/State Agency of Research (MICIN/AEI). RB acknowledges support from an STFC Ernest Rutherford Fellowship (SNF/10703959/1). KB acknowledges support from the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE17100013. SC acknowledges support by the European Union's ERC Starting Grant No. 101040227 (WINGS). ECL acknowledges support of an STFC Webb Fellowship (ST/W001438/1).
DJE is supported as a Simons Investigator and by the JWST/NIRCam contract to the University of Arizona, NAS5-02015. DJB, JER and MR acknowledge support from the NIRCam Science Team contract to the University of Arizona, NAS5-02015. RS acknowledges support from an STFC Ernest Rutherford Fellowship (SNF/1080431). The work of CCW is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with programs #1180 and #1210.

Figure 11: Corrected \(\xi_{\rm ion}\) vs \(f_{\rm esc}\)(LyC) calculated using the multi-parameter fit as described in the text. We find that LAEs that show higher \(f_{\rm esc}\)(LyC) have lower measured \(\xi_{\rm ion}\) compared to the sample average, which implies that a significant fraction of escaping ionizing photons will lead to decreased Balmer (and potentially nebular) line strengths at a given star-formation rate (e.g. Topping et al. 2022). Additionally, this lack of correlation may also arise due to a time delay between the production and escape of ionizing photons from galaxies, whereby intense star-formation activity that produces ionizing photons needs time to clear out channels to also facilitate LyC escape, as has been reported from several high-resolution simulations (e.g. Barrow et al. 2020; Katz et al. 2020; Choustikov et al. 2023).
2304.05051
FashionSAP: Symbols and Attributes Prompt for Fine-grained Fashion Vision-Language Pre-training
Fashion vision-language pre-training models have shown efficacy for a wide range of downstream tasks. However, general vision-language pre-training models pay less attention to fine-grained domain features, while these features are important in distinguishing the specific domain tasks from general tasks. We propose a method for fine-grained fashion vision-language pre-training based on fashion Symbols and Attributes Prompt (FashionSAP) to model fine-grained multi-modalities fashion attributes and characteristics. Firstly, we propose the fashion symbols, a novel abstract fashion concept layer, to represent different fashion items and to generalize various kinds of fine-grained fashion features, making modelling fine-grained attributes more effective. Secondly, the attributes prompt method is proposed to make the model learn specific attributes of fashion items explicitly. We design proper prompt templates according to the format of fashion data. Comprehensive experiments are conducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and FashionSAP gets SOTA performances for four popular fashion tasks. The ablation study also shows the proposed abstract fashion symbols, and the attribute prompt method enables the model to acquire fine-grained semantics in the fashion domain effectively. The obvious performance gains from FashionSAP provide a new baseline for future fashion task research.
Yunpeng Han, Lisai Zhang, Qingcai Chen, Zhijian Chen, Zhonghua Li, Jianxin Yang, Zhao Cao
2023-04-11T08:20:17Z
http://arxiv.org/abs/2304.05051v1
# FashionSAP: Symbols and Attributes Prompt for Fine-grained Fashion Vision-Language Pre-training

###### Abstract

Fashion vision-language pre-training models have shown efficacy for a wide range of downstream tasks. However, general vision-language pre-training models pay less attention to fine-grained domain features, while these features are important in distinguishing the specific domain tasks from general tasks. We propose a method for fine-grained fashion vision-language pre-training based on fashion Symbols and Attributes **P**rompt (FashionSAP) to model fine-grained multi-modalities fashion attributes and characteristics. Firstly, we propose the fashion symbols, a novel abstract fashion concept layer, to represent different fashion items and to generalize various kinds of fine-grained fashion features, making modelling fine-grained attributes more effective. Secondly, the attributes prompt method is proposed to make the model learn specific attributes of fashion items explicitly. We design proper prompt templates according to the format of fashion data. Comprehensive experiments are conducted on two public fashion benchmarks, i.e., FashionGen and FashionIQ, and FashionSAP gets SOTA performances for four popular fashion tasks. The ablation study also shows the proposed abstract fashion symbols, and the attribute prompt method enables the model to acquire fine-grained semantics in the fashion domain effectively. The obvious performance gains from FashionSAP provide a new baseline for future fashion task research.1

Footnote 1: The source code is available at [https://github.com/hssip/FashionSAP](https://github.com/hssip/FashionSAP)

## 1 Introduction

Vision-Language pre-training (VLP) attracts wide attention [10, 16, 17, 18, 43] as a foundation for general multi-modal tasks. VLP methods aim at learning multimodal knowledge from large-scale text and image pairs data containing common objects in daily life. For example, MSCOCO [19], a public vision-language benchmark, is introduced with common object labels. The fashion domain is an important application of VLP methods, where the online retail market needs retrieval and recommendation services. To satisfy such requirements, the VLP model needs to learn high-quality representations containing fine-grained attributes from the fashion data. Many works have adapted general VLP models to fashion tasks directly. However, the general pre-training models are not effective for learning fashion knowledge to describe fashion items comprehensively, as the fashion descriptions are usually associated with fine-grained attribute-level features. As illustrated in Fig. 1, the description text of a fashion item (right) refers to fine-grained attributes like long sleeves, while such features are ignored by the descriptions from general vision-language data (left). Moreover, the public fashion producers from fashion platforms attach great importance to some definite attributes (_e.g_., season, gender) of fashion data and especially provide the fashion attributes annotations. However, these high-quality attributes are highly neglected by existing fashion VLP models. It is important for fashion VLP models to focus on these fine-grained attributes and learn fashion-specific knowledge.

Figure 1: Two text-image instances from general (a) and fashion domain (b). The captions of the general domain only describe object-level (underlined words) image content, while fashion domain captions emphasise attribute-level semantics.
Fashion attributes describe not only item details but also the overall item features. The category for fashion items is an essential attribute highlighted by many benchmark datasets [31, 42, 50]. We notice that categories have a deep correlation to fine-grained attributes, although they describe the general information of a fashion item. For example, the length is an important attribute for both pants and jeans, while it is rarely mentioned in the description of a pair of shoes. However, most existing fashion VLP methods neglect the importance of the relationship between similar categories. In this paper, we explore the usage of category attributes as a global concept layer during pre-training. Following how humans describe a fashion product, we believe the category conveys the basic understanding of that product. Therefore, we attach the fashion category to the beginning of captions to guide the representation learning. Since fashion products are designed for the decoration of people, we summarize nine fashion symbols corresponding to human body parts, as shown in Tab. 1, to unify all the categories of fashion items. We propose a method for the fashion domain to learn fine-grained semantics. This method is able to capture the similarity of fine-grained features based on fashion symbols and learn explicit fine-grained fashion attributes by the prompt. Our method gets the SOTA performance for four popular fashion tasks on the two public datasets, and the obvious performance gains provide new baselines for further research. Our main contributions are summarized below:

* An effective fine-grained vision-language pre-training model is proposed to learn the attribute-level fashion knowledge.
* An abstract fashion concept layer is proposed, and 9 fashion symbols are summarized to represent various fashion concepts according to their similarities on body parts and product functions.
* The attributes prompt method enables the cross-modalities pre-training model to explicitly learn fine-grained fashion characteristics.

## 2 Related Work

**Vision-Language Pre-training** The pre-training of the vision-language model has been used in many works [10, 16, 17, 18, 48]. The structure of the VLP model mainly includes two types, single-stream and two-stream. The single-stream models [18, 48] encode the image and text into preliminary representations and concatenate them so that they can interact with each other in a unified model (_e.g_. transformer [38]). Two-stream models [11, 41, 30] encode text and image separately, and the features interact with each other through semantic alignment tasks. Some works [16, 47, 43, 17] combine single-stream and two-stream designs with multi-step semantic alignment tasks. The backbones for the text and image encoders follow the structure of unimodal models [2, 3, 9]. One-stream models usually perform better than two-stream models, while the latter are better than the former in time complexity. We design a model that combines one-stream and two-stream structures to adapt to fashion tasks.

**Vision-Language Model for Fashion** Tasks in the fashion domain include cross-modal retrieval, matching and generation [6], similar to the general vision-language domain. There are also many datasets collected and released for fashion tasks [6, 21, 37, 31, 40, 42].
KaleidoBERT [4] designs multiple stages to refine the salient features of fashion items by utilizing multiple single-task frameworks. FashionViL [7] uses an end-to-end framework to pre-train the model on multiple single tasks, following general vision-language models. These works try to use attributes of fashion items by attaching all the category attributes to the same classification task. There are also some works aiming at specific fashion tasks [1, 5, 13, 26, 45, 39] by designing a variety of gating and routing structures for the latent features of fashion items. An exact representation of each attribute is essential for fashion models. We propose a model that can obtain latent features and knowledge in the fashion domain at the pre-training stage.

\begin{table} \begin{tabular}{c l l} \hline \hline Fashion Symbols & Categories & Definition Rules \\ \hline _TOPS_ & tops, shirt, polo, sweater,... & upper body \\ _DRESSES_ & dress, suit, shift,... & up-to-lower body \\ _SKIRTS_ & skirt, sarong, slit, kilt,... & lower body \\ _COATS_ & jacket, parka, blazer, duffle,... & associated with others \\ _PANTS_ & jeans, shorts, breeches,... & lower body \\ _SHOES_ & boots, sneakers, pump, loafers,... & feet \\ _BAGS_ & clutches, pouches, wristlet,... & bag \& decorative \\ _ACCESSORIES_ & ring, sunglasses, accessories, & decorative \& optional \\ & hat, necklace,... & \\ _OTHERS_ & swim-wear, lingerie, lounge-wear,... & - \\ \hline \hline \end{tabular} \end{table} Table 1: Fashion symbols and corresponding categories with definition rules.

**Prompt Learning** Prompt learning is an effective method to transfer the pre-training model to accomplish downstream tasks in Natural Language Processing [15, 20, 32, 33, 35]. It can utilize the knowledge from the large-scale pre-training model in low-resource scenarios with appropriate prompts. The wording of the prompt is the crucial aspect of task transfer [14, 23], as proper trigger words can better activate the knowledge in the pre-training model. Prompt-based methods are also used in task-oriented model training [15, 35] to utilize the resources of the task. These methods diversify a single instance from multiple perspectives into multiple instances. There are also some works applying prompt learning to multi-modal settings [36, 44]. We design two prompt templates on the text side to adapt to different kinds of attributes.

## 3 Methodology

In this section, we first introduce the preliminary of fashion symbols and attributes prompt in Sec. 3.1. Then we describe the architecture of the FashionSAP network in Sec. 3.2. Afterward, we elaborate on five pre-training tasks in Sec. 3.3.

### Preliminary

#### 3.1.1 Fashion Symbols Definition

The category is an essential attribute of a fashion item. However, the category terms vary across datasets. For example, the widely used FashionIQ [40] provides \(3\) kinds of categories while FashionGen [31] has \(48\). To address the problem, we propose a concept semantic layer to embed similar category terms into the same fashion symbol. The symbols are defined by the following rules:

1. **Body Part**: fashion items that are associated with a specific part of the human body.
2. **Function**: fashion items that are optionally used for decoration and can be dressed on multiple body parts.

For the datasets in this paper, we propose nine symbols to summarize different categories of fashion items. As shown in Tab.
1, the fashion symbols _PANTS_, _SKIRTS_, _SHOES_, _BAGS_ have their unique features. _TOPS_ is a kind of upper clothing that can be worn independently. _DRESSES_ can cover the whole body and exist independently. _COATS_ represents the outerwear usually worn with other clothing. _ACCESSORIES_ represents the accessories that aim to enhance the whole outfit but are not necessary for a basic outfit. _OTHERS_ includes fashion items that do not appear in everyday dressing and public occasions. We use an embedding layer to learn the representation of these fashion symbols as shown in Fig. 2. We enumerate all categories and corresponding fashion symbols in practice.

#### 3.1.2 Fine-grained Attribute Prompt

Existing works suggest that the fashion items are usually annotated from multiple perspectives [50]. Most benchmark datasets [21, 31, 40, 42] focus on fine-grained attributes when annotating fashion items. However, most general vision-language models focus on object-level semantics and seldom pay attention to attribute-level semantics, which contain many fine-grained characteristics for fashion items. Therefore, we propose a method to utilize these fine-grained attributes by prompting. The attribute format of _key-value_ is concordant with the prompt format of _description-value_ [35, 33]. According to this schema, we encode fine-grained fashion attributes in sequence format so that our model can capture the inner interaction between name and value. The attributes prompt tells the model precisely the ownership between attribute value and name to utilize the latent semantics from the language model. We design two prompt templates to tackle the diversity of patterns of fashion attributes. The first template covers the enumerable attributes. This kind of attribute has a textual name and an enumerable value from a defined finite set, where each attribute has a unique value. The first template is: \[\mathcal{T}_{e}=\texttt{the\ image\ attribute\ [An]\ is\ [Au]}\] where [An] is the slot to be filled with the attribute name, and [Au] is filled with the attribute value. Another template covers the binary attributes. This kind of attribute annotates a fashion item with a binary value to illustrate whether the fashion item has a certain feature, _e.g_. {red, pure cotton}. For binary attributes, the template is: \[\mathcal{T}_{b}=\texttt{is\ image\ attribute\ [Ab]?\ [As]}\] where [Ab] is filled with the binary attribute label, and [As] is filled with the positive answer word yes or the negative no as the attribute value. We concatenate \(\mathcal{T}_{e}\) or \(\mathcal{T}_{b}\) to the tail of the caption tokens during the pre-training stage.

### Model Architecture

As illustrated in Fig. 2, FashionSAP consists of an image encoder, a text encoder and a feature fusion module. An image is encoded to \(\mathbf{I}\), \[\mathbf{I}=\{\mathbf{v}_{img},\mathbf{v}_{io},\mathbf{v}_{i_{1}},\mathbf{v}_{i_{2}},...,\mathbf{v}_{i_{N}}\}\in\mathbb{R}^{(i_{N}+1)\times d}\] where \(\mathbf{v}_{i}\) is a feature vector of a patch of the image generated by the image encoder, \(d\) is the dimension of the latent semantic space and \(i_{N}\) is the number of patches of the input image. We concatenate the fashion symbol between the BERT token [CLS] and the fashion text to form a new text sequence shown in the upper-left of Fig. 2.
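As an illustration (not the released FashionSAP code), the sketch below assembles this text input: the item category is mapped to a fashion symbol following Tab. 1, and attribute prompts instantiated from \(\mathcal{T}_{e}\) and \(\mathcal{T}_{b}\) are appended to the caption. In the actual model the tokenizer and embedding layers handle [CLS] and the symbol embedding; the category map here is abbreviated and the example caption and attributes are hypothetical.

```python
# Illustrative sketch of the prompted text sequence from Secs. 3.1-3.2:
# "[CLS] [SYMBOL] caption <attribute prompts>".

# Partial category-to-symbol map following Tab. 1 (abbreviated).
CATEGORY_TO_SYMBOL = {
    "tops": "TOPS", "shirt": "TOPS", "sweater": "TOPS",
    "dress": "DRESSES", "skirt": "SKIRTS", "jacket": "COATS",
    "jeans": "PANTS", "shorts": "PANTS", "boots": "SHOES",
    "clutches": "BAGS", "ring": "ACCESSORIES",
}

TEMPLATE_ENUM = "the image attribute {name} is {value}"     # T_e
TEMPLATE_BINARY = "is image attribute {name}? {answer}"     # T_b

def build_text_input(category, caption, enum_attrs, binary_attrs):
    """Return the token string "[CLS] [SYMBOL] caption <prompts>".

    enum_attrs:   enumerable attributes, e.g. {"season": "spring"}
    binary_attrs: binary attributes,     e.g. {"pure cotton": True}
    """
    symbol = CATEGORY_TO_SYMBOL.get(category.lower(), "OTHERS")
    prompts = [TEMPLATE_ENUM.format(name=k, value=v) for k, v in enum_attrs.items()]
    prompts += [TEMPLATE_BINARY.format(name=k, answer="yes" if v else "no")
                for k, v in binary_attrs.items()]
    return " ".join(["[CLS]", symbol, caption] + prompts)

# Hypothetical example item:
print(build_text_input("jeans", "slim-fit denim with long legs",
                       {"season": "spring"}, {"pure cotton": True}))
```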
The text sequence is embedded to \(\mathbf{E}_{t}\), \[\mathbf{E}_{t}=\{\mathbf{e}_{cls},\mathbf{e}_{symbol},\mathbf{e}_{t_{0}},...,\mathbf{e}_{t_{N}}\} \in\mathbb{R}^{(t_{N}+2)\times d_{e}}\] where \(t_{N}\) is the length of fashion text tokens sequence and \(d_{e}\) is the dimension of text embedding space. The embedding \(\mathbf{E}_{t}\) is encoded into \(\mathbf{T}\), \[\mathbf{T}=\{\mathbf{v}_{cls},\mathbf{v}_{symbol},\mathbf{v}_{t_{0}},...,\mathbf{v}_{t_{N}}\}\in \mathbb{R}^{(t_{N}+2)\times d}\] For the case in Fig. 2, the \(\mathbf{e}_{symbol}\) is specific to \(\mathbf{e}_{tops}\) and \(\mathbf{v}_{symbol}\) is specific to \(\mathbf{v}_{tops}\). Then FashionSAP uses a feature fusion module to fuse the features from the text and image into hybrid feature \(\mathbf{H}\). The feature fusion module is implemented as multiple cross-attention layers from transformer [38]. The feature of _k-th_ cross-attention layer is calculated as Eq. (1) \[CA^{k}(\mathbf{T},\mathbf{I})=softmax(\frac{(W_{T}^{k}\mathbf{T})(W_{I_{1}}^{k}\mathbf{I})^{ \top}}{\sqrt{d}})(W_{I_{2}}^{k}\mathbf{I})^{\top} \tag{1}\] where \(W_{T}^{k}\), \(W_{I_{1}}^{k}\) and \(W_{I_{2}}^{k}\in\mathbb{R}^{d\times d}\) are attention parameters in _k-th_ cross-attention layer. ### FashionSAP Pre-training Tasks #### 3.3.1 Fashion Symbol Image Similarity (FSIS) This task makes the model capture the features from both text and image by maximizing the similarity between the image and the fashion symbol. In this task, the fashion symbol is concatenated between [CLS] and the description tokens as shown in the upper-left of Fig. 2. Let \(\mathbf{v}_{symbol}\) denote the feature vector of the fashion symbol in the text side and \(\mathbf{v}_{img}\) denote the feature vector of the image side. We use an adaptive layer \(Adp(\cdot)\) to project the feature vector into adapted latent space. Let \(\hat{\mathbf{v}}_{symbol}=\mathrm{norm}(Adp(\mathbf{v}_{symbol}))\in\mathbb{R}^{d_{1}}\) denote the adapted fashion symbol feature and \(\hat{\mathbf{v}}_{img}=\mathrm{norm}(Adp(\mathbf{v}_{img})\in\mathbb{R}^{d_{1}}\) denote the adapted image feature and \(d_{1}\) is the dimension of adapted latent space. The similarity between the fashion symbols and images is measured by modified vector cosine distance as Eq. (2) \[\mathcal{L}_{fsis}=\frac{1}{B}[1-\sum_{b=1}^{B}\frac{1}{2}[\hat{\mathbf{v}}_{img }^{b}(\hat{\mathbf{v}}_{symbol}^{b})^{\top}+1]] \tag{2}\] where \(\mathrm{norm}\) is the normalize function, \(B\) is the size of minibatch, \(\hat{\mathbf{v}}_{img}^{b}\) is the _b-th_ image adapted feature vector and \(\hat{\mathbf{v}}_{symbol}^{b}\) is the _b-th_ fashion symbol feature vector. Figure 2: An overview of the FashionSAP framework. The fashion symbol(\(fsis\) task), attribute prompt (\(ptp\) task) and token replace(\(trp\) task) are all removed in finetune stage. #### 3.3.2 Prompt Token Prediction (PTP) The goal of the PTP task is to improve the capacity of the model for learning from fine-grained attributes through predicting the correct token under a prompt. In this task, we choose a proper template \(\mathcal{T}\) and use blank tokens to randomly hold the places of the name or value tokens with a probability of 0.5 to generate attribute input. This task minimizes the cross-entropy loss(\(G\)). In addition, we use masked language modeling (MLM) task in model pre-training with loss calculated by \(G\) as well. So we merge these two losses as Eq. 
(3) \[\mathcal{L}_{ptp}=\mathbb{E}_{(\mathbf{T}_{ptp},\mathbf{I})\sim D}G(\mathbf{y}_{ptp},\mathbf{g}_{ptp}(\mathbf{H}_{ptp})) \tag{3}\] where \(\mathbf{H}_{ptp}=[\mathbf{H}_{mlm}\oplus\mathbf{H}_{pmt}]\) and \(\oplus\) means the concatenation between two sequences, \(\mathbf{y}_{ptp}=[\mathbf{y}_{mlm}\oplus\mathbf{y}_{pmt}]\), and \(\mathbf{H}_{mlm},\mathbf{H}_{pmt}\) are hybrid features generated by the feature fusion module with masked tokens and prompt tokens as input respectively. \(\mathbf{y}_{ptp}\) is the ground-truth and \(\mathbf{g}_{ptp}(\mathbf{H}_{ptp})\) is the predicted probability distribution of the prompt token prediction task.

#### 3.3.3 Token Replace Prediction (TRP)

In this task, first, we choose some tokens (ratio of 0.15) from the caption and one of the attribute values. Then, half of the chosen tokens are replaced by antonyms found with WordNet [27], following [46], and the other half are replaced by random tokens from the vocabulary. This task aims at predicting whether the input tokens are substituted (labels 0 or 1). The loss is shown in Eq. (4) \[\mathcal{L}_{trp}=\mathbb{E}_{(\mathbf{T}_{trp},\mathbf{I})\sim D}G(\mathbf{y}_{trp},\mathbf{g}_{trp}(\mathbf{H}_{trp})) \tag{4}\] where \(\mathbf{y}_{trp}\) is the ground-truth binary label and \(\mathbf{g}_{trp}(\mathbf{H}_{trp})\) is the predicted probability distribution of the replacement task.

#### 3.3.4 Image Text Similarity (ITS)

This task aims at measuring the similarity between the text and the image. We use momentum contrastive learning [8, 17, 28] in this task to take full advantage of text-image pairs. As momentum contrastive learning requires mirror encoders for momentum updating, the vector \(\mathbf{v}_{cls}\) denotes the whole semantics from the text and \(\mathbf{v}^{\prime}_{cls}\) is the corresponding vector generated by the momentum text encoder. Similarly, the vector \(\mathbf{v}_{img}\) denotes the whole feature from the image and \(\mathbf{v}^{\prime}_{img}\) is generated by the momentum image encoder. Momentum distillation [16, 17] is also used for label smoothing. For each pair of text and image, the similarities between them are given by Eq. (5) and Eq. (6) \[\mathrm{sim}(\mathbf{T},\mathbf{I})=\mathrm{norm}(W_{T}\mathbf{v}_{cls})\,\mathrm{norm}(W_{I}\mathbf{v}^{\prime}_{img})^{\top} \tag{5}\] \[\mathrm{sim}(\mathbf{I},\mathbf{T})=\mathrm{norm}(W_{I}\mathbf{v}_{img})\,\mathrm{norm}(W_{T}\mathbf{v}^{\prime}_{cls})^{\top} \tag{6}\] where \(W_{T}\) and \(W_{I}\in\mathbb{R}^{(d\times d)}\) are transfer weights to unify the feature representations. The similarity between images and texts is measured by \(\mathbf{g}_{i2t}\) and \(\mathbf{g}_{t2i}\) for the _k-th_ image and text as Eq. (7) and Eq. (8) \[g^{k}_{i2t}(\mathbf{I})=\frac{\mathrm{exp}(\mathrm{sim}(\mathbf{I},\mathbf{T}^{k})/\tau)}{\sum_{m=1}^{M}\mathrm{exp}(\mathrm{sim}(\mathbf{I},\mathbf{T}^{m})/\tau)} \tag{7}\] \[g^{k}_{t2i}(\mathbf{T})=\frac{\mathrm{exp}(\mathrm{sim}(\mathbf{T},\mathbf{I}^{k})/\tau)}{\sum_{m=1}^{M}\mathrm{exp}(\mathrm{sim}(\mathbf{T},\mathbf{I}^{m})/\tau)} \tag{8}\] where \(\tau\) is a temperature parameter. The loss of the similarity of image and text is given in Eq. (9) \[\mathcal{L}_{its}=\frac{1}{2}\mathbb{E}_{(\mathbf{T},\mathbf{I})\sim D}[G(\mathbf{y}_{i2t}(\mathbf{I}),\mathbf{g}_{i2t}(\mathbf{I}))+G(\mathbf{y}_{t2i}(\mathbf{T}),\mathbf{g}_{t2i}(\mathbf{T}))] \tag{9}\] where \(\mathbf{y}_{i2t}(\mathbf{I})\) and \(\mathbf{y}_{t2i}(\mathbf{T})\) denote the labels of the similarity between images and texts.
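A minimal PyTorch sketch of the ITS objective is given below. It implements the in-batch softmax over temperature-scaled similarities of Eqs. (5)-(9), but omits the momentum encoders and the momentum distillation used in the full model; the temperature value and tensor shapes are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

# Sketch of the image-text similarity (ITS) loss: in-batch contrastive
# cross-entropy over temperature-scaled cosine similarities, without
# momentum encoders, negative queues, or momentum distillation.

def its_loss(v_cls, v_img, W_T, W_I, tau=0.07):
    """v_cls, v_img: (B, d) text [CLS] and image features; W_T, W_I: (d, d)."""
    t = F.normalize(v_cls @ W_T, dim=-1)   # projected, normalized text features
    i = F.normalize(v_img @ W_I, dim=-1)   # projected, normalized image features
    sim_i2t = i @ t.t() / tau              # (B, B) image-to-text logits
    sim_t2i = t @ i.t() / tau              # (B, B) text-to-image logits
    targets = torch.arange(v_cls.size(0), device=v_cls.device)  # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(sim_i2t, targets) +
                  F.cross_entropy(sim_t2i, targets))

# Toy usage with random features:
B, d = 8, 256
loss = its_loss(torch.randn(B, d), torch.randn(B, d),
                torch.randn(d, d), torch.randn(d, d))
```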
#### 3.3.5 Image Text Match (ITM)

In the image text match task, the first vector of the hybrid feature, \(H_{0}\), is sent to the match head to predict the probability that the text-image pair is matched. The loss of this task is Eq. (10) \[\mathcal{L}_{itm}=\mathbb{E}_{(T,I)\sim D}G(\mathbf{y}_{itm},\mathbf{g}_{itm}(H_{0})) \tag{10}\] where \(\mathbf{g}_{itm}\) denotes the predicted probability distribution from the match head and \(\mathbf{y}_{itm}\) denotes the label (1 or 0) of image and text matching. The label is positive if the text-image pair is matched and negative if mismatched. The complete pre-training objective of FashionSAP is the combination of the terms mentioned above as Eq. (11), \[\mathcal{L}=\mathcal{L}_{fsis}+\mathcal{L}_{ptp}+\mathcal{L}_{trp}+\mathcal{L}_{its}+\mathcal{L}_{itm} \tag{11}\] The model is optimized end-to-end on the pre-training datasets by minimizing \(\mathcal{L}\).

Figure 3: Model structure for TMIR task.

## 4 Experiments

### Datasets

We use the FashionGen [31] and FashionIQ [40] datasets for pre-training and downstream tasks. FashionGen [31] includes 320k text-image pairs and 40k unique fashion items, which are shown as multiple images from multiple views. The detailed description and enumeration attributes are attached to all fashion items. The FashionIQ [40] dataset includes 77k unique fashion items and 18k modified texts for the text modified image retrieval task. We use the train set of FashionGen [31] as pre-training data containing about 260k text-image pairs. We evaluate the downstream tasks of text-to-image retrieval, image-to-text retrieval, category recognition and subcategory recognition on FashionGen [31], and the text modified image retrieval task on FashionIQ [40].

### Downstream Tasks and Results

**Cross-modal Retrieval** We retain only two losses, \(\mathcal{L}_{its}\) and \(\mathcal{L}_{itm}\), shown in Fig. 2 (lower-right), in this task. Cross-modal retrieval includes two tasks. One task is Image-to-Text (I2T), aiming to retrieve a matched text given a query image. Another task is Text-to-Image (T2I), which aims to retrieve a target image given a query text. We evaluate the performance of the model only by calculating the similarity between text and image, following previous works. FashionSAP gets the SOTA performance, as shown by the comparison results in Tab. 2. We report the average result of 5 randomly chosen retrieval test sets, each of which contains 1k queries, following previous works. For each query in the test sets, only one candidate is matched (positive), while the other 100 candidates are mismatched (negative) and chosen from the same subcategory. For the T2I task, there are 101 candidate images for each query text, and only one image among the candidates is matched. In order to test the performance of our model thoroughly, we also evaluate our model on the full test set of FashionGen [31] in Tab. 3 following [7, 26]. Our model also gets the SOTA performance. Moreover, the differences between the results of our model and others are significant.
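For concreteness, the snippet below sketches the Recall@K protocol described above, in which each query is scored against 101 candidates (one matched, 100 mismatched from the same subcategory) and counts as a hit if the matched candidate ranks within the top K. This is illustrative only, not the authors' evaluation script; the scores and indices used are random placeholders rather than model outputs.

```python
import numpy as np

# Sketch of the Recall@K retrieval metric over per-query candidate scores.

def recall_at_k(scores, positive_idx, ks=(1, 5, 10)):
    """scores: (num_queries, num_candidates) similarity matrix.
    positive_idx: (num_queries,) column index of the matched candidate."""
    order = np.argsort(-scores, axis=1)                   # best candidate first
    ranks = np.argmax(order == positive_idx[:, None], axis=1)
    return {k: float(np.mean(ranks < k)) for k in ks}

# Toy example: 1000 queries, 101 candidates each, random scores.
rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 101))
print(recall_at_k(scores, np.zeros(1000, dtype=int)))     # ~{1: 0.01, 5: 0.05, 10: 0.10}
```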
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{I2T} & \multicolumn{3}{c}{T2I} & Mean \\ \cline{2-7} & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 \\ \hline VL-BERT [34] & 19.26 & 39.90 & 46.05 & 22.63 & 36.48 & 48.52 & 20.95 \\ ViLBERT [25] & 20.97 & 40.49 & 48.21 & 21.12 & 37.23 & 50.11 & 21.05 \\ Image-BERT [29] & 22.76 & 41.89 & 50.77 & 24.78 & 45.20 & 55.90 & 23.77 \\ OSCAR [18] & 23.39 & 44.67 & 52.55 & 25.10 & 49.14 & 56.68 & 24.25 \\ FashionBERT [4] & 23.96 & 46.31 & 52.12 & 26.75 & 46.48 & 55.74 & 25.36 \\ KaleidoBERT [49] & 27.99 & 60.09 & 68.37 & 33.88 & 60.60 & 68.59 & 30.94 \\ EI-CLIP [26] & 38.70 & 72.20 & 84.25 & 40.06 & 71.99 & 82.90 & 39.38 \\ CommerceMM [45] & 41.60 & 64.00 & 72.80 & 39.60 & 61.50 & 72.70 & 62.75 \\ ALBEF [17] & 63.97 & 88.92 & 94.41 & 60.52 & 84.99 & 91.45 & 62.20 \\ FashionViL [7] & 65.54 & 91.34 & 96.30 & 61.88 & 87.32 & 93.22 & 63.71 \\ \hline FashionSAP(Resnet50) & 67.23 & 91.30 & 96.41 & 64.11 & 88.24 & 94.31 & 65.67 \\ FashionSAP(ViT-B16) & 71.14 & 92.21 & 96.52 & 69.07 & 89.81 & 94.75 & 70.11 \\ FashionSAP & **73.14** & **92.80** & **96.87** & **70.12** & **91.76** & **96.38** & **71.63** \\ \hline \hline \end{tabular} \end{table} Table 2: Cross-modal retrieval result on FashionGen [31] in the sub set of evaluation following previous work. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{I2T} & \multicolumn{3}{c}{T2I} & Mean \\ \cline{2-7} & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 & R@1 \\ \hline EI-CLIP [26] & 25.70 & 54.50 & 66.80 & 28.40 & 57.10 & 69.40 & 27.05 \\ ALBEF [17] & 41.68 & 67.39 & 75.50 & 50.95 & 75.36 & 84.15 & 46.32 \\ FashionViL [7] & 42.88 & 71.57 & 80.55 & 51.34 & 75.42 & 84.57 & 47.11 \\ \hline FashionSAP(Resnet50) & 44.92 & 71.49 & 81.64 & 52.45 & 76.63 & 84.71 & 48.69 \\ FashionSAP(ViT-B16) & 50.34 & 74.34 & 81.67 & 58.43 & 80.06 & 87.02 & 54.39 \\ FashionSAP & **54.43** & **77.30** & **83.15** & **62.82** & **83.96** & **90.16** & **58.63** \\ \hline \hline \end{tabular} \end{table} Table 3: Cross-modal retrieval result on FashionGen [31] with full evaluation We take a fine-tuning stage to the general VLP model (ALBEF) and report the results in Tab. 2 and Tab. 3. We also provide the results of training FashionSAP from scratch with different image encoders following previous works. **Category/Subcategory Recognition (CR&SCR)** In this task, we only use cross-entropy loss for classification Fig. 2 (upper-right). This downstream tries to recognize the category and the subcategory, given the text and image of the fashion item. We extract the first vector of the fusion feature \(H_{0}\) and input it to a linear layer to predict the category and the subcategory as shown in upper-right in Fig. 2. FashionSAP gets the SOTA performance in both accuracy (Acc) and Macro-F as shown in Tab. 5. **Text Modified Image Retrieval (TMIR)** This task aims at retrieving a target image of the fashion item by referring to the semantics of the query containing the features from a pair of candidate text-image while the text modifies some elements in the candidate image. As the original pre-training model can not be applied to this task directly, we design a new model structure for this task, shown in Fig. 3. The modified text is encoded into \(\mathbf{T}_{m}\) meanwhile candidate image and target image are encoded into \(\mathbf{I}_{can}\) and \(\mathbf{I}_{tar}\). 
Then \(\mathbf{T}_{m}\) and \(\mathbf{I}_{can}\) are blended into hybrid feature \(\mathbf{H}_{f}\). The cosine similarity between \(\mathbf{H}_{f}\) and \(\mathbf{I}_{tar}\) is the score between query and target and our model optimizes the similarity between them. Our model gets the SOTA performance compared with previous models, shown in Tab. 4. ### Ablation Study We evaluate the effectiveness of the proposed pre-training tasks in the section. For comparability, the settings in the same series of ablation are consistent. Considering the ITM task and ITS task are similar to general vision-language pre-training, we set the two tasks as basic ones and evaluate the three tasks proposed by this paper in downstream tasks Tab. 6. For conciseness, we list only the index R@1 for both image-to-text and text-to-image tasks, index Macro-F for category (subcategory) recognition and index mean R@10 of three sets in FashionIQ [40] for text modified image retrieval (TMIR). As we can see from the results of the ablation study in Tab. 6, the loss \(fsis\) brings a distinct improvement for T2I task as the fashion symbol is an essential structure capturing implicit semantics from the text side to the image side. The loss \(ptp\) brings a distinct improvement for I2T task because the prompted fine-grained attributes are encoded as text tokens and share the same embedding layer with text. The loss \(trp\) also brings an improvement in downstream tasks as the model learns synonym characteristics through this task. ### Fine-grained Alignment Analysis We choose two instances from FashionGen [31] and show the cross-attention map in the T2I task using the Grad-CAM method Fig. 4 to visualize the improvement of attention score. For each instance, we list the Grad-CAM visualizations from FashionSAP and FashionSAP without \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{\(ptp\)} & \multirow{2}{*}{\(trp\)} & \multirow{2}{*}{\(fsis\)} & I2T & T2I & CR & SCR & TMIR \\ & & & R@1 & R@1 & Macro-F & Macro-F & R@10 \\ \hline \hline & & 43.84 & 53.24 & 84.50 & 84.42 & 30.02 \\ ✓ & & & 51.99 & 53.78 & 86.32 & 86.03 & 34.40 \\ ✓ & ✓ & & 52.09 & 55.54 & 86.51 & 86.65 & 35.01 \\ ✓ & ✓ & ✓ & **54.43** & **62.82** & **89.84** & **87.67** & **36.26** \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation study results for proposed tasks(\(ptp\), \(fsis\), \(trp\)) on five downstream tasks. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Dress} & \multicolumn{2}{c}{Toptee} & \multicolumn{2}{c}{Shirt} & \multicolumn{2}{c}{Mean} \\ \cline{2-9} & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 \\ \hline CIRR [22] & 17.45 & 40.41 & 21.64 & 45.38 & 17.53 & 38.81 & 18.87 & 41.53 \\ VAL [1] & 22.53 & 44.00 & 27.53 & 51.68 & 22.38 & 44.15 & 24.15 & 46.61 \\ CosMo [13] & 25.64 & 50.30 & 29.21 & 57.46 & 24.90 & 49.18 & 26.58 & 52.31 \\ DCNet [12] & 28.95 & 56.70 & 30.44 & 58.29 & 23.95 & 47.30 & 27.78 & 54.10 \\ FashionVLP [5] & 32.42 & 60.29 & 38.51 & 68.79 & 31.89 & 58.44 & 34.27 & 62.51 \\ FashionViL [7] & 33.47 & 59.94 & 34.98 & 60.79 & 25.17 & 50.39 & 31.21 & 57.04 \\ \hline FashionSAP & **33.71** & **60.43** & **41.91** & **70.93** & **33.17** & **61.33** & **36.26** & **64.23** \\ \hline \hline \end{tabular} \end{table} Table 4: Text modified image retrieval performance on FashionIQ [40] \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{CR} & \multicolumn{2}{c}{SCR} \\ \cline{2-5} & Acc & Macro-F & Acc & Macro-F \\ \hline F-BERT [4] & 91.25 & 70.50 & 85.27 & 62.00 \\ K-BERT [49] & 95.07 & 71.40 & 88.07 & 63.60 \\ F-ViL [7] & 97.48 & 88.60 & 92.23 & 83.02 \\ \hline FashionSAP & **98.34** & **89.84** & **94.33** & **87.67** \\ \hline \hline \end{tabular} \end{table} Table 5: CR and SCR results on FashionGen [31]. losses \(ptp\), \(fsis\), and \(trp\). Compared with the instances without the proposed methods, FashionSAP concentrates on the corresponding region precisely. The two instances show that FashionSAP pays proper attention to the whole region of the object (_e.g._, trousers, leg) rather than a sub-region. FashionSAP can also find all positions of the pockets in the attention maps rather than only one. ### Implementation Details The text encoder is the front 6-layer transformer of BERT-base [2], and the image encoder is ViT-B16 [3]. The feature fusion module is a 6-layer transformer. The adapters, on both the text and image sides, are implemented as feed-forward neural networks. FashionSAP is initialized with the checkpoint from ALBEF [17], except for the results trained from scratch. The prompt predictor is a multi-layer feed-forward neural network. An AdamW [24] optimizer is adopted with a learning rate of \(6\times 10^{-5}\). The batch size is 16 with a momentum queue size of 65535. The size of the input images is \(256\times 256\). For training costs, we perform the pre-training stage on 8 Tesla V100 32G GPUs for 20 hours and the fine-tuning stage for 10 hours. For raw data preprocessing, we randomly choose the attribute name or attribute value and replace them with synonyms searched by WordNet [27]. ## 5 Conclusion This paper introduced a fine-grained fashion VLP model based on fashion symbols and attribute prompts. We used nine fashion symbols and the attribute prompt to help the model capture multi-modal fine-grained semantics. The comparative results and ablation study demonstrated that FashionSAP is effective in learning fashion representations and outperforms SOTA models significantly. Several future directions could be considered. Our main goal was to show the potential of the attribute prompt framework for learning fine-grained fashion representations. The fashion symbols only considered category attributes, and more diversified symbols could be proposed. ## 6 Acknowledgements We thank the reviewers for their thoughtful and constructive comments.
This work was supported in part by the National Key R&D Program of China (2022ZD0116002), the Natural Science Foundation of China (62276075, 61872113), the Science and Technology Planning Project of Shenzhen (JCYJ20190806112210067), Huawei Technologies Co., Ltd., and the Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies, China (2022B1212010005). Figure 4: Instances of the comparison of Grad-CAM cross-attention maps for the 1st layer of the feature fusion module from FashionSAP (upper) and FashionSAP without the three tasks (lower): the Prompt Token Prediction task (\(ptp\)), the Fashion Symbol Image Similarity task (\(fsis\)), and the Token Replace Prediction task (\(trp\)).
2308.14289
Heterogeneous integration of spin-photon interfaces with a scalable CMOS platform
Color centers in diamonds have emerged as a leading solid-state platform for advancing quantum technologies, satisfying the DiVincenzo criteria and recently achieving a quantum advantage in secret key distribution. Recent theoretical works estimate that general-purpose quantum computing using local quantum communication networks will require millions of physical qubits to encode thousands of logical qubits, which presents a substantial challenge to the hardware architecture at this scale. To address the unanswered scaling problem, in this work, we first introduce a scalable hardware modular architecture "Quantum System-on-Chip" (QSoC) that features compact two-dimensional arrays "quantum microchiplets" (QMCs) containing tin-vacancy (SnV-) spin qubits integrated on a cryogenic application-specific integrated circuit (ASIC). We demonstrate crucial architectural subcomponents, including (1) QSoC fabrication via a lock-and-release method for large-scale heterogeneous integration; (2) a high-throughput calibration of the QSoC for spin qubit spectral inhomogeneous registration; (3) spin qubit spectral tuning functionality for inhomogeneous compensation; (4) efficient spin-state preparation and measurement for improved spin and optical properties. The QSoC architecture supports full connectivity for quantum memory arrays in a set of different resonant frequencies and offers the possibility of further scaling the number of solid-state physical qubits via larger and denser QMC arrays and optical frequency multiplexing networking.
Linsen Li, Lorenzo De Santis, Isaac Harris, Kevin C. Chen, Yihuai Gao, Ian Christen, Matthew Trusheim, Hyeongrak Choi, Yixuan Song, Carlos Errando-Herranz, Jiahui Du, Yong Hu, Genevieve Clark, Mohamed I. Ibrahim, Gerald Gilbert, Ruonan Han, Dirk Englund
2023-08-28T04:06:11Z
http://arxiv.org/abs/2308.14289v2
# Heterogeneous integration of spin-photon interfaces with a scalable CMOS platform ###### Abstract Color centers in diamonds have emerged as a leading solid-state platform for advancing quantum technologies, satisfying the DiVincenzo criteria [1] and recently achieving a quantum advantage in secret key distribution [2]. Recent theoretical works [3, 4, 5] estimate that general-purpose quantum computing using local quantum communication networks will require millions of physical qubits to encode thousands of logical qubits, which presents a substantial challenge to the hardware architecture at this scale. To address the unanswered scaling problem, in this work, we first introduce a scalable hardware modular architecture "Quantum System-on-Chip" (QSoC) that features compact two-dimensional arrays "quantum microchiplets" (QMCs) containing tin-vacancy (SnV\({}^{-}\)) spin qubits integrated on a cryogenic application-specific integrated circuit (ASIC). We demonstrate crucial architectural subcomponents, including **(1)** QSoC fabrication via a lock-and-release method for large-scale heterogeneous integration; **(2)** a high-throughput calibration of the QSoC for spin qubit spectral inhomogenous registration; **(3)** spin qubit spectral tuning functionality for inhomogenous compensation; **(4)** efficient spin-state preparation and measurement for improved spin and optical properties. QSoC architecture supports full connectivity for quantum memory arrays in a set of different resonant frequencies and offers the possibility for further scaling the number of solid-state physical qubits via larger and denser QMC arrays and optical frequency multiplexing networking. Corresponding authors: [email protected]\({}^{*}\); [email protected]\({}^{\dagger}\); **Introduction -** Modularity plays a critical role in computing architectures, allowing the segregation and combination of diverse system components. This principle has been applied to quantum information processing, resulting in quantum networks consisting of multiple processing units interconnected through coherent channels. Such networks have been proposed for trapped ion [6], neutral atom [7], and spin-based systems [3, 4, 5], with the aim of achieving scalable distributed quantum processing. However, it is essential for the scalability of the qubit layer to be used for building a large-scale quantum system. In this study, we introduce a scalable hardware modular architecture "Quantum System-on-Chip" (QSoC) that leverages the fabrication of the qubit layer with modern mass microfabrication processes for scalability. The central qubit platform in our proposed architecture utilizes electron-nuclear spin systems of diamond color centers which can be generated with the foundry ion implantation process for large-scale fabrication compared with the individual fine-operated ion and atom. Diamond color centers have emerged as promising solid-state qubits, demonstrating deterministic remote entanglement [8], minute-long coherence times with more than ten auxiliary qubits [9], and large-scale heterogeneous integration into photonic integrated circuits using diamond quantum microchiplets (QMCs) [10]. The implanted diamond color center provides benefits in scalability, but will suffer from naturally inhomogeneous spectral broadening [10]. To address that, we integrate the implanted diamond qubit layer on a commercially processed complementary metal-oxide-semiconductor (CMOS) backplane, allowing for local spectral tuning. 
The system's co-design with CMOS electronics allows the compact two-dimensional array of qubit arrangements and substantially minimizes the size of the system's control elements. CMOS electronics have been used in gate-defined quantum dot control [11], low-power cryogenic microwave control [12], and integration with nitrogen vacancy (NV) centers for sensing applications [13, 14]. The QSoC demonstrated here contains 64 QMCs with a co-designed CMOS application-specific integrated circuit (ASIC). QSoC not only facilitates qubit scaling but also enables qubit inhomogeneous compensation for full connectivity, which benefits the cluster state computational power that is related to the size of the largest connected qubit graph size [3]. The connected qubit graph can be mapped to a regular qubit square Figure 1: **Comprehensive architectural design.****a,** illustration of the architecture goal for building a connected qubit graph with qubits resonant with a set of frequencies (red lines: \(f(\mathrm{k}_{\mathrm{fch}})=v_{0}+\mathrm{k}_{\mathrm{fch}}\Delta v\)). The gray dot represents the artificial atom qubit, where the index indicates the \(i_{\mathrm{th}}\) qubit (marked by location) that can be tuned to a specific frequency \(\mathrm{k}_{\mathrm{fch}}\) at the minimum, mean, or maximum tuning voltage. The horizontal black line represents the spectral tunability of this qubit. Multiple gray dots in one horizontal line mean that it is the same spin qubit at different tuning voltages, demonstrating its wide tuning range across various frequencies. The gray line between the gray dots means that those two qubits can be entangled, as they can tune to the same frequency. **b,** A comprehensive architecture diagram, illustrating the optical interface (including optical excitation, routing, and detection) and the QSoC (including the CMOS ASIC chip and spin memories). The red dots indicate the electronic spin of the qubit (quantum emitter), and the gray dots represent the corresponding nuclear spin (quantum memory). **c,** A cross-section diagram of three quantum emitters (\(i_{1}\), \(i_{2}\), and \(i_{3}\)) located in a waveguide. The optical transition from ground to excited state has a transition frequency \(f\), which is tunable with the system tuning voltage bias \(\mathrm{V}_{\mathrm{b}}\). The \(f\) versus \(\mathrm{V}_{\mathrm{b}}\) of different emitters is shown here indicating the different behaviors for the voltage tuning response of \(f\) as a function of \(\mathrm{V}_{\mathrm{b}}\) that can be applied to the emitters. Here, we can align the emitters to a set of frequencies \(\mathrm{f}(\mathrm{k}_{\mathrm{fch}})\). **d,** An expanded view illustrating quantum channels, numbered from 1 to 16, within a QMC (scale bar: 10\(\upmu\)m). Illustration indicates that an emitter can interact with all other emitters of the same resonant frequency through free-space optical routing and detection. The gray box region provides a practical example of the diagram in **a**, but much more resonant quantum emitters a certain quantum emitter can interact across the entire chip. lattice layout in the quantum circuit representation for error correction [3]. In this paper, we show the architectural design concept first, followed by the QSoC fabrication, characterization for spin qubit spectral inhomogenous registration, and spin qubit tuning for inhomogeneous compensation to demonstrate the QSoC capability can achieve the architecture required function. 
Here we report an unprecedented scale comprising over 10,000 individual resolved diamond spin-photon interfaces in the QSoC architecture designed for the rapid generation of fully connected qubit graphs. We also demonstrate that we can maintain the quantum emitter's spin and optical properties with efficient spin-state preparation and measurement on such a novel platform and analyze the QSoC tunability requirement as we further scale up the system size in the future. **Architecture goal for building fully connected qubit graph -** The largest connected qubit graph determines the quantum computational power of the cluster state [3] so we would like to build a fully connected qubit graph to utilize as much as qubit resource we have in the system. For any two qubits that can be tuned to the same zero phonon line frequency, we consider those two qubits to be connectable in the architecture by herald entanglement [15]. The implanted diamond color center will have a broad range distribution in the spectrum so we utilize a set of frequencies labeled with frequency channels number \(\mathrm{k_{fch}}\) from 1 to \(\mathrm{k_{max}}\) for herald entanglement as shown in Figure 1a. Each frequency has multiple emitters (labeled with the index number) whose transitions can be tuned to resonate at \(\mathrm{f(k_{fch})}\) with the tunability of the QSoC. Figure 1a shows an example of using a uniformly distributed frequency channel with spacing \(\Delta v\), but its distribution can be optimized based on different QSoC samples and not necessarily be uniform. The black horizontal line indicates the tuning range of \(f\) for the corresponding emitters in the QSoC system. The relation between the ratio of the fully connected qubit graph against the QSoC tunability is analyzed in the Methods. The fully connected qubit graph can be programmed to a regular square lattice of physical qubit arrangement as shown in the right part of Figure 1a, so the error correction code like the surface code can be implemented with such an architecture [3]. **Comprehensive system architecture -** Figure 1b presents a comprehensive architecture diagram that includes the optical interface and QSoC. The optical interface encompasses optical excitation, routing, and detection (see Appendix C). The QSoC consists of a CMOS ASIC cooled to 4 K in a cryostat, which is heterogeneously integrated with a 64 diamond QMC array. The spin qubit chosen for this demonstration in QMC is \(\mathrm{SnV^{-}}\), offering high quantum efficiency [16] and spin performance that is compatible with cryogenic temperatures above 1 K [17]. **QSoC module detail -** Figure 1c illustrates the basic function of the QSoC module. The ASIC provides a voltage bias \(V_{\mathrm{b}}\) to tune the zero-phonon line (ZPL) transition frequency \(f\) of the quantum emitters. These can be tuned to a predefined set of frequency channels labeled as \(\mathrm{f(k_{fch})}\). An example cross section in the QSoC showcases the tuning response behavior of different quantum emitters (\(i_{1}\), \(i_{2}\), and \(i_{3}\)) with varying \(f\) as a function of \(\mathrm{V_{b}}\). Some quantum emitters, such as \(i_{2}\), can couple to a resonant dielectric antenna to enhance free space coupling (see Methods). Figure 1 d shows a 3D representation of the CMOS circuit layout co-integrated with diamond QMCs, each featuring \(\mathrm{N_{ch}}=16\) channels. 
Each quantum channel is integrated with the diamond-resonant dielectric antenna that designs an efficient optical interface based on a 1D photonic crystal cavity design. The antenna incorporates a cointegrated vertically radiating grating coupler, resulting in a greater efficiency of 96% free space collection efficiency within \(\mathrm{NA}=0.9\) in simulation (see Methods for details). A metal layer on the CMOS backplane chip facilitates electronic signal routing from the external electronic source to each QMC. Future iterations of the CMOS chip can incorporate built-in digital logic and analog pulse sequence for routing of quantum control signals with external sources [18]. **Fabrication with lock-and-release integration -** We introduced \(\mathrm{SnV^{-}}\) centers through ion implantation and high temperature annealing in diamond, followed by QMC nanofabrication to define the transferable diamond nanostructure on the bulk parent diamond surface (see Methods). The lock-and-release scalable heterogeneous integration technology, a crucial step in QSoC fabrication, is illustrated in Fig. 2a (the detail of the cross section in Fig. S3b). This procedure enables the parallel transfer of a quantum memory matrix consisting of 8 columns (C1-C8) and 8 rows (R1-R8) to the central region (500 \(\upmu\)m \(\times\) 500 \(\upmu\)m) of the CMOS chip socket, which includes N\({}_{\text{sys}}\) = 1024 of quantum channels in total (see Methods for details). Here, we flip the diamond parent chip and align it with a locking structure that has been post-fabricated on the TSMC 180 nm CMOS chip, followed by fine adjustment of the QMCs with Figure 2: **Fabrication.****a,** Schematic representation of the lock-and-release heterogeneous integration process employed to transfer the quantum microchiplet (QMC) array, which is heterogeneously integrated on the post-processed CMOS chip, to the diamond parent chip. This process involves alignment, locking, releasing, and retracting. **b,** A scanning electron microscope (SEM) image of a quantum channel (light blue) within the QMC, utilizing a dielectric antenna design optimized for photon collection in free space (scale bar 1\(\upmu\)m). Some quantum emitter may be coupled to the antenna like the central one here. **c,** An enlarged SEM image of **d** highlighting the central region of a single QMC (scale bar 10\(\upmu\)m). The orange color indicates the visible CMOS individual backplane electrodes (BE.i). **d,** An optical microscope image of 1024 diamond resonant dielectric antennas integrated on the CMOS control chip, providing a broader view of the QMC array with rows 1-8 and columns 1-8 (scale bar 200\(\upmu\)m). two probes (see Methods and Appendix A). After alignment, we move the parent bulk diamond vertically to lock the QMC and then horizontally, as depicted in Fig. 2a, to break the bridges between the QMC and the bulk diamond for release, followed by retraction of the bulk diamond. Figure 2b shows an SEM image of a single central quantum channel. Figure 2c presents an SEM image of a single QMC region of the chip, with the orange region indicating the individual CMOS backplane electrode region beneath the QMCs. Figure 2d displays an optical microscope image of the 1024 quantum channels integrated into the CMOS control chip. For each quantum channel, we expect to have around 3 resonant quantum emitters on average at a certain optical frequency (see the discussion later). 
The number of quantum channels in this design can be readily scaled by using a larger CMOS chip corresponding to the diamond parent chip size. **System parameters -** In the following sections, we discuss the measurement of essential parameters of the QSoC to estimate our scaling benefit while keeping the diamond color center optical and spin performance in the QSoC. Essential parameters include **(1)** System size N\({}_{\text{sys}}\), determined by the target number of quantum channels achievable through lock-and-release heterogeneous integration; **(2)** Emitter number per quantum channel n\({}_{\text{emitter}}\), representing the maximum proportion of the quantum emitter in each quantum channel that can be tuned to the same ZPL frequency (see Methods); **(3)** Spin qubit state preparation and measurement error e\({}_{\text{span}}\); **(4)** The spin-photon interface efficiency, characterized by the coherent photon detection probability p\({}_{\text{det}}\) after spin state initialization. We also consider the potential nanocavity enhancement (Purcell factor F\({}_{\text{p}}\)) of a resonant dielectric antenna, which can boost the ZPL photon collection rate. Figure 3: **QSoC characterization.****a,** An overlay of the optical microscope image (scale bar 100 \(\upmu\)m), SEM image (scale bar 5 \(\upmu\)m), and superposition EMCCD image of the emitters’ bright frames (scale bar 3 \(\upmu\)m). These images show an optically bright SnV\({}^{-}\) under a resonant laser excitation frequency ranging from 484.123 to 484.153 THz, with corresponding index mappings indicated. The location of this example on the CMOS chip is also marked. **b,** The ZPL frequency f\({}_{\text{ci}}\) of the emitter with index \(i\) in **a.** The indices are sorted according to the peak ZPL frequency from low to high, where f\({}_{\text{ci}}\) consistently represents the lowest frequency value of the double peaks in the PLE spectrum. **c,** The PLE spectrum of the marked emitters in **a** with ZPL frequency shifted by f\({}_{\text{ci}}\). **d,** An example PLE spectra of SnV\({}^{-}\) vacancy for emitter \(i=16\) in **c.** Two spin-state transitions correspond to two peaks in the PLE with splitting \(\Delta\)E. The inset illustrates the energy diagram of the SnV\({}^{-}\) with and without the magnetic field. **e,** Autocorrelation measurements of the single SnV\({}^{-}\) in **d**. **Scalable SnV\({}^{-}\) characterization -** We perform a high-throughput characterization of our SnV\({}^{-}\) qubits using optical excitation with wide-field illumination and readout from an electron-multiplying charge-coupled device (EMCCD) [19] (see Methods). We show an example of this measurement on a QMC in Fig. 3. Figure 3a presents the spatial locations of SnV\({}^{-}\) in the central QMC region. Figure 3 b shows the ZPL frequency of the color centers fci, and their normalized photoluminescence excitation (PLE) spectra are reported in Figure 3c. Figure 3d shows an example PLE of an SnV\({}^{-}\) with an external magnetic field B = 0.13 T along the diamond axis [001], revealing the two spin-conserving transitions utilized for spin initialization and readout. The spin state can be controlled by an external microwave signal [17, 20] or a modulated laser [21]. The ZPL frequency of SnV\({}^{-}\) can be tuned to overcome the inhomogeneous distribution via strain [10, 22]. 
To confirm the presence of a single quantum emitter, we performed a second-order autocorrelation (g(\({}^{2}\)) measurement using resonant laser excitation and PSB collection. The collected light is split into two avalanche photodiode (APD) collection paths. The g(\({}^{2}\)) measurement of SnV\({}^{-}\) in Fig. 3d is illustrated in Fig. 3e. Here, g(\({}^{2}\))(0) is 0.07 without background correction, indicating the presence of a single emitter as g(\({}^{2}\))(0)\(<\)0.5. **SnV\({}^{-}\) spectral tuning -** Individual SnV\({}^{-}\) can typically be spectrally resolved due to the inherent inhomogeneous distribution of their optical transitions [23]. The SnV\({}^{-}\) optical frequency can be tuned by means of a capacitive actuator controlled by voltage [10, 24] (detailed in Figure S7). Based on the statistical result in Appendix B, we estimate that the average tuning range is around 2 GHz within the applicable voltage range. Here, we chose a set of uniform 11 frequency channels with spacing \(\Delta v\) = 2 GHz to utilize inhomogeneously distributed quantum emitters within the mode-hop laser tuning range (see Appendix C). The QSoC can tune a quantum emitter's ZPL within the laser tuning range into one of the resonant frequency channels set on average. Figure 4 a presents a wide-field image of a QMC region at one of the 11 frequency channels (labeled by kfch), where each emitter is labeled by a circle whose color indicates the tuning voltage at which it is the brightest. The emitter data within the FOV can be summarized as the point Figure 4: **Quantum emitter spectral tuning and spin state preparation and measurement.****a,** Widefield images of the QMC region at kfch = 1. Each emitter is labeled with a circle whose color indicates the voltage tuning \(V_{\text{b}}\) at which it is brightest. **b,** An example of voltage-induced tuning for an emitter crossing various frequency channels. The statistics of the emitter bright spots number are shown underneath for 11 frequency channels connected by the lines, representing both shallow gray (in one field of view) and deep gray (among all the targeted 1024 quantum channels). **c,** The spin state preparation and measurement pulse sequence, involving four programmable channels: green repump laser (G), resonant laser 1 (R\({}_{\downarrow}\))resonating with the lower frequency transition (\(\ket{\downarrow}\) to \(\ket{\downarrow}^{\prime}\)), resonant laser 2 (R\({}_{\uparrow}\)) resonating with the other higher frequency transition (\(\ket{\uparrow}\) to \(\ket{\uparrow}^{\prime}\)), and APD. We have three collection time bins (each of duration T\({}_{\text{M}}\)). The APD time bin 1 serves as the spin state preparation signal for post-selection. We show histogram measurement counts, post-selected with state preparation threshold counts C\({}_{\text{th}}\)=18 (APD time-bin 1 readout) with T\({}_{\text{M}}\)= 50 \(\upmu\)s. Following post-selection, APD time-bin 2 measures the dark state count, while APD time-bin 3 measures the bright state counts. **d,** The relationship between state preparation and measurement error \(e_{\text{spam}}\) (black line with left axis) and successful post-selection probabilities, p (gray line with right axis) with C\({}_{\text{th}}\). cloud where each black dot represents the position of the qubit and the connected gray line shows that two qubits can be tuned to the same frequency. 
Figure 4b illustrates the tuning of an emitter from frequency channel k\({}_{\text{fch}}\) = 7 to k\({}_{\text{fch}}\) = 8 at varying voltages. The spectral tuning range from the min tuning voltage to the max tuning voltage, defined as \(\Delta v_{\text{m}}\), is shown in the figure. Appendix B includes a simulation of the spectral tuning effect from voltage-induced strain and statistics on the emitter spectrum tuning range. The histograms underneath represent the number of emitters found in each frequency channel with light gray for a single field of view (FOV) and dark gray for all 1024 quantum channels measured in the entire CMOS chip. The number of bright spots in a single FOV at a specific frequency corresponds to the potential direct connections per qubit enabled by the all-to-all optical connection (see Appendix C). We estimate that we have approximately 2400 resonant emitters in the whole central region of the CMOS chip socket to calculate n\({}_{\text{emitter}}\) at a specific laser frequency with a maximum of 40 V CMOS backplane tuning. **Spin state preparation and measurement -** Figure 4c demonstrates spin state preparation and measurement of the SnV\({}^{-}\) presented in Fig. 3d. We collect the phonon sideband (PSB) emissions confocally using an APD. The inset reveals the pulse time sequence used for state preparation and measurement (see Methods). Initially, a green laser resets the SnV\({}^{-}\) to the negative charge state [25]. Then, we herald the correct spin state of SnV\({}^{-}\) using a laser pulse resonant with the R\({}_{\downarrow}\) transition. We set a signal threshold count C\({}_{\text{th}}\), and when the APD time-bin 1 counts exceed C\({}_{\text{th}}\), we consider the SnV\({}^{-}\) spin state to be successfully prepared. We illustrate the count histogram of C2 and C3 (inset of Fig. 4c) with post-selection on successful state preparation events. Based on the histogram result, we can calculate the spin state preparation and measurement error e\({}_{\text{spam}}\) (see Method). Figure 4d reveals the relationship between e\({}_{\text{spam}}\) and C\({}_{\text{th}}\) on the black line, with the probabilities of successful events for different C\({}_{\text{th}}\) values by the gray line on the logarithmic scale. By selecting C\({}_{\text{th}}\) = 18 as an example, we can reduce e\({}_{\text{spam}}\) to 3% after post-selection. Although the probability of successful initialization is about 3% in this case, initialization can be attempted multiple times until it is successful. We expect the spin state preparation will have a successful event on average within 2 ms. The statistical result without post-selection is displayed in Appendix B. **Efficient spin-photon interface -** A dielectric antenna structure optimizes the photon emission of the quantum emitter for efficient free space collection. Based on the average PSB photon readout counts from the bright state of the SnV\({}^{-}\) in Fig. 4b, we estimate that the lower bound of p\({}_{\text{det}}\) is above \(2.4\times 10^{-3}\) (see Methods), surpassing the previously reported p\({}_{\text{det}}\) of 4\(\times 10^{-4}\)[8, 15, 26]. This indicates that our potential entanglement generation rate for an emitter pair can be several times higher than the previously reported value. The incorporation of a nanocavity Purcell effect by the resonant dielectric antenna can further enhance the ZPL photon emission rate. 
With an extract Purcell factor of F\({}_{\text{p}}\) = 2.9 for a typical SnV\({}^{-}\), the potential improvement in the ZPL photon collection rate is significant through such an efficient spin-photon interface (see Appendix B and Methods). Enhanced fabrication and alignment of the location and orientation of the emitter can further improve the Purcell factor, as simulations have shown a quality factor ten times higher compared to current devices [27, 28]. **QSoC enables large scale fully connected qubit graph -** Figure 5a illustrates how a connected qubit graph can be built using experimental single FOV data at 11 frequency channels (k\({}_{\text{fch}}\) from 1 to 11). In a FOV, the emitters have all-to-all connectivity within frequency channels with optical routing [29, 30, 31], and some of the emitters can be tuned to connect the neighbor frequency channels. Figure 5 b presents the quantum circuit of equivalence with the cluster in Fig. 5a, assuming each frequency channel k\({}_{\text{fch}}\), it has m\({}_{\text{k}_{\text{fch}}}\) quantum emitters. A desired quantum algorithm can be compiled for the connectivity of the system. The red line labeled here corresponds to the red line in Fig. 5a here. Figure 5c indicates the ratio of the fully connected qubit graph to the total number of qubit nodes (p\({}_{\text{c}}\)) as a function of the average frequency tunability ratio (\(\overline{\Delta v_{\text{m}}}/v_{\text{inh}}\)) across the entire inhomogeneous range (\(v_{\text{inh}}\)) under different sizes of the qubit system (N\({}_{\text{qubit}}\)) (see Methods). This plot shows that for larger system scales, the tunability requirement for achieving a fully connected qubit graph is reduced, allowing the system to operate with lower tuning voltages for better energy efficiency. **Further scaling -** The QSoC module presented here highlighted the advantages of scalability in terms of qubit numbers and connectivity. Connectivity refers to the number of distinct qubits that a single qubit can interact with in an entanglement trial. Figure 5d illustrates the scaling potential of the QSoC platform, considering the number of qubits (N\({}_{\rm qubit}\)) and the number of direct connections per qubit (N\({}_{\rm link}\)). The black line, N\({}_{\rm link}\) = N\({}_{\rm qubit}\), corresponds to all-to-all connectivity, and the shaded region below represents the physically accessible region on the hardware. The light gray box, including the smaller scattered points, represents the data from a single FOV, while the dark gray box, including the larger scattered points, represents the data from the entire sample area. Two gray boxes include the results of a single frequency channel (k\({}_{\rm max}\) = 1) and 11 frequency channels (k\({}_{\rm max}\) = 11). To achieve further scaling, a larger field-of-view objective can be used (e.g. a commercial 10\(\times\) Olympus plan objective with 0.3 NA and a 2.65 mm diameter FOV). Coupled with a denser QMC design (2.52 \(\upmu\)m spacing for diffraction-limited spots at SnV\({}^{-}\) ZPL with the given objective NA), this imaging system achieves up to 8.7 \(\times 10^{5}\) directly resolvable spots using commercial-off-the-shelf (COTS) lenses, as shown in Fig. 5. When a custom-designed lens is employed, the number of diffraction-limited spot sites can be expanded to over ten million, with each spot capable of hosting hundreds of quantum emitters. 
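As a rough illustration of the connectivity analysis summarized in Fig. 5c above, the toy Monte Carlo sketch below estimates the connected fraction p\({}_{\text{c}}\) for emitters with random ZPL frequencies. The uniform frequency distribution over a normalized inhomogeneous range, the one-sided tuning window, and the interval-overlap connectivity rule are simplifying assumptions and not the analysis in the Methods; in this toy model the connected fraction likewise grows with N\({}_{\rm qubit}\) at a fixed tunability ratio.

```python
import numpy as np

def connected_fraction(n_qubit, tuning_ratio, n_trials=50, seed=0):
    """Toy estimate of p_c: the average fraction of emitters in the largest
    group that can reach a common optical frequency.  Each emitter's ZPL is
    drawn uniformly over a normalized inhomogeneous range [0, 1] and can be
    tuned upward by `tuning_ratio`; two emitters are taken as connectable if
    their tuning windows overlap, and connectivity is transitive."""
    rng = np.random.default_rng(seed)
    fracs = []
    for _ in range(n_trials):
        f_lo = np.sort(rng.uniform(0.0, 1.0, size=n_qubit))
        f_hi = f_lo + tuning_ratio          # upper edge of each tuning window
        sizes, current, reach = [], 1, f_hi[0]
        for k in range(1, n_qubit):
            if f_lo[k] <= reach:            # window overlaps the running group
                current += 1
                reach = max(reach, f_hi[k])
            else:                           # spectral gap -> start a new group
                sizes.append(current)
                current, reach = 1, f_hi[k]
        sizes.append(current)
        fracs.append(max(sizes) / n_qubit)
    return float(np.mean(fracs))

for n in (100, 1000, 10000):
    print(n, [round(connected_fraction(n, r), 2) for r in (0.001, 0.01, 0.1)])
```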
This level of qubit density of the QSoC is close to the transistor density of the most advanced semiconductor processes. (3\(\times 10^{8}\) transistors/mm\({}^{2}\) in TSMC N3 process [32]). With a broader inhomogeneous distribution range, our qubit density will not be limited by the transistor density, although the transistor density would determine the number of individual voltages the system can apply simultaneously. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Parameter & Reported work & This work \\ \hline N\({}_{\rm sys}\) & 128 [10] & 1024 \\ nemitter & 1 [10] & 2.3 \\ e\({}_{\rm spam}\) (SnV) & 25\% [25] & 3\% \\ p\({}_{\rm det}\) (F\({}_{\rm p}\)) & 4\(\times 10^{-4}\)[8] (1) & 2.4\(\times 10^{-3}\) (2.9) \\ \hline \end{tabular} \end{table} Table 1: QSoC parameters. Figure 5: QSoC enables large-scale fully connected qubit graph with further scaling. a, Widefield images of the emitter spot region across 11 frequency channels (k\({}_{\rm fch}\) from 1 to 11) as well as the illustration of connected cluster using the widefield data from a single field of view.b, The quantum circuit representations of the connected qubit graph. c, The ratio of the fully connected qubit graph p\({}_{\rm c}\) against \(\overline{\Delta v_{\rm m}}/v_{\rm inh}\) under various N\({}_{\rm qubit}\). d, The scaling potential of the QSoC platform considering N\({}_{\rm qubit}\) and N\({}_{\rm link}\). The shading indicates the potential scaling of the corresponding system with the all-to-all connectivity line. **Conclusion -** In summary, we have demonstrated the fabrication and characterization of the core QSoC module through a scalable transfer method for high-yield, large-scale integration of artificial atom arrays with CMOS chips, along with a high-throughput characterization approach. The summarized parameters of the characterized QSoC module are presented in Table 1. The QSoC module exhibits significant advantages in terms of the parameters N\({}_{\text{sys}}\) and n\({}_{\text{emitter}}\) parameters. Other parameters, such as e\({}_{\text{spam}}\), p\({}_{\text{det}}\), and F\({}_{\text{p}}\) can be further improved through refined material processes and control sequences. The architecture can be expanded by incorporating a superconducting nanowire for efficient single-photon detection [33], large-scale CMOS chip control for low-latency solid-state spin control [13, 34], reconfigurable qubit connectivity [35], and heralded spin entanglement [36]. This architecture can be readily extended to other promising solid-state quantum memory platforms. Particularly suitable for heterogeneous integration would be the thin film Si waveguide and a cavity containing isolated color centers [37, 38, 39]; focused-ion-beam-fabricated yttrium orthovanate crystal comprising rare-earth ion [40]; thin film SiC membrane with 4H-C color centers [41, 42]; semiconductor quantum dot in a film [43]; and other emerging material platforms [44, 45].
2306.05628
Quantifying the Knowledge in GNNs for Reliable Distillation into MLPs
To bridge the gaps between topology-aware Graph Neural Networks (GNNs) and inference-efficient Multi-Layer Perceptron (MLPs), GLNN proposes to distill knowledge from a well-trained teacher GNN into a student MLP. Despite their great progress, comparatively little work has been done to explore the reliability of different knowledge points (nodes) in GNNs, especially their roles played during distillation. In this paper, we first quantify the knowledge reliability in GNN by measuring the invariance of their information entropy to noise perturbations, from which we observe that different knowledge points (1) show different distillation speeds (temporally); (2) are differentially distributed in the graph (spatially). To achieve reliable distillation, we propose an effective approach, namely Knowledge-inspired Reliable Distillation (KRD), that models the probability of each node being an informative and reliable knowledge point, based on which we sample a set of additional reliable knowledge points as supervision for training student MLPs. Extensive experiments show that KRD improves over the vanilla MLPs by 12.62% and outperforms its corresponding teacher GNNs by 2.16% averaged over 7 datasets and 3 GNN architectures.
Lirong Wu, Haitao Lin, Yufei Huang, Stan Z. Li
2023-06-09T02:23:37Z
http://arxiv.org/abs/2306.05628v1
# Quantifying the Knowledge in GNNs for Reliable Distillation into MLPs ###### Abstract To bridge the gaps between topology-aware Graph Neural Networks (GNNs) and inference-efficient Multi-Layer Perceptron (MLPs), GLNN (Zhang et al., 2021) proposes to distill knowledge from a well-trained teacher GNN into a student MLP. Despite their great progress, comparatively little work has been done to explore _the reliability of different knowledge points (nodes) in GNNs, especially their roles played during distillation._ In this paper, we first quantify the knowledge reliability in GNN by measuring the invariance of their information entropy to noise perturbations, from which we observe that different knowledge points _(1)_ show different distillation speeds (_temporally_); _(2)_ are differentially distributed in the graph (_spatially_). To achieve reliable distillation, we propose an effective approach, namely _Knowledge-inspired Reliable Distillation_ (KRD), that models the probability of each node being an informative and reliable knowledge point, based on which we sample a set of additional reliable knowledge points as supervision for training student MLPs. Extensive experiments show that KRD improves over the vanilla MLPs by 12.62% and outperforms its corresponding teacher GNNs by 2.16% averaged over 7 datasets and 3 GNN architectures. Codes are publicly available at: [https://github.com/LirongWu/RKD](https://github.com/LirongWu/RKD). Machine Learning, Knowledge-Learning, Knowledge-Learning, Knowledge-Learning, Knowledge-Learning ## 1 Introduction Recent years have witnessed the great success of Graph Neural Networks (GNNs) (Hamilton et al., 2017; Wu et al., 2023; Velickovic et al., 2017; Liu et al., 2020; Wu et al., 2020; Zhou et al., 2020; Wu et al., 2021; 2) in handling graph-related tasks. Despite their great _academic success_, Multi-Layer Perceptrons (MLPs) remain the primary workhorse for practical _industrial applications_. One reason for such academic-industrial gap is the neighborhood-fetching latency incurred by data dependency in GNNs (Jia et al., 2020; Zhang et al., 2021), which makes it hard to deploy for latency-sensitive applications. Conversely, Multi-Layer Perceptrons (MLPs) involve no data dependence between data pairs and infer much faster than GNNs, but their performance is less competitive. Motivated by these complementary strengths and weaknesses, one solution to reduce their gaps is to perform GNN-to-MLP knowledge distillation (Yang et al., 2021; Zhang et al., 2021; Gou et al., 2021), which extracts the knowledge from a well-trained teacher GNN and then distills the knowledge into a student MLP. Despite the great progress, most previous works have simply treated all knowledge points (nodes) in GNNs as equally important, and few efforts are made to explore _the reliability of different knowledge points in GNNs and the diversity of the roles they play in the distillation process_. From the motivational experiment in Fig. 1, we can make two important observations about knowledge points: _(1) More is better:_ the performance of distilled MLPs can be improved as the number of knowledge points \(N_{KP}\) increases; and _(2) Reliable is better:_ the performance variances (e.g., standard deviation and best/worst performance gap) of different knowledge combinations are enlarged as \(N_{KP}\) decreases. 
The above two observations suggest that different knowledge points may play different roles in the distillation process and that distilled MLPs can consistently benefit from _more reliable_ knowledge points, while those uninformative and unreliable knowledge points may contribute little to the distillation. Figure 1: Mean, standard deviation, and minimum/maximum classification accuracy of student MLPs trained with different combinations of (randomly sampled) GNN knowledge points on Cora. **Present Work.** In this paper, we identify a potential _under-confidence_ problem for GNN-to-MLP distillation, i.e., the distilled MLPs may not be able to make predictions as confidently as teacher GNNs. Furthermore, we conduct extensive theoretical and experimental analysis on this problem and find that it is mainly caused by the lack of reliable supervision from teacher GNNs. To provide more supervision for reliable distillation into student MLPs, we propose to quantify the knowledge in GNNs by measuring the invariance of their information entropy to noise perturbations, from which we find that different knowledge points _(1)_ show different distillation speeds (_temporally_); _(2)_ are differentially distributed in the graph (_spatially_). Finally, we propose an effective approach, namely _Knowledge-inspired Reliable Distillation_ (KRD), for filtering out unreliable knowledge points and making full use of those with informative knowledge. The proposed KRD framework models the probability of each node being an information-reliable knowledge point, based on which we sample a set of additional reliable knowledge points as supervision for training student MLPs. Our main contributions can be summarized as follows: * We are the first to identify a potential _under-confidence_ problem for GNN-to-MLP distillation, and more importantly, we described in detail what it represents, how it arises, what impact it has, and how to deal with it. * We propose a perturbation invariance-based metric to quantify the reliability of knowledge in GNNs and analyze the roles played by different knowledge nodes _temporally_ and _spatially_ in the distillation process. * We propose a _Knowledge-inspired Reliable Distillation_ (KRD) framework based on the quantified GNN knowledge to make full use of those reliable knowledge points as additional supervision for training MLPs. ## 2 Related Work **GNN-to-GNN Knowledge Distillation.** Despite the great progress, most existing GNNs share the de facto design that relies on message passing to aggregate features from neighborhoods, which may be one major source of latency in GNN inference. To address this problem, there are previous works that attempt to distill knowledge from large teacher GNNs to smaller student GNNs, termed as GNN-to-GNN distillation (Lassance et al., 2020; Zhang et al., 2020; Ren et al., 2021; Joshi et al., 2021; Wu et al., 2022;b). For example, the student model in RDD (Zhang et al., 2020) and TinyGNN (Yan et al., 2020) is a GNN with fewer parameters but not necessarily fewer layers than the teacher GNN. Besides, LSP (Yang et al., 2020) transfers the topological structure (rather than feature) knowledge from a pre-trained teacher GNN to a shallower student GNN. In addition, GNN-SD (Chen et al., 2020) directly distills knowledge across different GNN layers, mainly aiming to solve the over-smoothing problem but with unobvious performance improvement at shallow layers. 
Moreover, FreeKD (Feng et al., 2022) studies a free-direction knowledge distillation architecture, with the purpose of dynamically exchanging knowledge between two shallower GNNs. Note that both teacher and student models in the above works are GNNs, making it still suffer from neighborhood-fetching latency. **GNN-to-MLP Knowledge Distillation.** To enjoy the topology awareness of GNNs and inference-efficient of MLPs, the other branch of graph knowledge distillation is to directly distill from teacher GNNs to lightweight student MLPs, termed as GNN-to-MLP distillation. For example, CPF (Yang et al., 2021)_directly_ improves the performance of student MLPs by adopting deeper/wider network architectures and incorporating label propagation in MLPs, both of which burden the inference latency. Instead, GLNN (Zhang et al., 2021) distills knowledge from teacher GNNs to vanilla MLPs without other computing-consuming operations; while the performance of their distilled MLPs can be _indirectly_ improved by employing more powerful GNNs, they still cannot match GNN-to-GNN distillation in terms of classification performance. To further improve GLNN, RKD-MLP (Anonymous, 2023) adopts a meta-policy to filter out unreliable soft labels, but this is essentially a downsampling-style strategy that will further reduce the already limited supervision. In contrast, this paper aims to provide more reliable supervision for training student MLPs, which can be considered as an up-sampling-style strategy. ## 3 Preliminaries **Notions and Problem Statement.** Let \(\mathcal{G}=(\mathbf{A},\mathbf{X})\) be a graph with the node set \(\mathcal{V}\) and edge set \(\mathcal{E}\), where \(\mathcal{V}\) is the set of \(N\) nodes with features \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{N}]\in\mathbb{R} ^{N\times d}\). The graph structure is denoted by an adjacency matrix \(\mathbf{A}\in[0,1]^{N\times N}\) with \(\mathbf{A}_{i,j}=1\) if \(e_{i,j}\in\mathcal{E}\) and \(\mathbf{A}_{i,j}=0\) if \(e_{i,j}\notin\mathcal{E}\). Considering a semi-supervised node classification task where only a subset of node \(\mathcal{V}_{L}\) with labels \(\mathcal{Y}_{L}\) are known, we denote the labeled set as \(\mathcal{D}_{L}=(\mathcal{V}_{L},\mathcal{Y}_{L})\) and unlabeled set as \(\mathcal{D}_{U}=(\mathcal{V}_{U},\mathcal{Y}_{U})\), where \(\mathcal{V}_{U}=\mathcal{V}\backslash\mathcal{V}_{L}\). The node classification aims to learn a mapping \(\Phi:\mathcal{V}\rightarrow\mathcal{Y}\) so that it can be used to infer the ground-truth label \(y_{i}\in\mathcal{Y}_{U}\). **Graph Neural Networks (GNNs).** A general GNN framework consists of two key operations for each node \(v_{i}\): (1) \(\mathrm{AGGREGATE}\): aggregating messages from neighborhood \(\mathcal{N}_{i}\); (2) \(\mathrm{UPDATE}\): updating node representations. For an \(L\)-layer GNN, the formulation of the \(l\)-th layer is as \[\begin{split}\mathbf{m}_{i}^{(l)}=&\,\mathrm{AGGREGATE }^{(l)}\left(\{\mathbf{h}_{j}^{(l-1)}:v_{j}\in\mathcal{N}_{i}\}\right)\\ \mathbf{h}_{i}^{(l)}=&\,\mathrm{UPDATE}^{(l)}\left( \mathbf{h}_{i}^{(l-1)},\mathbf{m}_{i}^{(l)}\right)\end{split} \tag{1}\] where \(1\leq l\leq L\), \(\mathbf{h}_{i}^{(0)}=\mathbf{x}_{i}\) is the input node feature, and \(\mathbf{h}_{i}^{(l)}\) is the node representation of node \(v_{i}\) in the \(l\)-th layer. **Multi-Layer Perceptrons (MLPs).** To achieve efficient inference, the vanilla MLPs are used as the student model by default in this paper. 
For a \(L\)-layer MLP, the \(l\)-th layer is composed of a linear transformation, an activation function \(\mathrm{ReLu}(\cdot)\), and a dropout function \(\mathrm{Dropout}(\cdot)\), as follows \[\mathbf{z}_{i}^{(l)}=\mathrm{Dropout}\left(\mathrm{ReLu}\big{(}\mathbf{z}_{i}^{ (l-1)}\mathbf{W}^{(l-1)}\big{)}\right) \tag{2}\] where \(\mathbf{z}_{i}^{(0)}=\mathbf{x}_{i}\) is the input feature, and \(\{\mathbf{W}^{(l)}\}_{l=0}^{L-1}\) are weight matrices with the hidden dimension \(F\). In this paper, the network architecture of MLPs, such as the layer number \(L\) and layer size \(F\), is set the same as that of teacher GNNs. **GNN-to-MLP Knowledge Distillation.** The knowledge distillation is first introduced in (Hinton et al., 2015) to mainly handle image data. However, recent works on GNN-to-MLP distillation (Yang et al., 2021; Zhang et al., 2021) extend it to the graph domain by imposing KL-divergence constraint \(\mathcal{D}_{KL}(\cdot,\cdot)\) between the label distributions generated by teacher GNNs and student MLPs, as follows \[\mathcal{L}_{\mathrm{KD}}=\frac{1}{|\mathcal{V}|}\sum_{i\in\mathcal{V}} \mathcal{D}_{KL}\left(\sigma\big{(}\mathbf{z}_{i}^{(L)}\big{)},\sigma\big{(} \mathbf{h}_{i}^{(L)}\big{)}\right) \tag{3}\] where \(\sigma(\cdot)=\mathrm{softmax}(\cdot)\), and all nodes (knowledge points) in the set \(\mathcal{V}\) are indiscriminately used as supervisions. ## 4 Methodology ### What Gets in the Way of Better Distillation? **Potential Under-confident Problem**. The GNN-to-MLP distillation can be achieved by directly optimizing the objective function \(\mathcal{L}_{\mathrm{KD}}\) defined in Eq. (3). However, such a straightforward distillation completely ignores the differences between knowledge points in GNNs and may suffer from a potential under-confident problem, i.e., the distilled MLP may fail to make predictions as confidently as teacher GNNs. To illustrate this problem, we report in Fig. 2(a) the confidences of teacher GCNs and student MLPs for those correct predictions by the UMAP (McInnes et al., 2018) algorithm on the Cora dataset. It can be seen that there exists a significant distribution shift between the confidence distribution of teacher GCNs and student MLPs, which confirms the existence of the under-confident problem. The direct hazard of such an under-confident problem is that it may push those samples located near the class boundaries into incorrect predictions, as shown in Fig. 3(a) and Fig. 3(c), which hinders the performance of student MLPs. To go deeper into the under-confident problem and explore what exactly stands in the way of better GNN-to-MLP distillation, we conducted extensive theoretical and experimental analysis and found that one of the main causes could be due to the lack of reliable supervision from teacher GNNs. **Theoretical Analysis**. The main strength of teacher GNNs over student MLPs is their excellent topology-awareness capability, which is mainly enabled by message passing. There have been a number of works exploring the roles of message passing in GNNs. For example, (Yang et al., 2020) have proved that message passing (architecture design) in GNNs is equivalent to performing Laplacian smoothing (supervision design) on node embeddings in MLPs. 
In essence, message-passing-based GNNs implicitly take the objective of Dirichlet energy minimization (Belkin and Niyogi, 2001) as graph-based regularization, which is defined as follows \[\mathcal{L}_{reg}=\mathrm{Tr}\left(\mathbf{Y}^{\top}\Delta\mathbf{Y}\right)=\sum_{i}\sum_{j\in\mathcal{N}_{i}}\left\|\frac{\mathbf{Y}_{i}}{\sqrt{d_{i}}}-\frac{\mathbf{Y}_{j}}{\sqrt{d_{j}}}\right\|_{2}^{2} \tag{4}\] where \(\Delta=\mathbf{I}-\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\) is the normalized Laplacian operator, \(\mathbf{D}\) is the degree matrix with \(\mathbf{D}_{i,i}=d_{i}=\sum_{j}\mathbf{A}_{i,j}\), and \(\mathbf{Y}=\mathrm{softmax}\left(\mathbf{H}^{(L)}\right)\) is the label distribution matrix. Apart from the supervision of cross-entropy on the labeled set, message passing in GNNs implicitly provides a special kind of _self-supervision_, which imposes regularization constraints on the label distributions between neighboring nodes. We conjecture that it is exactly such additional self-supervision that enables GNNs to make highly confident predictions. In contrast, student MLPs are trained in a way that cannot capture the fine-grained dependencies between neighboring nodes; instead, they only learn the overall contextual information about their neighborhood from teacher GNNs, resulting in undesirable under-confident predictions. Figure 2: _(a)_ Histograms of the confidence distributions of teacher GCNs and student MLPs for those correct predictions on the Cora dataset. _(b)_ Distribution of _"False Negative"_ samples w.r.t the information entropy of the teacher's predictions on the Cora dataset. _(c)_ Scatter plot of confidence (student MLP) and information entropy (teacher GCN) for those _"True Positive"_ samples on the Cora dataset. **Experimental Analysis**. To see why the (distilled) student MLPs tend to make low-confidence predictions, we conducted an in-depth statistical analysis on two types of special samples. _(1)_ The distribution of _"False Negative"_ samples (predicted correctly by GNNs but incorrectly by MLPs) w.r.t the information entropy of the teacher's predictions is reported in Fig. 2(b), from which we observe that most of the _"False Negative"_ samples are distributed in the region of higher entropy. _(2)_ For those _"True Positive"_ samples (predicted correctly by both GNNs and MLPs), the scatter of confidence and information entropy from student MLPs and teacher GNNs is plotted in Fig. 2(c), which shows that GNN knowledge with high uncertainty (low reliability) may undermine the capability of student MLPs to make sufficiently confident predictions. Based on these two observations, it is reasonable to hypothesize that one cause of the under-confident problem suffered by student MLPs is the lack of sufficiently reliable supervision from teacher GNNs. ### How to Quantify the Knowledge in GNNs? Based on the above experimental and theoretical analysis, a key issue in GNN-to-MLP distillation may be to provide _more, and more reliable,_ supervision for training student MLPs. Next, we first describe how to quantify the reliability of knowledge in GNNs, and then propose how to sample more reliable supervision in a knowledge-inspired manner.
**Knowledge Quantification.** Given a graph \(\mathcal{G}=(\mathbf{A},\mathbf{X})\) and a pre-trained teacher GNN \(f_{\theta}(\cdot,\cdot)\), we propose to quantify the reliability of a knowledge point (node) \(v_{i}\in\mathcal{V}\) in GNNs by measuring the invariance of its information entropy to noise perturbations, which is defined as follows \[\begin{split}&\rho_{i}=\frac{1}{\delta^{2}}\underset{\mathbf{X}^{ \prime}\sim\mathcal{N}(\mathbf{X},\mathbf{\Sigma}(\mathbf{\mathcal{G}}))}{ \mathbb{E}}\left[\left\|\mathcal{H}(\mathbf{Y}_{i}^{\prime})-\mathcal{H}( \mathbf{Y}_{i})\right\|^{2}\right],\\ &\text{where}\quad\mathbf{Y}^{\prime}=f_{\theta}(\mathbf{A}, \mathbf{X}^{\prime})\ \ \text{and}\ \ \mathbf{Y}=f_{\theta}(\mathbf{A},\mathbf{X})\end{split} \tag{5}\] where \(\delta\) is the variance of Gaussian noise and \(\mathcal{H}(\cdot)\) denote the information entropy. The smaller the metric \(\rho_{i}\) is, the higher the reliability of knowledge point \(v_{i}\) is. The quantification of GNN knowledge defined in Eq. (5) has the following three strengths: _(1)_ It measures the robustness of knowledge in teacher GNNs to noise perturbations, and thus more truly reflects the reliability of different knowledge points, which is very important for reliable distillation. _(2)_ The message passing is what makes GNNs special over MLPs, so the key to quantify GNN knowledge is to measure its topology-awareness capability. Compared with node-wise information entropy, Eq. (5) not only reflects the node uncertainty, but also takes into account the contextual information from the neighborhood. _(3)_ As will be analyzed next, the knowledge quantified by Eq. (5) shows the roles played by different knowledge points _spatially_ and _temporally_. **Spatial Distribution of Knowledge Points**. To explore the spatial distribution of different knowledge points in the graph, we first visualize the embeddings of teacher GNNs and student MLPs in Fig. 3(a) and Fig. 3(c), and then we mark the knowledge points with the reliability ranked in the top 20% and bottom 10% as green and orange in Fig. 3(b) and Fig. 3(d). To make it clearer, we only report the results for two classes on the Cora dataset; more visualizations can be found in **Appendix C**. We find that different knowledge points are differentially distributed in the graph, where most reliable knowledge points are distributed around the class centers regardless of being in teacher GNNs or student MLPs, while those unreliable ones are distributed at the class boundaries. The spatial distribution of knowledge points explains well why most of the _False Negative_ samples are located in regions with high uncertainty in Fig. 2(c). **Temporal Distribution of Knowledge Points**. To see the distillation speed of different knowledge points, we explore which knowledge points the student MLPs will be fitted to first during the training process. We considered those knowledge points that are correctly predicted by student MLPs Figure 4: Percentage of highly reliable knowledge points on Cora to show the distillation speeds of different knowledge points. Figure 3: _(a)(c)_ Visualizations of the embeddings of teacher GNNs and student MLPs for two classes on Cora. _(b)(d)_ Spatial distribution of knowledge points with the reliability ranked in the top 20% and bottom 10%, which are marked in green and orange, respectively. and ranked in the top 50% of reliability, among which we calculate the percentage of points with the top 20% of reliability in Fig. 4. 
It can be seen that student MLPs will quickly fit to those highly reliable knowledge points first as the training proceeds, and then gradually learn from those relatively less reliable knowledge points. This indicates that different knowledge points play different roles in the distillation process, which inspires us to sample some reliable knowledge points from teacher GNNs _in a dynamic manner_ to provide additional supervision for training MLPs. ### Knowledge-inspired Reliable Distillation In this subsection, we first model the probability of each node being an informative and reliable knowledge point based on the knowledge quantification defined by Eq. (5). Next, we propose a knowledge-based sampling strategy to make full use of those reliable knowledge points as additional supervision for more reliable distillation into MLPs. A high-level overview of the proposed _Knowledge-inspired Reliable Distillation_ (KRD) framework is shown in Fig. 5. **Sampling Probability Modeling**. We aim to estimate the sampling probability of a knowledge point based on its quantified reliability. As shown in Fig. 6, we plot the histograms of _"True Positive"_ sample density w.r.t the reliability metric \(\rho\) on two datasets (see **Appendix D** for more results), where the density has been min/max normalized. We model the sampling probability \(s_{i}\) of node \(v_{i}\) based on the metric \(\rho_{i}\) by a learnable power distribution (with power \(\alpha\)), as follows: \[p(s_{i}\mid\rho_{i},\alpha)=1-\left(\frac{\rho_{i}}{\rho_{M}}\right)^{\alpha},\quad\forall v_{i}\in\mathcal{V} \tag{6}\] where \(\rho_{M}=\max_{j}\rho_{j}\) is the largest reliability metric over all nodes. When the ground-truth labels are available, an optimal power \(\alpha_{opt}\) can be directly fitted from the histograms. However, the ground-truth labels are often unknown in practice, so we propose to combine the student MLPs \(g_{\psi^{(t)}}(\cdot)\) with the pre-trained teacher GNNs \(f_{\theta_{pre}}(\cdot)\) to model \(p\left(\alpha^{(t)}\mid f_{\theta_{pre}}(\mathbf{A},\mathbf{X}),g_{\psi^{(t)}}(\mathbf{A},\mathbf{X})\right)\) at the \(t\)-th epoch, which can be implemented by the following four steps: (1) initializing the power \(\alpha^{(0)}=1.0\); (2) constructing a histogram of the density of samples (those predicted identically by both teacher GNNs and student MLPs) w.r.t the knowledge reliability metric \(\rho\); (3) inferring a new power \(\alpha^{(t)}_{new}\) by fitting the histogram; (4) updating the power \(\alpha^{(t-1)}\) in a dynamic momentum manner, which can be formulated as follows \[\alpha^{(t)}\leftarrow\eta\,\alpha^{(t-1)}+(1-\eta)\,\alpha^{(t)}_{new} \tag{7}\] where \(\eta\) is the momentum updating rate. We provide the fitted curves with _fixed_ and _learnable_ powers in Fig. 6, which shows that the fitted distributions with learnable powers are more in line with the histograms. Moreover, we also include the results of fitting by Gaussian and exponential distributions as comparisons, but they do not work better. A quantitative comparison of different distribution fitting schemes is provided in Table 3, and the fitted results on more datasets are available in **Appendix D**. **Knowledge-based Sampling**. Next, we describe how to sample a set of reliable knowledge points as additional supervision for training student MLPs. Given any target node \(v_{i}\), we first sample some highly reliable knowledge points \(v_{j}\in\mathcal{N}_{i}\) from its neighborhood according to the sampling probability \(p(s_{j}\mid\rho_{j},\alpha^{(t)})\).
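A minimal sketch of how Eqs. (5)-(7) and the neighborhood sampling could be implemented is given below. It assumes a PyTorch-style teacher model `teacher(A, X)` that returns node logits and uses isotropic Gaussian noise in place of the covariance \(\mathbf{\Sigma}(\mathcal{G})\); the helper names are illustrative, not the authors' code:

```python
import torch

@torch.no_grad()
def knowledge_reliability(teacher, A, X, sigma=1.0, num_draws=10):
    """Eq. (5): reliability rho_i of every node, estimated as the mean squared
    change of the teacher's predictive entropy under Gaussian feature noise
    (isotropic noise used here as a simplification). Smaller rho_i = more reliable."""
    def entropy(logits):
        p = torch.softmax(logits, dim=-1)
        return -(p * torch.log(p.clamp_min(1e-12))).sum(dim=-1)

    h_clean = entropy(teacher(A, X))
    rho = torch.zeros(X.size(0), device=X.device)
    for _ in range(num_draws):  # Monte-Carlo estimate of the expectation
        X_noisy = X + sigma * torch.randn_like(X)
        rho += (entropy(teacher(A, X_noisy)) - h_clean) ** 2
    return rho / (num_draws * sigma ** 2)

def sampling_probability(rho, alpha):
    """Eq. (6): p(s_i | rho_i, alpha) = 1 - (rho_i / rho_M)^alpha with rho_M = max_j rho_j."""
    return 1.0 - (rho / rho.max()).pow(alpha)

def update_alpha(alpha_prev, alpha_new, eta=0.99):
    """Eq. (7): momentum update of the learnable power alpha."""
    return eta * alpha_prev + (1.0 - eta) * alpha_new

def sample_knowledge_points(prob, edge_index):
    """Keep each candidate knowledge point v_j in N(i) with probability p(s_j),
    returning the retained (j -> i) edges used as additional supervision."""
    src, _ = edge_index  # edge (j -> i): node j is a candidate teacher for node i
    keep = torch.bernoulli(prob[src]).bool()
    return edge_index[:, keep]
```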
Then, we take sampled knowledge points as multiple teachers and distill their knowledge into student MLPs as additional supervision through a _multi-teacher distillation_ objective, which is defined as follows \[\mathcal{L}_{\mathrm{KRD}}=\mathop{\mathbb{E}}_{i}\,\mathop{\mathbb{E}}_{j\in\mathcal{N}_{i}}\,\mathcal{D}_{KL}\left(\sigma(\mathbf{z}_{j}^{(L)}/\tau),\sigma(\mathbf{h}_{i}^{(L)}/\tau)\right) \tag{8}\] where \(\tau\) is the distillation temperature coefficient. ### Training Strategy The pseudo-code of the KRD framework is summarized in Algorithm 1. To achieve GNN-to-MLP knowledge distillation, we first pre-train the teacher GNNs with the classification loss \(\mathcal{L}_{\mathrm{label}}=\frac{1}{|\mathcal{V}_{L}|}\sum_{i\in\mathcal{V}_{L}}\mathrm{CE}\left(y_{i},\sigma(\mathbf{h}_{i}^{(L)})\right)\), where \(\mathrm{CE}(\cdot)\) denotes the cross-entropy loss. Finally, the total objective function to distill reliable knowledge from the teacher GNNs into the student MLPs is defined as follows \[\mathcal{L}_{\mathrm{total}}=\frac{\lambda}{|\mathcal{V}_{L}|}\sum_{i\in\mathcal{V}_{L}}\mathrm{CE}\left(y_{i},\sigma(\mathbf{z}_{i}^{(L)})\right)+\left(1-\lambda\right)\left(\mathcal{L}_{\mathrm{KD}}+\mathcal{L}_{\mathrm{KRD}}\right)\] where \(\lambda\) is the weight to balance the influence of the classification loss and the knowledge distillation losses. ### Time Complexity Analysis It is noteworthy that the main computational burden introduced in this paper comes from the additional reliable supervision defined in Eq. (8). However, we sample reliable knowledge points _in the neighborhood_ instead of the entire set of nodes \(\mathcal{V}\), which reduces the time complexity from \(\mathcal{O}(|\mathcal{V}|^{2}F)\) to less than \(\mathcal{O}(|\mathcal{E}|F)\). The training time complexity of the KRD framework mainly comes from two parts: (1) GNN training \(\mathcal{O}(|\mathcal{V}|dF+|\mathcal{E}|F)\) and (2) knowledge distillation \(\mathcal{O}(|\mathcal{E}|F)\), where \(d\) and \(F\) are the dimensions of the input and hidden spaces. The total time complexity \(\mathcal{O}(|\mathcal{V}|dF+|\mathcal{E}|F)\) is linear w.r.t the number of nodes \(|\mathcal{V}|\) and edges \(|\mathcal{E}|\), which is in the same order as GCNs and GLNN. Figure 5: A high-level overview of the proposed KRD framework. Figure 6: Histograms of _"True Positive"_ sample density w.r.t the reliability metric \(\rho\), as well as five distribution fitting schemes for modeling the sampling probability on the Cora and Citeseer datasets, where the density has been min/max normalized. ``` 0: Graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), Node Features: \(\mathbf{X}\), # Epoch: \(E\). 0: Predicted labels \(\mathcal{Y}_{U}\), MLP parameters \(\{\mathbf{W}^{l}\}_{l=0}^{L-1}\). 1: Randomly initialize the parameters of GNNs and MLPs. 2: Pre-train the teacher GNNs until convergence by \(\mathcal{L}_{\mathrm{label}}\). 3: Quantify the reliability of knowledge points by Eq. (5). 4:for\(t\in\{1,2,\cdots,E+1\}\)do 5: Calculate the node representations from teacher GNNs and student MLPs by Eq. (1) and Eq. (2); 6: Estimate the sampling probability by Eq. (6); 7: Sample reliable knowledge points and calculate the multi-teacher distillation loss \(\mathcal{L}_{\mathrm{KRD}}\) by Eq. (8); 8: Calculate the total loss \(\mathcal{L}_{\mathrm{total}}\) and update the parameters of student MLPs \(\{\mathbf{W}^{l}\}_{l=0}^{L-1}\) by back propagation. 9: Momentum updating of the power \(\alpha^{(t)}\) by Eq. (7).
10:endfor 11: Predicted labels \(y_{i}\in\mathcal{Y}_{U}\) for those unlabeled nodes \(\mathcal{Y}_{U}\). 12:return Predicted labels \(\mathcal{Y}_{U}\) and Parameters \(\{\mathbf{W}^{l}\}_{l=0}^{L-1}\). ``` **Algorithm 1** Algorithm for KRD framework (Transductive) ## 5 Experiments In this section, we evaluate KRD on seven real-world datasets by answering the following six questions. **Q1**: How effective is KRD in the transductive and inductive settings? Is KRD applicable to different teacher GNNs? **Q2:** How does KRD compare to other leading baselines on graph knowledge distillation? **Q3:** What happens if we model the sampling probability using other distribution functions? **Q4:** How does KRD perform by applying other heuristic knowledge sampling approach? **Q5:** Can KRD improve the predictive confidence of distilled MLPs? **Q6:** How do the two key hyperparameters \(\lambda\) and \(\eta\) influence the performance of KRD? **Dataset.** The effectiveness of the KRD framework is evaluated on seven real-world datasets, including Cora (Sen et al., 2008), Citeseer (Giles et al., 1998), Pubmed (McCallum et al., 2000), Coauthor-CS, Coauthor-Physics, Amazon-Photo (Shchur et al., 2018), and ogbn-arxiv (Hu et al., 2020). A statistical overview of datasets is placed in **Appendix A**. Besides, each set of experiments is run five times with different random seeds, and the average accuracy and standard deviation are reported. Due to space limitations, we defer the implementation details and hyperparameter settings for each dataset to **Appendix C** and supplementary materials. **Baselines.** Three basic components in knowledge distillation are (1) teacher model, (2) student model, and (3) distillation objective. As a model-agnostic framework, KRD can be combined with any teacher GNN architecture. In this paper, we consider three types of teacher GNNs, including GCN (Kipf and Welling, 2016), GraphSAGE (Hamilton et al., 2017), and GAT (Velickovic et al., 2017). Besides, we adopt pure MLPs (with the same layer number \(L\) and size \(F\) as teacher GNNs) as the student model for a fair comparison. The focus of this paper is to provide more reliable self-supervision for GNN-to-MLP distillation. Thus, we only take GLNN (Zhang et al., 2021) as an important benchmark to demonstrate the necessity and effectiveness of additional supervision. Besides, we also compare KRD with some state-of-the-art graph distillation baselines in Table. 2, including CPF (Yang et al., 2021), RKD-MLP (Anonymous, 2023), FF-G2M (Wu et al., 2023b), RDD (Zhang et al., 2020b), TinyGNN (Yan et al., 2020), LSP (Yang et al., 2020b), etc. ### Classification Performance Comparison (Q1) The reliable knowledge of three teahcer GNNs is distilled into student MLPs in the transductive and inductive settings. The experimental results on seven datasets are reported in Table. 1, from which we can make three observations: _(1)_ Compared to the vanilla MLPs and intuitive KD baseline - GLNN, KRD performs significantly better than them in all cases, regardless of the datasets, teacher GNNs and evaluation settings. For example, KRD outperforms GLNN by 2.03% (GCN), 1.92% (SAGE), and 2.03% (GAT) averaged over seven datasets in the transductive setting, respectively. The superior performance of KRD demonstrates the effectiveness of providing more reliable self-supervision for GNN-to-MLP distillation. _(2)_ The performance gain of KRD over GLNN is higher on the large-scale ogbn-arxiv dataset. 
We speculate that this is because the reliability of different knowledge points probably differ more in large-scale datasets, making those reliable knowledge points play a more important role. _(3)_ It can be seen that KRD works much better in the transductive setting than in the inductive one, since there are more node features that can be used for training in the transductive setting, providing more reliable knowledge points to serve as additional self-supervision. ### Comparision with Representative Baselines (Q2) To answer **Q2**, we compare KRD with several representative graph knowledge distillation baselines, including both GNN-to-GNN and GNN-to-MLP distillation. As can be seen from the results reported in Table 2, KRD outperforms all other GNN-to-MLP baselines by a wide margin. More importantly, we are the first work to demonstrate _the promising potential of distilled MLPs to surpass distilled GNNs_. Even when compared with those state-of-the-art GNN-to-GNN distillation methods, KRD still shows competitive performance, ranking in the top two on 6 out of 7 datasets. ### Evaluation on Distribution Fitting Function (Q3) To evaluate the effectiveness of different distribution fitting functions and the momentum updating defined in Eq. (7), we compare the learnable power distribution defined in Eq. (6) with the other four schemes: (A) exponential distribution \(p(s_{i}\mid\rho_{i},\alpha)=\alpha\exp^{-\alpha\cdot\frac{\rho_{i}}{\rho_{M}}}\) with learnable rate \(\alpha\); (B) Gaussian distribution \(p(s_{i}\mid\rho_{i},\alpha)=\mathcal{N}(0,\alpha)\) with learnable variance \(\alpha\); (C) power distribution with fixed power \(\alpha=1\); and (D) power distribution with fixed power \(\alpha=3\). From the results reported in Table. 3, it can be seen that (1) when modeling the sampling probability with power distribution, the _learnable_ power is consistently better than the _fixed_ power on all datasets, and (2) the exponential, \begin{table} \begin{tabular}{c c|c c c c c c c|c} \hline \hline **Teacher** & **Student** & \multicolumn{2}{c}{Cora} & \multicolumn{2}{c}{Citeseer} & \multicolumn{2}{c}{Pubmed} & \multicolumn{2}{c}{Photo} & \multicolumn{2}{c}{CS} & \multicolumn{2}{c}{Physics} & \multicolumn{1}{c|}{ogbn-arxiv} & \multicolumn{1}{c}{_Average_} \\ \hline \multicolumn{11}{c}{**Transductive Setting**} \\ \hline MLPs & - & \(59.58_{\pm 0.97}\) & \(60.32_{\pm 0.61}\) & \(73.40_{\pm 0.68}\) & \(78.65_{\pm 1.68}\) & \(87.82_{\pm 0.64}\) & \(88.81_{\pm 1.08}\) & \(54.63_{\pm 0.84}\) & - \\ \hline \multirow{4}{*}{GCN} & - & \(81.70_{\pm 0.96}\) & \(71.64_{\pm 0.34}\) & \(79.48_{\pm 0.21}\) & \(90.63_{\pm 1.53}\) & \(90.00_{\pm 0.58}\) & \(92.45_{\pm 0.53}\) & \(\mathbf{71.20}_{\pm 0.17}\) & - \\ & GLNN & \(82.20_{\pm 0.73}\) & \(71.72_{\pm 0.30}\) & \(80.16_{\pm 0.20}\) & \(91.42_{\pm 1.61}\) & \(92.20_{\pm 0.72}\) & \(93.11_{\pm 0.39}\) & \(67.76_{\pm 0.23}\) & - \\ & KRD (ours) & \(\mathbf{84.42}_{\pm 0.57}\) & \(\mathbf{74.86}_{\pm 0.58}\) & \(\mathbf{81.98}_{\pm 0.41}\) & \(\mathbf{92.21}_{\pm 1.44}\) & \(\mathbf{94.08}_{\pm 0.34}\) & \(\mathbf{94.30}_{\pm 0.46}\) & \(70.92_{\pm 0.21}\) & - \\ & _Improv._ & 2.22 & 3.14 & 1.82 & 0.79 & 1.86 & 1.19 & 3.16 & 2.03 \\ \hline \multirow{4}{*}{GraphSAGE} & - & \(82.02_{\pm 0.94}\) & \(71.76_{\pm 0.49}\) & \(79.36_{\pm 0.45}\) & \(90.56_{\pm 1.69}\) & \(89.29_{\pm 0.77}\) & \(91.97_{\pm 0.91}\) & \(71.06_{\pm 0.27}\) & - \\ & GLNN & \(81.86_{\pm 0.88}\) & \(71.52_{\pm 0.54}\) & \(80.32_{\pm 0.38}\) & \(91.34_{\pm 1.46}\) & \(92.00_{\pm 0.57}\) & 
\(92.82_{\pm 0.93}\) & \(68.30_{\pm 0.19}\) & - \\ & KRD (ours) & \(\mathbf{84.60}_{\pm 0.76}\) & \(\mathbf{73.68}_{\pm 0.68}\) & \(\mathbf{81.60}_{\pm 0.33}\) & \(\mathbf{92.12}_{\pm 1.50}\) & \(\mathbf{93.93}_{\pm 0.40}\) & \(\mathbf{94.18}_{\pm 0.58}\) & \(\mathbf{71.50}_{\pm 0.25}\) & - \\ & _Improv._ & 2.74 & 2.16 & 1.28 & 0.78 & 1.93 & 1.36 & 3.20 & 1.92 \\ \hline \multirow{4}{*}{GAT} & - & \(81.66_{\pm 1.04}\) & \(70.78_{\pm 0.60}\) & \(79.88_{\pm 0.85}\) & \(90.06_{\pm 1.38}\) & \(90.90_{\pm 0.37}\) & \(91.97_{\pm 0.58}\) & \(71.08_{\pm 0.19}\) & - \\ & GLNN & \(81.78_{\pm 0.75}\) & \(70.96_{\pm 0.86}\) & \(80.48_{\pm 0.47}\) & \(91.22_{\pm 1.45}\) & \(92.44_{\pm 0.41}\) & \(92.70_{\pm 0.56}\) & \(68.56_{\pm 0.22}\) & - \\ & KRD (ours) & \(\mathbf{84.12}_{\pm 0.39}\) & \(\mathbf{73.60}_{\pm 0.59}\) & \(\mathbf{82.02}_{\pm 0.56}\) & \(\mathbf{92.13}_{\pm 1.48}\) & \(\mathbf{94.35}_{\pm 0.29}\) & \(\mathbf{94.19}_{\pm 0.50}\) & \(\mathbf{71.45}_{\pm 0.26}\) & - \\ & _Improv._ & 2.34 & 2.10 & 1.54 & 0.91 & 1.91 & 1.49 & 2.89 & 1.88 \\ \hline \multicolumn{11}{c}{**Inductive Setting**} \\ \hline MLPs & - & \(59.20_{\pm 1.26}\) & \(60.16_{\pm 0.87}\) & \(73.26_{\pm 0.83}\) & \(79.02_{\pm 1.42}\) & \(87.90_{\pm 0.58}\) & \(89.10_{\pm 0.90}\) & \(54.46_{\pm 0.52}\) & - \\ \hline \multirow{4}{*}{GCN} & - & \(\mathbf{79.30}_{\pm 0.49}\) & \(71.46_{\pm 0.36}\) & \(78.10_{\pm 0.51}\) & \(89.32_{\pm 1.63}\) & \(90.07_{\pm 0.60}\) & \(92.05_{\pm 0.78}\) & \(\mathbf{70.88}_{\pm 0.35}\) & - \\ & GLNN & \(71.24_{\pm 0.45}\) & \(70.76_{\pm 0.30}\) & \(80.16_{\pm 0.73}\) & \(89.92_{\pm 1.34}\) & \(92.08_{\pm 0.98}\) & \(92.89_{\pm 0.88}\) & \(60.92_{\pm 0.31}\) & - \\ & KRD (ours) & \(73.78_{\pm 0.55}\) & \(\mathbf{71.80}_{\pm 0.41}\) & \(\mathbf{81.48}_{\pm 0.29}\) & \(\mathbf{90.37}_{\pm 1.79}\) & \(\mathbf{93.15}_{\pm 0.43}\) & \(\mathbf{93.86}_{\pm 0.55}\) & \(62.85_{\pm 0.32}\) & - \\ & _Improv._ & 2.54 & 1.04 & 1.32 & 0.45 & 1.07 & 0.97 & 2.93 & 1.47 \\ \hline \multirow{4}{*}{GraphSAGE} & - & \(\mathbf{79.56}_{\pm 0.47}\) & \(70.24_{\pm 0.62}\) & \(79.40_{\pm 0.48}\) & \(89.76_{\pm 1.51}\) & \(89.96_{\pm 0.56}\) & \(91.79_{\pm 0.69}\) & \(\mathbf{71.13}_{\pm 0.32}\) & - \\ & GLNN & \(71.82_{\pm 0.35}\) & \(70.26_{\pm 0.71}\) & \(80.46_{\pm 0.34}\) & \(89.49_{\pm 1.70}\) & \(92.0 Gaussian and power distributions perform differently on different datasets, but the power distribution can achieve better overall performance than the other two distributions. ### Evaluation on Knowledge Sampling Strategy (Q4) To explore how different sampling strategies influence the performance of distillation, we compare our knowledge-inspired sampling with other three schemes: (A) _Non-sampling_: directly takes all nodes in the neighborhood as additional supervision and distills their knowledge into the student MLPs; (B) _Random Sampling_: randomly sampling knowledge points with 50% probability in the neighborhood for distillation; (C) _Entropy-based Sampling_: performing min/max normalization on the information entropy of each knowledge point to [0-1], and then sampling by taking entropy as sampling probability. Besides, we also include the performance of vanilla GCN and GLNN as a comparison. We can observe from Table. 3 that (1) Both non-sampling and random sampling help to significantly improve the performance of GLNN, again demonstrating the importance of providing additional supervision for training student MLPs. 
(2) Entropy- and knowledge-based sampling performs much better than non-sampling and random sampling, suggesting that different knowledge plays different roles during distillation. (3) Compared with entropy-based sampling, knowledge-based sampling fully takes into account the contextual information of the neighborhood as explained in Sec. 4.2, and thus shows better overall performance. ### Evaluation on Confidence Distribution (Q5) To explore whether providing additional reliable supervision can improve the predictive confidence of distilled MLPs, we compare the confidence distribution of KRD with that of GLNN in Fig. 7 on four datasets. It can be seen that the predictive confidence of student MLPs in GLNN (optimized with only the distillation term defined by Eq. (3) is indeed not very high. Instead, KRD provides additional reliable self-supervision defined in Eq. (8), which helps to greatly improve the predictive confidence of student MLPs. ### Evaluation on Hyperparameter Sensitivity (Q6) We provide sensitivity analysis for two hyperparameters, loss weights \(\lambda\) and momentum updating rate \(\eta\) in Fig. 8(a) and Fig. 8(b), from which we observe that (1) setting the loss weight \(\lambda\) too large weakens the contribution of the distillation term, leading to poor performance; (2) too large or small \(\eta\) are both detrimental to modeling sampling probability and extracting informative knowledge. In practice, \(\eta=0.9,0.99\) often yields pretty good performance. In practice, we can determine \(\lambda\) and \(\eta\) by selecting the model with the highest accuracy on the validation set by the grid search. ## 6 Conclusion In this paper, we identified a potential _under-confidence_ problem for GNN-to-MLP distillation, and more importantly, we described in detail what it represents, how it arises, what impact it has, and how to deal with it. To address this problem, we design a perturbation invariance-based metric to quantify the reliability of knowledge in GNNs, based on which we propose a _Knowledge-inspired Reliable Distillation_ (KRD) framework to make full use of those reliable knowledge points as additional supervision for training MLPs. Limitations still exist; for example, combining our work with other more powerful and expressive teacher/student models may be another promising direction. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline **Methods** & Cora & Citeseer & Pubmed & Photo & CS & Physics \\ \hline Vanilla GCN & 81.70 & 71.64 & 79.48 & 90.63 & 90.00 & 92.45 \\ GLNN & 82.54 & 71.92 & 80.16 & 90.48 & 91.48 & 92.81 \\ Non-sampling & 83.26 & 73.58 & 80.74 & 91.45 & 93.04 & 93.42 \\ Random & 82.42 & 73.10 & 81.08 & 91.28 & 92.57 & 93.74 \\ Entropy-based & 83.64 & 73.74 & 81.32 & 91.58 & 93.35 & 93.63 \\ \hline Knowledge-based & **84.42** & **74.86** & **81.98** & **92.21** & **94.08** & **94.30** \\ \hline \hline \end{tabular} \end{table} Table 4: Performance comparison of different sampling strategies, where the best/second metrics are marked in **bold** and underline. Figure 8: Hyperparameter sensitivity analysis on \(\lambda\) and \(\eta\). 
\begin{table} \begin{tabular}{l|c c c c c c} \hline \hline **Methods** & Cora & Citeseer & Pubmed & Photo & CS & Physics \\ \hline Exponential & 83.30 & 73.84 & 81.10 & 92.12 & 93.80 & 93.63 \\ Gaussian & 84.12 & 74.52 & 81.56 & 92.10 & **94.15** & 94.08 \\ Power (fixed \(\alpha\)=1) & 83.84 & 74.18 & 81.44 & 92.04 & 93.93 & 93.93 \\ Power (fixed \(\alpha\)=3) & 83.54 & 74.32 & 81.34 & 91.95 & 94.01 & 93.75 \\ Power (learnble) & **84.42** & **74.86** & **81.98** & **92.21** & 94.08 & **94.30** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance comparison of different distribution fitting functions and the momentum updating of Eq. (7), where **bold** and underline denote the best and second metrics on each dataset. Figure 7: Confidence distribution of the distilled MLPs in GLNN and KRD on four datasets, where GCN is adopted as the teacher. ## 7 Acknowledgement This work was supported by National Key R&D Program of China (No. 2022ZD0115100), National Natural Science Foundation of China Project (No. U21A20427), and Project (No. WU2022A009) from the Center of Synthetic Biology and Integrated Bioengineering of Westlake University.
2308.15197
Where Would I Go Next? Large Language Models as Human Mobility Predictors
Accurate human mobility prediction underpins many important applications across a variety of domains, including epidemic modelling, transport planning, and emergency responses. Due to the sparsity of mobility data and the stochastic nature of people's daily activities, achieving precise predictions of people's locations remains a challenge. While recently developed large language models (LLMs) have demonstrated superior performance across numerous language-related tasks, their applicability to human mobility studies remains unexplored. Addressing this gap, this article delves into the potential of LLMs for human mobility prediction tasks. We introduce a novel method, LLM-Mob, which leverages the language understanding and reasoning capabilities of LLMs for analysing human mobility data. We present concepts of historical stays and context stays to capture both long-term and short-term dependencies in human movement and enable time-aware prediction by using time information of the prediction target. Additionally, we design context-inclusive prompts that enable LLMs to generate more accurate predictions. Comprehensive evaluations of our method reveal that LLM-Mob excels in providing accurate and interpretable predictions, highlighting the untapped potential of LLMs in advancing human mobility prediction techniques. We posit that our research marks a significant paradigm shift in human mobility modelling, transitioning from building complex domain-specific models to harnessing general-purpose LLMs that yield accurate predictions through language instructions. The code for this work is available at https://github.com/xlwang233/LLM-Mob.
Xinglei Wang, Meng Fang, Zichao Zeng, Tao Cheng
2023-08-29T10:24:23Z
http://arxiv.org/abs/2308.15197v2
# Where Would I Go Next? Large Language Models as Human Mobility Predictors ###### Abstract Accurate human mobility prediction underpins many important applications across a variety of domains, including epidemic modelling, transport planning, and emergency responses. Due to the sparsity of mobility data and the stochastic nature of people's daily activities, achieving precise predictions of people's locations remains a challenge. While recently developed large language models (LLMs) have demonstrated superior performance across numerous language-related tasks, their applicability to human mobility studies remains unexplored. Addressing this gap, this article delves into the potential of LLMs for human mobility prediction tasks. We introduce a novel method, _LLM-Mob_, which leverages the language understanding and reasoning capabilities of LLMs for analysing human mobility data. We present concepts of _historical stays_ and _context stays_ to capture both long-term and short-term dependencies in human movement and enable time-aware prediction by using time information of the prediction target. Additionally, we design context-inclusive prompts that enable LLMs to generate more accurate predictions. Comprehensive evaluations of our method reveal that _LLM-Mob_ excels in providing accurate and interpretable predictions, highlighting the unapped potential of LLMs in advancing human mobility prediction techniques. We posit that our research marks a significant paradigm shift in human mobility modelling, transitioning from building complex domain-specific models to harnessing general-purpose LLMs that yield accurate predictions through language instructions. The code for this work is available at [https://github.com/xlwang233/LLM-Mob](https://github.com/xlwang233/LLM-Mob). Human Mobility Large Language Models Prediction Artificial Intelligence ## 1 Introduction Human mobility refers to the movement of people from one place to another, typically within a geographic area such as a city, region, or country. The prediction of human mobility has seen immediate benefits in many downstream tasks such as points of interest recommendation [20], communication networks traffic prediction [17], road traffic optimisation [14] and many more. Moreover, studying the mobility of people is crucial for tackling numerous prominent societal challenges, including urbanisation [1], segregation [13], and the spread of epidemics [15], to name a few. The unique characteristics of human mobility manifest in its inherent regularity, stochasticity (Song et al., 2010a) and complex spatiotemporal dependencies, making it hard to accurately predict people's whereabouts. Recent studies exploit the spatiotemporal modelling capabilities of deep learning models to achieve better predictive performance (Feng et al., 2018; Xue et al., 2021; Hong et al., 2023), but the accuracy is still not sufficient and the produced results cannot be directly and fully explained, which hinders the interpretability and applicability of such models in real world applications. The recently launched large language model (LLM) - ChatGPT (OpenAI, 2022) has shown impressive performance, outperforming many models in a range of NLP tasks even in zero-shot settings (Qin et al., 2023). It has not only spawned many interesting researches in NLP field and beyond, but also profoundly changed people's daily lives. However, whether LLMs are able to model human mobility data remains unknown. 
To answer this question, this work explores the potential of LLMs for modelling human mobility data, as shown in Figure 1. Utilising LLMs to model human mobility is not a trivial task because LLMs are trained and optimised for language processing and understanding, which means they cannot be directly used for location prediction. To address this issue, we propose a framework called _LLM-Mob_ that seamlessly combines human mobility prediction with language modelling. Specifically, we organise the mobility data into _historical stays_ and _context stays_ to account for long-term and short-term dependencies in people's movements and utilise the time information from the _target stay_ to make time-aware prediction. Moreover, we design effective prompting strategies to help the LLMs understand the mobility data, maximise their reasoning abilities, and enable the interpretation of the prediction results. To validate the proposed _LLM-Mob_, we conduct comprehensive experiments on two public human mobility datasets, in which we compare our method with the state-of-the-art models and analyse the prediction results. Furthermore, we discuss the capabilities of LLMs in predicting human mobility and point out the limitations of our method. Experiment results show that _LLM-Mob_ achieves superior predictive performance and interpretability. In summary, we state the main contributions of our work as follows: * We establish an effective framework to model mobility data using accessible LLMs; to the best of our knowledge, it is the first time that LLMs are applied to modelling human mobility. * We verify the effectiveness of our method by extensive experiments on two public datasets. We analyse the results quantitatively and qualitatively, and probe into the capabilities of the LLMs. The analysis can serve as the empirical evidence that supports utilising LLMs in mobility prediction. * We argue that we have introduced and made the very first step towards an important paradigm shift to modelling human mobility, i.e., from domain specific deep neural networks to a general purpose LLM-based approach. We envision our work could inspire a line of research utilising LLMs in the future. Figure 1: LLM-based human mobility prediction. ## 2 Related Work ### Human Mobility Prediction Human mobility prediction is the study of forecasting human movements either at individual or collective levels, using various data sources and predictive modelling techniques. Several pioneering studies focused on uncovering spatiotemporal regularity and statistical properties of individuals' movements (Gonzalez et al., 2008; Song et al., 2010). They revealed the predictability of human mobility and laid the theoretical foundation of this research field. In recent years, with the availability of large-scale mobility data and advances in machine learning, data-driven approaches have gained popularity in human mobility prediction. Early studies assumed the Markovian property for individuals' location visits (Ashbrook and Starner, 2002), based on which, Mobility Markov Chain was first proposed by Gambs et al. (2010) and was later used in their work on the next place prediction (Gambs et al., 2012). Although more recent studies (Huang, 2017; Wang et al., 2020) further enhanced the predictive performance of this type of models, they failed to consider the long-range spatiotemporal dependencies. Therefore, deep learning models capable of learning complex sequential dependencies from vast amount of data were introduced to tackle the problem. 
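The first-order Markov approach mentioned above (later used as the 1-MMC baseline in Section 5) can be sketched in a few lines. This is a generic illustration of the idea, with transition probabilities estimated from visit counts, rather than the exact formulation of the cited works:

```python
from collections import Counter, defaultdict

class FirstOrderMMC:
    """First-order mobility Markov chain: the next place depends only on the
    current place, with transition probabilities estimated from visit counts."""
    def __init__(self):
        self.transitions = defaultdict(Counter)  # place -> Counter of next places

    def fit(self, place_sequences):
        for seq in place_sequences:               # each seq is a list of place IDs
            for current, nxt in zip(seq[:-1], seq[1:]):
                self.transitions[current][nxt] += 1

    def predict(self, current_place, k=1):
        """Return the k most likely next places given the current place."""
        counts = self.transitions.get(current_place)
        if not counts:
            return []                             # unseen place: no prediction
        return [place for place, _ in counts.most_common(k)]

# usage sketch: model = FirstOrderMMC(); model.fit(train_sequences); model.predict(42, k=10)
```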
For example, recurrent neural network (RNN) and its variants (e.g., LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Cho et al., 2014)) have been proven to outperform Markov models in predicting the next location (Feng et al., 2018; Krishna et al., 2018). Moreover, most recent works (Xue et al., 2021; Hong et al., 2023) have utilised Transformer architecture (Vaswani et al., 2017) to mitigate the limited ability of RNNs to model long-term dependencies and have incorporated various contextual information such as semantic, social, and geographical contexts to further improve the performance. Despite the increasing complexity of these deep neural architectures, the performance enhancements have been somewhat incremental and the prediction results cannot be fully explained. ### Large Language Models Large language models (LLMs) refer to Transformer-based pre-trained language models that contain tens or even hundreds of billions of parameters and are trained on massive text data (Zhao et al., 2023). Existing LLMs include GPT-3 (Brown et al., 2020), GPT-4 (OpenAI, 2023), PaLM (Chowdhery et al., 2022), and Llama (Touvron et al., 2023), etc. They have not only achieved excellent performance on various language-related tasks but have also found applications outside the NLP fields such as biology, law, social sciences and so on. The success of LLMs is closely related to their emergent abilities (Wei et al., 2022) that are not found in small models, such as in-context learning which is briefly described below. The in-context learning (ICL) ability is formally introduced by GPT-3 (Brown et al., 2020): assuming that the language model has been provided with a natural language instruction and/or several task demonstrations, it can generate the expected output without requiring additional training or gradient update. Constructing effective prompts is essential to ICL. Wei et al. (2022) proposed a prompting strategy called chain-of-thought (CoT) that can bring out the strong reasoning ability of LLMs by utilising the intermediate reasoning steps for deriving the final answer. Extensions of CoT prompting include zero-shot variants (Kojima et al., 2022), automatically generated series of reasoning steps (Zhang et al., 2022; Wang et al., 2023), sampling and selecting the most consistent answer via a majority vote (Wang et al., 2022), and a tree of thoughts that enables the LLMs to self-evaluate the intermediate thoughts and incorporate search algorithms (Yao et al., 2023). Compared to supervised learning, ICL does not require any training process, greatly reducing the computational costs for adapting the model to new tasks. Despite the successful application in various areas, no research has touched on the possibility of using LLMs in human mobility studies. To fill the gap, we make the first attempt to leverage the cutting-edge LLMs for modelling and predicting human mobility by establishing an in-context learning process detailed in the following sections. ## 3 Preliminaries We introduce the notations used in this article and formulate the next location prediction problem. ### Terms and Notations Mobility data are typically collected through electronic devices and stored as spatiotemporal trajectories. Each track point in a user's trajectory comprises a pair of spatial coordinates and a timestamp. After preprocessing, a user's trajectory is represented as a sequence of _stays_ where people remain stationary for a certain amount of time. 
More specifically, a stay is denoted as (_st_, _dow_, _dur_, _pid_), where _st_ indicates the time when the stay starts, _dow_ denotes the day of the week, _dur_ denotes the duration of the stay, and _pid_ denotes the unique identifier of the place where the stay occurred. An example of a stay could be (_17:30_, _Tuesday_, _35 minutes_, _place_1_), which means that the user stayed at _place_1_ from 17:30 till 18:05 on Tuesday. ### Next Location Prediction Given a user's sequence of stays up until time \(n\): \(\textbf{{S}}=(S_{n-Q+1},\ldots,S_{n})\), the goal is to predict the next location/place (i.e., _pid\({}_{n+1}\)_) that the user will visit in the next time step. The length of the sequence \(Q\) determines how much past information is considered in the predictive model. ## 4 Methodology In this section, we elaborate on the framework of our proposed _LLM-Mob_. The whole workflow is shown in Figure 2. There are two main steps: data formatting and prompt designing. In constructing the data, the pre-processed mobility data are formatted into _historical stays_ and _context stays_ to accommodate both long-term and short-term dependencies, and the time information contained in the target stay, i.e., \((st_{n+1},dow_{n+1})\), is utilised to facilitate time-aware prediction. Then we use the formatted data to form the _Prompt_, which is fed into LLMs to retrieve the _Answer_ that contains both the prediction results and the corresponding reasons. The details are explained below. Figure 2: The workflow of _LLM-Mob_. ### Data Formatting It is challenging for LLMs to extract useful information from raw sequential stays, and hence we propose to format the data as different kinds of stays to help LLMs better capture the dependencies of people's mobility. #### 4.1.1 Capturing Long-term and Short-term Dependencies In the context of individuals' movements, long-term dependencies are associated with how people move in space over extended periods, often weeks to months or even years, which exhibit relatively stable patterns. For example, a person might always visit a certain workplace in the morning. In comparison, short-term dependencies appear more stochastic as they can be influenced by immediate and sometimes unpredictable factors like sudden changes in weather or an unplanned event, leading to variations in mobility patterns. To capture these dependencies, we propose to construct _historical stays_ as stays which are relatively more distant from the current time and span over a longer period, while _context stays_ are a few most recent stays that are fewer in number (as shown in Figure 3). The numbers of _historical stays_ and _context stays_ are \(M\) and \(N\), respectively. #### 4.1.2 Target Stay Time Information for Time-Aware Prediction Figure 3 presents an illustration of the _time-aware prediction_ scenario, where the time information of the target stay is considered when predicting its corresponding place. This means that, instead of guessing the next place without knowing any temporal information (the _time-unknown prediction_ as shown in Figure 3), we predict which place the user will be at the next time \((st_{n+1},dow_{n+1})\). We argue that this is a more realistic implementation of the next location prediction problem because in real life, we would always like to know the situation at a specific time in the future.
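As a concrete sketch of this data formatting step, the snippet below organises a user's stay sequence into historical stays, context stays, and the target-time pair. The exact split shown (the \(M\) stays immediately preceding the \(N\) most recent ones) is one plausible choice rather than the authors' released code, and the default values \(M=40\) and \(N=5\) simply mirror the implementation details reported in Section 5.2.1:

```python
from typing import List, Tuple

# A stay is the tuple (st, dow, dur, pid) introduced in Section 3.1,
# e.g. ("17:30", "Tuesday", 35, 1); the concrete field types are illustrative.
Stay = Tuple[str, str, int, int]

def format_stays(stays: List[Stay], target_time: Tuple[str, str],
                 M: int = 40, N: int = 5) -> dict:
    """Split a user's stay sequence into historical stays (long-term pattern),
    context stays (short-term pattern), and the target-stay time information
    (st_{n+1}, dow_{n+1}) used for time-aware prediction."""
    context_stays = stays[-N:]              # the N most recent stays
    historical_stays = stays[-(M + N):-N]   # the M stays preceding the context
    return {
        "historical_stays": historical_stays,
        "context_stays": context_stays,
        "target_time": target_time,          # e.g. ("15:00", "Saturday")
    }
```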
For example, a traffic forecasting system would predict the traffic condition at certain times in the future like 1 or 2 hours later; a user might ask an intelligent personal assistant (e.g., Alexa, Siri, etc.) a question like "Which leisure place do you recommend me to visit at 3 pm on Saturday?". ### Context-Inclusive Prompting Building upon existing prompting strategies like Chain-of-Thought (CoT) [Wei et al., 2022b] and Plan-and-Solve (PS) [Wang et al., 2023], we carefully develop context-inclusive prompts that incorporate relevant contextual information (such as the description for the data) to enhance next location prediction by LLMs. After iterative trials and refinements, we end up with the final prompt shown in Figure 4. The prompt consists of two main aspects, i.e., instruction and data. Each aspect comprises several sub-parts which are annotated by their purpose as shown in Figure 4. In the prompt, <history>, <context> and <next_place_id> correspond to _historical stays, context stays_ and \(pid_{n+1}\), respectively. Apart from general steps like _Specify the task_ and _Provide the data_, we devise particular instructions and provide relevant context to make the most of the comprehension and reasoning capabilities of the LLMs, which are explained as follows. #### 4.2.1 Describe the Data We provide a detailed description of the three kinds of formatted data so that LLMs can easily digest the information in the context of human mobility prediction. #### 4.2.2 Specify the Number of Output Places We ask the model to output \(k\) most probable places in descending order of probability so that we can evaluate the accuracy and cumulative gain up to rank position \(k\) (refer to section 5.2.3 for details on evaluation metrics). We hypothesise that the number of output places could potentially influence the model's performance, which is corroborated by the experiment results in Table 2. #### 4.2.3 Guide the Model to "Think" To ensure that the model considers both long-term (historical activity pattern) and short-term (recent activity pattern) dependencies as well as the target time information in its reasoning process, we not only ask the model to consider Figure 3: Illustration of historical, context and target stays. these three aspects explicitly but also provide context to justify these considerations (e.g., inform the LLMs of the fact that people's activities vary during different times and on different days). We argue that this operation serves the same function as CoT [20], i.e., guiding the model to "think" logically. However, CoT achieves this by providing demonstration examples, while we attain it using clear instructions and relevant context. #### 4.2.4 Ask for Explanation Moreover, we design our prompt to ask for both the prediction and the reason that supports the prediction. The intentions of this operation are: (1) Make the results interpretable, increasing the interpretability and reliability of the model and (2) Further enhance the reasoning ability of the model. We argue that asking for explanations is essentially the reasoning generation step proposed in PS prompting [21]. ## 5 Experiments In this section, we evaluate the performance of our method both quantitatively and qualitatively. ### Datasets and Preprocessing We conduct extensive experiments on two types of human mobility datasets: A GNSS tracking dataset - Geolife [19] and a check-in dataset - Foursquare New York City (FSQ-NYC) [21]. We strictly follow Figure 4: Prompts used in _LLM-Mob_. 
Each subpart in the prompt is annotated by its function which is highlighted in grey box. Please note that the annotations are not included in the prompt and are solely used for illustration. the data preprocessing steps proposed in (Hong et al., 2023), including filtering the datasets to only consider users observed for plenty of days, processing raw trajectories into stays and splitting the dataset into training and testing sets. Considering the cost of the OpenAI API, we further down-sample FSQ-NYC by randomly selecting 50 users (around 10\(\%\) of the total users). The statistics of the final datasets we used for evaluating our method are described in Table 1. ### Experimental Settings #### 5.2.1 Implementation Details The specific LLM employed in the experiment is GPT-3.51, which is one of the most advanced and widely used LLMs with open APIs2. We set the temperature to 0 to avoid randomness in the output. Moreover, the length of historical stays \(M\) and context stays \(N\) are set as 40 and 5, respectively. These are empirical values that can be adapted to specific needs. Footnote 1: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5) Footnote 2: The specific version being used was gpt-3.5-turbo-0601. #### 5.2.2 Baselines We compare our model with classical prediction models and recently published state-of-the-art deep learning models in the next location prediction domain. * 1-MMC. First-order Mobility Markov Chain (Gambs et al., 2012), a classical location prediction model which assumes the Markovian property for individual location visits. * LSTM. A classic sequence modelling neural architecture that has shown great performance for the next location prediction (Krishna et al., 2018; Solomon et al., 2021). * LSTM-Attn. An LSTM with a (masked) self-attention between the hidden states (Li et al., 2020). The attention results are combined with the actual output of the LSTM through a small feedforward network. * DeepMove. A framework comprising two separate recurrent networks; one for learning the periodicity from historical visits and the other for mining transition patterns from the current trajectory (Feng et al., 2018). * MobTcast. Based on the transformer encoder network, MobTCast leverages temporal, semantic, social, and geographical contexts in the history place sequence to predict the next place (Xue et al., 2021). * MHSA. Multi-Head Self-Attentional (MHSA) neural network that builds on transformer architecture, considering historical location visits, visit time, activity duration, as well as their surrounding land use functions, to infer an individual's next location (Hong et al., 2023). #### 5.2.3 Evaluation Metrics We use the following commonly used metrics to quantify the predictive performance of compared models: * _Accuracy_. It measures the correctness of the predicted location compared to the ground truth next visited location. More specifically, we sort the location in descending order in terms of their probability of being the next location and check whether the ground truth location appears within the top-k predictions. Acc@k measures the proportion of times this is true in the test dataset. We report Acc@1, Acc@5, and Acc@10 for comparison. * _Weighted F1_. Individual's visits to locations are highly unbalanced, with some locations occurring more often than others. We use the F1 score weighted by the number of visits to emphasise the model's performance in the more important locations. 
\begin{table} \begin{tabular}{l c c} \hline \hline & Geolife & FSQ-NYC-sampled \\ \hline \# Users & 45 & 50 \\ \# Days tracked & \(345\pm 413\) & \(85\pm 34\) \\ \# Stays per user & \(369\pm 456\) & \(173\pm 130\) \\ \# Unique places per user & \(77\pm 108\) & \(33\pm 18\) \\ \# Test samples & \(3459\) & \(977\) \\ \hline \hline \end{tabular} \end{table} Table 1: Basic statistics of the datasets. The mean and standard deviation across users are reported. * _nDCG@k_. Normalized discounted cumulative gain (with rank position \(k\)) measures the quality of the prediction vector by the ratio between the discounted cumulative gain (DCG) and the ideal discounted cumulative gain (IDCG). The calculation of _nDCG@k_ is given below: \[nDCG@k=\frac{DCG_{k}}{IDCG_{k}},\] (1) \[DCG_{k}=\sum_{j=1}^{k}\frac{r_{j}}{log_{2}(j+1)},\] (2) where \(r_{j}\) denotes the relevance value at position \(j\). In the context of location prediction, \(r_{j}\in\{0,1\}\), and \(r_{j}=1\) if and only if the \(j\)-th item in the ranked prediction vector corresponds to the ground truth next location. In our experiment, we report the average _nDCG@10_ over all test samples. ### Main Results The predictive performance of all considered methods is presented in Table 2. We can observe that our method significantly outperforms all baseline models. When asking the model to output the most likely place instead of top 10 probable places, the _Accuracy@1_ and _F1 score_ improves even further by a large margin. The results suggest that _LLM-Mob_ is a promising approach for predicting human mobility. ### Ablation Study To evaluate the effects of the three components proposed in the data formatting step, we build three variants of the full model (_LLM-Mob_), i.e., _NoHistory_, _NoContext_ and _NoTime_, which exclude historical stays, context stays and target stay time information as well as relevant contents in the prompt, respectively. To evaluate the effects of specific prompting strategies, two more variants are designed. _NoGuide_ removes the "_guide the model to 'think'_" part in the prompt; _NoReason_ deletes the requirement for _LLM-Mob_ to output reasons that support the prediction. \begin{table} \begin{tabular}{c|c|c c c c c c|c c} \hline \hline Dataset & Metric & 1-MMC & LSTM & LSTM Attn & Deepmove & MobTcast & MHSA & \begin{tabular}{c} **LLM-Mob** \\ \(k=10\) \\ \end{tabular} \\ \hline \multirow{4}{*}{Geolife} & Acc@1 (\%) & 24.1 & 28.4 & 29.8 & 26.1 & 29.5 & 31.4 & 36.6 & **45.1** \\ & Acc@5 (\%) & 38.1 & 55.8 & 54.6 & 54.2 & 51.3 & 56.4 & **82.5** & - \\ & Acc@10 (\%) & 39.5 & 59.1 & 58.2 & 58.7 & 56.2 & 60.8 & **87.4** & - \\ & Weighted F1 & 0.227 & 0.193 & 0.213 & 0.189 & 0.173 & 0.218 & 0.259 & **0.404** \\ & nDCG@10 & 0.327 & 0.447 & 0.450 & 0.426 & 0.434 & 0.465 & **0.645** & - \\ \hline \multirow{4}{*}{FSQ-NYC (sampled)} & Acc@1 (\%) & 13.2 & 15.6 & 17.1 & 15.7 & 15.8 & 17.1 & 29.1 & **32.2** \\ & Acc@5 (\%) & 28.6 & 49.0 & 48.5 & 46.8 & 44.7 & 46.1 & **64.5** & - \\ \cline{1-1} & Acc@10 (\%) & 31.9 & 56.5 & 56.6 & 57.2 & 52.5 & 56.5 & **72.4** & - \\ \cline{1-1} & Weighted F1 & 0.121 & 0.106 & 0.124 & 0.140 & 0.108 & 0.115 & 0.195 & **0.270** \\ \cline{1-1} & nDCG@10 & 0.130 & 0.197 & 0.213 & 0.199 & 0.202 & 0.200 & **0.508** & - \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of predictive performance. \(k=10\) and \(k=1\) mean that the output length of the prediction results are set to 10 and 1 respectively. The best results are marked in **bold**. Figure 5: Ablation results on FSQ-NYC. 
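For reference, the two ranking metrics defined above (Acc@k and nDCG@k, Eqs. (1)-(2)) can be computed with short helpers such as the following; this is an illustrative sketch rather than the evaluation script used in the paper:

```python
import math
from typing import List, Sequence

def acc_at_k(predictions: List[Sequence[int]], truths: List[int], k: int) -> float:
    """Acc@k: fraction of test samples whose ground-truth place appears
    among the top-k predicted places."""
    hits = sum(1 for preds, y in zip(predictions, truths) if y in preds[:k])
    return hits / len(truths)

def ndcg_at_k(predictions: List[Sequence[int]], truths: List[int], k: int) -> float:
    """nDCG@k of Eqs. (1)-(2): with a single relevant item the ideal DCG is 1,
    so a sample contributes 1/log2(rank + 1) if the truth is ranked within
    the top k, and 0 otherwise."""
    total = 0.0
    for preds, y in zip(predictions, truths):
        topk = list(preds[:k])
        if y in topk:
            rank = topk.index(y) + 1            # 1-based rank position j
            total += 1.0 / math.log2(rank + 1)  # DCG_k; IDCG_k = 1
    return total / len(truths)

# usage sketch: ndcg_at_k([[1, 7, 3], [4, 2]], truths=[3, 9], k=10)
```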
We perform an ablation study with these five variants (under the \(k=1\) setting) and report the performance on the FSQ-NYC dataset in Figure 5. We can observe that the full model consistently outperforms all the variants, indicating that each designed component has a positive effect on performance improvement. Moreover, the performance drop with _NoHistory_ is particularly larger than with the others, further underscoring the importance of considering long-term dependencies in predicting people's locations.

Table 3: Example prediction cases from Geolife (user 1 and user 13). For each case, the table lists the user's historical stays, context stays, target stay time information, the ground truth next location, and the predictions (with reasons) returned by MHSA and _LLM-Mob_ under the \(k=1\) and \(k=10\) settings.

### Case Study

We select and analyse some example results from Geolife, as shown in Table 3, to gain a deeper understanding of _LLM-Mob_'s reasoning ability and interpretability. From case 1, we can see that _LLM-Mob_ correctly predicted place 1 as the next location under the \(k=1\) setting, and it also predicted it to be among the "top 10" list of predictions under the \(k=10\) setting, while MHSA failed to do so. _LLM-Mob_ provided detailed and rational reasons for the predictions it made, while MHSA was unable to explain the results. Moreover, we can observe that the model can take the target time information (Friday afternoon in this case) into account to make more accurate predictions. However, _LLM-Mob_ also has its weaknesses. From case 2, we can see that _LLM-Mob_ generated duplicated place IDs within the top 10 list, and we argue this was mainly due to the limited number of unique places user 13 had been to. Also, it is not as flexible as MHSA, as can be seen in the \(k=10\) setting, where it assumed the most frequently visited place (445 in this case) to be the most likely next location, whereas MHSA could give more varied responses based on the input. There are also "hallucination" problems seen in the reasons generated by _LLM-Mob_, which will be discussed in Section 6.2.

## 6 Discussion

### Why does _LLM-Mob_ perform so well?

In this section, we test the LLM's performance on three additional tasks to demonstrate different aspects of its capabilities that we argue are essential for successful human mobility prediction. These tasks are designed to test the abilities of LLMs progressively, from understanding the structured numerical input to extracting and summarising patterns and, ultimately, reasoning and making high-level inferences such as inferring the home and workplace of the user. From the results presented in Table 4, we can see that the LLM has done a great job in all three tasks. This has shown three major capabilities: (1) The ability to understand not only natural language but also structured numerical input like a list of numbers representing trajectories.
(2) The capability of memorising and summarising historical mobility/activity patterns so that the model can effectively draw from past information to inform the future situation. (3) The powerful reasoning ability, meaning that the model can "think" like a human and make rational inferences. Apart from the inherent capabilities of LLM, the success of _LLM-Mob_ also lies in the effective formatting of data and practical prompt engineering where the prompt has been carefully designed and improved through iterative testing. \begin{table} \begin{tabular}{l p{284.5pt}} \hline \hline **Task 1: Converting a structured numerical input to its natural language description.** & \\ \hline **Prompt:** & Your task is to convert a raw \(<\)input\(>\) containing human mobility information to its natural language description. The \(<\)input\(>\) is a list containing a user’s historical stays\(>\). Omitted description story \(>\). Please be concise and only describe when and where the user went and how long heshe stayed there. \(<\)input\(>\): [4, 289, 127, 335, (5, 614, 1332, 444), (6, 599, 1398, 444),...] \\ **Answer:** & The user’s historical stays are as follows: On Thursday, the user stayed at place ID 335 from 4:49 AM to 7:36 AM for a duration of 2 hours and 7 minutes. On Friday, the user stayed at place ID 444 from 10:14 AM to 8:26 PM for a duration of 2 hours and 8 minutes. On Saturday, the user stayed at place ID 444 from 9:59 AM to 9:37 PM for a duration of 2 hours and 38 minutes. \\ \hline \hline **Task 2: Summarising the activity patterns.** & \\ \hline **Prompt:** & Based on the information provided, please summarize the activity patterns of this user. Please consider both place and temporal information like weekday and start time. \\ **Answer:** & Based on the information provided, we can summarize the activity patterns of this user as follows: Weekday Preference: The user’s most frequent stays occurred on Mondays and Tuesdays, with a total of 7 stays on these days. Morning Activity: The user tended to significant activity on Thursday and Fridays, with the stars on each of these days. Morning Activity: The user visited to smallest utility places, including IDs 335, 444, 445, 446, 447, 448, and 9. They had the highest number of stays at place ID 445, with a total of 12 stays. Long Stays. The user had some longer stays, with durations ranging from a few minutes to as long as 23 hours and 18 minutes. The average duration of their stays was relatively high, indicating they spent considerable time at each place. Weeked Activity: The user had a notable stay on Saturday, but their activity was relatively lower on Sundays, with only one stay recorded. Overall, the user exhibited a regular pattern of activity, with preferences for specific weekdays and morning hours. They frequented multiple places, spending significant time at each location. \\ \hline \hline **Task 3: Inferring the home and workplace.** & \\ \hline **Prompt:** & Based on our analysis, can you further infer the home and workplace.** & \\ **Answer:** & Workplace Inference: We can observe that the user frequently stayed at place ID 445 on weekdays, with a total of 7 stays during Monday to Friday. This place ID occurs consistently during morning hours, suggesting a potential workplace. Home Inference: There are no specific stays that occur consistently during evening and nights, indicating that determining the home location solely based on the provided data is challenging. 
Therefore, based on the information provided, we can speculate that place ID 445 could potentially be the user's workplace. However, without additional data or specific knowledge about the user's routines and preferences, it is difficult to determine the exact home location. It's important to note that this inference is based on assumptions and may not accurately represent the user's actual home and workplace. \\ \hline \hline \end{tabular} \end{table} Table 4: Test results demonstrating the abilities of LLMs.

### Limitations

There are several limitations: (1) Efficiency issue. Currently, our method calls the OpenAI API for each test sample independently, which is inefficient and hard to apply to large-scale predictions. (2) Hallucination [1]. This is a common problem faced by LLMs. For example, in one of the reasons generated by _LLM-Mob_ in Table 3, the model stated that place 1 is a restaurant, which is fabricated and can be misleading to the user. (3) Limitations from the proprietary LLMs. We used one of the most advanced LLMs, GPT-3.5, to carry out the experiments. This brings up several issues. Firstly, calling the OpenAI API costs money, and the cost can be substantial when the data volume is large. Moreover, OpenAI is constantly updating the GPT model family, resulting in performance drift of the newest models [3]. Consequently, the prompts that perform well on old models may not work on new ones, requiring extra work on prompt engineering. We argue that all these limitations necessitate the training of open-source LLMs (or rather, foundation models) fine-tuned for modelling human mobility, which can not only have better predictive performance, but also give researchers and practitioners full control of a model that can be adapted to their needs.

## 7 Conclusion

In this study, we established a novel framework that adopted LLMs as its backbone for human mobility prediction. In this framework, mobility data were formatted to account for both long-term and short-term dependencies and to facilitate time-aware predictions. Furthermore, context-inclusive prompts were formed and engineered to enable the reasoning process performed by LLMs to generate accurate predictions and logical explanations. Extensive experiments were conducted on two real-world, publicly available human mobility datasets, and the results demonstrated state-of-the-art predictive performance of the proposed model as well as its superior interpretability. However, it should be noted that the predictive performance of our method still has room for improvement, and limitations like hallucination should be further addressed. We hope the findings in this work shed light on the human mobility research domain and provide insights into future applications of large language models.
2307.05501
HIVA: Holographic Intellectual Voice Assistant
Holographic Intellectual Voice Assistant (HIVA) aims to facilitate human computer interaction using audiovisual effects and 3D avatar. HIVA provides complete information about the university, including requests of various nature: admission, study issues, fees, departments, university structure and history, canteen, human resources, library, student life and events, information about the country and the city, etc. There are other ways for receiving the data listed above: the university's official website and other supporting apps, HEI (Higher Education Institution) official social media, directly asking the HEI staff, and other channels. However, HIVA provides the unique experience of "face-to-face" interaction with an animated 3D mascot, helping to get a sense of 'real-life' communication. The system includes many sub-modules and connects a family of applications such as mobile applications, Telegram chatbot, suggestion categorization, and entertainment services. The Voice assistant uses Russian language NLP models and tools, which are pipelined for the best user experience.
Ruslan Isaev, Radmir Gumerov, Gulzada Esenalieva, Remudin Reshid Mekuria, Ermek Doszhanov
2023-06-28T03:29:32Z
http://arxiv.org/abs/2307.05501v1
# HIVA: Holographic Intellectual Voice Assistant

###### Abstract

Holographic Intellectual Voice Assistant (HIVA) aims to facilitate human computer interaction using audiovisual effects and 3D avatar. HIVA provides complete information about the university, including requests of various nature: admission, study issues, fees, departments, university structure and history, canteen, human resources, library, student life and events, information about the country and the city, etc. There are other ways for receiving the data listed above: the university's official website and other supporting apps, HEI (Higher Education Institution) official social media, directly asking the HEI staff, and other channels. However, HIVA provides the unique experience of "face-to-face" interaction with an animated 3D mascot, helping to get a sense of 'real-life' communication. The system includes many sub-modules and connects a family of applications such as mobile applications, Telegram chatbot, suggestion categorization, and entertainment services. The Voice assistant uses Russian language NLP models and tools, which are pipelined for the best user experience.

_Index Terms_: voice assistant, natural language processing, software engineering, visual question answering, text mining
Current applications of 3D holography include:

1. Human-Computer Interaction: 3D holography as an immersive virtual environment for human-computer interaction, where hand gestures allow users to interact with virtual objects in real time [3].
2. Medical Visualization: realistic, interactive visualizations of medical data, such as 3D models of human anatomy; modeling 3D organs based on an individual patient's medical image data is indispensable for simulation, navigation, and education for accurate and safe surgery [14].
3. AR/VR: augmented and virtual reality experiences, where users can interact with virtual objects in a real-world environment [3].
4. Marketing and Advertising: unique and engaging advertisements, such as holographic product displays in retail environments [12].

It's worth noting that these are still early applications of 3D holography, and much more research and development is needed to realize its potential fully. We must also mention that almost all current AI systems lack an interactive part, and the scope for visualizing the output is limited due to the complexity of the technology [12].

### _What holographic effect is used in HIVA, and how does it work?_

Holography is a technique for recording and reproducing three-dimensional images. Some of us may be familiar with this effect from science fiction movies such as "Star Wars" or "Blade Runner", where a video message is reproduced as a 3D image. Modern artists use holographic effects to create audio-visual installations [20]. The real holographic effect involves using laser beams and other optical devices to record an interference pattern of light on a photographic plate or other medium. The current state of research shows that we are still at the beginning of the path toward real-time animated holography, which could let viewers see objects with volume and a sense of presence. Building a real working device based on actual 3D holography is complicated and would not provide the desired effect. In the HIVA project, we focused on building a holographic effect known as Pepper's ghost [5]. Pepper's Ghost is an illusion technique used in theatre, concerts, and haunted attractions. It was originally developed in the 19th century by John H. Pepper and involves using a sheet of glass or clear plastic to reflect an object or person, making it appear as if they are floating in mid-air. The technique works by reflecting a hidden object or person onto the transparent surface while carefully positioning the audience's line of sight to create the illusion of a ghostly presence [5].
It is widely used today, often in combination with modern technology such as projection mapping and LED lighting.

## II Methodology

The study involved potential applicants seeking information about the AIU. Data Collection: Data was collected through the use of the HIVA. The HIVA provided information on the AIU and answered applicants' questions. Participants were allowed to access information about AIU through traditional methods, such as the website and social media channels, by contacting the AIU staff directly, or by using the HIVA. Participants who chose to use the HIVA were provided with a user manual and given time to familiarize themselves with the system before proceeding with the study. Data collected from participants was analyzed using descriptive statistics, including means and percentages. Participants were informed of the purpose of the study, and their consent was obtained before proceeding with the study. All data collected was kept confidential and anonymous.

The combination of 3D holographic pyramid technology and a voice assistant in a device used by an HEI makes this project unique for Kyrgyzstan and Kyrgyzstani HEIs. The visual effect that the pseudo-hologram provides is an additional channel of interaction with the user [5]. This project has the capacity for extension: HIVA can be modified and used to automate issuing certificates and receiving requests. Today the most common information and reference systems provide information in the form of static information stands or interactive screens. Some research prototypes of interactive holographic pyramids exist and are widely used in museums, training centers, etc. However, interaction with a hologram via voice has not yet been implemented in any project in Kyrgyzstan.

According to the project objectives, we planned to efficiently receive and process requests of various types from stakeholders; provide consultation services and general information about AIU in multiple languages; maintain an up-to-date FAQ section about the admission process and student activities for applicants, students, and their parents; and offer a virtual tour of the university. The primary performance indicator for the project was the successful completion of the outlined objectives using Agile and Scrum principles.

To develop the system effectively, we followed Agile software development (ASD), which is based on the Agile Manifesto that a group of developers created in 2001. The Agile Manifesto consists of four values and twelve principles, which serve as the foundation for ASD methods [1]. ASD is an iterative and flexible approach that values collaboration, responsiveness to change, and building working software as early as possible. Whereas Agile is a general term based on the Agile Manifesto values and principles, Scrum is just one of several Agile frameworks. Scrum is a specific ASD framework that provides a structured approach for managing and completing complex projects. It was originally designed for software development but has since been adapted for various projects and industries. Scrum emphasizes adaptability, collaboration, and delivering working software in small, incremental "sprints" [13]. We chose Agile for the work on HIVA because it provides a flexible framework (Scrum) for planning, executing, and adjusting the development process and helps respond to changing requirements and regularly arising obstacles and errors. The first step (sprint) was dedicated to designing and assembling the device case. The first prototype design is shown in Fig. 1.
We used an Oculus Quest 2 VR headset to view the virtual prototype immersed in an environment before the device was manufactured, which allowed us to see it from all angles and even "touch" it. During the second step, the project team designed and ordered a holographic pyramid and purchased electronic and computing components. During step three, they embedded a cooling system and an automatic on/off system inside the HIVA. To achieve the effect of holography, they installed and programmed a backlight system. As the first animated avatar for the first version of HIVA, we used the "EVE" model with simple animations (credits to [21]). The first HIVA prototype was launched on July 1, 2021. That was the day of the start of admission to AIU. During the tests, project members conducted an anonymous, depersonalized data collection of requests. The collected results were further processed and used as a model for training a dialog system for the next prototype software versions. The question-answer model was manually prepared using a knowledge base from the Frequently Asked Questions section on the official website. The work on the project is still in progress, and after releasing HIVA version 2.0, our team already has plans and backlog tasks for HIVA version 3.0 development. Currently, HIVA works without any Internet access. The new version can recognize and synthesize Russian speech, search in the database, and provide information about news, weather, and time in Kyrgyzstan. It can play some music as well. You can add this feature to your voice assistant by adding the pafy and youtubesearchpython Python libraries as dependencies; those two provide an API to YouTube. It is important to know that some dependencies may crash your project, since services such as YouTube may change their data structure. YouTube removed public dislike counts on the 13th of December 2021, which caused a system failure when a user attempted to execute "music play" using a voice command. We fixed this bug by commenting out the dislikes variable in the source code of the pafy library.

Fig. 1: Early HIVA prototype design. Fig. 2: HIVA as a working device. Fig. 3: Barsik says 'Salam!'

All answers are given by an animated 3D character, a new visualized assistant, **Barsik**. The model of Barsik and its animations were developed at the Computer Science Department of Ala-Too International University. Barsik has her own unique design and represents the mascot of AIU. Additionally, we created a holography-style model of the university campus. To see the campus and information about its buildings, the HIVA user should give the voice command "studencheskiy gorodok" (a transliteration of the Russian word for campus). Additionally, numeric and short responses to commands like "weather" or "news" are displayed as text on a screen, which was designed for a better perception of the answer.

### _Software design and architecture_

We needed to pick a specific architecture to combine the voice assistant and the visual avatar in one system. There are many proposed designs and architectural patterns. To connect two or more services as applications, we can refer to either a microservice pattern or an event-driven one [18]. We treat a user's voice command as an event that is passed to the front-end part, where events are handled to execute 3D avatar animations and display information on boards and panels. The described process is shown schematically in Fig. 5, which presents examples of two different events.

Fig. 4: HIVA workflow design. Fig. 5: Inter-service interaction architecture.
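A minimal Python sketch of the flow described above is given below. The dispatcher and handler names are illustrative assumptions rather than the actual HIVA source; only the youtubesearchpython and pafy calls correspond to the dependencies named in the text.

```python
# Illustrative sketch of routing a recognized voice command (an event) to a
# handler; not the actual HIVA code base.
from youtubesearchpython import VideosSearch  # pip install youtube-search-python
import pafy                                   # pip install pafy youtube-dl

def handle_music(query: str) -> str:
    """Resolve a playable audio-stream URL for a 'music play' command."""
    top_hit = VideosSearch(query, limit=1).result()["result"][0]["link"]
    # pafy.new() started failing after YouTube removed public dislike counts in
    # December 2021; the workaround described above is to comment out the
    # 'dislikes' lookup in pafy's backend.
    return pafy.new(top_hit).getbestaudio().url

def handle_campus_tour(_: str) -> str:
    return "show_campus_model"  # the front end plays the campus animation

HANDLERS = {"music": handle_music, "studencheskiy gorodok": handle_campus_tour}

def dispatch(command: str, payload: str = "") -> str:
    """Route a recognized voice command to the first matching handler."""
    for keyword, handler in HANDLERS.items():
        if keyword in command.lower():
            return handler(payload or command)
    return "fallback_answer"
```

In this sketch, the front end would consume the returned value (an audio URL or an animation identifier) as the response to the dispatched event.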
### _Data mining and data augmentation_

Every voice assistant design faces the problem of forming a huge but efficient dataset for question answering. The suggestion categorization subsystem required at least 1000 supervised entries. Manual data collection, which included only voice suggestions from university students and professors, gave more than 200 entries. Voice data was batched through a speech-to-text (STT) tool; for this purpose, we used Silero. The texts were manually corrected and labeled to form the base dataset. To enlarge the number of data samples, we composed a data augmentation script that suggests semantically similar terms for the provided categories. Those terms were used to form the most frequent 3-grams.

### _Suggestions classification_

For HIVA text preprocessing, we used 1000 sentences (including augmented data) for feature extraction (building a simple bag-of-words, BoW). To train a model, we used the Multinomial Naive Bayes (MNB) algorithm due to its simplicity compared with more sophisticated ML/NN models. The three most popular Naive Bayes variants are Gaussian (for prediction of continuous variables), Bernoulli (for prediction of boolean values), and Multinomial (for classification of categorical variables). As our aim was to classify requests into different categories, the MNB was the best matching algorithm among those three. We can see that both metrics provide similar outputs of about 69.5%. Thus, we assume that the initial model provides at least stable results on our BoW data. Further work will include expansion of the dataset and implementation of other, more complex ML/NN algorithms and models.

### _Short answers subsystem_

Another functional part of the HIVA application we designed was the short answers subsystem, which aimed to give short answers to user requests (for instance, question: how many buttons does the piano have? answer: 88). This interesting application feature was suggested to our students as a hackathon task. We provided sample code with various test cases. Test cases included questions, URL pages to find answers in, and short answers. The sample code was able to pass 50% of the tests. The group of students managed to increase that result to 80%. Modern search engines, such as Google, Bing, and Yandex, can already provide answers by showing paragraphs with the main keywords highlighted before the listing of websites. Yet another problem was to guess which URL would probably have an answer to the user's question. For now, we decided to use Otvety (Answers) Mail.ru, a Russian analog of the Quora question-answering service, as a data mining source. Since all of its information is open to everyone, we could run a script for extracting questions and apply the short answer extraction algorithm to the best answer text.

Version two of the HIVA project has been working continuously for one year, starting from September 2021 till now. Within this period, 7230 requests were made, with a mean daily usage value of 56.92, including weekends. Total requests by weekday are shown in the bar graph in Fig. 6, where 0 on the x-axis represents Monday. According to an analysis of the request bodies, the most popular requests by now are "hello", "music" and "how are you?". The most frequent university-related requests concerned the medical and humanities faculties, the vice-rector, and tuition fees.

## III Results and discussion

This is one of the first attempts at an NLP application in Kyrgyzstan.
By now, the HIVA NLP model achieves relatively high accuracy, in a wide range of 74-97%, on various tests; therefore, the efficiency of the model still has to be improved. There are several potential points of growth:

* Increasing the number of elements (rows) in the input dataset;
* Reducing the number of features (columns) of the input dataset;
* Handling different kinds of noise and accent recognition in the speech-to-text system;
* Optimizing other code-related issues.

## IV Further work

HIVA, as a typical virtual 3D assistant, opens up possibilities for pushing the edge of the future for humanity. Combining such technologies as data science, machine learning, and 3D holography brings us to the next level of person-to-machine interaction. Perhaps virtual assistants such as Alexa, Siri, Cortana, and others will receive their own 3D representations in the future. Also, there are other ways to extend HIVA, for example, 6-DoF technologies: "Holographic video provides users with an immersive six degrees of freedom (6-DoF) viewing experience rather than traditional virtual reality (VR), 360 degrees, and other 3-DoF videos" [10]. These technologies increase the immersive experience because 6-DoF technologies "allow users to walk around an object in a circle and view it from the top and the bottom" [10]. In the case of virtual assistants for HEIs, there might be other specific student-life-related needs: for example, issuing certificates on request. Also, the embedded request-classification model in HIVA can be further improved by increasing its accuracy. Today, the project team works on model improvements and maintains HIVA. Finally, they are open to suggestions, partnerships, and collaborations with other HEIs on HIVA and other projects, both locally and abroad.

## Acknowledgments

We would like to express our sincere gratitude to Ala-Too International University for providing the grant that supported this research. Without their financial assistance, this study would not have been possible. Additionally, we would like to acknowledge the invaluable support and resources provided by the university, including access to laboratory facilities and equipment, and the expertise of their faculty members.
2305.14275
Variational Inference with Coverage Guarantees in Simulation-Based Inference
Amortized variational inference is an often employed framework in simulation-based inference that produces a posterior approximation that can be rapidly computed given any new observation. Unfortunately, there are few guarantees about the quality of these approximate posteriors. We propose Conformalized Amortized Neural Variational Inference (CANVI), a procedure that is scalable, easily implemented, and provides guaranteed marginal coverage. Given a collection of candidate amortized posterior approximators, CANVI constructs conformalized predictors based on each candidate, compares the predictors using a metric known as predictive efficiency, and returns the most efficient predictor. CANVI ensures that the resulting predictor constructs regions that contain the truth with a user-specified level of probability. CANVI is agnostic to design decisions in formulating the candidate approximators and only requires access to samples from the forward model, permitting its use in likelihood-free settings. We prove lower bounds on the predictive efficiency of the regions produced by CANVI and explore how the quality of a posterior approximation relates to the predictive efficiency of prediction regions based on that approximation. Finally, we demonstrate the accurate calibration and high predictive efficiency of CANVI on a suite of simulation-based inference benchmark tasks and an important scientific task: analyzing galaxy emission spectra.
Yash Patel, Declan McNamara, Jackson Loper, Jeffrey Regier, Ambuj Tewari
2023-05-23T17:24:04Z
http://arxiv.org/abs/2305.14275v3
# Variational Inference with Coverage Guarantees ###### Abstract Amortized variational inference produces a posterior approximator that can compute a posterior approximation given any new observation. Unfortunately, there are few guarantees about the quality of these approximate posteriors. We propose Conformalized Amortized Neural Variational Inference (CANVI), a procedure that is scalable, easily implemented, and provides guaranteed marginal coverage. Given a collection of candidate amortized posterior approximators, CANVI constructs conformalized predictors based on each candidate, compares the predictors using a metric known as predictive efficiency, and returns the most efficient predictor. CANVI ensures that the resulting predictor constructs regions that contain the truth with high probability (exactly how high is prespecified by the user). CANVI is agnostic to design decisions in formulating the candidate approximators and only requires access to samples from the forward model, permitting its use in likelihood-free settings. We prove lower bounds on the predictive efficiency of the regions produced by CANVI and explore how the quality of a posterior approximation relates to the predictive efficiency of prediction regions based on that approximation. Finally, we demonstrate the accurate calibration and high predictive efficiency of CANVI on a suite of simulation-based inference benchmark tasks and an important scientific task: analyzing galaxy emission spectra. ## 1 Introduction Variational inference (VI) has become a staple in Bayesian inference; however, it has been repeatedly noted that a major shortcoming of VI is its lack of any theoretical guarantees and tendency to produce biased posterior estimates [1, 2, 3, 4]. For instance, in the subfield of likelihood-free inference, a meta-study on widely used algorithms relying on VI revealed that they consistently produce unfaithful, overconfident posterior approximations [5]. Researchers have begun to address this shortcoming. For instance, the authors of [6] proposed a method of correction, but noted that their method "should not be viewed as a way to obtain conservative posterior estimators with 100% reliability." In particular, this method has poor scaling to high-dimensional problems and only has theoretical guarantees of coverage in expectation, which are only enforced heuristically. Variational inference constructs an approximation of the full posterior distribution; however, in applications where many posteriors need to be estimated, constructing credible regions with _marginally_ calibrated coverage can be sufficient for downstream scientific inquiries. For instance, in astrophysics, there is great interest in constraining the \(\Lambda\)CDM model, the current concordance model in cosmology, through large-scale cosmological observations [7, 8, 9, 10, 11]. A recent work from this community, [12], leveraged Bayesian inference towards this end, obtaining approximate posteriors for the parameters of interest on each of 10,000 observations. Crucially, these posteriors were then solely used to produce credible intervals with _marginally_ valid frequentist coverage, from which they made subsequent claims on constraints on the \(\Lambda\)CDM model. Our insight is that conformal prediction can be leveraged to provide variational approximators with marginal coverage guarantees and provide users new ways to measure the quality of variational approximators. 
In this manuscript, we present CANVI (Conformalized Amortized Neural Variational Inference), a novel, general framework for producing marginally calibrated, informative prediction regions from a collection of variational approximators. Such regions can be produced with minimal implementation and computational overhead, requiring only samples from the prior and \(\mathcal{P}(X\mid\Theta)\), as shown in Figure 1. In Section 3.2, we provide theoretical analysis of the informativeness of the prediction regions produced by CANVI using a measure known as "predictive efficiency." High predictive efficiency is necessary to draw conclusions in downstream scientific inquiries. Finally, in Section 4, we show calibration and predictive efficiency empirically across simulation-based inference benchmark tasks and an important scientific task: analyzing galaxy emission spectra. ## 2 Background ### Variational Inference Bayesian methods aim to sample the posterior distribution \(\mathcal{P}(\Theta\mid X)\), typically using either MCMC or VI. VI has risen in popularity recently due to how well it lends itself to amortization. Given an observation \(X\), variational inference transforms the problem of posterior inference into an optimization problem by seeking a minimizer \[\varphi^{*}(X)=\operatorname*{arg\,min}_{\varphi}D(q_{\varphi}(\Theta)|| \mathcal{P}(\Theta\mid X)), \tag{1}\] where \(D\) is a divergence and \(q_{\varphi}\) is a member of a variational family of distributions \(\mathcal{Q}\) indexed by the free parameter \(\varphi\). Normalizing flows have emerged as a particularly apt choice for \(\mathcal{Q}\), as they are highly flexible and perform well empirically [13; 14]. Amortized variational inference expands on this approach by training a neural network to approximate \(\varphi^{*}(X)\). This leads to a variational posterior approximator \(q(\Theta\mid X)=q_{\varphi^{*}(X)}(\Theta)\) that can be rapidly computed for any value \(X\). The characteristics of \(\varphi^{*}\) depend in part on the variational objective, \(D\). For instance, using a reverse-KL objective, i.e. \(D_{KL}(q_{\varphi}(\Theta)||\mathcal{P}(\Theta\mid X))\), is known to produce mode-seeking posterior approximations, whereas using a forward-KL objective, i.e. \(D_{KL}(\mathcal{P}(\Theta\mid X)||q_{\varphi}(\Theta))\), encourages mode-covering behavior [15]. Popular variational objectives include the Forward-Amortized Variational Inference (FAVI) objective [16] (also known as the "sleep-phase" objective in Reweighted Wake-Sleep [17]), the Evidence Lower Bound (ELBO), and the Importance Weighted ELBO (IWBO) [18]. ### Conformal Prediction Given a dataset \(\mathcal{D}_{\mathcal{C}}=\{(X_{1},\theta_{1}),\ldots(X_{N_{\mathcal{C}}}, \theta_{N_{\mathcal{C}}})\}\) of i.i.d. observations from a distribution \(\mathcal{P}(\Theta,X)\), conformal prediction [19; 20] produces prediction regions with distribution-free theoretical guarantees. A prediction region is a mapping from observations of \(X\) to sets of possible values for \(\Theta\). A prediction region \(\mathcal{C}\) is said to be marginally calibrated at the \(1-\alpha\) level if \(\mathcal{P}(\Theta\notin\mathcal{C}(X))\leq\alpha\). Figure 1: CANVI is a wrapper around variational inference requiring minimal implementation and computational overhead that produces prediction regions with guaranteed marginal calibration. Among a family of candidate amortized posterior approximators, CANVI can identify the approximator leading to the most efficient prediction regions. 
CANVI can be used in any setting where the forward model \(\mathcal{P}(X\mid\Theta)\) can be sampled. Split conformal is one popular version of conformal prediction. In this approach, marginally calibrated regions \(\mathcal{C}\) are designed using a "score function" \(s(x,\theta)\). Intuitively, the score function should have the quality that \(s(x,\theta)\) is smaller when it is more reasonable to guess that \(\Theta=\theta\) given the observation \(X=x\). For example, if one has access to a function \(\hat{f}(x)\) which attempts to predict \(\Theta\) from \(X\), one might take \(s(x,\theta)=\|\hat{f}(x)-\theta\|\). The score function is evaluated on each point of the dataset \(\mathcal{D}_{\mathcal{C}}\), called the "calibration dataset," yielding \(\mathcal{S}=\{s(x_{i}^{c},\theta_{i}^{c})\}_{i=1}^{N_{\mathcal{C}}}\). Note that the calibration dataset cannot be used to pick the score function; if data is used to design the score function, it must independent of \(\mathcal{D}_{\mathcal{C}}\). This is how "split conformal" gets its name: in typical cases, data are split into two parts, one used to design \(s\) and the other to perform calibration. We then define \(\widehat{q}(\alpha)\) as the \(\lceil(N_{\mathcal{C}}+1)(1-\alpha)\rceil/N_{\mathcal{C}}\) quantile of \(\mathcal{S}\). For any future \(x\), the set \(\mathcal{C}(x)=\{\theta\mid s(x,\theta)\leq\widehat{q}(\alpha)\}\) satisfies \(1-\alpha\leq\mathcal{P}(\Theta\in\mathcal{C}(X))\). This inequality is known as the coverage guarantee, and it arises from the exchangeability of the score of a future test point \(s(x^{\prime},\theta^{\prime})\) with \(\mathcal{S}\). Those new to conformal inference may be surprised to note that the coverage guarantee holds regardless of the number of samples \(N_{\mathcal{C}}\) used in calibration; conformal guarantees are not asymptotic results. As noted in Vovk's tutorial [20], while the coverage guarantee holds for any score function, different score functions may lead to more or less informative prediction regions. For example, the score \(s(x,\theta)=1\) leads to the highly uninformative prediction region of all possible values of \(\Theta\). Predictive efficiency is one way to quantify informativeness [21; 22]. It is defined as the inverse of the expected Lebesgue measure of the prediction region, i.e. \(\left(\mathbb{E}[|\mathcal{C}(X)|]\right)^{-1}\). Methods employing conformal prediction often seek to identify prediction regions that are efficient as well as marginally calibrated. ## 3 Method We now propose a way to produce efficient prediction regions with coverage guarantees using a collection of amortized VI approximators \(\{q^{(1)}(\Theta\mid X),\ldots q^{(T)}(\Theta\mid X)\}\). CANVI can be applied whenever it is possible to sample from the joint distribution \(\mathcal{P}(X,\Theta)\). The coverage validity of CANVI is proven in Section 3.1, and analyses of its predictive efficiency follow in Section 3.2. ### Canvi In the simplest case, CANVI takes as input a single amortized posterior approximator \(q(\Theta\mid X)\) and computes an empirical score distribution by drawing \(\mathcal{D}_{\mathcal{C}}=\{(x_{i}^{c},\theta_{i}^{c})\}_{i=1}^{N_{\mathcal{C }}}\stackrel{{\mathrm{iid}}}{{\sim}}\mathcal{P}(\Theta)\mathcal{ P}(X\mid\Theta)\). In traditional applications of split conformal, much concern is given to the loss of accuracy of the predictor \(\widehat{f}\) in having to reserve a subset of the training data for calibration. 
Here we have no such issues; we can sample from the joint distribution to produce a calibration dataset that is arbitrarily large. Given \(q(\Theta\mid X)\), we employ the following score, as used in [19; 23]: \[s(x_{i},\theta_{i})=\frac{1}{q(\theta_{i}\mid x_{i})}. \tag{2}\] For a desired coverage of \(1-\alpha\), we denote the \(\lceil(N_{\mathcal{C}}+1)(1-\alpha)\rceil/N_{\mathcal{C}}\) quantile of the empirical score distribution over \(\mathcal{D}_{C}\) as \(\widehat{q}(\mathcal{D}_{\mathcal{C}},\alpha)\). The prediction region \(\mathcal{C}(x)=\{\theta:1/q(\theta\mid x)\leq\widehat{q}(\mathcal{D}_{\mathcal{ C}},\alpha)\}\) is then marginally calibrated. Note that \(\mathcal{C}(x)\) may be disjoint if the posterior is multimodal. This approach achieves coverage guarantees for any variational approximator \(q(\Theta\mid X)\). However, a poorly chosen approximator may lead to poor predictive efficiency. To mitigate the risk of producing an inefficient predictor, it is, therefore, natural to explore multiple posterior approximations, \(\{q^{(t)}(\Theta\mid X)\}_{t=1}^{T}\), where these posterior approximations could, for instance, differ in their choice of training objective, variational family, or hyperparameters. CANVI assesses a collection of candidate variational approximators as follows. For each candidate, CANVI uses a test dataset \(\mathcal{D}_{\mathcal{T}}=\{(x_{i},\theta_{i})\}_{i=1}^{N_{\mathcal{T}}} \stackrel{{\mathrm{iid}}}{{\sim}}\mathcal{P}(\Theta)\mathcal{P}(X \mid\Theta)\) and calculates the empirical inverse efficiencies for each candidate \(q^{(t)}(\Theta\mid X)\), as follows: \[\widehat{\ell}(q^{(t)},\alpha):=\frac{1}{N_{\mathcal{T}}}\sum_{i=1}^{N_{ \mathcal{T}}}\mathcal{L}\left(\left\{\theta:1/q^{(t)}(\theta\mid x_{i})\leq \widehat{q}^{(t)}(\mathcal{D}_{\mathcal{C}},\alpha)\right\}\right), \tag{3}\] where \(\mathcal{L}(\cdot)\) is the Lebesgue measure. CANVI identifies the \(t^{*}\) such that \(q^{(t^{*})}\) leads to the lowest empirical inverse efficiency, which we denote as \(q^{(*)}:=q^{(t^{*})}\). For each \(x_{i}\), \(\mathcal{L}(\mathcal{C}(x_{i}))\) is empirically estimated for each \(q^{(t)}\) using an importance-weighted Monte Carlo estimate over \(S\) samples, namely \[\mathbb{E}_{q^{(t)}(\Theta_{1:S}|x_{i})}\left[\frac{1}{S}\sum_{j=1}^{S}\frac{1} {q^{(t)}(\Theta_{j}\mid x_{i})}\mathbbm{1}\left[1/q^{(t)}(\Theta_{j}\mid x_{i} )\leq\widehat{q}^{(t)}\right]\right]. \tag{4}\] Note that this estimate can alternatively be performed using a grid-discretization over \(\text{Supp}(\Theta\mid x_{i})\). However, doing so is only feasible in low-dimensional cases or where the support has a known, small extent. We empirically demonstrate that this estimate captures efficiency behaviors that parallel those obtained from explicit gridding in Section 4.1.1. After selecting \(t^{*}\), an additional recalibration step must then be performed. CANVI uses an additional calibration dataset \(\mathcal{D}_{\mathcal{R}}\), which we take to be the same size as \(\mathcal{D}_{\mathcal{C}}\), i.e. \(|\mathcal{D}_{\mathcal{R}}|=N_{\mathcal{C}}\). CANVI constructs \(\mathcal{D}_{\mathcal{R}}\) with i.i.d. draws from \(\mathcal{P}(X,\Theta)\) and computes the quantile \(\widehat{q}^{(*)}:=\widehat{q}^{(t^{*})}(\mathcal{D}_{\mathcal{R}},\alpha)\). Using \((q^{(*)}(\Theta\mid X),\widehat{q}^{(*)})\) to construct prediction regions produces provably efficient predictions, as we show theoretically in Section 3.2. 
The additional recalibration is necessary to retain the exchangeability between scores of future test points \(s(x^{\prime},\theta^{\prime})\) and \(\mathcal{S}\), as exchangeability is lost in conditioning on \(\mathcal{D}_{\mathcal{C}}\) for selecting \(t^{*}\). The full CANVI framework is provided in Algorithm 1. In summary, CANVI chooses a posterior approximator \(q^{(*)}(\Theta\mid X)\) from a collection of posterior approximators \(\{q^{(t)}(\Theta\mid X)\}_{t=1}^{T}\) and produces a corresponding quantile \(\widehat{q}^{(*)}\) for a target coverage probability \((1-\alpha)\). CANVI implicitly defines a prediction region map defined by \(\mathcal{C}(x)=\left\{\theta:1/q^{(*)}(\theta\mid x)\leq\widehat{q}^{(*)}\right\}\) that is marginally calibrated at the \((1-\alpha)\) level. Note that, while CANVI uses Equation 2 as its score, the use of strictly monotonically increasing transforms of this score produces identical coverage regions, with a special case being the absolute z-score for a unimodal Gaussian variational family, proven in Appendix A. ``` 1:procedureCPQuantile 2Inputs: Posterior approximation \(q(\Theta\mid X)\), Calibration set \(\mathcal{D}_{\mathcal{C}}\), Desired coverage \(1-\alpha\) 3:\(\mathcal{S}\leftarrow\{\frac{1}{q(\theta_{i}|x_{i})}\}_{i=1}^{N_{\mathcal{C}}}\)\(\triangleright\)\(N_{\mathcal{C}}=|\mathcal{D}_{\mathcal{C}}|\) 4:Return \(\frac{[(N_{\mathcal{C}}+1)(1-\alpha)]}{N_{\mathcal{C}}}\) quantile of \(\mathcal{S}\) 5:endprocedure 6:procedureCPRegionSize 7:Inputs: Posterior approximation \(q(\Theta\mid X)\), CP quantile \(\widehat{q}\), Test set \(\mathcal{D}_{\mathcal{T}}\) 8:\(\widehat{\ell}_{i}\leftarrow\left\{\frac{1}{S}\sum_{j=1}^{S}\frac{1}{q(\theta _{ij}|x_{i}^{\prime})}\mathbbm{1}\left[1/q(\theta_{ij}\mid x_{i}^{\prime})\leq \widehat{q}\right]\right\}_{i=1}^{N_{\mathcal{T}}}\)\(\triangleright\)\(\Theta_{i,j}\sim q_{\varphi}(\Theta\mid x_{i}^{\prime})\) 9:Return \(\frac{1}{N_{\mathcal{T}}}\sum_{i=1}^{N_{\mathcal{T}}}\widehat{\ell}_{i}\)\(\triangleright\)\(N_{\mathcal{T}}=|\mathcal{D}_{\mathcal{T}}|\) 10:endprocedure 11:procedureCANVI 12Inputs: Posterior approximations \(\{q^{(t)}(\Theta\mid X)\}_{t=1}^{T}\), Prior \(\mathcal{P}(\Theta)\), Forward model \(\mathcal{P}(X\mid\Theta)\), Desired coverage \(1-\alpha\), Calibration set size \(N_{\mathcal{C}}\), Test set size \(N_{\mathcal{T}}\) 13:\(\mathcal{D}_{\mathcal{C}}\leftarrow\{\theta_{i}\sim\mathcal{P}(\Theta),x_{i} \sim\mathcal{P}(X\mid\theta_{i})\}_{i=1}^{N_{\mathcal{C}}}\). 14:\(\mathcal{D}_{\mathcal{T}}\leftarrow\{\theta_{i}\sim\mathcal{P}(\Theta),x_{i} \sim\mathcal{P}(X\mid\theta_{i})\}_{i=1}^{N_{\mathcal{T}}}\). 15:for\(t\in\{1,\ldots T\}\)do 16:\(\widehat{q}^{(t)}\leftarrow\textsc{CPQuantile}(q^{(t)}(\Theta\mid X), \mathcal{D}_{\mathcal{C}},1-\alpha)\) 17:\(\widehat{\ell}^{(t)}\leftarrow\textsc{CPRegionSize}(q^{(t)}(\Theta\mid X), \widehat{q}^{(t)},\mathcal{D}_{\mathcal{T}})\). 18:endfor 19:\(t^{*}\leftarrow\arg\min_{i}\widehat{\ell}^{(t)}\) 20:\(\mathcal{D}_{\mathcal{R}}\leftarrow\{\theta_{i}\sim\mathcal{P}(\Theta),x_{i} \sim\mathcal{P}(X\mid\theta_{i})\}_{i=1}^{N_{\mathcal{C}}}\). 21:\(\widehat{q}^{(*)}\leftarrow\textsc{CPQuantile}(q^{(t^{*})}(\Theta\mid X), \mathcal{D}_{\mathcal{R}},1-\alpha)\). 
22:Return \(q^{(*)}(\Theta\mid X),\widehat{q}^{(*)}\)\(\triangleright\)\(q^{(*)}(\Theta\mid X):=q^{(t^{*})}(\Theta\mid X)\) 23:endprocedure ``` **Algorithm 1** CANVI (Conformalized Amortized Neural Variational Inference) **Lemma 3.1**.: _Let \(q^{(*)}(\Theta\mid X),\widehat{q}^{(*)}=\mathrm{CANVI}\left(\{q^{(t)}(\Theta\mid X )\}_{t=1}^{T},\mathcal{P}(\Theta,X),1-\alpha,N_{\mathcal{C}},N_{T}\right).\) Suppose \((x^{\prime},\theta^{\prime})\sim\mathcal{P}(\Theta,X)\) is drawn independently of \(\mathcal{D}\cup\mathcal{D}_{\mathcal{C}}\cup\mathcal{D}_{\mathcal{T}}\cup \mathcal{D}_{\mathcal{R}}\), where \(\mathcal{D}\) is the aggregate of training data used for \(\{q^{(t)}(\Theta\mid X)\}_{t=1}^{T}\). Then \(1-\alpha\leq\mathcal{P}(1/q^{(*)}(\theta^{\prime}\mid x^{\prime})\leq\widehat {q}^{(*)})\)._ Proof.: Consider the score function \(s^{(*)}(x,\theta):=1/q^{(*)}(\theta\mid x)\). Observe that \((x^{\prime},\theta^{\prime})\cup\mathcal{D}_{\mathcal{R}}\) are jointly sampled i.i.d. from \(\mathcal{P}(\Theta,X)\), independent of the datasets used to design \(s^{(*)}(x,\theta)\), namely \(\mathcal{D}\cup\mathcal{D}_{\mathcal{C}}\cup\mathcal{D}_{\mathcal{T}}\). Denoting \(\mathcal{S}_{\mathcal{R}}:=\{s^{(*)}(x_{i},\theta_{i})\}_{(x_{i},\theta_{i}) \in\mathcal{D}_{\mathcal{R}}}\), scores \(s^{(*)}(x^{\prime},\theta^{\prime})\cup\mathcal{S}_{\mathcal{R}}\), thus, too are i.i.d and, hence, exchangeable. The coverage guarantee then follows from the general theory of conformal prediction, presented in [19]. ### Predictive Efficiency of CANVI We now wish to characterize the efficiency of the prediction regions produced by CANVI. To do so, we first demonstrate that the choice \(q^{(*)}\) is, with high probability, oracle optimal. We define the oracle action (for a fixed \(\alpha\)) as returning the posterior approximator-quantile pair that minimizes Equation 3 over observed pairs \(\{(q^{(t)}(\Theta\mid X),\widehat{q}^{(t)}(\mathcal{D}_{\mathcal{C}},\alpha)) \}_{t=1}^{T}\). Recall that CANVI performs a recalibration after selecting \(t^{*}\), where importantly \(\widehat{q}^{(*)}(\mathcal{D}_{\mathcal{R}},\alpha)\) is defined using \(\mathcal{D}_{\mathcal{R}}\). Unfortunately, \((q^{(*)}(\Theta\mid X),\widehat{q}^{(*)}(\mathcal{D}_{\mathcal{R}},\alpha))\) may not be the minimizer of Equation 3 over this extended collection of pairs \(\{(q^{(t)}(\Theta\mid X),\widehat{q}^{(t)}(\mathcal{D}_{\mathcal{C}},\alpha)) \}_{t=1}^{T}\cup(q^{(*)}(\Theta\mid X),\widehat{q}^{(*)}(\mathcal{D}_{ \mathcal{R}},\alpha))\). CANVI could return \((q^{(*)}(\Theta\mid X),\widehat{q}^{(*)}(\mathcal{D}_{\mathcal{C}},\alpha))\) to ensure efficiency, but the recalibration is necessary to ensure coverage guarantees. This tradeoff between coverage and efficiency was studied in [21], from which we obtain guarantees on \(t^{*}\) remaining oracle optimal with high probability. Intuitively, \((q^{(*)}(\Theta\mid X),\widehat{q}^{(*)}(\mathcal{D}_{\mathcal{R}},\alpha))\) will remain close to oracle optimal if the score distributions and, hence, quantiles of \(q^{(*)}(\Theta\mid X)\) under \(\mathcal{D}_{\mathcal{C}}\) and \(\mathcal{D}_{\mathcal{R}}\) are close, and if, for any deviations in the quantiles, the predictive efficiency varies smoothly, respectively formalized from [21] as follows. 
**Assumption 1**.: _For each \(t\), \(\exists\)\(r^{*},\gamma\in(0,1)\) such that the inverse CDF of the score under \(q^{(t)}\), \(F_{t}^{-1}\), is \(\gamma\)-Holder continuous on \([\widehat{q}^{(t)}(\alpha)-r^{*},\widehat{q}^{(t)}(\alpha)+r^{*}]\) with Holder continuity constant \(L_{t}\)._ **Assumption 2**.: _For each \(t\), the map \(\alpha\to\widehat{\ell}(q^{(t)},\alpha)\) is Lipschitz continuous with constant \(L_{W}\)._ The proof in [21] is completed by demonstrating that the CDFs of the score distributions of \(q^{(*)}(\Theta\mid X)\) under \(\mathcal{D}_{\mathcal{C}}\) and \(\mathcal{D}_{\mathcal{R}}\), and hence \(\tilde{q}^{(*)}(\mathcal{D}_{\mathcal{C}},\alpha)\) and \(\widehat{q}^{(*)}(\mathcal{D}_{\mathcal{R}},\alpha)\) by Assumption 1, are close and concluding by then bounding resulting deviations in predictive efficiencies by Assumption 2. To make explicit the distinction in computed efficiency, we denote the recalibrated efficiency as \(\widehat{\ell}_{\mathcal{R}}(q^{(*)},\alpha)\), where \(\widehat{q}^{(*)}(\mathcal{D}_{\mathcal{R}},\alpha)\) is used in place of the standard \(\widehat{q}^{(*)}(\mathcal{D}_{\mathcal{C}},\alpha)\). The proof of Theorem 3.2 is explicitly provided in Appendix B. **Theorem 3.2**.: _Let \(q^{(*)}(\Theta\mid X),\widehat{q}^{(*)}=\mathrm{CANVI}\left(\{q^{(t)}(\Theta \mid X)\}_{t=1}^{T},\mathcal{P}(\Theta,X),1-\alpha,N_{\mathcal{C}},N_{\mathcal{ T}}\right).\) Let \(\mathcal{D},\mathcal{D}_{\mathcal{C}},\mathcal{D}_{\mathcal{T}}\), and \(\mathcal{D}_{\mathcal{R}}\) be drawn i.i.d. from \(\mathcal{P}(\Theta,X)\), where \(\mathcal{D}\) is the aggregate of training data used for \(\{q^{(t)}(\Theta\mid X)\}_{t=1}^{T}\). Take \(\delta\in[0,1]\). If Assumption 2 holds and Assumption 1 holds with_ \[r^{*}\geq\max\left\{\sqrt{\frac{\log(4T/\delta)}{2N_{\mathcal{C}}}},\frac{2}{N_ {\mathcal{C}}}\right\}, \tag{5}\] _then with probability at least \(1-\delta\),_ \[\widehat{\ell}_{\mathcal{R}}(q^{(*)},\alpha)\leq\min_{1\leq t\leq T}\widehat{ \ell}(q^{(t)},\alpha)+3L_{W}L_{[T]}\left[\left(\frac{\log(4T/\delta)}{N_{ \mathcal{C}}}\right)^{\gamma/2}+\left(\frac{2}{N_{\mathcal{C}}}\right)^{\gamma }\right], \tag{6}\] _where \(\gamma\), \(L_{W}\), and \(L_{[T]}=\max_{1\leq t\leq T}L_{t}\) are constants defined in Assumptions 1 and 2._ Again, in any setting where it is possible to sample from \(\mathcal{P}(X,\Theta)\), \(N_{\mathcal{C}}\) can be made arbitrarily large, achieving a tight bound in Equation 6 as a consequence. The implication of Theorem 3.2 is that practitioners can focus on obtaining efficient predictors, as discussed in the following section, knowing that CANVI will make the optimal selection from this set with high probability. ### Optimally Efficient Predictor Characterization We now wish to study how the qualities of a posterior approximation relate to the efficiency of its conformalization. Importantly, this characterization is independent of algorithmic decisions made by CANVI. For this reason, we introduce a population-level version of the inverse-efficiency, defined as \[\ell(q,\alpha):=\mathbb{E}_{X}\left[\mathcal{L}\left(\{\theta:1/q(\Theta\mid X) \leq\widehat{q}(\alpha)\}\right)\right], \tag{7}\] where \(\widehat{q}(\alpha)\) is the exact \(1-\alpha\) quantile for \(1/q(\Theta|X)\). We begin by observing that \(q(\Theta\mid X)\) with incorrect support can lead to the most inefficient prediction region possible, i.e., \(\text{Dom}(\Theta)\). 
**Theorem 3.3**.: _Suppose \(\forall x\in\text{Dom}(X)\), there exists a region \(\vartheta\subset\text{Dom}(\Theta)\) s.t. \(\mathcal{P}(\vartheta\mid x)\geq\beta\) but \(q(\vartheta\mid x)=0\). Then, for any \(\alpha\leq\beta\), \(\ell(q,\alpha)=|\text{Dom}(\Theta)|\)._ Proof.: We wish to find the distribution of \(s(X,\Theta)=1/q(\Theta\mid X)\) jointly over \(X,\Theta\). By assumption, for any draw \(\theta\in\vartheta\), \(s(x,\theta)=\infty\). Thus, \[\mathcal{P}(s(X,\Theta)=\infty)=\int\mathcal{P}(s(X,\Theta)=\infty\mid X=x)\mathcal{P}(X\in dx)\geq\beta\int\mathcal{P}(X\in dx)=\beta.\] Hence, \(\forall\alpha\leq\beta\), \(\widehat{q}(\alpha)=\infty\implies\{\theta:1/q(\theta\mid X)\leq\widehat{q}(\alpha)\}=\text{Dom}(\Theta)\). Since this holds \(\forall x\), it follows that \(\ell(q,\alpha)=|\text{Dom}(\Theta)|\). One implication of Theorem 3.3 is that preference should be given to forward-KL over reverse-KL training objectives to encourage mode coverage, as we additionally demonstrate empirically in Section 4.2. We next consider whether the _true_ posterior leads to the greatest efficiency. Section 4.1.2 empirically suggests that efficiency generally improves with convergence to the exact posterior. This leads us to conjecture that the true posterior leads to the greatest _average_ efficiency over all possible \(\alpha\). **Conjecture 1**.: _Let \((X,\Theta)\sim\mathcal{P}(\Theta,X)\), \(q^{(*)}(\Theta\mid X)=\mathcal{P}(\Theta\mid X)\) and \(q(\Theta\mid X)\) denote any other posterior approximator. Then_ \[\int_{0}^{1}\ell(q^{(*)},\alpha)d\alpha\leq\int_{0}^{1}\ell(q,\alpha)d\alpha. \tag{8}\] If true, Conjecture 1 suggests practitioners should use more flexible variational families, aiming to recover the exact posterior and thereby maximize efficiency. However, we note that for any particular \(\alpha\), the true posterior does not always lead to the greatest efficiency, as demonstrated in Appendix C. We conclude by considering a family of distributions within which the true posterior leads to the highest efficiency. Future work could be directed toward extending this analysis to a broader set of families. Details and experimental verification for this Gaussian case are given in Appendix D. **Theorem 3.4**.: _Let \(\Theta\) and \(X\) be zero-mean unit-variance Gaussian random variables with correlation \(\rho\). Let \(q_{\varphi}(\theta|x)=\mathcal{N}(\theta;\varphi x,1-\rho^{2})\). Then \(q_{\rho}\) gives the true posterior and \(\ell(q_{\rho},\alpha)=\min_{\varphi}\ell(q_{\varphi},\alpha)\)._ Proof.: We begin by finding the distribution of \(s_{\varphi}(X,\Theta)=1/q_{\varphi}(\Theta\mid X)\). Explicitly computing its distribution yields a closed form of the threshold as \[\widehat{q}_{\varphi}=\sqrt{2\pi(1-\rho^{2})\exp\left(\frac{\varphi^{2}+1-2\varphi\rho}{1-\rho^{2}}\left(\Phi^{-1}\left(1-\frac{\alpha}{2}\right)\right)^{2}\right)}. \tag{9}\] Considering an arbitrary fixed \(X=x\), prediction intervals are of length \[\ell(q_{\varphi},\alpha)=2\sqrt{\varphi^{2}+1-2\varphi\rho}\left(\Phi^{-1}\left(1-\frac{\alpha}{2}\right)\right). \tag{10}\] Observe that \(\ell(q_{\varphi},\alpha)\) is an increasing function of \(|\varphi-\rho|\) with a minimum achieved at the exact posterior of \(\varphi=\rho\). Given this holds for any \(\alpha\), \(\int_{0}^{1}\ell(q_{\varphi},\alpha)d\alpha\) too is minimized at \(\varphi=\rho\). ## 4 Experiments We now demonstrate calibration and study the predictive efficiency of CANVI empirically across several tasks.
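Throughout the experiments below, the conformalization step itself is the same short recipe: evaluate the score \(s=1/q(\theta\mid x)\) on calibration pairs, take its finite-sample \(1-\alpha\) quantile, and report the sub-level set of the score as the prediction region. The following is a minimal sketch of that step only; it is an illustration with assumed interface names (`log_q_calib`, `in_region`), not the released implementation.

```python
import numpy as np

def conformal_threshold(log_q_calib, alpha):
    """Finite-sample (1 - alpha) threshold for the conformal score s = 1/q(theta | x).

    log_q_calib: array of log q(theta_i | x_i) evaluated on the calibration pairs in D_C.
    """
    scores = np.exp(-np.asarray(log_q_calib))            # s_i = 1 / q(theta_i | x_i)
    n = scores.size
    rank = min(int(np.ceil((n + 1) * (1 - alpha))), n)   # conformal rank correction
    return np.sort(scores)[rank - 1]

def in_region(log_q_test, q_hat):
    """Membership in the prediction region {theta : 1/q(theta | x) <= q_hat}."""
    return np.exp(-np.asarray(log_q_test)) <= q_hat
```

Because the threshold depends only on the calibration scores, the recalibration of the selected \(q^{(*)}\) on \(\mathcal{D}_{\mathcal{R}}\) in Algorithm 1 amounts to a second call to the same routine.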
We investigate predictive efficiency specifically with respect to convergence of variational posteriors to the exact posteriors in Section 4.1.2 and training objectives in Section 4.2. In all experiments, coverage is assessed both by directly using the variational approximation and its conformalized wrapper. For the former, we construct the highest density credible region per \(x_{i}\), namely by estimating the \(\zeta\) such that \(\{\theta_{j}\mid q(\theta_{j}\mid x_{i})\geq\zeta\}\) captures \(1-\alpha\) of the probability mass. To estimate \(\widehat{\zeta}\), we draw \(\{\theta_{j}\}_{j=1}^{N}\sim q(\theta_{j}\mid x_{i})\) and find the \(1-\alpha\) quantile of \(\{q(\theta_{j}\mid x_{i})\}_{j=1}^{N}\). Assessing coverage of the true parameter \(\theta\) can, therefore, be done simply by checking if \(q(\theta\mid x_{i})\geq\widehat{\zeta}\). Experimental details are provided in Appendix F, and code will be made public upon acceptance. ### SBI Benchmarks We follow the precedent set in [5] and evaluate CANVI on the following simulation-based inference (SBI) benchmark tasks: Two Moons, Lotka-Volterra, Gaussian Mixture, Susceptible-Infected-Recovered (SIR), Simple Likelihood Complex Posterior (SLCP) Distractors, and Bernoulli GLM Raw. For full descriptions of the priors and forward models, refer to Appendix E. Tasks in this benchmark suite are all likelihood-free, so all posteriors were trained against \(\mathcal{L}_{\text{FAVI}}\) using a Neural Spline Flow variational family [24]. #### 4.1.1 Coverages Calibration was performed using 10,000 i.i.d. samples and was completed in under one second for each task. Coverage of \(q^{(t)}(\Theta\mid X)\) was assessed at every 2000 training steps over \(\alpha\in[0,1]\) discretized at steps of.05. Non-conformalized regions (\(\widehat{\zeta}\)) were estimated empirically from batches of 100 i.i.d. samples per point. Coverage was assessed over 10 batches of 10,000 i.i.d. test samples. Figure 2: Calibration on the SBI benchmarks with and without conformalization, respectively the solid and dashed lines. Conformalized lines are slightly difficult to distinguish, as they all lie along the desired \(y=x\) curve. Error bars from coverage assessments across test batches are plotted, although they are difficult to see due to the low variance between estimates across batches. Figure 2 demonstrates the miscalibration of the approximate posteriors and subsequent conformal correction. As expected, calibration of \(q(\Theta\mid X)\) improves with training; however, miscalibration persists even after convergence. In particular, while \(\mathcal{L}_{\text{FAVI}}\)-trained posterior approximations do well on the Two Moons and Bernoulli GLM Raw tasks, they are noticeably miscalibrated on the Lotka-Volterra, Gaussian Mixture, and SIR tasks, where conformalizations rectify the situations. \(\mathcal{L}_{\text{FAVI}}\)-training completely fails to learn the SLCP Distractors task, which can nonetheless be calibrated but produces totally uninformative prediction regions as a result. #### 4.1.2 Predictive Efficiency We study the predictive efficiencies over the training of \(q_{\varphi}\) in Figure 3, specifically for the marginal posteriors of \(\tilde{\Theta}=(\theta_{1},\theta_{2})\) for the Two Moons, SLCP, Gaussian Linear Uniform, and Bernoulli GLM Raw tasks, all of which suggest improving efficiency with convergence to the exact posterior. 
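The efficiency values reported here are Monte Carlo estimates of the prediction-region volume (the population quantity in Equation 7). One natural estimator uses the approximate posterior itself as an importance proposal, consistent with the importance-weighted sampling described next; the sketch below is our own illustration with assumed interface names.

```python
import numpy as np

def region_volume(sample_fn, log_prob_fn, x, q_hat, n_draws=10_000):
    """Estimate the volume of {theta : 1/q(theta | x) <= q_hat} by importance sampling,
    using q(. | x) as the proposal:  Vol = E_{theta ~ q}[ 1{1/q(theta | x) <= q_hat} / q(theta | x) ].

    sample_fn(x, n) and log_prob_fn(theta, x) are assumed interfaces of the fitted posterior.
    """
    draws = sample_fn(x, n_draws)
    log_q = np.asarray(log_prob_fn(draws, x))
    inside = log_q >= -np.log(q_hat)          # equivalently, score 1/q <= q_hat
    return float(np.mean(inside * np.exp(-log_q)))

def inefficiency(xs, sample_fn, log_prob_fn, q_hat, n_draws=10_000):
    """Average region volume over test points, i.e. the empirical inverse-efficiency."""
    return float(np.mean([region_volume(sample_fn, log_prob_fn, x, q_hat, n_draws) for x in xs]))
```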
\(\widehat{\ell}(q_{\varphi},\alpha)\) was estimated for 5 batches of 100 test points (\(|\mathcal{D}_{\mathcal{T}}|=100\)) at every 500 training steps using 10,000 importance-weighted i.i.d. samples from \(q^{(t)}(\Theta\mid X)\) per point. As previously noted, a grid-discretized estimate is possible only in low-dimensional cases; we show in Appendix I that it exhibits similar trends. Estimation was performed for a fixed \(1-\alpha=0.95\). We additionally visualize the credible regions over training iterates for multiple tasks in Appendix G. ### CANVI Over Training Objectives We consider a lag-one Autoregressive Conditional Heteroskedasticity (ARCH) model, where the exact likelihood is available [25]. The data are generated as \[y^{(t)}=\theta_{1}y^{(t-1)}+e^{(t)}\quad\text{and}\quad e^{(t)}=\xi^{(t)}\sqrt{0.2+\theta_{2}\left(e^{(t-1)}\right)^{2}}, \tag{11}\] where \(y^{(0)}=0\), \(e^{(0)}=0\), \(T=100\), and the \(\xi^{(t)}\) are independent standard normal random variables. For further details, consult Appendix E.9. Given a set of \(n=100\) realizations of \((\theta,y_{1:T})\) from the generative model, an amortized Neural Spline Flow (NSF) [24] was fit according to the FAVI, ELBO, and IWBO (\(K=10\)) objectives. Exact and approximate posteriors are visualized in Appendix H.7. We examine how CANVI affects coverage rates both numerically and pictorially over generated test datasets \(\{(\theta,y_{1:T})\}\). Table 1 shows that prior to conformalization, the variational posteriors are generally miscalibrated: training by the ELBO or IWBO results in significant under-coverage. As previously mentioned, targeting the ELBO is known to find solutions that are mode-seeking. While the variational posterior obtained by FAVI is better calibrated, it still needs correction. After applying CANVI, Table 1 (right) shows that the conformalized \(1-\alpha\) highest density regions are nearly perfectly calibrated. Importantly, correction by CANVI can result in either larger or smaller \(1-\alpha\) regions, depending on the direction of miscalibration. In settings where the variational posterior is overdispersed, applying CANVI results in statistical coverage guarantees and smaller \(1-\alpha\) density regions. We show explicit examples of this in Appendix H.7. Table 1 (right) additionally demonstrates that using the better-calibrated \(\mathcal{L}_{\text{FAVI}}\)-trained approximation results in higher predictive efficiency compared to the \(\mathcal{L}_{\text{ELBO}}\)- and \(\mathcal{L}_{\text{IWBO}}\)-trained counterparts, as anticipated. ### Galaxy Spectral Energy Distributions We now present the application of CANVI to an important scientific problem. The spectrum of an astronomical object is measured via a spectrograph, which records the flux (light intensity per unit area, time, wavelength) across a large grid of wavelength values [26, 27]. We construct a model of galaxy spectra based on the Probabilistic Value-Added Bright Galaxy Survey simulator (PROVABGS), which maps \(\theta\in\mathbb{R}^{11}\) to galaxy spectra. Additional details of the simulation setup and example draws from the simulator are available in Appendix J. A mixture of 20 Gaussian distributions was used as the variational posterior and trained against the \(\mathcal{L}_{\text{FAVI}}\), \(\mathcal{L}_{\text{ELBO}}\), and \(\mathcal{L}_{\text{IWBO}}\) objectives, as in Section 4.2. Table 2 shows that the ELBO and IWBO tend to be overly concentrated, often failing to contain the entire parameter vector \(\theta\) in the \(1-\alpha\) highest-density region.
FAVI, on the other hand, is reasonably well-calibrated. After applying CANVI, all three methods achieve nearly perfect calibration across a range of desired confidence levels. Of course, the utility of these corrected regions depends on the level of information contained in the original model. In the case of the ELBO or IWBO variational approximations, the corrected regions achieve statistical validity, but are likely too large to be informative. For FAVI, on the other hand, application of CANVI results in statistical guarantees on coverage rates with minimal alterations to the high-density regions. ## 5 Discussion We have presented CANVI, a novel framework for producing marginally calibrated, efficient prediction regions from a collection of variational approximators with minimal computational and implementation overhead. We view guaranteeing marginal coverage as an important first step amongst many toward increasing the utility of such inference algorithms for downstream applications. This work thus suggests many interesting directions for future work. Of immediate theoretical interest would be identifying sufficient conditions for guaranteeing that the exact posterior recovers optimal efficiency of the CANVI procedure. Additionally, CANVI requires a well-specified forward model \(\mathcal{P}(X\mid\Theta)\) to generate valid prediction regions; while this is a typical assumption, it generally only holds approximately in practice. Extensions to the misspecified case could leverage recent work for using conformal prediction under distribution shift, such as in [28, 29]. Further, conditional coverage is of interest in certain scientific applications. While attaining conditional conformal guarantees was shown to be impossible in the general case in [30], this presumes the lack of any probabilistic model. We are actively pursuing the extension of CANVI towards a variant of group-balanced conformal prediction, where the structure imposed by \(q(\Theta\mid X)\) can be exploited to get stronger guarantees. Finally, leveraging conformal prediction over functional spaces may enable progress toward achieving guarantees for recovering full posterior distributions rather than solely for prediction regions. \begin{table} \begin{tabular}{l l l l} \hline \hline \(1-\alpha\) & ELBO & IWBO & FAVI \\ \hline 0.50 & 0.0007 (0.0008) & 0.1031 (0.0110) & 0.5514 (0.0127) \\ 0.75 & 0.0044 (0.0018) & 0.1994 (0.0115) & 0.7534 (0.0127) \\ 0.90 & 0.0195 (0.0044) & 0.3263 (0.0122) & 0.8797 (0.0065) \\ 0.95 & 0.0396 (0.0061) & 0.4074 (0.0186) & 0.9260 (0.0083) \\ \hline \hline \end{tabular} \begin{tabular}{l l l l} \hline \hline \(1-\alpha\) & ELBO & IWBO & FAVI \\ \hline 0.50 & 0.4930 (0.0229) & 0.4917 (0.0218) & 0.4945 (0.0104) \\ 0.75 & 0.7526 (0.0111) & 0.7445 (0.0154) & 0.7411 (0.0114) \\ 0.90 & 0.9035 (0.0117) & 0.8978 (0.0068) & 0.8995 (0.0081) \\ 0.95 & 0.9509 (0.0084) & 0.9479 (0.0065) & 0.9500 (0.0082) \\ \hline Eff & 1.9155 & 1.9036 & 0.6749 \\ \hline \hline \end{tabular} \end{table} Table 2: Coverage rates and standard errors for 11-dimensional parameter \(\theta\) before (left) and after (right) conformalization by CANVI, assessed by inclusion of \(\theta\) in the \(1-\alpha\) highest density region. Non-conformalized regions were estimated empirically from batches of 1000 i.i.d. samples per point. 
\begin{table} \begin{tabular}{l l l l} \hline \hline \(1-\alpha\) & ELBO & IWBO & FAVI \\ \hline 0.50 & 0.4970 (0.0124) & 0.5019 (0.0167) & 0.5086 (0.0144) \\ 0.75 & 0.7488 (0.0139) & 0.7559 (0.0126) & 0.7565 (0.0092) \\ 0.90 & 0.8978 (0.0080) & 0.9036 (0.0111) & 0.9005 (0.0110) \\ 0.95 & 0.9496 (0.0071) & 0.9548 (0.0052) & 0.9487 (0.0081) \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of the results of the previous section.
2301.06386
Cluster size determines morphology of transcription factories in human cells
Transcription is a fundamental cellular process, and the first step of gene expression. In human cells, it depends on the binding to chromatin of various proteins, including RNA polymerases and numerous transcription factors (TFs). Observations indicate that these proteins tend to form macromolecular clusters, known as transcription factories, whose morphology and composition are still debated. While some microscopy experiments have revealed the presence of specialised factories, composed of similar TFs transcribing families of related genes, sequencing experiments suggest instead that mixed clusters may be prevalent, as a panoply of different TFs binds the same chromatin region promiscuously. The mechanisms underlying the formation of specialised or mixed factories remain elusive. With the aim of finding such mechanisms, here we develop a chromatin polymer model mimicking the chromatin binding-unbinding dynamics of different types of complexes of TFs. Surprisingly, both specialised (i.e., demixed) and mixed clusters spontaneously emerge, and which of the two types forms depends mainly on cluster size. The mechanism promoting mixing is the presence of non-specific interactions between chromatin and proteins, which become increasingly important as clusters become larger. This result, which we observe both in simple polymer models and more realistic ones for human chromosomes, reconciles the apparently contrasting experimental results obtained. Additionally, we show how the introduction of different types of TFs strongly affects the emergence of transcriptional networks, providing a pathway to investigate transcriptional changes following gene editing or naturally occurring mutations.
Massimiliano Semeraro, Giuseppe Negro, Giada Forte, Antonio Suma, Giuseppe Gonnella, Peter R. Cook, Davide Marenduzzo
2023-01-16T12:14:47Z
http://arxiv.org/abs/2301.06386v2
# A multicolour polymer model for the prediction of 3D structure ###### Abstract Within each human cell, different kinds of RNA polymerases and a panoply of transcription factors bind chromatin to simultaneously determine 3D chromosome structure and transcriptional programme. Experiments show that, in some cases, different proteins segregate to form specialised transcription factories; in others they mix together, binding promiscuously the same chromatin stretch. Here, we use Brownian dynamics simulations to study a polymer model for chromosomes accounting for multiple types ("colours") of chromatin-binding proteins. Our multi-colour model shows the spontaneous emergence of both segregated and mixed clusters of chromatin-bound proteins, depending mainly on their size, thereby reconciling the previous experimental observations. Additionally, remarkable small-world networks emerge; in these, positive and negative correlations in activities of transcription units provide simple explanations of why adjacent units in large domains are co-transcribed so often, and how one eQTL (expression quantitative trait locus) can up-regulate some genes and down-regulate others. We also explain how local genome edits induce distant omnigenic and pangenomic effects, and develop ways to predict activities of all transcription units on human chromosomes. All results point to 1D location being a key determinant of transcription, consistently with the conservation of synteny seen between rapidly-evolving enhancers and their more stable target genes. ## I Introduction Microscopy and high-throughput sequencing have uncovered a rich hierarchy of 3D folding in eukaryotic nuclei that ranges from loops of tens of bp up to hundreds of kbp, through topologically-associating domains (TADs) of \(100-1000\) kbp, AB compartments, heterochromatic aggregates, and on to chromosomal territories [1; 2; 3]. An outstanding question is whether, and to what extent, this rich multi-scale organisation is functionally related to, or even driven by, transcription [4]. On the one hand, it is widely believed that TADs remain largely invariant in cells with very different transcriptional programs - which points to little role for transcription in determining structure [5] (for an opposing view, see [6]). On the other hand, clusters of active polymerases - called phase-separated condensates, hubs, and transcription factories [7; 8; 9; 4; 10] - locally stabilise surrounding clouds of loops, thereby providing an example of a structural unit with a clear functional role. Brownian-dynamics simulations point to a simple and generic mechanism - the bridging-induced attraction - that spontaneously drives formation of many higher-order structures [11; 12]. Reversible binding of multivalent transcription factors (TFs) - or TF:polymerase (TF:pol) complexes - to transcription units (TUs) scattered along a string of non-binding beads inevitably leads to clustering of bound TF:pol complexes without any additional energy input; in turn, these clusters organise TADs and AB compartments. Such microphase separation of active polymerases and TFs depends on positive feedback and is eventually arrested by entropic costs associated with crowding and looping more and more DNA [13]. When these simulations involve 2 different kinds (or "colours") of TF (e.g., red and green ones) binding specifically to red and green TUs, resulting clusters often contain bound TFs of just one colour, rather than mixtures [12]. 
Formation of such distinct clusters mimics the presence of distinct factories that specialize in transcribing different sets of genes. For example, active forms of RNA polymerases II and III are each housed in distinct nucleoplasmic factories that make genic and snRNA transcripts, respectively [14; 15; 7]. Similarly, distinct ER\(\alpha\), KLF1, and NF\(\kappa\)B factories specialize in transcribing genes involved in the estrogen response, globin production, and inflammation [16; 17; 18]. One important consequence of this organisation is the creation of 3D networks, in which genes sharing the same TFs are co-transcribed in the same clusters, and so give contacts detected by techniques like Hi-C (a high-throughput variant of chromosome conformation capture), genome architecture mapping, and pore-C [2; 19; 20; 21]. The existence of clusters prompts the question: does any one cluster mainly contain just one kind of TF, or many different ones? In contrast to results cited above supporting the former possibility, others point to the latter. For example, chromatin immuno-precipitation (ChIP) shows that a promoter of a typical active human gene binds many different TFs, and that highly occupied targets (HOTs) are bound promiscuously by many TFs irrespective of whether or not they encode the specific binding motif for those factors [22; 23; 24]. Additionally, single-cell transcriptional profiling points to expression levels varying continuously as cells differentiate into other cell types, which points to a complex interplay between many factors, rather than a few acting as binary switches [25; 26]. Here, we simulate strings of beads (representing short chromosome fragments and whole human chromosomes in HUVEC and GM12878 cells) binding spheres of different colours (representing different TFs or TF:pol complexes). One key result is that specialised (demixed) and mixed clusters are not mutually exclusive: both emerge spontaneously depending on the 1D pattern of binding sites on strings, and this enables us to reconcile the apparently contrasting observations cited above. The size of emerging clusters correlates with the degree of mixing; specialised one-colour clusters are typically smaller. When we make the reasonable assumption that a bead is transcribed when it binds a TF, we find that cluster size and degree of mixing both determine transcriptional activities. Additionally, when we model just two colours of TF:pol complex that bind respectively to cell-type-invariant and cell-type-specific TUs in strings mimicking whole human chromosomes, we find transcriptional profiles that correlate well with those obtained by GRO-seq (global run-on sequencing [27; 28]). ## Materials and Methods Chromatin fibres are modelled as bead-and-spring polymers [9; 11; 12; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38], where each monomer (diameter \(\sigma\)) represents 3 kbp packed into a 30 nm sphere [9; 11; 35]. Different TFs (or TF:pol complexes) are modelled as differently-coloured spheres (also with diameter \(\sigma\)) able to bind (when in an "on" state) to cognate sites of the same colour that are scattered along the polymer. Each TF and TF:pol switches between "off" (non-binding) and "on" (binding) states to reflect the post-translational modifications that occur in many TFs. Polymer beads are either non-binding ("heterochromatic"), weakly-binding ("euchromatic"), or strongly-binding (containing cognate sites). TFs bind non-specifically to all weakly-binding beads, and strongly only to TUs of the same colour. 
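These binding rules amount to a small lookup from (TF colour, bead type) to an interaction class, which is how one would typically encode them when assigning pair interactions in a simulation of this kind. The sketch below is purely schematic (interaction labels only, no energies), and it treats a TU of a different colour as non-binding, a case the description above does not spell out.

```python
# Schematic encoding of the binding rules described above.
# "strong" = cognate TU of the same colour, "weak" = euchromatin (weakly-binding bead),
# "none"   = heterochromatin, or (in this sketch) a TU of another colour.
def interaction(tf_colour: str, bead_type: str) -> str:
    if bead_type == f"TU:{tf_colour}":
        return "strong"
    if bead_type == "euchromatin":
        return "weak"
    return "none"

assert interaction("red", "TU:red") == "strong"
assert interaction("red", "euchromatin") == "weak"
assert interaction("red", "heterochromatin") == "none"
```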
The system evolves in a cubic simulation domain with periodic boundary conditions through constant-temperature Langevin dynamics that are integrated numerically by the LAMMPS simulation package [39]. Hydrodynamic interactions are neglected for computational efficiency, and averages are evaluated over 100 independent runs for each case. The TF volume fraction of each colour is set to \(\sim 3\times 10^{-5}\), and the polymer volume fraction to \(\sim 2\times 10^{-3}\). More information about the model can be found in the SI. Several quantities are monitored to describe the system's behaviour. Mean transcriptional activity is measured as the fraction of time that a TU is "transcriptionally active" (i.e., within \(2.25\sigma\) of a TF) in 100 simulations, and so represents a population average (each simulation run may be thought of as a different cell). This quantity is compared with experimental data on transcriptional activity, obtained via GRO-seq - a method providing a genome-wide average readout of ongoing transcription of both genic and non-genic TUs in cell populations [27; 28]. The mean transcriptional Pearson correlation between all pairs of TUs is also evaluated, and a graphical overview of this feature is provided via the Pearson correlation matrix. We also analyse clusters/factories of bound and spatially-proximate TFs, count the number of TFs of similar colour in each cluster, and introduce a demixing coefficient \[\theta_{\text{dem}}=\frac{nx_{i,max}-1}{n-1}, \tag{1}\] where \(n\) is the number of colours, and \(x_{i,max}\) the largest fraction of same-coloured TFs in a single TF cluster. If \(\theta_{\text{dem}}=1\), this means that a cluster contains only TFs of one colour and so is fully demixed; if \(\theta_{\text{dem}}=0\), the cluster contains a mixture of TFs of all colours in equal number, and so is maximally mixed. More details can be found in the SI. We consider two different types of string, one with \(M=3000\) beads (or 9 Mbp) which is referred to as a "toy" string, and a second representing a whole human chromosome. Chromosomes are initialised in both cases as random walks: for more details, see SI. ### Toy model The toy model is built by placing one yellow, red, or green TU every 30 weakly-binding beads, giving a total of 100 TUs of all types in a string of 3000 beads [11]. Various different sequences of TU colour down the string are considered. In one - the "random" string - TU colours are chosen randomly (see Figure 1a and SI for the specific sequence generated). In a second and third - the "1-pattern" and "6-pattern" strings - TU colours follow a repeating pattern (red, then yellow, then green) 1 or 6 times (see Figure 4). For the random string, we monitor how the system responds to different perturbations. Local "mutations" are inspired by editing experiments performed using CRISPR/Cas9 [40]. One to four mutations are mimicked by switching selected yellow beads inside a cluster of consecutive yellow TUs (between TUs 1920 and 2070) to red ones (Figure 2). Thus, conversion of TU bead 1980 gives a string with 1 mutation, of 1950 and 1980 gives 2 mutations, of 1950 to 2010 gives 3 mutations, and 1950 to 2040 gives 4 mutations. Global perturbations are inspired by experiments reducing global levels of TFs using auxin-induced degrons [41]. Here, we study the effects of reducing the concentration of yellow TFs by 30%.
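For reference, Equation 1 reduces to a few lines of code once the colours of the TFs in a cluster are known; the sketch below is a direct transcription (colour labels are illustrative).

```python
from collections import Counter

def demixing_coefficient(cluster_colours, n_colours=3):
    """Equation (1): theta_dem = (n * x_max - 1) / (n - 1), where x_max is the largest
    fraction of same-coloured TFs in the cluster (e.g. ["red", "red", "green"])."""
    counts = Counter(cluster_colours)
    x_max = max(counts.values()) / len(cluster_colours)
    return (n_colours * x_max - 1) / (n_colours - 1)

# A one-colour cluster is fully demixed; an equal three-way mixture is fully mixed.
assert demixing_coefficient(["red"] * 6) == 1.0
assert demixing_coefficient(["red", "green", "yellow"] * 2) == 0.0
```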
### Human chromosomes Our reference case for whole human chromosome simulations in the main text is the mid-sized human chromosome HSA 14 (107 Mbp), coarse-grained into \(M=35784\) beads. For Figure 6, weakly- and strongly-binding beads are identified (using ENCODE data [42] for human umbilical vein endothelial cells, HUVECs) by the presence of H3K27ac modifications and DNase- hypersensitivity sites (DHSs) in the 3 kbp region corresponding to that bead - as these are good markers of open chromatin and active TUs (both genic and non-genic), respectively. For Figure 6, TUs are split into ones only active in HUVECs and others ("house-keeping" ones) that are also active in H1-hESC cells (again using DHS sites and ENCODE data). Then, if a TU appears in both HUVECs and H1-hESCs, it is marked as housekeeping and coloured red; if it appears only in HUVECs it is marked as HUVEC-specific and coloured green. This allows an intuitive and simple multicolour model of HUVECs to be constructed. All remaining beads (which are not either weakly-binding or TUs) are non-binding. This approach represents a generalisation of the DHS model described in [9], so we call it the multicolour DHS model. We also consider HSA 18 (80 Mbp, 26026 beads) and 19 (58 Mbp, 19710 beads) in HUVECs, chosen as they represent gene-poor and gene-rich chromosomes, respectively. Additionally, we consider HSA 14 in the B-lymphocyte line GM12878 (again, colours are chosen by combining DHS data for GM12878 and H1-hESCs). H3K27ac and DHS data is again from ENCODE. For human chromosomes, transcriptional-activity data obtained from simulations and GRO-seq are compared in two ways [9]. First, we rank activities of each TU, and build a two-dimensional histogram providing an overview of the agreement between the two sets of ranks. Second, we quantify Spearman's rank correlation coefficient between numerical and experimental data (see Supplemental Information for more details). ## III Results ### Toy model with three colours We first simulate binding of yellow, red, and green TFs to a string representing 3 Mbp. TU beads are positioned regularly in this string, but are coloured randomly yellow, red, or green (we refer to this case as the random string). TFs also bind reversibly and strongly to beads of the same colour, and weakly to all others (Figure 1A). Figure 1: **Toy model, with TUs coloured randomly (the random string).****(A)** Overview. (i) Yellow, red, and green TFs (25 of each colour) bind strongly (when in an on state) to 100 TUs beads of the same colour in a string of 3000 beads (representing 3 Mb), and weakly to blue beads. TU beads are positioned regularly and coloured randomly, as indicated in one region of the string. TFs switch between off and on states at rates \(\alpha_{off}=10^{-5}\)\(\tau_{B}^{-1}\) and \(\alpha_{on}=\alpha_{off/4}\) (\(\tau_{B}\) Brownian time, which one can map to \(0.6-6\,10^{-3}\)\(s\), see SI). (ii) The sequence of bars reflects the random sequence of yellow, red, and green TUs (blue beads not shown). **(B)** Snapshot of a typical conformation obtained after a simulation (TFs not shown). Inset: enlargement of boxed area. TU beads of the same colour tend to cluster and organize blue beads into loops. **(C)** Bridging-induced phase separation drives clustering and looping. Local concentrations of red, yellow, and green TUs and TFs might appear early during the simulation (blue beads not shown). 
Red TF 1 – which is multivalent – has bound to two red TUs and so forms a molecular bridge that stabilizes a loop, when it dissociates it is likely to re-bind to one of the nearby red TUs. As red TU 2 diffuses through the local concentration, it is also likely to be caught. Consequently, positive feedback drives growth of the red cluster (until limited by molecular crowding). Similarly, the yellow and green clusters grow as yellow TF 3 and green TF 4 are captured. **(D)** Bar heights give transcriptional activities of each TU in the string (average of 100 runs each lasting 8 \(10^{5}\tau_{B}\)). A TU bead is considered to be active whilst within \(2.24\sigma\sim 6.7\times 10^{-9}\)m of a TF:pol complex of similar colour. Dashed boxes: regions giving the 3 clusters in the inset in **(B).****(E)** Pearson correlation matrix for the activity of all TUs in the string. TU bead number (from low to high) is reported on axes, with pixel colour giving the Pearson value for each bead pair (bar on right). Bottom: reproduction of pattern shown in **(A,ii)**. Boxes: regions giving the 3 clusters in the inset in **(B)**. After running a Brownian-dynamics simulation, Figure 1B shows a typical 3D conformation found in the steady state. Remarkably, clusters of TUs and TFs with distinct colours appear and disappear spontaneously, as seen previously using a single-colour model [9; 11; 12]. Such clustering is driven by the positive feedback illustrated in Figure 1C; it depends critically on TFs being able to form molecular bridges that anchor loops. We now assume that the spheres represent TF:pol complexes, and make the reasonable assumption that a TU bead is transcribed if it lies within 2.25 diameters (\(2.25\sigma\)) of a complex of the same colour; then, the transcriptional activity of each TU is given by the fraction of time that the TU and a TF:pol lie close together. Figure 1D reports the mean activity profile down the string; TUs with the lowest activities are flanked by differently-coloured TUs, while those with the highest activities are flanked by similarly-coloured TUs (dashed rectangles in Figure 1D). As expected, a single-colour model with the same TU placement leads to a flat activity profile (Fig. S1A). Clearly, close proximity in 1D genomic space favours formation of similarly-coloured clusters. We next examine how closely transcriptional activities of different TUs correlate [43]; the Pearson correlation matrix for all TUs is shown in Figure 1E. Correlations between neighbouring TUs of similar colour are often positive and strong, resulting in square red blocks along the diagonal (coloured boxes in Figure 1E highlight the 3 clusters shown in the zoom in Figure 1B). This effect is again due to the self-assembly of clusters containing neighbouring TUs of the same colour. In contrast, neighbours with different colours tend to compete with each other for TF:pols, and so down-regulate each other to yield smaller correlations. Correlations are more trivial in the single-color counterpart of Figure 1, where the matrix yields only a positive-correlation band along the diagonal (Fig. S1B). These results provide simple explanations of two mysterious effects - the first being why adjacent TUs throughout large domains tend to be co-transcribed so frequently [44]. The second concerns how expression quantitative trait loci (eQTLs) work. 
Current models see them doing so post-transcriptionally in highly-convoluted ways [9; 45], but we have argued that any TU can act as an eQTL directly at the transcriptional level [9]. Here, we see individual TUs up-regulating some TUs and down-regulating others - defining features of eQTLs that can lead to genetic effects like "transgressive segregation" [46]. **Local mutations.** These simulations are inspired by editing experiments performed using CRISPR/Cas9 [40]. We choose the most active region in the random string - one containing a succession of yellow TUs - and "mutate" \(1-4\) of these TUs by recolouring them red (Figure 2A). Typical snapshots show red mutants are often ejected from yellow clusters (Figure 2Bi), or cluster together to leave their wild-type neighbours in isolation (Figure 2Bii). These changes are reflected in activity profiles (Figure 2C; arrows indicate mutations). As the number of mutations in the cluster increases, activities of yellow beads in that cluster decrease (Figure 2D), and new red clusters often emerge (Figure 2B,ii; Figure 2Ciii). To confirm that 4 mutations in a yellow cluster often lead to the development of a red cluster, we monitor cluster dynamics over time. Figure 2Ei shows a typical kymograph illustrating changes in activity of all TUs in the wild-type; yellow, red, and green pixels mark activity of respective TUs, and black ones inactivity. In this particular simulation, a yellow cluster in the region that will be mutated (marked by the blue rectangle) is present during the first quarter of the time window; it then disappears to reappear half-way through the window and then persists until the end. In the string with 4 mutations, a yellow cluster is never seen; instead, different red clusters appear and disappear (Figure 2Eii). Pearson correlation matrices provide complementary information: the yellow cluster in the wild-type yields a solid red block indicating strong positive correlations (Figure 2Fi), but this block fragments in the string with 4 mutations (Figure 2Fii); mutations also induce subtle long-distance correlations between TUs. These results confirm that local arrangements of TUs on the genetic map determine the extent to which any particular TU will cluster and so become active. **Variations in TF concentration.** These simulations are inspired by experiments reducing global levels of TFs using auxin-induced degrons [41]; we reduce the concentration of yellow TFs binding to the random string by 30% (Figure 3A). As expected, transcriptional activity falls both globally and locally (see yellow dotted rectangles in Figure 3B and C). Surprisingly, activity of a nearby cluster of red TUs (numbers 1080, 1110, 1170, 1200, and 1530 to 1650) increases by 50% (red dotted rectangles in Figure 3B and C). This effect is specific, in the sense that there is little effect on green clusters (e.g., compare Figure 1D with Figure 3B). We attribute this to a now-reduced steric competition for 3D space by yellow neighbours - fewer yellow clusters are present to stunt growth of nearby red ones. Comparison of correlation matrices (compare Figure 1E and Figure 3D) shows a reduction in the range and strength of positive correlations for all yellow TUs and an opposite effect for red ones. Again, TUs in green clusters are little affected, as they happen to be further away in 1D genomic space from yellow TUs in this particular string.
Overall, these results show there are many statistically-significant correlations in activities both near and far away on the genetic map - much like those seen between singly-coloured TUs considered earlier [9]. **Effects of 1D TU patterns on transcriptional activity.** To better understand local effects uncovered with the random string, we now compare different toy A Random string with mutations Figure 2: **Simulating effects of mutations.** Yellow TU beads 1920, 1950, 1980, 2010, 2040 and 2070 in the random string have the highest transcriptional activity. 1-4 of these beads are now mutated by recolouring them red. **(A)** The sequence of bars reflects the sequence of yellow, red, and green TUs in random strings with 1, 2 and 4 mutations (blue beads not shown). Black boxes highlight mutant locations. **(B)** Typical snapshots of conformations with **(i)** one, and **(ii)** 4 mutations. **(C)** Transcriptional-activity profiles of mutants (averages over 100 runs, each lasting \(8\,10^{5}\tau_{B}\)). Bars are coloured according to TU colour. Black boxes: activities of mutated TUs. **(D)** Activities (+/- SDs) of wild-type (yellow) and different mutants. 3 mutations: TUs 1950, 1980 and 2010 mutated from yellow to red. **(E)** Typical kymographs for **(i)** wild-type and **(ii)** 4-mutant cases. Each row reports the transcriptional state of a TU during one simulation. Black pixels denote inactivity, and others activity; pixels colour reflects TU colour. Blue boxes: region containing mutations. **(F)** Pearson correlation matrices for wild-type and 4-mutant cases. Black boxes: regions containing mutations (mutations also change patterns far from boxes). strings with regular and repeating patterns of colored TUs (Figure 4). Two results are apparent. First, activities (Figure 4Bii) in the 6-pattern case are higher overall (compare horizontal dotted lines), and more variable (compare activities of the two central TUs within each repeat with peripheral ones) relative to the 1-pattern case (Figure 4Bi). This is consistent with positive additive effects acting centrally within each 6-pattern repeat, coupled to competitive negative effects of flanking and differently-coloured repeats at the edges. Second, the 6-pattern also has a Pearson correlation matrix (Figure 4Cii) that is highly-structured, with a checkerboard pattern; red blocks on the diagonal indicate high positive correlations (so the 1D 6-pattern clearly favours 3D clustering). [Such a checkerboard pattern is not seen with a single-color model that has a correlation matrix with one red continuous diagonal when TUs are regularly spaced (Figure \(S1\)).] Additionally, blue off-diagonal blocks indicate repeating negative correlations that reflect the period of the 6-pattern. These results show how strongly TU position in 1D genomic space affect 3D clustering and activity, and that these effects depend on inclusion of more than one colour. **Emergent regulatory networks.** We have seen many positive and negative correlations between activities of TUs in the random string (Figure 1). We now select significant correlations from Pearson correlation matrices (those which are \(>0.2\), Figure 5A) to highlight emergent interaction networks [9]. In such networks, nodes represent each TU from first to last (other beads are not shown), and edges indicate positive (black) or negative (grey) correlations in activities of node pairs. Even for the toy random string, these networks prove to be very complex (Fig. S2A). 
They are also "small-world" (i.e., most nodes can be reached from other ones by a few steps [9; 47]). Given this complexity, we now consider simplified versions. Thus, in Figure 5Ai, only interactions between red TUs are shown (the first red TU is at position 60, the last at position 2910, and interactions between different colours are not depicted). As expected, activities of most red TUs are positively correlated with those of nearby TUs. Conversely, negative correlations connect distant TUs, as found in the single-color model [9]; as we have seen, binding of red TFs to any red cluster reduces the number available to bind elsewhere. In Figure 5Aii, we consider just interactions between red TUs and green TUs. Remarkably, close-range positive correlations (black edges) are still seen between TU Figure 4: **Clustering similar TUs in 1D genomic space increases transcriptional activity.****(A)** Simulations involve toy strings with patterns (dashed boxes) repeated 1 or 6 times. Activity profiles plus Pearson correlation matrices are determined (100 runs, each lasting \(8\;10^{5}\tau_{B}\)). **(B)** The 6-pattern yields a higher mean transcriptional activity (arrow highlights difference between the two means). **(C)** The 6-pattern yields higher positive correlations between TUs within each pattern, and higher negative correlations between each repeat. Figure 3: **Reducing the concentration of yellow TFs reduces the transcriptional activity of most yellow TUs while enhancing the activities of some red TUs. (A)** Overview. Simulations are run using the random string with the concentration of yellow TFs reduced by 30%, and activities determined (means from 100 runs each lasting \(8\;10^{5}\tau_{B}\)). **(B)** Activity profile. Dashed boxes: activities fall in the region containing the biggest cluster of yellow TUs seen with 100% TFs, as those of an adjacent red cluster increase. **(C)** Differences in activity induced by reducing the concentration of yellow TFs. **(D)** Pearson correlation matrix. Boxes: regions giving the 3 clusters in **Fig. 1B, inset**. pairs that no longer bind TUs of the same colour. We suggest this is due to the presence of weakly-binding beads. Specifically, a red cluster organises a surrounding cloud of weakly-binding beads, and these will bind some green TFs that - in turn - bind green TUs. In contrast to the same-colour network in Fig. 5Ai, there are now more long-range positive correlations, showing that the presence of multiple colors enriches the emerging network. To obtain further quantitative insight into these subtle yet remarkable correlations, we compute the average of those between same- and different-colour TUs as a function of genomic separation (Figure 5B). For the random string, same-colour correlations switch from clearly positive to slightly negative at about 300 beads (Figure 5Bi, red curve). Differently-coloured correlations yield a broadly-similar switch, although positive and negative values are weaker (Figure 5Bi, blue curve). The 6-pattern gives qualitatively similar trends, with the magnitude of differently-coloured correlations dampened further (Figure 5Bii). In contrast, the 1-pattern string yields largely overlapping curves (Figure 5Biii). These results illustrate how the sequence of TUs on a string can strikingly affect formation of mixed clusters; they also provide an explanation of why activities of human TUs within genomic regions of hundreds of kbp are positively correlated [48]. 
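The networks in Figure 5 follow directly from thresholding the Pearson matrix of TU activities. A sketch of that construction (our own illustration, using networkx and the 0.2 threshold quoted above):

```python
import numpy as np
import networkx as nx

def correlation_network(pearson, colours, threshold=0.2):
    """Build a TU interaction network from a Pearson correlation matrix.

    pearson:  (n_TU, n_TU) array of pairwise activity correlations.
    colours:  list of TU colours, stored as node attributes.
    Edges are added for |r| > threshold and carry a "sign" attribute
    ("positive" or "negative"), matching the black/grey edges of Figure 5.
    """
    g = nx.Graph()
    for i, c in enumerate(colours):
        g.add_node(i, colour=c)
    n = len(colours)
    for i in range(n):
        for j in range(i + 1, n):
            r = float(pearson[i, j])
            if abs(r) > threshold:
                g.add_edge(i, j, weight=r, sign="positive" if r > 0 else "negative")
    return g
```

Restricting the node set to a single colour, or to a pair of colours, gives the simplified sub-networks shown in Figure 5A.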
To quantify the extent to which TFs of different colours share clusters, we introduce a demixing coefficient, \(\theta_{\rm dem}\) (defined in Equation 1), which can vary between 0 and 1. If \(\theta_{\rm dem}=1\), a cluster contains only TFs of one colour (and so is fully demixed); if \(\theta_{\rm dem}=0\), it contains both red Figure 5: **TU regulatory networks and demixing.** Simulations are run using the toy models indicated, and complete regulatory networks constructed from Pearson correlation matrices. **(A)** Simplified network given by the random string. TUs from first (bead 300) to last (bead 3000) are shown as peripheral nodes (coloured according to TU); black and grey edges denote statistically-significant positive and negative correlations, respectively (above a threshold of 0.2, corresponding to a \(p\)-value \(\sim 5\;10^{-2}\)). The complete network consists of \(n=100\) individual TUs, so that there are \(n_{c}=\binom{100}{2}=4950\) pairs of TUs couples; we find 990 black and 742 gray edges. Since \(p\)-value-\(n_{c}=223\), most interactions (edges) are statistically significant. Networks shown here only correlations (i) between red TUs, and (ii) between red green TUs. (ii) **(B)** Average correlation (shading shows +/-SD, and is usually less than line/spot thickness) as a function of genomic separation for the (i) random, (ii) 6-, and (iii) 1-pattern cases. Correlation values at fixed genomic distance are taken from super-/sub-diagonals of Pearson matrices. Red dots give mean correlation between TUs of the same color (3 possible combinations), and blue dots those between TUs of different colors (4 possible combinations). Cartoons depict contents of typical clusters to give a pictorial representation of mixing degree (as this determines correlation patterns); see SI for exact values of \(\theta_{\rm dem}\). and green TFs in equal numbers (and so is fully mixed). Intuitively, one might expect \(\theta_{\text{dem}}\) to fall as the number of adjacent TUs of similar colour in a string fall; this is what is seen with the 6- and 1-patterns - strings with the most and least numbers of adjacent TUs of similar colour, respectively (Fig. S2B; shown schematically by the cluster cartoons in Figure 5B). Our results then show that in cases where same- and different-colour correlations overlap (as in the 1-pattern string), clusters are more mixed (have a larger value of \(\theta_{\text{dem}}\)). Instead, in cases where same- and different-color correlations diverge, or are more different (as in the 6-pattern string), then clusters are typically unmixed, and so have a larger value of \(\theta_{\text{dem}}\) (Fig. S2B). Mixing is facilitated by the presence of weakly-binding beads, as replacing them with non-interacting ones increases demixing and reduces long-range negative correlations (Fig. S6). Therefore, the sequence of strong and weak binding sites along strings determines the degree of mixing, and the types of small-world network that emerge. If eQTLs also act transcriptionally in the way we suggest [9], we predict that down-regulating eQTLs will lie further away from their targets than up-regulating ones. More generally, we suggest that the presence of multiple TF colours provides a powerful pathway to enrich and modulate transcriptional regulation. ### Simulating whole human chromosomes **Transcriptional activity and comparison with GRO-seq data.** We now simulate human chromosome 14 (HSA 14) in HUVECs, with individual beads in the string coloured appropriately (Figure 6A). 
Thus, Figure 6: **Comparison of transcriptional activities of TUs on different human chromosomes determined from simulations and GRO-seq.****(A)** Overview of panels (A-C). The 35784 beads on a string representing HSA14 in HUVECs are of 4 types: TUs active only in HUVECs (red), “house-keeping” TUs – ones active in both HUVECs and ESCs (green), “euchromatic” ones (blue), and “heterochromatic” ones (grey). Red and green TFs bind strongly to TUs of the same colour, and weakly to euchromatin; neither binds to heterochromatin. **(B)** Snapshot of a typical conformation, showing both specialized and mixed clusters. **(C)** TU activities seen in simulations and GRO-seq are ranked from high to low, binned into quintiles, and activities compared. **(D)** Spearman’s rank correlation coefficients for the comparison between activity data obtained from analogous simulations and GRO-seq for the chromosomes and cell types indicated. TUs transcribed uniquely in HUVECs are coloured red, housekeeping TUs (i.e., ones also expressed in a stem cell, namely H1-hESCs) are green, euchromatic regions blue, and heterochromatic ones grey. Figure 6B shows a typical snapshot; red and green clusters again form spontaneously. We next determine transcriptional activities, rank them in order from high to low, and compare binned rank orders with those obtained experimentally by GRO-seq (Figure 6C); most counts lie along the diagonal, meaning there is a good agreement between the two data sets. More quantitatively, Spearman's rank correlation coefficient is 3.66 \(10^{-1}\), which compares with 3.24 \(10^{-1}\) obtained previously using a single-colour model [9]. In both cases the estimated uncertainty is of order \(10^{-3}\) (mean and SD obtained using the bootstrap technique over 100 trials); consequently, use of an additional color provides a statistically-significant improvement (\(p\)-value \(<10^{-6}\), 2-sided t-test). Activity predictions are also improved compared to the one-colour model with HSA 18 and HSA 19 in HUVECs, plus HSA 14 in GM12878 (Fig. 6D). However, Spearman's rank coefficient for gene-poor HSA 18 is about twice that for gene-rich HSA 19; this may be due to additional regulatory layers in regions with high promoter density. These results confirm that our multicolour polymer model generates strings that can mimic structures and functions found in whole chromosomes. **Specialized and mixed clusters.** Inspection of snapshots shows 1-colour clusters tend to be smaller than mixed (2-colour) ones (Figure 7A). To quantify this, we count numbers and types of TFs in individual clusters (Figs. 7B and Fig. S6). Clusters with just two bound TFs never contain both colours; conversely, those with \(>20\) bound TFs never contain just one colour (Figure 7B). We also measure the average value of the demixing coefficient, \(\theta_{\text{dem}}\) (Materials and Methods). The cross-over point between fully mixed and demixed (where the average value of \(\theta_{\text{dem}}=0.5\)) occurs when there are \(\sim 10\) TFs per cluster (Figure 7C): notably, this is similar to the average number of productively-elongating pols seen experimentally in a transcription factory [4]. Similar results are obtained for different cell types, or chromosomes (see, e.g., Fig. S4 for the case of HSA 14 in GM12878). The transition between specialized (demixed) and mixed factories with increasing size can be explained as follows. 
Two red TFs in an unmixed cluster might stabilise 3 loops, and so bring into close proximity only a few non-specific binding sites that could bind a green TF. In contrast, 10 red TFs in a cluster will stabilise many loops that inevitably bring into close proximity many non-specific binding sites - and this makes it highly likely that some green TFs will also bind nearby to create a mixed cluster. This mixing transition provides a way to reconcile observations that some clusters are unmixed (like factories rich in polymerases II and III), and others highly mixed (like HOTs). Finally, as for the toy model, the balance between mixing and demixing determines correlation patterns. Figure 7: **Small clusters tend to be unmixed, large ones mixed.** After running one simulation for HSA 14 in HUVECs, clusters are identified. **(A)** Snapshot of a typical final conformation (TUs, non-binding beads, and TFs in off state not shown). Insets: a large mixed cluster and a small demixed one. **(B)** Example clusters with different numbers of TFs/cluster (2, 10, 20, 30, 40) chosen to represent the range seen from all-red to all-green (with 3 intervening bins). Black numbers: observed number of clusters of that type seen in the simulation. **(C)** Average of the demixing coefficient \(\theta_{\text{dem}}\) (error bars: SD). Values of 1 and 0 are completely demixed and completely mixed respectively. Grey area: demixed regime where \(\theta_{\text{dem}}\) is \(>0.5\). For example, activity patterns of same- and differently-colored TUs in the whole chromosome (Fig. S7) are much like those in the 1-pattern model (Figure 5Biii). We attribute this to \(\sim 78\%\) of TFs being in mixed clusters (\(\theta_{\text{dem}}<0.5\)), so the resulting interactions inevitably dominate the pattern seen. ## Discussion and Conclusions We use coarse-grained simulations of long polymers representing human chromosomes to study inter-relations between 3D structure and transcriptional dynamics. Unlike previous work [9; 12], we adopt a multi-colour model where different "colours" of TF:pol complex bind to different genic and non-genic TUs. We confirm that TF:pols spontaneously self-assemble into clusters (Figure 1), and obtain various striking results. First, when small, these clusters typically contain TFs of just one colour; these are reminiscent of the specialized transcription factories found in the nucleoplasm that contain active forms of just pol II or pol III - but not both [49]. When large, they are typically mixed (Figure 7C); this provides a mechanistic basis for the formation of HOTs, where many different TFs bind promiscuously and weakly to segments of open chromatin that are often devoid of cognate binding sites [22; 23; 24]. Consequently, this simple mechanism provides an appealing resolution of the puzzling observation that specialized factories and HOTs can occur in the same cell. Second, we see remarkable positive and negative correlations in the transcriptional activities of different TUs. For example, activities of same-colour and nearby TUs tend to be strongly positively correlated, as such TUs tend to co-cluster (Figure 5). Conversely, activities of similar TUs lying far from each other on the genetic map are often weakly negatively correlated, as the formation of one cluster inevitably sequesters some TFs to reduce the number available to bind elsewhere.
Taken together, these results provide simple explanations of why adjacent TUs throughout large domains tend to be co-transcribed so frequently [48], and how one eQTL might up-regulate some TUs and down-regulate others - and so lead to "transressive segregation" [46]. More generally, they suggest that transcriptional correlations can be modulated by varying TU position and by tilting the balance between weakly-binding and non-binding sites in fibres. Third, we can predict effects of local mutations and genome edits that often induce distant omnigenic and pangenomic effects uncovered by genome-wide association studies ([9; 45]. For example, mutations that switch a binding site of one TF to another can convert a cluster of one colour into another (Figure 2). Similarly, global effects of knocking down TF levels are easily assessed (Figure 3). Fourth, we also predict transcriptional activities of all TUs (both genic and non-genic) on whole human chromosomes by including cell-type-invariant and cell-type-specific TFs (Figure 6). We find this yields a better correlation with GRO-seq experimental data than a single-colour model (where just one TF binds to all TUs similarly). This result underscores the importance of including different TFs in polymer models. Finally, all our results point to the importance of the 1D pattern of TUs and TF-binding sites on chromosomes in determining activity. In other words, 1D location is a key feature determining transcriptional patterns, and so cell identity. We speculate this is why relative locations of active regulatory elements are so highly conserved. For instance, despite human enhancers evolving much more rapidly than their target protein-coding genes, the synteny between the two (over distances up to 2 Mbp) is highly conserved [50; 51]. In the future, it would be of interest to include many more types of TF and TU both within this and other frameworks like HiP-HoP [30] and ones incorporating epigenetic modifications [13; 52]. From a theoretical point of view, we hope our results will stimulate work aimed at understanding inter-relations between structure and function. ## Acknowledgements The work has been performed within the HPC-EUROPA3 Project (INFRAIA-2016-1-730897), with the support of the EC Research Innovation Action under the H2020 Programme. We acknowledge funding from MIUR Project No. PRIN 2020/PFCXPE, and from the Wellcome Trust (223097/Z/21/Z). ## Data Availability All the experimental data used in this paper are available in the ENCODE database [42]. Simulations are performed using the open source software LAMMPS. All the custom scripts used for the simulations presented here will be shared on reasonable request to the corresponding author. ### Conflict of interest statement. None declared.
2304.03420
Toward Unsupervised 3D Point Cloud Anomaly Detection using Variational Autoencoder
In this paper, we present an end-to-end unsupervised anomaly detection framework for 3D point clouds. To the best of our knowledge, this is the first work to tackle the anomaly detection task on a general object represented by a 3D point cloud. We propose a deep variational autoencoder-based unsupervised anomaly detection network adapted to the 3D point cloud and an anomaly score specifically for 3D point clouds. To verify the effectiveness of the model, we conducted extensive experiments on the ShapeNet dataset. Through quantitative and qualitative evaluation, we demonstrate that the proposed method outperforms the baseline method. Our code is available at https://github.com/llien30/point_cloud_anomaly_detection.
Mana Masuda, Ryo Hachiuma, Ryo Fujii, Hideo Saito, Yusuke Sekikawa
2023-04-07T00:02:37Z
http://arxiv.org/abs/2304.03420v1
# Toward Unsupervised 3D Point Cloud Anomaly Detection ###### Abstract In this paper, we present an end-to-end unsupervised anomaly detection framework for 3D point clouds. To the best of our knowledge, this is the first work to tackle the anomaly detection task on a general object represented by a 3D point cloud. We propose a deep variational autoencoder-based unsupervised anomaly detection network adapted to the 3D point cloud and an anomaly score specifically for 3D point clouds. To verify the effectiveness of the model, we conducted extensive experiments on the ShapeNet dataset. Through quantitative and qualitative evaluation, we demonstrate that the proposed method outperforms the baseline method. Our code is available at [https://github.com/llien30/point_cloud_anomaly_detection](https://github.com/llien30/point_cloud_anomaly_detection). Mana Masuda\({}^{\star}\) Ryo Hachiuma\({}^{\star}\) Ryo Fujii\({}^{\star}\) Hideo Saito\({}^{\star}\) Yusuke Sekikawa\({}^{\dagger}\)\({}^{\star}\) Keio University, Japan \({}^{\dagger}\)Denso IT Laboratory, Japan 3D point cloud, anomaly detection, unsupervised learning, variational autoencoder ## 1 Introduction Anomaly detection is the task of recognizing whether an input sample is within the distribution of a given target normal class or an anomaly class. Anomaly detection is a fundamental task in various fields, such as detecting malicious actions, system failures, intentional fraud, and diseases. Many deep learning-based methods have been proposed [1], with a wide range of input data, including sound [2], big data [3], signal data [4], natural language [5], image [6], and video [7]. Thanks to the development of 3D sensing devices, such as LiDAR, stereo cameras, and structured light sensors, 3D point clouds are ubiquitous today. As a result, there has been growing interest in developing algorithms for performing classification [8], segmentation [8], and object detection [9]. Unlike images, 3D data can be represented in various ways, such as 3D volumes, meshes, and point clouds (set of points). Based on PointNet [10], many methods for processing point clouds have been proposed that handle the permutation invariance of the input data and memory efficiency. Sekuboyina _et al._[11] proposed an anomaly detection method for 3D point clouds for analyzing vertebral shapes, but this method is not suitable for detecting the anomaly of general objects, as the network can reconstruct only a fixed number of point data. In this paper, we tackle the task of detecting anomalies for the 3D point clouds of a general object in an unsupervised manner. A formal definition of an unsupervised anomaly detection task is as follows: given a dataset \(\mathcal{D}\) that contains a large number of normal data \(X\) for training, and several abnormal data \(\hat{X}\) for testing, model \(f\) is optimized over its parameter \(\theta\) using training data \(X\). \(f\) learns normal distribution \(p_{x}\) during training and identifies abnormal data as outliers during testing by outputting an anomaly score \(\mathcal{A}(x)\), where \(x\) is a given test sample. A larger \(\mathcal{A}(x)\) indicates possible abnormalities within the test sample because \(f\) learns to minimize the output score during training. In this paper, we present a variational autoencoder (VAE)-based anomaly detection method for 3D point clouds. 
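To make the task definition above concrete, the following is a minimal, model-agnostic sketch of the evaluation protocol it implies: fit a model on normal samples only, assign an anomaly score \(\mathcal{A}(x)\) to every test sample, and evaluate with the area under the ROC curve. The function and variable names (`train_model`, `anomaly_score`, and the data arrays) are hypothetical placeholders, not the authors' code.

```python
# Hedged sketch of the unsupervised anomaly-detection protocol described above.
# `train_model` and `anomaly_score` stand in for the VAE and its score A(x).
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_anomaly_detector(train_model, anomaly_score,
                              normal_train, test_samples, test_labels):
    # 1. Optimise the model on normal data only, so it learns the normal distribution p_x.
    model = train_model(normal_train)
    # 2. Score every test sample; larger scores should indicate likely anomalies.
    scores = np.array([anomaly_score(model, x) for x in test_samples])
    # 3. Threshold-free evaluation: area under the ROC curve (labels: 1 = anomaly, 0 = normal).
    return roc_auc_score(test_labels, scores)
```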
Considering the characteristics of the anomaly detection task, we hypothesized that the anomaly detection task in 3D point clouds should also be solved with a reconstruction-based method, referring to anomaly detection methods for images [6, 12, 13, 14]. Although many reconstruction methods have been proposed for 3D point clouds [10, 15, 16], they are not adapted to the anomaly detection task. In addition, we have conducted many experiments to evaluate which training loss and which method for measuring the anomaly score \(A(x)\) are appropriate for solving the task of anomaly detection in 3D point clouds. To validate the proposed method, we performed a category-out experiment on the ShapeNet dataset [17], referring to the experiments of anomaly detection methods on images [12, 13]. The contributions of the paper are as follows: * As far as we know, this paper is the first to tackle the anomaly detection task for 3D point clouds of general objects. We present an anomaly detection framework based on a variational autoencoder. We also present the loss function and the anomaly score calculation for 3D point clouds. * We conducted extensive experiments to validate the proposed network, loss function, and anomaly score. The results verified that the proposed method achieves high accuracy of more than 76% on average when the area under the curve (AUC) of the receiver operating characteristic (ROC) is used as an evaluation metric. ## 2 Methods An overview of the proposed method is shown in Fig. 1. Inspired by conventional anomaly detection methods for images [6, 12, 13, 14], we propose a reconstruction-based anomaly detection method for 3D point clouds. At training time, given normal point clouds as inputs, the model extracts feature distributions and tries to reconstruct the inputs. Point clouds with large reconstruction errors are then treated as anomalies at test time. This method is also inspired by the feature learning method (FoldingNet) [15]. ### Model Overview We propose a VAE model suitable for anomaly detection of 3D point clouds. For the encoder, we introduce skip connections and a graph max-pooling layer which estimates local features based on the graph structure [18]. For the decoder, we use the FoldingNet [15] decoder, but we adopt a spherical shape as the grid instead of a plane. The input for the encoder is an \(n\)-by-\(3\) matrix. Each row of the matrix is composed of the 3D position \((x,y,z)\). The encoder concatenates the local covariance matrix proposed by Yang _et al_. [15] to the input before feeding it to the convolution layers. The output is also an \(n\)-by-\(3\) matrix representing the reconstructed point positions. The encoder computes the mean \(\mu\) and variance \(\sigma\) from each input point cloud, and the decoder reconstructs the point cloud using the vector \(z\) sampled from its mean \(\mu\) and variance \(\sigma\). The length of the mean \(\mu\) and variance \(\sigma\) vectors is set to \(512\) in accordance with Achlioptas _et al_. [19]. ### Model Training #### 2.2.1 Reconstruction loss Two permutation-invariant metrics for comparing unordered point sets have been proposed [20].
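Before turning to the reconstruction metrics, the encode-sample-fold structure just described can be sketched as follows. This is a deliberately simplified, hypothetical stand-in rather than the authors' exact network: the paper's encoder uses a graph max-pooling layer, skip connections and local covariance features, and its decoder folds a spherical grid as in FoldingNet, whereas here a plain PointNet-style encoder and a two-stage MLP folding decoder illustrate the same idea, keeping the \(n\)-by-\(3\) input and the latent size of 512 quoted in the text. The class name and layer widths are illustrative assumptions.

```python
# Hedged, simplified sketch of a point-cloud VAE (not the authors' architecture).
import torch
import torch.nn as nn

class PointCloudVAE(nn.Module):
    def __init__(self, latent_dim=512, n_out=2048):
        super().__init__()
        self.n_out = n_out
        # Shared per-point MLP followed by a symmetric (max) pooling: permutation invariant.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1),
        )
        self.fc_mu = nn.Linear(1024, latent_dim)
        self.fc_logvar = nn.Linear(1024, latent_dim)
        # "Folding" decoder: concatenate the latent code with points on a unit sphere
        # and map them to 3D through shared MLPs, applied twice (two folding passes).
        self.fold1 = nn.Sequential(
            nn.Conv1d(latent_dim + 3, 512, 1), nn.ReLU(),
            nn.Conv1d(512, 512, 1), nn.ReLU(),
            nn.Conv1d(512, 3, 1),
        )
        self.fold2 = nn.Sequential(
            nn.Conv1d(latent_dim + 3, 512, 1), nn.ReLU(),
            nn.Conv1d(512, 512, 1), nn.ReLU(),
            nn.Conv1d(512, 3, 1),
        )

    def encode(self, x):                        # x: (B, N, 3)
        f = self.point_mlp(x.transpose(1, 2))   # per-point features, (B, 1024, N)
        f = torch.max(f, dim=2).values          # global feature by symmetric pooling
        return self.fc_mu(f), self.fc_logvar(f)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z):                        # z: (B, latent_dim)
        grid = torch.randn(z.size(0), self.n_out, 3, device=z.device)
        grid = grid / (grid.norm(dim=2, keepdim=True) + 1e-9)    # points on a unit sphere
        zrep = z.unsqueeze(2).expand(-1, -1, self.n_out)         # (B, latent_dim, n_out)
        y = self.fold1(torch.cat([zrep, grid.transpose(1, 2)], dim=1))  # first folding
        y = self.fold2(torch.cat([zrep, y], dim=1))                     # second folding
        return y.transpose(1, 2)                # reconstructed cloud, (B, n_out, 3)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar
```

A batch of shape (B, N, 3) passed through `forward` returns a reconstruction of shape (B, n_out, 3) together with \(\mu\) and \(\log\sigma^{2}\).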
For the reconstruction error between the original point cloud \(S\) and the reconstructed point cloud \(\hat{S}\), the earth mover's distance (EMD) [21], \[d_{EMD}(S,\hat{S})=\min_{\phi:S\to\hat{S}}\sum_{x\in S}||x-\phi(x)||_{2}, \tag{1}\] where \(\phi:S\to\hat{S}\) is a bijection, and the Chamfer distance (CD), \[d_{CD}(S,\hat{S})=\frac{1}{|S|}\sum_{x\in S}\min_{\hat{x}\in\hat{S}}||x-\hat{x}||_{2}+\frac{1}{|\hat{S}|}\sum_{\hat{x}\in\hat{S}}\min_{x\in S}||\hat{x}-x||_{2}, \tag{2}\] can be considered. Following FoldingNet [15], we employ the CD for the reconstruction error \(\mathcal{L}_{rec}\), because training with the CD is faster in terms of convergence, and the CD is less computationally expensive than the EMD. #### 2.2.2 KL divergence Following the traditional formulation of the VAE [22], we adopt the KL divergence as a loss. We penalise the KL divergence between the Gaussian distribution \(\mathcal{N}(\mu,\sigma)\) computed from the original point cloud \(S\) and the standard Gaussian \(\mathcal{N}(0,1)\). We define this KL divergence as \(D_{KLori}\): \[D_{KLori}=D_{KL}(\mathcal{N}(\mu,\sigma^{2})||\mathcal{N}(0,1)). \tag{3}\] We also adopt a second KL divergence between the Gaussian distribution \(\mathcal{N}(0,1)\) and \(\mathcal{N}(\hat{\mu},\hat{\sigma})\), where \(\hat{\mu}\) and \(\hat{\sigma}\) are obtained by feeding the reconstructed point cloud \(\hat{S}\) into the network. We define this KL divergence as \(D_{KLrec}\): Figure 1: Overview of the proposed method. We adopted a FoldingNet-based decoder [15] and introduced a skip connection in the encoder, which allows compressed features to include global and local features. \[D_{KLrec}=D_{KL}(\mathcal{N}(\hat{\mu},\hat{\sigma}^{2})||\mathcal{N}(0,1)). \tag{4}\] Overall, the objective function becomes the following: \[\mathcal{L}=\mathcal{L}_{rec}+D_{KLori}+D_{KLrec}. \tag{5}\] ### Anomaly Detection To measure whether a sample is anomalous or not, we adapt the anomaly score proposed in [6]. We choose the Chamfer distance as the anomaly score \(\mathcal{A}(S)\) for 3D point clouds. ## 3 Experiment and Results To evaluate the anomaly detection framework, we used the ShapeNet dataset [17]. We used only the seven classes of the dataset (airplane, car, chair, lamp, rifle, table, sofa) that included more than \(2000\) samples. All data were pre-processed by sampling \(2048\) points randomly. During training, we set the number of points in the output layer to \(2048\) (\(m=2048\)). To validate the proposed method, we performed a category-out experiment referring to image-based anomaly detection methods [12, 13]. To verify the anomaly detection method for 3D point clouds, we conducted two different experiments: the first was a comparison of the models, and the second was a comparison of the anomaly scores. The Mitsuba2 renderer [23] was used to visualize the dataset and its reconstruction results. We implemented the code with reference to this repository1. Footnote 1: [https://github.com/AnTao97/UnsupervisedPointCloudReconstruction](https://github.com/AnTao97/UnsupervisedPointCloudReconstruction) ### Quantitative Evaluation We compared the anomaly detection results of the proposed model with those of FoldingNet and summarized the results in Table 1. Following the experimental setup in [12, 13], we measured the average AUC by computing the area under the ROC with varying threshold values for the anomaly scores. We report the AUC performance of two of the models, with and without \(D_{KLrec}\) in the loss.
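Returning briefly to the objective of Section 2.2, the losses of equations (1)-(5) and the Chamfer-distance anomaly score can be summarised in a short sketch. It is a hedged illustration reusing the hypothetical `PointCloudVAE` above; the brute-force pairwise distance is adequate for \(N\simeq 2048\) points but is not an optimised implementation.

```python
# Hedged sketch of the training objective (equations 1-5) and the anomaly score A(S).
import torch

def chamfer_distance(s, s_hat):
    """Symmetric Chamfer distance (equation 2) between point sets (B, N, 3) and (B, M, 3)."""
    d = torch.cdist(s, s_hat)                                   # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ) for a diagonal Gaussian (equations 3-4)."""
    return 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0, dim=1)

def total_loss(model, x):
    """Equation (5): L = L_rec + D_KLori + D_KLrec, where the reconstruction is re-encoded."""
    x_rec, mu, logvar = model(x)
    mu_rec, logvar_rec = model.encode(x_rec)
    loss = (chamfer_distance(x, x_rec)
            + kl_to_standard_normal(mu, logvar)
            + kl_to_standard_normal(mu_rec, logvar_rec))
    return loss.mean()

def anomaly_score(model, x):
    """A(S): the Chamfer distance between a test cloud and its reconstruction."""
    with torch.no_grad():
        x_rec, _, _ = model(x)
        return chamfer_distance(x, x_rec)
```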
From the table, we verify two things: (1) The model without \(D_{KLrec}\) in the loss shows a better result than the FoldingNet [15], achieving an average AUC of \(75.1\)%. This demonstrates the effectiveness of the proposed VAE network for anomaly detection. (2) The model with \(D_{KLrec}\) in the loss shows the best result among the three models, achieving an average AUC of 76.3%. Although we did not use this value as the anomaly score, the anomaly detection accuracy was improved when \(D_{KLrec}\) was used as the loss. This indicates that \(D_{KLrec}\) is a very effective loss for the reconstruction of 3D point clouds for anomaly detection. ### Qualitative Evaluation In Fig. 2, we show the qualitative results of the proposed model when the chair is the anomaly class. Fig. 2 shows an example in which normal and abnormal samples are correctly and incorrectly classified, respectively, based on the threshold value calculated so that the best accuracy would be achieved. The top row shows the correctly classified samples, and the bottom row shows the misclassified samples. The left side of each sample is the original point cloud and the right is the reconstructed point cloud. From the qualitative results, it can be confirmed that for normal data, data that are correctly reconstructed are correctly classified as normal, while data that are poorly reconstructed are incorrectly classified as abnormal. For abnormal data, data with relatively good reconstructions were misclassified as normal data, and data with poor reconstructions were correctly classified as abnormal data. ### Ablation Study of the Anomaly Score Because this is the first work that tackles the anomaly detection task on a general object represented by a point cloud, an ablation study of various scores was conducted. There are mainly two types of anomalies in the point cloud: reconstruction errors and feature differences. For the reconstruction error, we considered two sorts of errors: the EMD and the CD. For the feature difference, we considered two types of feature difference: the KL divergence between the unit Gaussian and the Gaussian predicted from the input point clouds and the sampled feature from Gaussian distribution. We considered that reconstruction errors and feature differences are important, and examined the five anomaly scores in addition to the proposed anomaly score. We report the AUC performance for the variants of the anomaly scores of the proposed model in Table 2. As there is a numerical difference between the reconstruction error and the difference in the feature values, when we added the two types of anomaly scores, we scaled each anomaly score to one at the maximum and zero at the \begin{table} \begin{tabular}{l|c|c|c} \hline & FoldingNet [15] & w/o \(D_{KLrec}\) & w/ \(D_{KLrec}\)(ours) \\ \hline airplane & 72.9 & **77.0** & 74.7 \\ car & **83.0** & 72.4 & 75.7 \\ chair & 88.9 & 89.5 & **93.1** \\ lamp & 84.2 & 90.1 & **90.7** \\ table & 77.7 & **87.1** & 83.9 \\ rifle & 25.7 & 33.0 & **38.2** \\ sofa & 73.6 & 76.5 & **77.7** \\ \hline average & 72.3 & 75.1 & **76.3** \\ \hline \end{tabular} \end{table} Table 1: Quantitative comparison of an anomaly detection result with FoldingNet [15]. We measured the AUC (%) on the ShapeNet dataset [17]. Numbers in bold indicate the best performance, and underscored numbers are the second best. We set the number of output points to \(4096\). 
minimum: \[N_{scale}(X)=\frac{X-x_{min}}{x_{max}-x_{min}}, \tag{6}\] where \(x_{min}\) and \(x_{max}\) are the minimum and maximum anomaly scores in \(X\), and then added them for the comparison. Fig. 3 shows the ROC curves and their variance for the six anomaly scores with lamps as the anomaly class. When we used only the CD as the anomaly score, not only was the AUC score the best but also the variance was the smallest, indicating that the accuracy was stable and the best. ### Ablation Study of the Number of Points Table 3 shows a comparison of the accuracy according to the number of reconstructed points. The number of points with the highest accuracy depends on the shape, but on average, the highest accuracy was \(76.3\)% at \(4096\) points. From the table, we can confirm that the effective number of reconstructed points differed among the objects' categories. ## 4 Conclusions We presented a novel unsupervised method for 3D point cloud anomaly detection. We evaluated a deep variational autoencoder network and a loss function and showed that both are suitable for 3D point cloud anomaly detection. We also com \begin{table} \begin{tabular}{l c c c c c c c|c} \hline & airplane & car & chair & lamp & table & rifle & sofa & average \\ \hline \(||(\mu+\epsilon\times\sigma)-(\tilde{\mu}+\tilde{\epsilon}\times\tilde{\sigma})||_ {2}\) & 50.5 & 44.7 & 30.3 & 53.4 & 45.7 & 37.2 & 47.0 & 44.1 \\ \hline \(D_{KL}(\mathcal{N}(\mu,\sigma)||\mathcal{N}(0,1))\) & 36.9 & 70.2 & 94.7 & 57.6 & 89.0 & 34.4 & 52.1 & 62.1 \\ \hline \(N_{scale}(d_{EMD}(S,\tilde{S}))+N_{scale}(D_{KL}(\mathcal{N}(\mu,\sigma))|| \mathcal{N}(0,1))\) & 57.5 & 59.3 & 91.5 & 76.1 & 88.8 & 37.3 & 57.2 & 66.8 \\ \hline \(d_{EMD}(S,\tilde{S})\) & 65.4 & 49.3 & 78.4 & 86.4 & 82.1 & **44.4** & 62.2 & 66.9 \\ \hline \(N_{scale}(d_{CD}(S,\tilde{S}))+N_{scale}(D_{KL}(\mathcal{N}(\mu,\sigma))|| \mathcal{N}(0,1))\) & 56.9 & 73.7 & **96.3** & 73.7 & **90.3** & 29.9 & 69.4 & 70.0 \\ \hline \(d_{CD}(S,\tilde{S})\) & **71.6** & **75.2** & 91.8 & **90.3** & 83.4 & 28.6 & **77.8** & **74.1** \\ \hline \end{tabular} \end{table} Table 2: Anomaly score ablation study. When we added two anomalies together, we normalized each anomaly score to a magnitude between zero and one. As in Table 1, we measured the AUC (%) on ShapeNet dataset [17] for each anomaly score. Numbers in bold indicate the best performance. The number of output points is fixed to \(2048\) for comparison with the EMD. Figure 3: Comparison of ROC curves for the average of \(50\) random seeds of the six anomaly scores with lamps as the anomaly class. Shaded areas in the plot represent the variance. \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline & 1024 & 2048 & 3072 & 4096 & 5120 \\ \hline airplane & 58.7 & 71.6 & 72.0 & 74.7 & **75.5** \\ car & 70.5 & 75.2 & 75.4 & **75.7** & 75.4 \\ chair & 87.8 & 91.8 & 92.6 & **93.1** & **93.1** \\ lamp & 87.2 & 90.3 & 90.6 & 90.7 & **90.8** \\ table & 80.2 & 83.4 & 83.2 & **83.9** & 83.4 \\ rifle & 17.8 & 28.6 & 31.7 & **38.2** & 37.4 \\ sofa & 75.1 & 77.8 & 77.8 & 77.7 & **78.1** \\ \hline average & 68.2 & 74.1 & 74.8 & **76.3** & 76.2 \\ \hline \end{tabular} \end{table} Table 3: Comparison of the number of output points. We measured the accuracy with AUC (%). Numbers in bold indicate the best performance. Figure 2: Qualitative results when the chair is the anomaly class. The top row shows the correctly classified samples, and the bottom row shows the incorrectly classified samples. 
pared various anomaly score functions and their combinations for anomaly detection in 3D point clouds and proposed the optimal anomaly score for 3D point clouds. In the future, we will tackle the practical application of anomaly detection of 3D point clouds. For example, the 3D point cloud anomaly detection task is useful for finding defects in industrial products. (Figure 2, continued: The left side of each sample is the original point cloud and the right side is the reconstructed point cloud. The rifles had small reconstruction errors, and the proposed model did not misclassify them as abnormal.)
2305.03709
Velocity field and cavity dynamics in drop impact experiments
Drop impact experiments allow the modelling of a wide variety of natural processes, from raindrop impacts to planetary impact craters. In particular, interpreting the consequences of planetary impacts requires an accurate description of the flow associated with the cratering process. In our experiments, we release a liquid drop above a deep liquid pool to investigate simultaneously the dynamics of the cavity and the velocity field produced around the air-liquid interface. Using particle image velocimetry, we analyse quantitatively the velocity field using a shifted Legendre polynomial decomposition. We show that the velocity field is more complex than considered in previous models, in relation to the non-hemispherical shape of the crater. In particular, the velocity field is dominated by degrees 0 and 1, with contributions from degree 2, and is independent of the Froude and the Weber numbers when these numbers are large enough. We then derive a semi-analytical model based on the Legendre polynomial expansion of an unsteady Bernoulli equation coupled with a kinematic boundary condition at the crater boundary. This model explains the experimental observations and can predict the time evolution of both the velocity field and the shape of the crater, including the initiation of the central jet.
Victor Lherm, Renaud Deguen
2023-05-05T17:43:29Z
http://arxiv.org/abs/2305.03709v1
# Velocity field and cavity dynamics in drop impact experiments ###### Abstract Drop impact experiments allow the modelling of a wide variety of natural processes, from raindrop impacts to planetary impact craters. In particular, interpreting the consequences of planetary impacts requires an accurate description of the flow associated with the cratering process. In our experiments, we release a liquid drop above a deep liquid pool to investigate simultaneously the dynamics of the cavity and the velocity field produced around the air-liquid interface. Using particle image velocimetry, we analyse quantitatively the velocity field using a shifted Legendre polynomial decomposition. We show that the velocity field is more complex than considered in previous models, in relation to the non-hemispherical shape of the crater. In particular, the velocity field is dominated by degrees 0 and 1, with contributions from degree 2, and is independent of the Froude and the Weber numbers when these numbers are large enough. We then derive a semi-analytical model based on the Legendre polynomial expansion of an unsteady Bernoulli equation coupled with a kinematic boundary condition at the crater boundary. This model explains the experimental observations and can predict the time evolution of both the velocity field and the shape of the crater, including the initiation of the central jet. ## 1 Introduction When a raindrop splashes on the surface of a pond, it takes less than the blink of an eye for a crater to form beneath the surface, throwing a fluid crown into the air, and for it to collapse, propelling upwards a fluid jet. These are the key features of the splashing regime, which occurs within a specific range of drop radius, impact velocity, impact angle, and physical properties of the fluids such as surface tension, density and viscosity (Rein 1993). Worthington (1908) was the first to report these features using pioneering high-speed photography methods. The splashing regime was then extensively investigated, regarding, in particular, the time evolution of the transient crater following the impact (_e.g._ Engel 1967; Morton _et al._ 2000; Bisighini _et al._ 2010), and the scaling of the maximum crater radius (_e.g._ Macklin & Metaxas 1976; Engel 1966; Lherm _et al._ 2022). The formation, evolution and fragmentation of the fluid crown (_e.g._ Allen 1975; Krechetnikov & Homsy 2009; Zhang _et al._ 2010; Agbaglah _et al._ 2013) and of the central jet (_e.g._ Fedorchenko & Wang 2004; Ray _et al._ 2015; van Rijn _et al._ 2021) have also been examined. The drop impact processes cover a wide variety of applications. This includes engineering applications such as the water entry of projectiles (Clanet _et al._ 2004) or spray painting (Hines 1966). This also includes Earth sciences applications such as the production of oily marine aerosol by raindrops (Murphy _et al._ 2015), spray generation from raindrop impacts on seawater and soil (Zhou _et al._ 2020), and planetary impact craters (Melosh 1989; Landeau _et al._ 2021; Lherm _et al._ 2021, 2022). Planetary impacts occur on terrestrial planets from the early stages of accretion to modern meteorite impacts. 
During planetary formation, thermal and chemical partitioning between the core and the mantle is influenced by the physical mechanisms of segregation between the metal of the impactors' core and the silicates of the growing planet (Stevenson 1990; Rubie _et al._ 2015; Lherm & Deguen 2018), with major implications on the chemical, thermal and magnetic evolution of the planet (Fischer _et al._ 2015; Badro _et al._ 2018; Olson _et al._ 2022). In particular, the cratering process is responsible for the initial dispersion and mixing of the impactors' core (Landeau _et al._ 2021; Lherm _et al._ 2022). In planetary science, impact craters are also a tool to sample the shallow interior of planets and satellites by combining observations of planetary surfaces with excavation and ejecta deposition models (Maxwell 1977; Barnhart & Nimmo 2011; Kurosawa & Takada 2019). Therefore, understanding the implications of these planetary impacts requires to model the velocity field produced during the formation of the crater. In the splashing regime, the fate of the crater, the fluid crown and the central jet is directly related to the velocity field produced around the crater boundary. The dynamics of the crater is indeed closely related to the velocity field in the ambient fluid, in particular regarding the evolution of the shape of the cavity. The formation of the fluid crown is also related to the ambient velocity field through the mass flux distribution across the initial water surface. Finally, the production of the central jet is associated with a convergent velocity field, resulting from the collapse of the crater due in part to buoyancy forces. The velocity field associated with the crater evolution in the splashing regime has been investigated both experimentally and numerically in previous studies. Engel (1962) was the first to examine the velocity field around the crater by seeding the flow with particles in order to visualize the flow streamlines. These observations allowed to determine the velocity field configuration associated with the crater expansion and its subsequent collapse. More recently, the velocity field was investigated using modern Particle Image Velocimetry (PIV) methods. These velocity field measurements have been used to investigate the origin of vortex rings beneath the crater (Liow & Cole 2009), the formation of the central jet (van Rijn _et al._ 2021), or solutocapillary flows following the impact of drops on salted water (Musunuri _et al._ 2017). Numerical simulations have also focused on the crater velocity field, regarding in particular the entrapment of air bubbles when the crater collapses and the formation of the central jet (Morton _et al._ 2000; Ray _et al._ 2015). Most of the models involving a prediction of the crater velocity field assume either an arbitrary velocity field (Maxwell 1977) or an arbitrary velocity potential associated with an imposed crater geometry, such as a hemispherical crater (Engel 1967; Leng 2001) or a spherical crater able to translate vertically (Bisighini _et al._ 2010). Since these models have only been compared with experimental measurements of the crater size and/or shape, a comparison with experimental measurements of the velocity field is thus required to assess their accuracy. In any event, a new model is required to consistently model the geometry of the cavity without the use of an arbitrarily imposed velocity field or potential. 
In this paper, we examine simultaneously the dynamics of the cavity and of the velocity field produced in drop impact experiments. In SS 2, we present the experimental setup, methods and diagnostics, as well as the set of dimensionless numbers used in this study. In SS 3, we describe the experimental results obtained for the crater shape and the velocity field. In SS 4, we compare the existing velocity field models with our experimental measurements. In SS 5, we finally derive a Legendre polynomials model based on an unsteady Bernoulli equation coupled with a kinematic boundary condition. ## 2 Experiments ### Experimental setup In these experiments, we release a liquid drop in the air above a deep liquid pool of the same liquid (figure 1). We vary the impact velocity \(U_{i}\) by changing the release height of the drop while keeping the drop radius \(R_{i}\) fixed. We also keep constant the density \(\rho\), the viscosity \(\mu\) and the surface tension \(\sigma\) of the fluids. The liquid pool is contained in a \(16\times 16\times 30\) cm glass tank. The pool level is set at the top of the tank to minimise the thickness of the meniscus on the sides of the tank. This allows to image a field of view unperturbed by the free surface meniscus effect. We generate the drops using a needle supplied with fluid by a syringe driver. When the weight of the drop exceeds the surface tension forces, the drop comes off. We use a nylon plastic needle with an inner diameter of 4.7 mm, generating drops with a radius \(R_{i}=2.7\) mm. We measured the drop size based on a calibration using mass measurements of dozens of drops and assuming the drop is spherical. We validate this method using high-speed pictures of the drop prior to impact where we can directly measure the drop radius. We obtain a relative difference of 1.4% between mass measurements and direct measurements. Impact velocities are in the range \(U_{i}=1-5\) m.s\({}^{-1}\). We calculate the impact velocity for each experiment using a calibrated free-fall model for the drop, including a quadratic drag. We validate this method using high-speed pictures of the drop prior to impact where we can directly measure the Figure 1: Schematic view of the drop impact experimental setup. drop velocity. We obtain a relative difference of \(0.6\%\) between the velocity model and direct measurements. We use water both in the drop and in the pool, in a temperature-controlled environment. The density is \(\rho=998\pm 1\) kg.m\({}^{-3}\). It was measured using an Anton Paar DMA 35 Basic densitometer. The viscosity is \(\mu=1\pm 0.01\) mPa.s (Haynes, 2016). The surface tension at the air-water interface is \(\sigma=72.8\pm 0.4\) mJ.m\({}^{-2}\)(Haynes, 2016). In our experiments, we position the camera at the same height as the water surface. We record images at 1400 Hz with a \(2560\times 1600\) pixels resolution (\(21~{}\mu\)m/px) and a 12 bits dynamic range, using a high-speed Phantom VEO 640L camera and a Tokina AT-X M100 PRO D Macro lens. ### Dimensionless numbers In these experiments, the impact dynamics depends on \(U_{i}\), \(R_{i}\), \(\rho\), \(\mu\), \(\sigma\), and the acceleration of gravity \(g\). Since these six parameters contain three fundamental units, the Vaschy-Buckingham theorem dictates that the impact dynamics depends on a set of three independent dimensionless numbers. We choose the following set: \[Fr=\frac{U_{i}^{2}}{g\,R_{i}},\quad We=\frac{\rho U_{i}^{2}R_{i}}{\sigma}, \quad Re=\frac{\rho U_{i}R_{i}}{\mu}. 
\tag{1}\] The Froude number \(Fr\) is a measure of the relative importance of impactor inertia and gravity forces. It can also be interpreted as the ratio of the kinetic energy \(\rho R_{i}^{3}U_{i}^{2}\) of the impactor to its gravitational potential energy \(\rho g\,R_{i}^{4}\) just before impact. The Weber number \(We\) compares the impactor inertia and interfacial tension at the air-liquid interface. The Reynolds number \(Re\) is the ratio between inertial and viscous forces. In the following, time, lengths and velocities are made dimensionless using the drop radius and the impact velocity, _i.e._ using respectively \(R_{i}/U_{i}\), \(R_{i}\), \(U_{i}\). These dimensionless quantities are denoted with a tilde. For example, we use a dimensionless time \(\tilde{t}=t/(R_{i}/U_{i})\). We focus on four cases with Froude numbers, Weber numbers and Reynolds numbers respectively in the range \(Fr\simeq 100-1000\), \(We\simeq 100-1000\) and \(Re\simeq 4400-13600\) (table 1). For each case, we conducted three acquisitions, with similar experimental results regarding both the crater shape (_e.g._ figure 4) and the velocity field (_e.g._ figure 11). This validates the repeatability of the experiments. ### Particle Image Velocimetry The velocity field is obtained using PIV. We seed the tank with polyamide particles (figure 1), the concentration, diameter and density of which being respectively \(C_{p}=0.26\) g.L\({}^{-1}\), \(d_{p}=20~{}\mu\)m and \(\rho_{p}=1030\) kg.m\({}^{-3}\). We illuminate these particles in suspension with a 1 mm thick laser sheet (532 nm), produced using a continuous 10 W Nd:YAG laser, together with a diverging cylindrical lens and a telescope. The laser sheet is verticalised using a \(45^{\circ}\) inclined mirror located below the tank. The laser wavelength is isolated using a band-pass filter (\(532\pm 10\) nm). In order to calculate the velocity field, the camera records two images of the field of view separated by a short time (\(\Delta t=200\)\(\mu\)s). These two images are divided into interrogation windows in which a cross-correlation operation allows to obtain the average particle displacement. This involves a five-stage multi-pass processing with interrogation windows decreasing in size. The final interrogation window size is a 64 px square with an overlap of 75%. In each window, a velocity vector is then calculated, which allows to construct the velocity field over the whole field of view. Finally, the velocity field is spatially calibrated using a sight. ### Experimental diagnostic #### 2.4.1 Crater shape The crater shape is directly obtained from the raw images used in the PIV procedure (figure 2). The crater corresponds to a particle-free area, together with a high light intensity area, explained by reflections at the air-water interface, in particular at the bottom of the crater. The crater boundary is defined using these image properties, which allow to delineate the cavity using background removal, an intensity threshold method and image binarisation. We fit the crater boundary position \(R\) (figure 2), which depends on the polar angle \(\theta\) and time \(t\), using a set of shifted Legendre polynomials \(\bar{P}_{k}\) up to degree \(k_{max}=2\) \[R(\theta,t)=\sum_{k=0}^{k_{max}}R_{k}(t)\bar{P}_{k}(\cos\theta), \tag{2.2}\] where \(R_{k}(t)\) are coefficients fitted with a least-square method. 
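As an illustration of the fit in equation (2.2), the sketch below builds the shifted Legendre polynomials \(\bar{P}_{k}(x)=P_{k}(2x-1)\) and recovers the coefficients \(R_{k}\) from a crater profile by least squares. The profile used here is synthetic and the angular convention is only illustrative; this is not the authors' analysis code.

```python
# Hedged numpy sketch of the crater-shape decomposition of equation (2.2).
import numpy as np
from numpy.polynomial import legendre

def shifted_legendre(k, x):
    """Shifted Legendre polynomial Pbar_k(x) = P_k(2x - 1), orthogonal on [0, 1]."""
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0
    return legendre.legval(2.0 * x - 1.0, coeffs)

def fit_crater_shape(theta, r, kmax=2):
    """Least-squares fit of R(theta) = sum_k R_k * Pbar_k(cos(theta)) up to degree kmax."""
    design = np.column_stack([shifted_legendre(k, np.cos(theta)) for k in range(kmax + 1)])
    coeffs, *_ = np.linalg.lstsq(design, r, rcond=None)
    return coeffs                                   # [R_0, R_1, R_2]

# Synthetic example: a slightly prolate cavity, R(theta) = R_0 [1 + 0.08 Pbar_1(cos theta)].
theta = np.linspace(0.0, np.pi / 2, 200)            # polar angle over one half-hemisphere
r = 5.0 * (1.0 + 0.08 * shifted_legendre(1, np.cos(theta)))
print(fit_crater_shape(theta, r))                   # ~ [5.0, 0.4, 0.0]
```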
The shifted Legendre polynomials are an affine transformation of the standard Legendre polynomials \(\bar{P}_{k}(x)=P_{k}(2x-1)\), and are orthogonal on \([0,1]\), _i.e._ on a half-space. The coefficients \(R_{k}(t)\) correspond to increasingly small scale deviations from a hemispherical shape. \(R_{0}(t)\) corresponds to the mean crater radius (figure 2, blue line). \(R_{1}(t)\) corresponds to a deformation of the crater, linear Figure 2: Velocity field resulting from the PIV procedure, superposed on a corresponding experimental raw image. The solid lines correspond to the shape of the crater obtained from the shifted Legendre polynomials decomposition (equation 2.2), for degrees \(k=0\) (blue), \(k=1\) (orange) and \(k=2\) (green). The definitions of the Cartesian \((x,y,z)\) (black) and of the spherical \((r,\theta,\varphi)\) (red) coordinate systems are also represented. in \(\cos\theta\), with respect to an hemisphere (figure 2, orange line). When \(R_{1}(t)>0\), the crater is stretched vertically, resulting in a prolate cavity. When \(R_{1}(t)<0\), the crater is stretched horizontally, resulting in an oblate cavity. Finally, \(R_{2}(t)\) corresponds to a deformation of the crater, quadratic in \(\cos\theta\), with respect to a hemisphere (figure 2, green line). In order to validate the crater shape determination procedure, we compare the coefficients \(R_{k}(t)\) obtained from the raw images used in the PIV procedure (_e.g._ figure 2), with the coefficients obtained from an experiment in the same condition, but illuminated from behind (_e.g._ figure 13). This backlight experiment (see Lherm _et al._ (2022) for experimental details) allows to determine reliably the shape of the crater. Figure 3 shows that the coefficients are very similar between the two methods, which validates the crater shape determination procedure from PIV raw images. #### 2.4.2 Velocity field We aim to compare the experimental velocity field obtained using the PIV procedure with velocity models. For that purpose, the velocity field \(\mathbf{u}=(u_{r},u_{\theta},u_{\varphi})\) is expressed in a spherical coordinate system (\(r,\theta,\varphi\)) defined such that \(u_{r}\) and \(u_{\theta}\) are in the plane of the laser sheet (figure 2, red coordinates). The origin of this coordinate system is the contact point between the impacting drop and the target liquid (figure 2, point O). We decompose the components of the velocity field on a shifted Legendre polynomials basis \[u_{r}(r,\theta,t)=\sum_{l=0}^{+\infty}u_{r,l}(r,t)\bar{P}_{l}(\cos\theta), \tag{2.3}\] \[u_{\theta}(r,\theta,t)=\sum_{l=0}^{+\infty}u_{\theta,l}(r,t)\bar{P}_{l}(\cos \theta), \tag{2.4}\] where \(u_{r,l}(r,t)\) and \(u_{\theta,l}(r,t)\) are respectively the decomposition coefficients of \(u_{r}\) and \(u_{\theta}\). The shifted Legendre polynomials \(\bar{P}_{l}(\cos\theta)\) being orthogonal on half-hemispheres (\(\theta\in[0,\pi/2]\)), we obtain the \(u_{r,l}(r,t)\) and \(u_{\theta,l}(r,t)\) coefficients using a least-square inversion of the experimental velocity components over the separate half-hemispheres \(\theta\geqslant 0\) and \(\theta<0\), before averaging the results from the left and right half-hemispheres. Since the flow is close to axisymmetric (_e.g._ figure 2), the coefficients obtained by the inversion over each half-hemisphere are very close to each other. Assuming an axisymmetric flow, note that \(u_{r,0}(r,t)\) is the average of \(u_{r}\) over the full hemisphere. 
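A companion sketch for equations (2.3)-(2.4): at a fixed radius, the angular profile of one velocity component sampled on the PIV grid is inverted on the same shifted Legendre basis over a half-hemisphere (\(\theta\in[0,\pi/2]\)). The data are again synthetic; this only illustrates the least-squares inversion and is not the authors' code.

```python
# Hedged sketch of the velocity decomposition in equations (2.3)-(2.4) at fixed r.
import numpy as np
from numpy.polynomial import legendre

def shifted_legendre(l, x):
    """Pbar_l(x) = P_l(2x - 1), as in the crater-shape sketch above."""
    c = np.zeros(l + 1)
    c[l] = 1.0
    return legendre.legval(2.0 * x - 1.0, c)

def velocity_coefficients(theta, u_values, lmax=5):
    """Least-squares coefficients u_l of u(theta) ~ sum_l u_l * Pbar_l(cos(theta))."""
    design = np.column_stack([shifted_legendre(l, np.cos(theta)) for l in range(lmax + 1)])
    coeffs, *_ = np.linalg.lstsq(design, u_values, rcond=None)
    return coeffs

# Synthetic angular profile dominated by degrees 0 and 1, as observed during crater opening.
theta = np.linspace(0.0, np.pi / 2, 100)
u_r = 0.6 + 0.3 * shifted_legendre(1, np.cos(theta))
print(velocity_coefficients(theta, u_r)[:3])        # ~ [0.6, 0.3, 0.0]
```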
Figure 3: Coefficients \(R_{0}(t)/R_{i}\) (a), \(R_{1}(t)/R_{0}\) (b) and \(R_{2}(t)/R_{0}\) (c) as a function of time \(\tilde{t}\). The circles and the solid lines correspond to the crater shape obtained respectively from the PIV procedure (case B, \(Fr=444\)) and a similar backlight experiment (\(Fr=442\)). ## 3 Experimental results ### Crater shape Figure 4 shows the fitted coefficients of the shifted Legendre decomposition of the crater boundary (equation 2) as a function of time, for all experimental cases. We normalise the fitted coefficients \(R_{1}(t)\) and \(R_{2}(t)\) by \(R_{0}(t)\), _i.e._ the mean crater radius. Using this normalisation, we quantify the deviation of the crater geometry from a hemisphere. We also normalise time by the opening timescale of the crater (Lherm _et al._, 2022) \[\tilde{t}_{max}=\frac{1}{2}\left(\frac{8}{3}\right)^{1/8}\mathrm{B}\left(\frac {1}{2},\frac{5}{8}\right)\Phi^{1/8}\xi^{1/2}Fr^{5/8}, \tag{1}\] where \(\Phi\) and \(\xi\) are respectively energy partitioning and kinetic energy correction coefficients, and B is the beta function. This scaling is obtained by using an energy conservation equation where the sum of the potential energy of the crater and of the kinetic energy of the crater, corrected by \(\xi\), is equal at any instant of time to the kinetic energy of the impacting drop, corrected by \(\Phi\). Assuming that the kinetic energy of the crater vanishes when the cavity reaches its maximum size (Lherm _et al._, 2022), the maximum crater radius scales as \[\tilde{R}_{max}=\left(\frac{8}{3}\right)^{1/4}\Phi^{1/4}Fr^{1/4}. \tag{2}\] Using this \(Fr^{1/4}\) scaling law, the energy conservation equation is integrated between \(\tilde{t}=0\) and \(\tilde{t}=\tilde{t}_{max}\) to obtain the opening timescale of the crater given by equation 1. More details can be found in Lherm _et al._ (2022). With our experimental range of Froude number, we use \(\Phi=Fr^{-0.156}\) and \(\xi=0.34\)(Lherm _et al._, 2022). This normalisation allows to collapse our experiments on the same timescale. In figure 4, the crater shape evolution of case A is markedly different from cases B, C and D. Thus, we describe this case separately. We first deal with the high \(We\) experiments (cases B, C and D), where surface tension effects are negligible in comparison with the impactor inertia (_e.g._Pumphrey & Elmore, 1990; Morton _et al._, 2000; Leng, 2001; Ray _et al._, 2015). The crater size increases with the Froude number (figure 4, inset), in a way that is compatible with a \(Fr^{1/4}\) scaling law for the maximum mean crater radius \(R_{0_{max}}\)(Engel, 1966; Leng, 2001; Lherm _et al._, 2022). Furthermore, the evolution of the crater shape relative to the mean crater size is independent of the Froude number, with similar evolution of \(R_{1}(t)/R_{0}(t)\) and \(R_{2}(t)/R_{0}(t)\) (figure 4b-c). Figure 4: Coefficients \(R_{0}(t)/R_{i}\) (a), \(R_{1}(t)/R_{0}(t)\) (b) and \(R_{2}(t)/R_{0}(t)\) (c) as a function of time normalised by the opening timescale of the crater \(\tilde{t}/\tilde{t}_{max}\), in the four cases. Inset: Maximum mean crater radius \(R_{0_{max}}/R_{i}\) as a function of the Froude number \(Fr\). For each case, the different types of markers correspond to different experiments. At early times of the crater opening stage (\(\tilde{t}/\tilde{t}_{max}\lesssim 0.25\)), the mean radius of the crater \(R_{0}(t)\) increases (figure 4a) as the cavity opens. 
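As a brief aside before describing the shape evolution in more detail, the normalisations just introduced are easy to evaluate numerically. The sketch below computes the maximum-radius scaling of equation (3.2) and the opening timescale of equation (3.1), with \(\Phi=Fr^{-0.156}\) and \(\xi=0.34\) as quoted above; it is an illustrative calculation, not the authors' code.

```python
# Hedged sketch of the crater scalings of equations (3.1)-(3.2).
from scipy.special import beta

def crater_scalings(Fr, xi=0.34):
    Phi = Fr ** (-0.156)                                  # energy-partitioning coefficient
    R_max = (8.0 / 3.0) ** 0.25 * (Phi * Fr) ** 0.25      # maximum mean radius, R_0max / R_i
    t_max = (0.5 * (8.0 / 3.0) ** 0.125 * beta(0.5, 0.625)
             * Phi ** 0.125 * xi ** 0.5 * Fr ** 0.625)    # opening timescale, t_max * U_i / R_i
    return R_max, t_max

# Example for case B (Fr = 444): roughly R_0max / R_i ~ 4.6 and a dimensionless t_max ~ 36.
print(crater_scalings(444.0))
```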
The crater has a flat-bottomed oblate shape (_e.g._ figure 5, i) as a result of the spread of the drop on the surface of the pool, with negative \(R_{1}(t)\) (figure 4b). The flat-bottomed oblate cavity gradually becomes hemispherical as a result of the overpressure produced at the contact point between the impacting drop and the surface (_e.g._ figure 5, ii). The magnitude of \(R_{1}(t)/R_{0}(t)\) indeed decreases with time during this stage (figure 4b). The crater is also deformed at higher degrees with mostly negative \(R_{2}(t)/R_{0}(t)\) (figure 4c). This corresponds to second-order deviations from the hemispherical shape, with a flattened crater boundary close to the surface (_e.g._ figure 5, i). At intermediate times of the crater opening stage (\(0.25\lesssim\tilde{t}/\tilde{t}_{max}\lesssim 0.5\)), the crater continues to open (figure 4a). The cavity is still stretched vertically, which leads to increasingly positive \(R_{1}(t)/R_{0}(t)\) (figure 4b), _i.e._ a prolate cavity (figure 5, iii). The crater reaches a maximum prolate deformation when \(\tilde{t}/\tilde{t}_{max}\simeq 0.5\), with \(R_{1}(t)/R_{0}(t)\simeq 0.08\) (figure 4b). The crater is also deformed at higher degrees, with positive \(R_{2}(t)/R_{0}(t)\) (figure 4c). This corresponds to a vertical crater boundary close to the surface (figure 5, iii). At late times of the crater opening phase (\(0.5\lesssim\tilde{t}/\tilde{t}_{max}\lesssim 1\)), the mean crater radius still increases (figure 4a) but the crater starts to flatten with decreasing \(R_{1}(t)/R_{0}(t)\) (figure 4b). As the opening velocity of the crater decreases, buoyancy forces become significant, resulting in the horizontal stretching of the cavity. The crater flattens to give an approximately hemispherical crater at \(\tilde{t}/\tilde{t}_{max}\simeq 1\) (figure 5, v). After the crater has reached its maximum size (\(\tilde{t}/\tilde{t}_{max}\gtrsim 1\)), the mean crater radius starts to decrease (figure 4a). \(R_{1}(t)/R_{0}(t)\) decreases at a rate higher than in the opening stage of the crater (figure 4b). Horizontal stretching of the crater is accelerated, as expected since buoyancy forces are now prevailing. This leads to the formation of an increasingly oblate cavity (figure 5, vi-vii). When \(\tilde{t}/\tilde{t}_{max}\gtrsim 1.5\), higher degrees eventually deviate from zero with positive \(R_{2}(t)/R_{0}(t)\) (figure 4c). In addition to the negative value of \(R_{1}(t)/R_{0}(t)\), this corresponds to the formation of the central jet (figure 5, viii-ix). We now deal with the moderate \(We\) experiment (case A), where surface tension effects are significant in comparison with the impactor inertia (_e.g._ Pumphrey & Elmore, 1990; Morton _et al._, 2000; Leng, 2001; Ray _et al._, 2015). In this case, a downward propagating capillary wave develops at the cavity interface and drives the crater deformation, often leading to the entrapment of a bubble due to the pinching of the cavity (_e.g._ Oguz & Prosperetti, 1990; Pumphrey & Elmore, 1990; Prosperetti & Oguz, 1993; Elmore _et al._, 2001). This mechanism is typically expected at moderate \(We\), _i.e._\(We\simeq 30-140\) (figure 6 in Pumphrey & Elmore, 1990). During crater opening, this explains why the maximum prolate deformation occurs later than in the other cases, at \(\tilde{t}/\tilde{t}_{max}\simeq 0.8\), and why the prolate deformation is larger, with \(R_{1}(t)/R_{0}(t)\simeq 0.2\) (figure 4b). 
During crater closing, the evolution of \(R_{1}(t)/R_{0}(t)\) and \(R_{2}(t)/R_{0}(t)\) is markedly different from the other cases due to the convergence of the capillary wave at the bottom of the crater. ### Velocity field #### 3.2.1 Velocity maps Figure 5 shows the evolution of the norm of the velocity \(|\mathbf{u}|=(u_{x}^{2}+u_{z}^{2})^{1/2}\) as a function of time, for case B. During the opening stage of the crater, the velocity around the crater gradually decreases due to the deceleration of the crater boundary (figure 5, i-iv). The maximum velocity is \(\simeq 1.1\) m.s\({}^{-1}\) at time 1.3 ms after contact (figure 5, i), which corresponds to 32% of the impact velocity. When \(\tilde{t}/\tilde{t}_{max}\gtrsim 0.1\), the norm of the velocity decreases radially around the crater (figure 5, ii-iv), whereas, when \(\tilde{t}/\tilde{t}_{max}\lesssim 0.1\), the velocity decreases at a higher rate on the side of the crater. This may be explained by the initial oblate shape of the crater, related to the spread of the drop on the water surface upon impact, which leads to a higher velocity beneath the crater as it becomes gradually hemispherical. The velocity field is composed of a dominant radial component and of a polar component responsible for an upward flow across the initial water surface (figure 5, i-iv). The polar component is thus responsible for the formation of the liquid crown above the water surface (_e.g._ Rein 1993; Fedorchenko & Wang 2004; Zhang _et al._ 2010). When the crater reaches its maximum size (figure 5, v), the cavity is nearly hemispherical and the velocity field seems to vanish simultaneously in the entire flow, consistently with Engel (1966)'s observations, which were subsequently used in several velocity models (_e.g._ Engel 1967; Prosperetti & Oguz 1993). However, this first-order assumption on the simultaneous vanishing velocity field does not hold when the flow is examined in detail. Beneath the cavity, the velocity gradually decreases and eventually vanishes just before the crater reaches its maximum size. The velocity is directed downwards due to the expansion of the crater. The velocity then increases again but is directed upwards due to the collapse of the crater. On the side of the cavity, close to the surface, the velocity does not vanish when the crater reaches its maximum size. The collapse of the crater takes over its initial expansion, which allows to keep outward velocities on the side of the crater. When the crater collapses (figure 5, vi-ix), a convergent flow forms towards the centre of the cavity. This leads to the formation of the central jet. Figure 6 shows the evolution of the vorticity \(\omega_{y}=\partial u_{x}/\partial z-\partial u_{z}/\partial x\) as a function of time, for case B. The vorticity produced by the impact around the crater is confined close to the air-water boundary, in particular when the crater is strongly deformed, at the beginning of Figure 5: Time evolution of the velocity \(|\mathbf{u}|=(u_{x}^{2}+u_{z}^{2})^{1/2}\) for case B. The vector field corresponds to the experimental velocity field, normalised by its maximum value in each snapshot. The solid green line corresponds to the crater boundary determined using the Legendre polynomial decomposition (equation 2.2). the crater opening (figure 6, i-ii) and when it collapses (figure 6, vi-ix). This suggests that the flow is mostly irrotational, which supports the potential flow assumption used in previous models (SS4). 
Furthermore, some of the vorticity observed around the crater boundary may be an artefact related to spurious velocity measurements produced by cross-correlations on reflections at the air-water interface, and not on PIV particles. This assumption is supported by the estimated diffusion length of the vorticity (0.3 mm in 100 ms) which is significantly smaller than the typical size of the vorticity band. #### 3.2.2 Velocity coefficients Figure 7 shows the coefficients \(u_{r,l}(r,t)\) and \(u_{\theta,l}(r,t)\) (equations 2.3-2.4) as a function of the radial coordinate at a given time \(\tilde{t}=15.4\) (\(\tilde{t}/\tilde{t}_{max}=0.43\)) during the crater opening stage of case B. During this stage, the velocity field is dominated by the degrees \(l=0\) and \(l=1\), the higher degrees \(l\geqslant 2\) being much smaller. When \(r\lesssim 1.2R_{0}\), we observe a decrease in the slope of the coefficients. This may be related to the deviation of the crater from a hemisphere. The coefficients indeed sample points located at varying distances from the actual crater boundary, including artefacts located into the crater, which may influence the radial dependency of these coefficients close to the crater boundary. In figure 7, we identify this misleading trend by using dashed lines when the radius is smaller than \(\max\{R(\theta)\}\) (\(r\leqslant 1.11R_{0}\) in figure 7). Figures 8 and 9 compare the time evolution of the coefficients \(u_{r,l}(r,t)\) and \(u_{\theta,l}(r,t)\) (equations 2.3-2.4) between the cases, for \(l\leqslant 2\). Except for the different normalisation, figure 7 is thus similar to a radial slice of these coefficients maps, for case B, at \(\tilde{t}/\tilde{t}_{max}=0.43\). As Figure 6: Time evolution of the vorticity \(\omega_{y}=\partial u_{x}/\partial z-\partial u_{z}/\partial x\) for case B. The vector field corresponds to the experimental velocity field, normalised by its maximum value in each snapshot. The solid green line corresponds to the crater boundary determined using the Legendre polynomial decomposition (equation 2.2). for the crater shape, the moderate \(We\) case A is different from the high \(We\) cases B, C and D, both for the radial (figure 8) and the polar (figure 9) component of the velocity field. We thus deal with this case separately. We first deal with the high \(We\) experiments (cases B, C and D), where both components of the velocity field are similar among cases, regardless of the degree in question. The velocity field is mostly dominated by the degrees \(l=0\) and \(l=1\), both during the opening and the closing stage of the crater, in agreement with figure 7. During the crater opening stage \((\tilde{t}/\tilde{t}_{max}\lesssim 1)\), the dominant degrees of the radial component \(u_{r,0}(r,t)\) and \(u_{r,1}(r,t)\) are positive (figure 8). This corresponds to the strong radial velocity field related to the expansion of the cavity. The dominant degrees of the polar component \(u_{\theta,0}(r,t)\) and \(u_{\theta,1}(r,t)\) are concomitantly positive and negative, respectively (figure 9), with a lower magnitude. This corresponds to a polar perturbation of the dominant radial velocity field, related to the mass flux across the surface \(z=0\) which produces the fluid crown. The positive coefficient \(u_{\theta,0}(r,t)\) indeed corresponds to a flow toward the surface, while the negative coefficient \(u_{\theta,1}(r,t)\) corresponds to a degree \(l=1\) perturbation, linear in \(\cos\theta\). 
The degree \(l=2\) also contributes to the velocity field of both components, in particular when the crater is strongly deformed due to the spread of the drop at the surface of the pool, at the beginning of the opening stage \((\tilde{t}/\tilde{t}_{max}\lesssim 0.25)\). When the crater reaches its maximum size \((\tilde{t}/\tilde{t}_{max}\simeq 1)\), the dominant degrees of both components change signs as the crater starts to collapse. In detail, \(u_{r,0}(r,t)\) vanishes later \((\tilde{t}/\tilde{t}_{max}\simeq 1)\) than \(u_{r,1}(r,t)\) (\(\tilde{t}/\tilde{t}_{max}\simeq 0.6\)) (figure 8) and \(u_{\theta,0}(r,t)\) (\(\tilde{t}/\tilde{t}_{max}\simeq 0.8\)) (figure 9). This is in agreement with the observations of figure 5 at \(\tilde{t}/\tilde{t}_{max}\simeq 1\), where the velocity vanishes beneath the crater but not on the sides. During the crater closing stage \((\tilde{t}/\tilde{t}_{max}\gtrsim 1)\), \(u_{r,0}(r,t)\) and \(u_{r,1}(r,t)\) are both negative (figure 8) and \(u_{\theta,0}(r,t)\) is negative (figure 9). This corresponds to the development of the convergent flow related to the collapse of the crater and the formation of the central jet. As at the beginning of the opening stage, the degree \(l=2\) of both components contributes significantly to the velocity field at the end of the closing stage \((\tilde{t}/\tilde{t}_{max}\gtrsim 1.5)\), in relation with the strongly deformed crater boundary. Figure 7: Coefficients \(u_{r,l}(r,t)\) and \(u_{\theta,l}(r,t)\) normalised by the mean crater velocity \(\dot{R}_{0}(t)\), as a function of the radial coordinate \(r\), normalised by the mean crater radius \(R_{0}(t)\), up to degree \(l=5\). Dashed lines correspond to regions where the coefficients sample velocity artefacts are located in the crater. The coefficients are calculated at \(\tilde{t}=15.4\) (\(\tilde{t}/\tilde{t}_{max}=0.43\)) for case B. We now deal with the moderate \(We\) experiment (case A). Although the degree \(l=0\) of both components is similar to the high \(We\) cases, the degrees \(l=1\) and \(l=2\) of case A are significantly larger than their counterparts of cases B, C and D. Furthermore, the time at which \(u_{r,1}(r,t)\) and \(u_{\theta,0}(r,t)\) vanish is significantly modified. This may also be a consequence of significant surface tension effects in this moderate \(We\) experiment, related to vigorous deformations of the crater boundary by the propagation of a capillary wave towards the bottom of the crater. ## 4 Comparison with existing velocity models In this section, we review the velocity models proposed by Engel (1967), Maxwell (1977), Leng (2001) and Bisighini _et al._ (2010), and compare their predictions with our observations. Since most of these models have been designed to understand the crater opening stage, we compare these models with our experimental velocity measurements by focusing on a typical snapshot of this initial stage. For that purpose, figure 10 shows the dominant coefficients \(u_{r,0}(r,t)\), \(u_{r,1}(r,t)\), \(u_{\theta,0}(r,t)\) and \(u_{\theta,1}(r,t)\) of case B as a function of the radial coordinate at \(\tilde{t}/\tilde{t}_{max}=0.24\), as well as the predictions of the models. Figure 8: Time evolution of the coefficient \(u_{r,l}(r,t)\) (\(l\in\{0,1,2\}\)) normalised by the impact velocity \(U_{i}\), as a function of the radial coordinate \(r\) normalised by the drop radius \(R_{i}\), for case B. Time is normalised by the opening timescale of the crater \(\tilde{r}_{max}\). 
### Engel (1967)'s model Engel (1967)'s model assumes an energy balance where the potential energy of the crater, the potential energy of a cylindrical wave developing above the surface, the surface tension energy of the produced interface, the kinetic energy of the flow around the crater, the kinetic energy of the cylindrical wave and viscous dissipation are equal at any time to half of the kinetic energy of the impacting drop. Among the assumptions of such a model, Engel (1967) assumes a hemispherical crater with a radius \(R_{0}(t)\) and a potential flow with a velocity potential \(\phi\) satisfying the boundary conditions on the velocity \(|\mathbf{u}|(r=+\infty)=0\) and \(|\mathbf{u}|[r=R_{0}(t)]=\dot{R}_{0}(t)\). The velocity potential used in the model is \[\phi=-\frac{\dot{R}_{0}R_{0}^{2}\cos\theta}{r}. \tag{10}\] The radial component \(u_{r}\) and the polar component \(u_{\theta}\) of the velocity field, obtained by deriving the velocity potential, write \[\left\{\begin{array}{ll}u_{r}=\frac{\dot{R}_{0}R_{0}^{2}\cos\theta}{r^{2}}\\ u_{\theta}=\frac{\dot{R}_{0}R_{0}^{2}\sin\theta}{r^{2}}\end{array}\right.. \tag{11}\] Figure 9: Time evolution of the coefficient \(u_{\theta,l}(r,t)\) (\(l\in\{0,1,2\}\)) normalised by the impact velocity \(U_{l}\), as a function of the radial coordinate \(r\) normalised by the drop radius \(R_{i}\), for case B. Time is normalised by the opening timescale of the crater \(\tilde{t}_{max}\). This model allows to capture the evolution of the mean crater radius (_e.g._ Engel 1967, figure 3). The velocity field has a \(l=0\) (figure 10a) and \(l=1\) (figure 10b) radial components and a \(l=0\) (figure 10c) and \(l=1\) (figure 10d) polar components. This allows to obtain a velocity field qualitatively similar to the experiments, including in particular a degree \(l=1\) of the radial component, and a polar component. However, the slopes of the velocity components are smaller than the experimental slopes, in particular the \(1/r^{2}\) slope of \(u_{r,0}(r,t)\). The main limitations of Engel (1967)'s model are the fixed hemispherical geometry of the crater and the arbitrary velocity potential defined to fit experimental observations of the velocity field. More importantly, this velocity potential (equation 4.1) corresponds to the flow around an expanding cylinder (with \(r\) being the distance from the cylinder axis, and \(\theta\) the angular position around this axis) rather than around an expanding sphere, as incorrectly assumed in Engel (1967). It is not a solution of the Laplace equation in spherical coordinates and has a non-zero divergence. ### Maxwell (1977)'s model Maxwell (1977)'s model assumes an empirical form of the velocity field based on planetary cratering observations. The model assumes that the radial component \(u_{r}\) is independent of \(\theta\) and that its radial dependency is a power \(Z\) of the radius \(r\). \(u_{\theta}\) is then calculated using fluid Figure 10: Coefficients \(u_{r,0}(r,t)\) (a), \(u_{r,1}(r,t)\) (b), \(u_{\theta,0}(r,t)\) (c) and \(u_{\theta,1}(r,t)\) (d) normalised by the mean crater velocity \(\tilde{R}_{0}(t)\), as a function of the radial coordinate \(r\), normalised by the mean crater radius \(R_{0}(t)\). The circles correspond to case B at \(\tilde{t}/\tilde{t}_{max}=0.24\). The dash-dotted lines correspond to the models of Engel (1967), Maxwell (1977), Leng (2001) and Bisighini _et al._ (2010). 
The velocity field thus writes \[\left\{\begin{array}{l}u_{r}=\frac{\alpha(t)}{r^{Z}}\\ u_{\theta}=(Z-2)\frac{\sin\theta}{1+\cos\theta}\frac{\alpha(t)}{r^{Z}}\end{array}\right., \tag{4.3}\] where \(\alpha(t)\) is an arbitrary coefficient corresponding to the time-dependent flow intensity. According to Maxwell (1977) and Melosh (1989), the value \(Z=3\) gives a velocity field consistent with numerical simulations of explosion and planetary impacts. This model allows one to predict the experimental \(u_{r,0}(r,t)\) (figure 10a), \(u_{\theta,0}(r,t)\) (figure 10c) and \(u_{\theta,1}(r,t)\) (figure 10d) using \(Z=3\) and \(\alpha(t)=1\). In particular, the slopes predicted by the model are very close to the experimental slopes. However, this model does not allow a degree \(l=1\) of the radial velocity component. The main limitations of Maxwell (1977)'s model are the arbitrary choice for the model time-dependency, with \(\alpha(t)\), the fact that \(Z\) could depend on \(\theta\), which would yield a degree \(l=1\) for \(u_{r}\), and the fact that Maxwell's flow is not potential, which is inconsistent with the experimental results (figure 6). ### Leng (2001)'s model Leng (2001)'s model is similar to Engel (1967)'s model since it uses a hemispherical crater with a radius \(R_{0}(t)\) and a potential flow. The velocity potential \(\phi\) writes \[\phi=-\frac{\dot{R}_{0}R_{0}^{2}}{r}, \tag{4.4}\] which allows one to obtain the velocity components \(u_{r}\) and \(u_{\theta}\) of the velocity field \[\left\{\begin{array}{l}u_{r}=\frac{\dot{R}_{0}R_{0}^{2}}{r^{2}}\\ u_{\theta}=0\end{array}\right.\,. \tag{4.5}\] This velocity potential satisfies the boundary conditions and is a solution of the Laplace equation in spherical coordinates. This model allows one, in particular, to capture the evolution of the mean crater radius using an energy balance, although it requires multiplying the kinetic energy and the total energy by empirical correction factors (_e.g._ Lherm _et al._ 2022). However, the velocity field has only a degree \(l=0\) (figure 10a) radial component and no polar component. As for Engel (1967)'s model, the \(1/r^{2}\) slope of \(u_{r,0}(r,t)\) is smaller than the experimental slope. The main limitations of Leng (2001)'s model are the hemispherical geometry and the oversimplified velocity potential which prevents a polar dependency of the radial component and a polar component of the velocity field. ### Bisighini _et al._ (2010)'s model Bisighini _et al._ (2010)'s model assumes an expanding spherical crater able to translate vertically over time, with a radius \(R_{0}(t)\) and a vertical position of the crater barycenter \(z_{c}(t)\). This allows one to define a velocity potential \(\phi\) which corresponds to the superposition between the radial expansion of the crater and the flow past a translating sphere. This potential satisfies the boundary conditions and the Laplace equation in spherical coordinates.
In the moving sphere coordinate system \((r^{\prime},\theta^{\prime})\), it writes \[\phi=-\frac{\dot{R}_{0}R_{0}^{2}}{r^{\prime}}-\dot{z}_{c}r^{\prime}\left(1- \frac{R_{0}^{3}}{2{r^{\prime}}^{3}}\right)\cos\theta^{\prime}, \tag{4.6}\] with components \(u_{r}\) and \(u_{\theta}\) of the velocity field writing \[\left\{\begin{array}{l}u_{r}=\frac{\dot{R}_{0}R_{0}^{2}}{r^{\prime 2}}- \left(1-\frac{R_{0}^{3}}{r^{\prime 3}}\right)\dot{z}_{c}\cos\theta^{\prime}\\ u_{\theta}=\left(1+\frac{R_{0}^{3}}{2r^{\prime 3}}\right)\dot{z}_{c}\sin\theta^{ \prime}\end{array}\right.. \tag{4.7}\] Bisighini _et al._ (2010) then use an unsteady Bernoulli equation to determine the evolution of the sphere radius and position over time. To compare Bisighini _et al._ (2010)'s model with our experimental data, we need to calculate the corresponding velocity field in the fixed frame of reference by adding the velocity of the crater barycenter \(\dot{z}_{c}\left(\cos\theta,-\sin\theta\right)\) to equation 4.7, and expressing \(r^{\prime}\) and \(\theta^{\prime}\) as functions of \(r\) and \(\theta\) (\(r^{\prime}=\sqrt{r^{2}+z_{c}^{2}-2z_{c}r\cos\theta}\), \(\cos\theta^{\prime}=(r\cos\theta-z_{c})/r^{\prime}\), \(\sin\theta^{\prime}=r\sin\theta/r^{\prime}\)). The velocity field has a \(l=0\) (figure 10a) and a \(l=1\) (figure 10b) radial component, as well as a \(l=0\) (figure 10c) and a \(l=1\) (figure 10d) polar component. The coefficients are calculated using \(z_{c}=0\) and \(\dot{z}_{c}=0.2U_{i}\), which corresponds to typical values during crater opening (_e.g._ figure 5). This model explains relatively well the shape of the crater (_e.g._ Bisighini _et al._ 2010, figure 17), and the key tendencies of the experimental components of the velocity field. However, Bisighini _et al._ (2010)'s model strongly constrains the geometry of the crater, as well as the related velocity potential definition. As in Engel (1967)'s and Leng (2001)'s models, the \(1/r^{2}\) slope of \(u_{r,0}\) is smaller than the experimental slope. ### Towards a new model In all models, either the geometry of the velocity field (Engel 1967; Maxwell 1977; Leng 2001) or the shape of the cavity (Engel 1967; Leng 2001; Bisighini _et al._ 2010) are imposed. This leads in particular to an incorrect radial dependency of \(u_{r}\), with an exponent much larger in the experiments than in the models, except for Maxwell (1977)'s model where the radial dependency is arbitrarily imposed by the parameter \(Z\). The experimental observation that the radial velocity field decreases with \(r\) faster than \(1/r^{2}\) is unexpected since it suggests that the flow component associated with an isotropic expansion of the cavity (\(\propto 1/r^{2}\)) is not dominant. New models are thus required to explain the geometry of the experimental velocity field, as well as the evolution of the non-hemispherical shape of the cavity. In the following section, we develop a semi-analytical model based on a Legendre polynomials expansion of an unsteady Bernoulli equation, coupled with a kinematic boundary condition at the crater boundary. ## 5 Legendre polynomials model In this model, we assume that the fluid is inviscid (_i.e._\(\mu=0\)), incompressible (_i.e._\(\nabla\cdot\mathbf{u}=0\)), and that the flow is irrotational (_i.e._\(\nabla\times\mathbf{u}=0\)). This means that the flow is potential and satisfies the Laplace equation \(\nabla^{2}\phi=0\), where \(\phi\) is the velocity potential defined as \(\mathbf{u}=\nabla\phi\). 
In the spherical coordinate system \((r,\theta,\varphi)\), assuming an axisymmetric flow, the solution of the Laplace equation writes \[\phi(r,\theta,t)=\sum_{n=0}^{+\infty}\frac{\phi_{n}(t)}{r^{n+1}}P_{n}(\cos \theta), \tag{5.1}\] where \(\phi_{n}(t)\) are time-dependent coefficients and \(P_{n}(x)\) are the standard Legendre polynomials, orthogonal on \([-1,1]\). The components \(u_{r}\) and \(u_{\theta}\) of the velocity field then writes \[\left\{\begin{array}{rcl}u_{r}(r,\theta,t)&=&\frac{\partial\phi}{\partial r}&=& \sum_{n=0}^{+\infty}-(n+1)\frac{\phi_{n}(t)}{r^{n+2}}P_{n}(\cos\theta)\\ u_{\theta}(r,\theta,t)&=&\frac{1}{r}\frac{\partial\phi}{\partial\theta}&=& \sum_{n=0}^{+\infty}\frac{\phi_{n}(t)}{r^{n+2}}\frac{\partial P_{n}(\cos \theta)}{\partial\theta}\end{array}\right.. \tag{5.2}\] We also assume a non-hemispherical crater, where the shape of the cavity is decomposed on a set of shifted Legendre polynomials (equation 2.2). Since we assume that the fluid is inviscid and a potential flow, the flow is governed by an unsteady Bernoulli equation \[\frac{\partial\phi}{\partial t}+\frac{1}{2}u^{2}-gz+\frac{p}{\rho}=\text{ constant}, \tag{5.3}\] where \(\rho\) is the fluid density, \(u\) is the norm of the velocity, \(p\) is the pressure, \(g\) is the acceleration due to gravity and \(z\) is the vertical coordinate below the initial fluid surface. This equation is constant in the entire fluid domain. Far from the crater, \(u\to 0\), \(\phi\to 0\) and the pressure is hydrostatic \(p(z)=p_{0}+\rho gz\), where \(p_{0}\) is the atmospheric pressure. This means that the constant is equal to \(p_{0}/\rho\). At the crater boundary, _i.e._ at \(r=R(\theta,t)\) (equation 2.2), the Young-Laplace equation writes \[p(R)-p_{0}=\sigma C, \tag{5.4}\] where \(C(\theta,t)\) is the mean local curvature of the interface and \(\sigma\) the surface tension. In cylindrical coordinates, the curvature writes \[C(\theta,t)=\frac{R^{2}+2\left(\frac{\partial R}{\partial\theta}\right)^{2}-R \frac{\partial^{2}R}{\partial\theta^{2}}}{\left[R^{2}+\left(\frac{\partial R} {\partial\theta}\right)^{2}\right]^{3/2}}. \tag{5.5}\] The Bernoulli equation at the crater boundary thus writes \[\left(\frac{\partial\phi}{\partial t}\right)_{r=R}+\frac{1}{2}u(R)^{2}-gR\cos \theta+\frac{\sigma}{\rho}C=0. \tag{5.6}\] We also use a kinematic boundary condition at the crater boundary \[\frac{\partial R}{\partial t}+\boldsymbol{u}\cdot\nabla R=\frac{\partial R}{ \partial t}+u_{\theta}(R)\frac{1}{R}\frac{\partial R}{\partial\theta}=u_{r}( R). \tag{5.7}\] Equations 5.6 and 5.7 are made dimensionless using the scaling laws for the crater opening timescale \(\tilde{t}_{max}\) (equation 3.1) and the maximum crater radius \(\widetilde{R}_{max}\) (equation 3.2), which gives the partial differential equation system \[\left\{\begin{array}{rcl}\left(\frac{\partial\phi^{*}}{\partial t^{*}} \right)_{r^{*}=R^{*}}=-\frac{1}{2}u^{*}(R^{*})^{2}+\frac{1}{4}\text{B}\left( \frac{1}{2},\frac{5}{8}\right)^{2}\xi R^{*}\cos\theta-\frac{1}{8}\sqrt{\frac{3 }{2}}\text{B}\left(\frac{1}{2},\frac{5}{8}\right)^{2}\frac{\xi}{\sqrt{\Phi }}\frac{\sqrt{Fr}}{We}C^{*}\\ \frac{\partial R^{*}}{\partial t^{*}}=u_{r}^{*}(R^{*})-u_{\theta}^{*}(R^{*}) \frac{1}{R^{*}}\frac{\partial R^{*}}{\partial\theta}\end{array}\right., \tag{5.8}\] where the star notation denotes quantities made dimensionless with \(\tilde{R}_{max}\) and \(\tilde{t}_{max}\), _e.g._\(t^{*}=\tilde{t}/\tilde{t}_{max}\). 
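Before projecting this system, the two ingredients introduced above can be evaluated directly. The following Python sketch computes the velocity components of equation 5.2 from a set of potential coefficients \(\phi_{n}\), and the local curvature of equation 5.5 from a crater boundary sampled on a uniform \(\theta\) grid; it is only an illustrative helper, not part of the model derivation.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def velocity_from_potential(phi_n, r, theta):
    """u_r and u_theta of the axisymmetric Laplace solution (equations 5.1-5.2),
    given the coefficients phi_n of the velocity potential. r and theta are
    broadcastable NumPy arrays (or scalars)."""
    x = np.cos(theta)
    shape = np.broadcast(r, theta).shape
    u_r = np.zeros(shape)
    u_theta = np.zeros(shape)
    for n, phi in enumerate(phi_n):
        P_n = Legendre.basis(n)          # standard Legendre polynomial P_n(x)
        dP_n = P_n.deriv()               # dP_n/dx
        u_r += -(n + 1) * phi * P_n(x) / r ** (n + 2)
        # dP_n(cos theta)/dtheta = -sin(theta) * P_n'(cos theta)
        u_theta += phi * (-np.sin(theta)) * dP_n(x) / r ** (n + 2)
    return u_r, u_theta

def curvature(R, theta):
    """Local curvature C(theta) of the interface r = R(theta) (equation 5.5),
    with derivatives taken by centred finite differences on a uniform grid."""
    dR = np.gradient(R, theta)
    d2R = np.gradient(dR, theta)
    return (R**2 + 2.0 * dR**2 - R * d2R) / (R**2 + dR**2) ** 1.5
```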
We solve this differential equation system (equations 5.8) by expanding the velocity potential (equation 5.1) up to degree \(n_{max}=2\) \[\phi^{*}(r^{*},\theta,t^{*})=\frac{\phi_{0}^{*}(t^{*})}{r^{*}}+\frac{\phi_{1}^{*}(t^{*})\cos\theta}{{r^{*}}^{2}}+\frac{\phi_{2}^{*}(t^{*})\left(3\cos^{2}\theta-1\right)}{2{r^{*}}^{3}}. \tag{5.9}\] The components of the velocity field then write (equation 5.2) \[\left\{\begin{array}{l}u_{r}^{*}(r^{*},\theta,t^{*})=-\frac{\phi_{0}^{*}(t^{*})}{{r^{*}}^{2}}-\frac{2\phi_{1}^{*}(t^{*})\cos\theta}{{r^{*}}^{3}}-\frac{3\phi_{2}^{*}(t^{*})\left(3\cos^{2}\theta-1\right)}{2{r^{*}}^{4}}\\ u_{\theta}^{*}(r^{*},\theta,t^{*})=-\frac{\phi_{1}^{*}(t^{*})\sin\theta}{{r^{*}}^{3}}-\frac{3\phi_{2}^{*}(t^{*})\sin\theta\cos\theta}{{r^{*}}^{4}}\end{array}\right.. \tag{5.10}\] We also expand the crater boundary position (equation 2.2) up to degree \(k_{max}=1\) \[R^{*}(\theta,t^{*})=R_{0}^{*}(t^{*})+R_{1}^{*}(t^{*})(2\cos\theta-1). \tag{5.11}\] Note that the crater position \(R^{*}(\theta,t^{*})\) is written as a sum of shifted Legendre polynomials, while the velocity potential \(\phi^{*}(r^{*},\theta,t^{*})\) is a sum of standard Legendre polynomials. We then project the differential equation system (equation 5.8) on a set of shifted Legendre polynomials \(\bar{P}_{m}\) up to degree \(m_{max}=2\) for the Bernoulli equation and degree \(m_{max}=1\) for the kinematic boundary condition. The projection of a function \(X\) writes \[\langle X,\bar{P}_{m}\rangle=(2m+1)\int_{0}^{\pi/2}X\bar{P}_{m}(\cos\theta)\sin\theta\mathrm{d}\theta. \tag{5.12}\] We simplify the equations by expanding the Bernoulli equation and the kinematic boundary condition to the third and the fourth order in \(R_{1}^{*}\). We obtain a system of five equations with five unknowns \(\phi_{0}^{*}(t^{*})\), \(\phi_{1}^{*}(t^{*})\), \(\phi_{2}^{*}(t^{*})\), \(R_{0}^{*}(t^{*})\) and \(R_{1}^{*}(t^{*})\) (equations A 1-A 5). The general equation system (equation 5.8) and its projection (equations A 1-A 5) may be further simplified. The third term on the right-hand side of the Bernoulli equation (in equation 5.8) corresponds to surface tension effects associated with the curvature of the air-water interface. If this term is neglected, which corresponds to \(\sqrt{Fr}/We\ll 1\), equation 5.8 then simplifies as \[\left\{\begin{array}{c}\left(\frac{\partial\phi^{*}}{\partial t^{*}}\right)_{r^{*}=R^{*}}=-\frac{1}{2}u^{*}(R^{*})^{2}+\frac{1}{4}\mathrm{B}\left(\frac{1}{2},\frac{5}{8}\right)^{2}\xi R^{*}\cos\theta\\ \frac{\partial R^{*}}{\partial t^{*}}=u_{r}^{*}(R^{*})-u_{\theta}^{*}(R^{*})\frac{1}{R^{*}}\frac{\partial R^{*}}{\partial\theta}\end{array}\right.. \tag{5.13}\] In our experiments, \(\sqrt{Fr}/We\) is two to three times larger for case A (\(1.0\times 10^{-1}\)) than for cases B, C and D (\(4.9\times 10^{-2}\), \(3.9\times 10^{-2}\) and \(3.3\times 10^{-2}\), respectively). This is consistent with the surface tension argument used to explain the difference between case A and the other cases (§ 3). Since \(\xi\) is independent of \(Fr\) and \(We\) in our experimental range (Lherm _et al._, 2022), this normalised equation system without surface tension is independent of the impact parameters and may be used to provide a predictive model. The general and the simplified equation systems are solved numerically as initial value problems, using a differential equation solver. The solution thus depends on the choice of initial conditions. On one hand, we can solve the equation systems separately for each experiment.
The initial conditions are defined at \(\tilde{t}=1\), which corresponds to an advection time of the impacting drop, and fitted on each experiment by using a joint least-square inversion of the five experimental coefficients over the entire time series. On the other hand, we can solve the equation systems at the same time for all the experiments. The initial conditions are also defined at \(\tilde{t}=1\) but fitted simultaneously on all the experiments using the joint least-square inversion over the entire time series. This method allows to define a unique set of initial conditions that may be used in a predictive model. In both cases, the fitting procedure is motivated by the sensitivity of the model to the initial conditions used. A slight modification of the initial conditions may change significantly the time evolution of the coefficients. This sensitivity might be related to the exact impact conditions, including a possible variability in the contact dynamics with the surface of the pool and in the shape of the drop upon impact. Furthermore, the sensitivity to initial conditions might be amplified by the truncation of the crater shape and of the velocity potential expansion, which is probably insufficient to model properly the early evolution of the crater. This sensitivity is investigated in more detail in appendix B. We now define two models using different systems of equations and definitions of initial conditions. The first model, referred to as the general model, accounts for surface energy effects and uses the general equation system (equation 5.8) and initial conditions fitted on single experiments. This means that the number of sets of initial conditions is equal to the number of experiments. For example, the initial conditions of a given experiment in case B are \(\phi_{0}^{*}(1)=-0.07\pm 0.02\), \(\phi_{1}^{*}(1)=-0.07\pm 0.02\), \(\phi_{2}^{*}(1)=0.009\pm 0.003\), \(R_{0}^{*}(1)=0.41\pm 0.03\) and \(R_{1}^{*}(1)=-0.28\pm 0.02\). Uncertainties on the coefficients correspond to \(1-\sigma\) standard deviations on the parameters in the least-square inversion. The initial conditions of all the experiments are presented in appendix B. The second model, referred to as the simplified model, uses the simplified equation system, without surface tension and independent of the impact parameters (equation 5.13), as well as initial conditions fitted on all the experiments. The reference set of initial conditions is \[\left\{\begin{array}{ll}\phi_{0}^{*}(1)=-0.21\pm 0.01,&\phi_{1}^{*}(1)=0.002 \pm 0.005,&\phi_{2}^{*}(1)=0.0004\pm 0.0005,\\ R_{0}^{*}(1)=0.29\pm 0.02,&R_{1}^{*}(1)=-0.39\pm 0.02.&\end{array}\right. \tag{5.14}\] Given the uncertainties, this set of initial conditions can be further simplified by using \(\phi_{1}^{*}(1)=\phi_{2}^{*}(1)=0\), which corresponds to an initial velocity field given by \((u_{r}^{*}=-\phi_{0}^{*}(1)/{r^{*}}^{2},u_{\theta}^{*}=0)\). The physical interpretation of these initial conditions should be investigated in the future. It probably involves the contact dynamics between the drop and the pool and the early evolution of the crater. Nonetheless, the simplified model is a predictive model, independent of the impact parameters, that can be used to predict the crater and velocity field evolution within the range of \(Fr\) and \(We\) covered by our experiments. 
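For reference, the structure of this initial-value integration can be sketched as follows in Python, starting from the reference initial conditions of equation 5.14. The right-hand sides are the projected equations A 1-A 5 (with \(\sqrt{Fr}/We=0\) for the simplified model); they are too lengthy to reproduce here, so the derivative function below is only a placeholder, and the coefficient \(\xi\) is left as a free parameter.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reference initial conditions of the simplified model (equation 5.14),
# with the state ordered as y = [phi0*, phi1*, phi2*, R0*, R1*].
y_reference = np.array([-0.21, 0.0, 0.0, 0.29, -0.39])

def simplified_rhs(t_star, y, xi):
    """Time derivatives dy/dt* of the projected system.
    Placeholder only: the actual expressions are equations A 1-A 5 evaluated
    with sqrt(Fr)/We = 0; they are not reproduced here."""
    phi0, phi1, phi2, R0, R1 = y
    return np.zeros(5)          # <-- substitute equations A 1-A 5 here

def crater_boundary(theta, y):
    """Crater shape R*(theta) reconstructed from a state vector (equation 5.11)."""
    return y[3] + y[4] * (2.0 * np.cos(theta) - 1.0)

def integrate(t_span, xi, y0=y_reference):
    """Integrate the simplified model over a normalised time interval t* = t~/t~_max."""
    return solve_ivp(simplified_rhs, t_span, y0, args=(xi,),
                     dense_output=True, rtol=1e-8, atol=1e-10)
```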
However, we anticipate the model to show predictability limitations outside of this range of \(Fr\) and \(We\), in particular at low \(Fr\) and \(We\) in the bubble entrapment region (_e.g._ Pumphrey & Elmore, 1990), due to the neglected surface tension term and more generally to the relatively low degree of truncation used in our model. Figure 11 compares the experimental coefficients \(\phi_{0}^{*}\) (a), \(\phi_{1}^{*}\) (b) and \(\phi_{2}^{*}\) (c) of the velocity potential and the experimental coefficients \(R_{0}^{*}\) (d) and \(R_{1}^{*}\) (e) of the crater shape with the coefficients obtained with the general (coloured solid lines) and the simplified (black solid lines) models. We determine the experimental velocity potential coefficients from the experimental velocity field using a joint least-square inversion of the radial and the polar components (equation 5.2). We also obtain the experimental crater shape coefficients by fitting the crater boundary position with the shifted Legendre polynomials expansion (equation 2.2), using the method described in § 2.4.1. The models capture well the evolution of the velocity potential (figure 11a-c) for all cases. In detail, the models are dominated by \(\phi_{1}^{*}\) and are slightly less accurate when it comes to fitting \(\phi_{2}^{*}\), as expected since this corresponds to velocity fluctuations on smaller scales. These results are consistent with the good agreement between the simplified model and the experimental velocity coefficients of figure 10, in particular regarding the slope of \(u_{r,0}(r,t)\). Although \(u_{r,0}(r,t)\) remains less steep than in the experiments, it decreases significantly faster than \(1/r^{2}\). The models also capture well the evolution of the crater shape (figure 11d-e). Note that \(\tilde{R}_{max}\) slightly overestimates the experimental maximum crater radius, with maximum \(R_{0}^{*}\) systematically smaller than 1. This can be explained by the neglected surface energy in the energy balance (Lherm _et al._, 2022). In detail, \(R_{1}^{*}\) is slightly underestimated and changes at a higher rate than experimental data when \(\tilde{t}/\tilde{t}_{max}\lesssim 0.4\) and \(\tilde{t}/\tilde{t}_{max}\gtrsim 1.7\). This corresponds respectively to the early opening of the crater and the end of crater collapse, including the formation of the central jet, where an expansion of \(R\) to a higher degree (at least \(k=2\)) would be required to model the observed degree of deformation of the cavity (_e.g._ figure 4c). Note that the predictive model, using the simplified equation system (equation 5.13) with the reference set of initial conditions (equation 5.14), is particularly in good agreement with the experimental data. The sensitivity of the simplified model to the initial conditions is illustrated with two solutions where the initial conditions have been modified by \(\pm 25\%\) with respect to their reference value (figure 11, black dashed lines). Although case A is slightly different from cases B, C and D due to surface tension effects (see § 3), the models capture properly the general cratering dynamics. In detail, \(\phi_{2}^{*}\) and \(R_{1}^{*}\) are significantly underestimated when \(0.5\lesssim\tilde{t}/\tilde{t}_{max}\lesssim 1.4\), as expected since the models do not account for the capillary wave propagation responsible for this cavity deformation.
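The joint inversion mentioned above can be written compactly. The Python sketch below recovers \((\phi_{0}^{*},\phi_{1}^{*},\phi_{2}^{*})\) at a given instant from sampled radial and polar velocity components by least squares, using the truncated basis of equation 5.10; the masking of points inside the crater and the experimental normalisation are omitted.

```python
import numpy as np

def fit_potential_coefficients(r, theta, u_r, u_t):
    """Joint least-squares inversion of (phi0, phi1, phi2) from sampled radial
    and polar velocity components, using the truncated expansion of
    equations 5.9-5.10. r, theta, u_r and u_t are 1-D arrays of samples."""
    c, s = np.cos(theta), np.sin(theta)
    # Radial rows: u_r = -phi0/r^2 - 2 phi1 c/r^3 - (3/2) phi2 (3 c^2 - 1)/r^4
    A_r = np.column_stack([-1.0 / r**2,
                           -2.0 * c / r**3,
                           -1.5 * (3.0 * c**2 - 1.0) / r**4])
    # Polar rows: u_theta = -phi1 s/r^3 - 3 phi2 s c/r^4
    A_t = np.column_stack([np.zeros_like(r),
                           -s / r**3,
                           -3.0 * s * c / r**4])
    A = np.vstack([A_r, A_t])
    b = np.concatenate([u_r, u_t])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs   # [phi0, phi1, phi2]
```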
Figure 11: Coefficients \(\phi_{0}^{*}\) (a), \(\phi_{1}^{*}\) (b), \(\phi_{2}^{*}\) (c), \(R_{0}^{*}\) (d) and \(R_{1}^{*}\) (e) as a function of time normalised by the opening timescale of the crater \(\tilde{t}/\tilde{t}_{max}\), in the four cases. For each case, the different types of markers correspond to different experiments. The coloured solid lines give the solution of the general model, where the initial conditions are fitted on a single experiment of the corresponding case. The black solid lines give a solution to the simplified model, where the initial conditions are fitted simultaneously on all the experiments. The black dashed lines give a solution to the simplified model, where the initial conditions are modified by \(\pm 25\%\) with respect to their reference value. Figure 12 compares snapshots of the radial (a, b) and polar (c, d) components of the experimental velocity field (a, c) with the components calculated from the predictive simplified model (b, d), in case B. The comparison is conducted at different times during the opening stage (i), just before the crater reaches its maximum size (ii), and during the closing stage (iii). This illustrates that the velocity fields from the simplified model and the experiment are very similar during all stages of the cratering process. The differences observed are mainly in the magnitude of the velocity, in particular close to the crater and the initial water surface (\(\theta=\pm\pi/2\)). Similar results are obtained in the other cases. The good agreement between the experimental velocity field and the simplified model shows that the truncation used in the model (degree \(k=1\) in shifted Legendre polynomials for \(R^{*}\) and degree \(n=2\) in Legendre polynomials for \(\phi^{*}\)) is sufficient to accurately capture the flow dynamics. Figure 13 compares the crater shape obtained in a backlight experiment similar to case B (\(Fr=442\)) with the crater boundary position calculated from the predictive simplified model. The crater shape is well captured by the model, consistently with the results of figure 11d-e. In detail, at the very beginning of the crater opening stage (figure 13, i), the model overestimates the width of the crater and does not capture accurately the flat-bottomed shape of the cavity. During the crater opening stage and the beginning of the crater collapse stage (figure 13, iii-v), the model slightly underestimates the crater depth and width, consistently with the coefficients of figure 11d-e. Finally, when the crater collapses (figure 13, vi), the model shows the central jet initiation, although it visibly lacks higher degrees to account for the vertical walls of the cavity. Figure 13 also compares the experimental velocity field obtained in case B with the velocity field obtained from the simplified model. The comparison shows a good agreement between the two, which is consistent with the analysis of figure 12. Figure 12: Radial (a, b) and polar (c, d) component of the velocity field from experimental data (a, c) and from the simplified model (b, d), in case B. The snapshots correspond to times when the crater is opening (i), when the crater is almost at its maximum size (ii) and when the crater is closing (iii). The solid green lines correspond to the experimental crater boundary. ## 6 Conclusion In this paper, we analyse quantitatively the velocity field around the crater produced by the impact of a liquid drop onto a deep liquid pool.
Using new high-resolution PIV measurements, we obtain simultaneously the evolution of the velocity field around the cavity and the crater shape. We found that the shape of the cavity and the velocity field seem to be independent of \(Fr\) and \(We\) at a given \(\tilde{t}/\tilde{t}_{max}\), when these two dimensionless numbers are large enough (cases B, C and D). The velocity field is dominated by the degrees 0 and 1 in terms of shifted Legendre polynomials, with the degree 0 of the radial component \(u_{r,0}(r,t)\) decreasing faster than \(1/r^{2}\). Furthermore, the radial component of the velocity field is dominated by the degree 1 in terms of standard Legendre polynomials. This is not inconsistent with the growth of the crater because the degree 1 of the radial component has a non-zero average over a hemisphere. The experiments also show significant contributions from the degree 2, in particular when the crater is strongly deformed. This is possibly related to the non-hemispherical shape (degree 1) of the cavity. We also found that the velocity field does not vanish when the crater reaches its maximum size. Figure 13: Time evolution of the crater shape obtained in a backlight experiment similar to case B (\(Fr\) = 442). The solid green lines correspond to the crater boundary obtained from the simplified model. The black arrows correspond to the experimental velocity field, normalised by its maximum value in each snapshot. The grey arrows correspond to the velocity field obtained from the simplified model, normalised by its maximum value in each snapshot. In the previous velocity models (Engel 1967; Maxwell 1977; Leng 2001; Bisighini _et al._ 2010), strong constraints were imposed on the crater shape and/or on the velocity field. They were unable to explain the properties observed in our experimental measurements, in particular the radial dependency of the radial component of the velocity field and the evolution of the shape of the cavity. We thus developed a semi-analytical model based on a Legendre polynomials expansion of an unsteady Bernoulli equation, coupled with a kinematic boundary condition at the crater boundary. Assuming that the surface tension term involved in the Bernoulli equation is negligible, we define a simplified model, independent of the impact parameters, that can predict the evolution of the crater shape and of the velocity field within the range of \(Fr\) and \(We\) numbers covered in our experiments. Although the model is sensitive to the initial conditions, it remains predictive by using a unique set of fitted initial conditions. In particular, the model can capture the initiation of the central jet. However, one intrinsic limitation of the model is that it assumes the cavity radius to be a bijective function of \(\theta\). While this assumption is true during the opening stage and part of the crater closing stage, including the central jet initiation, it eventually fails when the central jet reaches a critical height, since the air/water interface can be crossed twice at a given \(\theta\). The model can therefore not be used to describe the full growth of the central jet. **Acknowledgements.** We thank M. Moulin for his help with the design and construction of the experimental apparatus. We thank three anonymous reviewers for their valuable comments which significantly improved the manuscript. **Funding.** This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant number 716429).
ISTerre is part of Labex OSUG@2020 (ANR10 LABX56). Partial funding for this research was provided by the Center for Matter at Atomic Pressure (CMAP), a National Science Foundation (NSF) Physics Frontier Center, under award PHY-2020249. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation. **Declaration of interests.** The authors report no conflict of interest. **Author ORCID.** V. Lherm, [https://orcid.org/0000-0001-5814-0637](https://orcid.org/0000-0001-5814-0637); R. Deguen, [https://orcid.org/0000-0002-6883-0655](https://orcid.org/0000-0002-6883-0655) **Author contributions.** V.L. and R.D. designed the study, derived the model and contributed to analysing data and reaching conclusions. V.L. conducted the experiments. V.L. and R.D wrote the paper. ## Appendix A Equations of the Legendre polynomials model The Legendre polynomials model equations correspond to the projection of equation 5.8 up to degree \(m_{max}=1\) for the kinematic boundary condition and up to degree \(m_{max}=2\) for the Bernoulli equation. The projected boundary conditions and Bernoulli equations are respectively expanded to the fourth and the third order in \(R_{1}^{*}\). The boundary condition then writes \[\dot{R}_{0}^{*} =\frac{1}{6{R_{0}^{*}}^{8}}\left\{-6\phi_{0}^{*}{R_{0}^{*}}^{2} \left({R_{0}^{*}}^{2}{R_{1}^{*}}^{2}+{R_{0}^{*}}^{4}+{R_{1}^{*}}^{4}\right)-2 \phi_{1}^{*}{R_{0}^{*}}\left({R_{0}^{*}}^{3}{R_{1}^{*}}+10{R_{0}^{*}}^{2}{R_{1 }^{*}}^{2}+6{R_{0}^{*}}^{{R_{1}^{*}}}^{3}\right.\right.\] \[\left.\left.+3{R_{0}^{*}}^{4}+21{R_{1}^{*}}^{4}\right)+3\phi_{2} ^{*}{R_{1}^{*}}(3{R_{0}^{*}}-{R_{1}^{*}})\left({R_{0}^{*}}^{2}+3{R_{1}^{*}}^{2 }\right)\right\}\] \[+O({R_{1}^{*}}^{5}), \tag{15}\] \[\dot{R}_{1}^{*} =\frac{1}{140{R_{0}^{*}}^{8}}\left\{14{R_{0}^{*}}^{3}{R_{1}^{*}}[ 20{R_{0}^{*}}(\phi_{0}^{*}{R_{0}^{*}}+2\phi_{1}^{*})+9\phi_{2}^{*}]+6{R_{0}^{*} }{R_{1}^{*}}^{3}[56{R_{0}^{*}}(\phi_{0}^{*}{R_{0}^{*}}+5\phi_{1}^{*})+75\phi_{ 2}^{*}]\right.\] \[\left.+84{R_{0}^{*}}^{2}{R_{1}^{*}}^{2}(2\phi_{1}^{*}{R_{0}^{*}} -15\phi_{2}^{*})+15{R_{1}^{*}}^{4}(68\phi_{1}^{*}{R_{0}^{*}}-189\phi_{2}^{*}) -35{R_{0}^{*}}^{4}(4\phi_{1}^{*}{R_{0}^{*}}+9\phi_{2}^{*})\right\}\] \[+O({R_{1}^{*}}^{5}), \tag{16}\] and the Bernoulli equation writes \[0 = \frac{1}{1680{R_{0}^{*}}^{11}}\left\{28{R_{0}^{*}}^{2}\left[{R_{0}^{ *}}^{3}\left(20{R_{0}^{*}}^{3}\phi_{0}^{*}\left(3{R_{0}^{*}}^{2}+{R_{1}^{*}}^{2} \right)+10{\phi_{0}^{*}}^{2}\left(3{R_{0}^{*}}^{2}+10{R_{1}^{*}}^{2}\right)\right.\right.\] (A.3) \[\left.\left.+2{R_{0}^{*}}\phi_{1}^{*}\left(-10{R_{0}^{*}}^{2}{R_{1 }^{*}}+15{R_{0}^{*}}{R_{1}^{*}}^{2}+15{R_{0}^{*}}^{3}-12{R_{1}^{*}}^{3}\right)- 3{R_{1}^{*}}\phi_{2}^{*}\left(-4{R_{0}^{*}}{R_{1}^{*}}+15{R_{0}^{*}}^{2}\right.\right.\] \[\left.\left.+30{R_{1}^{*}}^{2}\right)\right)+20\phi_{0}^{*}\phi_ {1}^{*}R_{0}^{*}\left(-5{R_{0}^{*}}^{2}{R_{1}^{*}}+15{R_{0}^{*}}{R_{1}^{*}}^{2 }+3{R_{0}^{*}}^{3}-21{R_{1}^{*}}^{3}\right)+6{\phi_{1}^{*}}^{2}\left(-15{R_{0} ^{*}}^{2}{R_{1}^{*}}\right.\right.\] \[\left.\left.+77{R_{0}^{*}}{R_{1}^{*}}^{2}+10{R_{0}^{*}}^{3}-84{R_ {1}^{*}}^{3}\right)\right]+168\phi_{2}^{*}R_{0}^{*}\left[-9{R_{0}^{*}}^{2}{R_{ 1}^{*}}(5\phi_{0}^{*}R_{0}^{*}+7\phi_{1}^{*})+7{R_{0}^{*}}{R_{1}^{*}}^{2}(3 \phi_{0}^{*}R_{0}^{*}\right.\right.\] \[\left.\left.+28\phi_{1}^{*}\right)-36{R_{1}^{*}}^{3}(7\phi_{0}^{ *}R_{0}^{*}+13\phi_{1}^{*})+15\phi_{1}^{*}{R_{0}^{*}}^{3}\right]+72{\phi_{2}^{* 
}}^{2}\left[-70{R_{0}^{*}}^{2}{R_{1}^{*}}+558{R_{0}^{*}}{R_{1}^{*}}^{2}\right.\] \[\left.\left.+35{R_{0}^{*}}^{3}-720{R_{1}^{*}}^{3}\right]-\frac{3 5}{\sqrt{\Phi}}{\rm Be}\left(1/2,5/8\right)^{2}\xi{R_{0}^{*}}^{6}\left[-3\sqrt {6}\sqrt{Fr}{R_{0}^{*}}^{2}{R_{1}^{*}}^{2}\right.\right.\] \[\left.\left.+3\sqrt{6}\sqrt{Fr}{R_{0}^{*}}^{2}{R_{1}^{*}}\left({R _{1}^{*}}^{2}-{R_{0}^{*}}^{2}\right)-3\sqrt{6}\sqrt{Fr}{R_{0}^{*}}^{4}+2\sqrt {\Phi}{We}{R_{0}^{*}}^{5}{R_{1}^{*}}+6\sqrt{\Phi}{We}{R_{0}^{*}}^{6}\right]\right\}\] \[+O({R_{1}^{*}}^{4}),\] \[0 = \frac{1}{280{R_{0}^{*}}^{11}}\left\{-14{R_{0}^{*}}^{2}\left[4{R_ {0}^{*}}^{2}{R_{1}^{*}}\left({R_{0}^{*}}^{3}\dot{\phi}_{0}^{*}\left(5{R_{0}^{* }}^{2}+3{R_{1}^{*}}^{2}\right)+10{\phi_{0}^{*}}^{2}\left({R_{0}^{*}}^{2}+3{R_{ 1}^{*}}^{2}\right)\right)\right.\right.\] (A.4) \[\left.\left.-20\phi_{0}^{*}\phi_{1}^{*}{R_{0}^{*}}\left(-5{R_{0}^ {*}}^{2}{R_{1}^{*}}+9{R_{0}^{*}}{R_{1}^{*}}^{2}+{R_{0}^{*}}^{3}-21{R_{1}^{*}}^ {3}\right)+2{R_{0}^{*}}^{4}\dot{\phi}_{1}^{*}\left(10{R_{0}^{*}}^{2}{R_{1}^{*}} -9{R_{0}^{*}}{R_{1}^{*}}^{2}\right.\right.\right.\] \[\left.\left.\left.-5{R_{0}^{*}}^{3}+12{R_{1}^{*}}^{3}\right)+3{ \phi_{1}^{*}}^{2}\left(44{R_{0}^{*}}^{2}{R_{1}^{*}}-63{R_{0}^{*}}{R_{1}^{*}}^{ 2}-5{R_{0}^{*}}^{3}+256{R_{1}^{*}}^{3}\right)\right]\right.\] \[+42{\phi_{2}^{*}}^{R_{0}^{*}}\left[3{\phi_{0}^{*}}^{R_{0}}\left(- 4{R_{0}^{*}}^{2}{R_{1}^{*}}+63{R_{0}^{*}}{R_{1}^{*}}^{2}+5{R_{0}^{*}}^{3}-32{R_ {1}^{*}}^{3}\right)+2{\phi_{1}^{*}}\left(-49{R_{0}^{*}}^{2}{R_{1}^{*}}\right.\right.\] \[\left.\left.+156{R_{0}^{*}}{R_{1}^{*}}^{2}+9{R_{0}^{*}}^{3}-396{R _{1}^{*}}^{3}\right)\right]+6{R_{0}^{*}}^{5}\dot{\phi}_{2}^{*}\left[-14{R_{0}^{* }}^{2}{R_{1}^{*}}+126{R_{0}^{*}}{R_{1}^{*}}^{2}+35{R_{0}^{*}}^{3}-40{R_{1}^{*}}^ {3}\right]\right.\] \[\left.\left.+9{\phi_{2}^{*}}^{2}\left[-496{R_{0}^{*}}^{2}{R_{1}^{ *}}+864{R_{0}^{*}}{R_{1}^{*}}^{2}+35{R_{0}^{*}}^{3}-4960{R_{1}^{*}}^{3}\right]\right.\right.\] \[\left.\left.-\frac{1120\pi}{\sqrt{\Phi}}{\mathbf{{\rm{ \Phi}}}}We\frac{\Gamma\left(5/8\right)^{2}}{\Gamma\left(1/8\right)^{2}}\xi{R_{0 }^{*}}^{6}({R_{0}^{*}}+{R_{1}^{*}})\left[3\sqrt{6}\sqrt{Fr}{R_{0}^{*}}^{*}{R_{1}^ {*}}^{2}+2\sqrt{\Phi}We{R_{0}^{*}}^{5}\right]\right\}\right.\right.\] \[\left.\left.+O({R_{1}^{*}}^{4}),\right.\] \[0 = \frac{1}{336{R_{0}^{*}}^{11}}\left\{{R_{0}^{*}}^{2}\left[{R_{0}^{*}} ^{3}\left(4\left(56{R_{1}^{*}}^{2}\left({R_{0}^{*}}^{3}\dot{\phi}_{0}^{*}+5{\phi _{0}^{*}}^{2}\right)-4R_{0}^{*}R_{1}^{*}\dot{\phi}_{1}^{*}\left(-21R_{0}^{*}R_{1 }^{*}+14{R_{0}^{*}}^{2}\right.\right.\right.\] (A.5) \[\left.\left.+24{R_{1}^{*}}^{2}\right)+3\dot{\phi}_{2}^{*}\left(-42 {R_{0}^{*}}^{2}R_{1}^{*}+22{R_{0}^{*}}^{2}{R_{1}^{*}}^{2}+7{R_{0}^{*}}^{3}-120{ R_{1}^{*}}^{3}\right)\right)\] \[-\frac{7}{\sqrt{\Phi}We}{\rm B}\left(1/2,5/8\right)^{2}\xi R_{1}^ {*}\left(3\sqrt{6}\sqrt{Fr}{R_{0}^{*}}^{3}R_{1}^{*}-21\sqrt{6}\sqrt{Fr}{R_{0} ^{*}}^{2}{R_{1}^{*}}^{2}\right.\] \[\left.\left.+4\sqrt{\Phi}We{R_{0}^{*}}^{6}\right)\right)-1120 \phi_{0}^{*}\phi_{1}^{*}R_{0}^{*}R_{1}^{*}\left(-3R_{0}^{*}R_{1}^{*}+{R_{0}^{ *}}^{2}+6{R_{1}^{*}}^{2}\right)+84{\phi_{1}^{*}}^{2}\left(-12{R_{0}^{*}}^{2}R_ {1}^{*}\right.\right.\] \[\left.\left.+67{R_{0}^{*}}{R_{1}^{*}}^{2}+{R_{0}^{*}}^{3}-96{R_{1 }^{*}}^{3}\right)\right]+84\phi_{2}^{*}R_{0}^{*}\left[3\phi_{0}^{*}R_{0}^{*} \left(-12{R_{0}^{*}}^{2}R_{1}^{*}+11{R_{0}^{*}}{R_{1}^{*}}^{2}+{R_{0}^{*}}^{3}\right.\right.\] \[\left.\left.-96{R_{1}^{*}}^{3}\right)+2\phi_{1}^{*}(R_{0}^{*}-6R_ 
{1}^{*})\left(-9{R_{0}^{*}}R_{1}^{*}+3{R_{0}^{*}}^{2}+46{R_{1}^{*}}^{2}\right) \right]+18\phi_{2}^{*}\left[-148{R_{0}^{*}}^{2}R_{1}^{*}\right.\] \[\left.+1116{R_{0}^{*}}{R_{1}^{*}}^{2}+23{R_{0}^{*}}^{3}-1860{R_{1 }^{*}}^{3}\right]\right\}\] \[+O({R_{1}^{*}}^{4}).\] The simplified version of the equation system (equation 5.13) can be obtained by using \(\sqrt{Fr}/We=0\) in equations A.1-A.5. ## Appendix B Initial conditions of the Legendre polynomials model Figure 14 shows the initial conditions of the general model, obtained by fitting individually the experiments, and of the simplified model, obtained by fitting all the experiments simultaneously. They are both defined at \(\tilde{t}=1\) and use a joint least-square inversion of the experimental coefficients over the entire time series. Uncertainties on the initial conditions correspond to \(1-\sigma\) standard deviations on the parameters in the least-square inversion. At \(\mathrm{low}\sqrt{Fr}/We\), corresponding to high \(Fr\) and \(We\) numbers (cases B, C, D), the dispersion of the initial conditions is larger than the uncertainties associated with the least-square inversion, whereas at higher \(\sqrt{Fr}/We\), corresponding to moderate \(Fr\) and \(We\) numbers (case A), the initial conditions are clustered within the inversion uncertainties. This dispersion at higher \(Fr\) and \(We\) suggests a higher variability of the crater shape and of the velocity field upon impact. This might be related to a greater sensitivity to the exact impact conditions, possibly including variability in the contact dynamics with the surface of the pool and in the shape of the drop upon impact. Furthermore, we do not find any secondary dependency on \(Fr\) or \(We\). Finally, the initial conditions of the simplified model, obtained by fitting all the experiments simultaneously, are similar to the initial conditions obtained by fitting individually the experiments. The relatively large dispersion observed for a given case (except for case A) indicates that the model is sensitive to the initial conditions. For example, a change in all the initial conditions by \(\pm 25\%\) gives a significantly modified evolution of the coefficients over time (figure 11, black dashed lines). In order to further investigate this initial condition sensitivity, we conducted a quantitative test on the simplified model. Figure 15 shows the relative change of the model coefficients with respect to the simplified model, as a result of an individual modification of a single initial condition from the reference value defined in equation 5.14. The relative change \(\delta X\) is defined as the absolute change in \(X=\{\phi_{0}^{*},\phi_{1}^{*},\phi_{2}^{*},R_{0}^{*},R_{1}^{*}\}\), \(X-X_{\mathrm{ref}}\), scaled by the root mean square of the simplified model \(\mathrm{RMS}(X_{\mathrm{ref}})\). We choose to scale the absolute change by the root mean square of the simplified model to ensure a non-diverging value of the relative change when \(X_{\mathrm{ref}}\to 0\). Note that this sensitivity test only investigates the role of independent parameter modifications. Coupled modifications of the initial conditions (as in figure 11, black dashed lines) might amplify significantly the changes in the evolution of the coefficients. Within the range of parameter modifications (by \(\pm 40\%\)), the coefficients are generally more influenced by modifications of the initial conditions of the crater shape, _i.e._\(R_{0}^{*}(1)\) (figure 15d) and \(R_{1}^{*}(1)\) (figure 15e). 
Besides, the coefficient \(R_{0}^{*}\) is the least modified with a maximum change of \(\sim 30\%\) (figure 15iv), while \(\phi_{0}^{*}\), \(\phi_{1}^{*}\), \(\phi_{2}^{*}\) and \(R_{1}^{*}\) reach respectively \(\sim 300\%\) (figure 15i), \(\sim 150\%\) (figure 15ii), \(\sim 200\%\) (figure 15iii) and \(\sim 100\%\) (figure 15v). Finally, the change in the coefficients over time is not homogeneous. For example, \(\phi_{0}^{*}\) is changed relatively uniformly over time (in magnitude), independently of the modified initial condition, while \(\phi_{2}^{*}\) is changed much more heterogeneously and depends on the modified initial condition.
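For completeness, the relative-change metric used in this sensitivity test follows directly from its definition; a minimal Python sketch, assuming the coefficients are stored as NumPy arrays sampled at the same times, is given below.

```python
import numpy as np

def relative_change(X, X_ref):
    """delta X = (X - X_ref) / RMS(X_ref): relative change of a model coefficient
    time series X with respect to the reference (simplified-model) solution X_ref."""
    rms_ref = np.sqrt(np.mean(X_ref**2))
    return (X - X_ref) / rms_ref
```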
2307.15712
Quantum-noise-limited optical neural networks operating at a few quanta per activation
Analog physical neural networks, which hold promise for improved energy efficiency and speed compared to digital electronic neural networks, are nevertheless typically operated in a relatively high-power regime so that the signal-to-noise ratio (SNR) is large (>10). What happens if an analog system is instead operated in an ultra-low-power regime, in which the behavior of the system becomes highly stochastic and the noise is no longer a small perturbation on the signal? In this paper, we study this question in the setting of optical neural networks operated in the limit where some layers use only a single photon to cause a neuron activation. Neuron activations in this limit are dominated by quantum noise from the fundamentally probabilistic nature of single-photon detection of weak optical signals. We show that it is possible to train stochastic optical neural networks to perform deterministic image-classification tasks with high accuracy in spite of the extremely high noise (SNR ~ 1) by using a training procedure that directly models the stochastic behavior of photodetection. We experimentally demonstrated MNIST classification with a test accuracy of 98% using an optical neural network with a hidden layer operating in the single-photon regime; the optical energy used to perform the classification corresponds to 0.008 photons per multiply-accumulate (MAC) operation, which is equivalent to 0.003 attojoules of optical energy per MAC. Our experiment used >40x fewer photons per inference than previous state-of-the-art low-optical-energy demonstrations, to achieve the same accuracy of >90%. Our work shows that some extremely stochastic analog systems, including those operating in the limit where quantum noise dominates, can nevertheless be used as layers in neural networks that deterministically perform classification tasks with high accuracy if they are appropriately trained.
Shi-Yuan Ma, Tianyu Wang, Jérémie Laydevant, Logan G. Wright, Peter L. McMahon
2023-07-28T17:59:46Z
http://arxiv.org/abs/2307.15712v1
# Quantum-noise-limited optical neural networks operating at a few quanta per activation ###### Abstract Analog physical neural networks, which hold promise for improved energy efficiency and speed compared to digital electronic neural networks, are nevertheless typically operated in a relatively high-power regime so that the signal-to-noise ratio (SNR) is large (\(>\)10). What happens if an analog system is instead operated in an ultra-low-power regime, in which the behavior of the system becomes highly stochastic and the noise is no longer a small perturbation on the signal? In this paper we study this question in the setting of optical neural networks operated in the limit where some layers use only a single photon to cause a neuron activation. Neuron activations in this limit are dominated by quantum noise from the fundamentally probabilistic nature of single-photon detection of weak optical signals. We show that it is possible to train stochastic optical neural networks to perform deterministic image-classification tasks with high accuracy in spite of the extremely high noise (SNR \(\sim\) 1) by using a training procedure that directly models the stochastic behavior of photodetection. We experimentally demonstrated MNIST handwritten-digit classification with a test accuracy of 98% using an optical neural network with a hidden layer operating in the single-photon regime; the optical energy used to perform the classification corresponds to 0.008 photons per multiply-accumulate (MAC) operation, which is equivalent to 0.003 attojoules of optical energy per MAC. Our experiment used \(>\)40\(\times\) fewer photons per inference than previous state-of-the-art low-optical-energy demonstrations, to achieve the same accuracy of \(>\)90%. Our work shows that some extremely stochastic analog systems, including those operating in the limit where quantum noise dominates, can nevertheless be used as layers in neural networks that deterministically perform classification tasks with high accuracy if they are appropriately trained. ## I Introduction The development and widespread use of very large neural networks for artificial intelligence [1; 2; 3] has motivated the exploration of alternative computing paradigms--including analog processing--in the hope of improving both energy efficiency and speed [4; 5]. Photonic implementations of neural networks using analog optical systems have experienced a resurgence of interest over the past several years [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. However, analog processors--including those constructed using optics--inevitably have noise and typically also suffer from imperfect calibration and drift. These imperfections can result in degraded accuracy for neural-network inference performed using them [6; 18; 19; 20]. To mitigate the impact of noise, noise-aware training schemes have been developed [21; 22; 23; 24; 25; 26; 27; 28; 29]. These schemes treat the noise as a relatively small perturbation to an otherwise deterministic computation, either by explicitly modeling the noise as the addition of random variables to the processor's output or by modeling the processor as having finite bit precision. Recent demonstrations of ultra-low optical energy usage in optical neural networks (ONNs) [13; 16] were in this regime of noise as a small perturbation and used hundreds to thousands of photons to represent the average neuron pre-activation signal prior to photodetection. In Ref. 
[13], we reported achieving 90% accuracy on MNIST handwritten-digit classification using slightly less than 1 photon per scalar weight multiplication (i.e., per MAC)--which is already counterintuitively small--and one might be tempted to think that it's not possible to push the number of photons per MAC much lower while preserving accuracy. More typically, millions of photons per activation are used [11; 12; 16; 30]. In this paper we address the following question: what happens if we use such weak optical signals in an ONN that each photodetector in a neural-network layer receives at most just one, or perhaps two or three, photons? Physical systems are subject to various sources of noise. While some noise can be reduced through improvements to the hardware, some noise is fundamentally unavoidable, especially when the system is operated with very little power--which is an engineering goal for neural-network processors. Shot noise is a fundamental noise that arises from the quantized, i.e., discrete, nature of information carriers: the discreteness of energy in the case of photons in optics, and the discreteness of charge in the case of electrons in electronics [31]. A shot-noise-limited measurement of a signal encoded with an average of \(N_{\mathrm{p}}\) photons (quanta) will have an SNR that scales as \(\sqrt{N_{\mathrm{p}}}\)[32].1 To achieve a suitably high SNR, ONNs typically use a large number of quanta for each detected signal. In situations where the optical signal is limited to just a few photons, photodetectors measure and can count individual quanta. Single-photon detectors (SPDs) are highly sensitive detectors that--in the typical _click detector_ setting--report, with high fidelity, the absence of a photon (_no click_) or presence of one or more photons (_click_) during a given measurement period [34]. In the quantum-noise-dominated regime of an optical signal with an average photon number of about 1 impinging on an SPD, the measurement outcome will be highly stochastic, resulting in a very low SNR (of about 1).2 Conventional noise-aware-training algorithms are not able to achieve high accuracy with this level of noise. **Is it possible to operate ONNs in this very stochastic regime and still achieve high accuracy in deterministic classification tasks?** The answer is _yes_, and in this work we will show how. Footnote 1: The _shot-noise limit_, which is sometimes also referred to as the _standard quantum limit_[33], can be evaded if, instead of encoding the signal in a thermal or coherent state of light, a quantum state—such as an intensity-squeezed state or a Fock state—is used. In this paper we consider only the case of _classical_ states of light for which shot noise is present and the shot-noise limit applies. Footnote 2: Again, this is under the assumption that the optical signal is encoded in an optical state that is subject to the shot-noise limit—which is the case for classical states of light. The stochastic operation of neural networks has been extensively studied in computer science as part of the broader field of stochastic computing [35]. In the field of machine learning, binary stochastic neurons (BSNs) have been used to construct stochastic neural networks [36; 37; 38; 39; 40; 41; 42], with training being a major focus of study. Investigations of hardware implementations of stochastic computing neural networks, such as those in Refs. [43; 44] (with many more surveyed in Ref.
[45]), have typically been for deterministic complementary metal-oxide-semiconductor (CMOS) electronics, with the stochasticity introduced by random-number generators. While many studies of binary stochastic neural networks have been conducted with standard digital CMOS processors, there have also been proposals to construct them from beyond-CMOS hardware, motivated by the desire to minimize power consumption: direct implementation of binary stochastic neurons using bistable systems that are noisy by design--such as low-barrier magnetic tunnel junctions (MTJs)--has been explored [46; 47; 48], and there have also been proposals to realize hardware stochastic elements for neural networks that could be constructed with noisy CMOS electronics or other physical substrates [49; 50]. ONNs in which noise has been intentionally added [51; 52; 25] have also been studied. Our work with low-photon-count optics is related but distinct from many of the studies cited here in its motivating assumption: instead of desiring noise and stochastic behavior--and purposefully designing devices to have them--we are concerned with situations in which physical devices have large and unavoidable noise but where we would like to nevertheless construct deterministic classifiers using these devices because of their potential for low-energy computing (Figure 1). Figure 1: **Deterministic inference using noisy neural-network hardware.****a**, The concept of a stochastic physical neural network performing a classification task. Given a particular input image to classify, repetitions exhibit variation (represented by different traces of the same color), but the class is predicted nearly deterministically. **b**, The signal-to-noise ratio (SNR) of single-photon-detection neural networks (SPDNNs) compared to conventional optical neural networks (ONNs). Conventional ONNs operate with high photon budgets (SNR \(\gg\) 1) to obtain reliable results, whereas SPDNNs operate with low photon budgets—of up to just a few detected photons per shot (SNR \(\sim\) 1). The relation between the detected optical energy (in number of photons \(N_{\mathrm{p}}\)) and SNR is SNR \(=\sqrt{N_{\mathrm{p}}}\), which is known as the shot-noise limit. The **key idea** in our work is that when ONNs are operated in the approximately-1-photon-per-neuron-activation regime and the detectors are SPDs, it is natural to consider the neurons as binary stochastic neurons: the output of an SPD is binary (_click_ or _no click_) and fundamentally stochastic. Instead of trying to train the ONN as a deterministic neural network that has very poor numerical precision, one can instead train it as a binary stochastic neural network, adapting some of the methods from the last decade of machine-learning research on stochastic neural networks [39; 40; 41; 42; 53; 54] and using a physics-based model of the stochastic single-photon detection (SPD) process during training. We call this _physics-aware stochastic training_. We experimentally implemented a stochastic ONN using as a building block an optical matrix-vector multiplier [13] modified to have SPDs at its output: we call this a _single-photon-detection neural network_ (SPDNN). We present results showing that high classification accuracy can be achieved even when the number of photons per neuron activation is approximately 1, and even without averaging over multiple shots.
We also studied in simulation how larger, more sophisticated stochastic ONNs could be constructed and what their performance on CIFAR-10 image classification would be. ## II Single-photon-detection neural networks: optical neural networks with stochastic activation from single-photon detection We consider ONNs in which one or more layers are each constructed from an optical matrix-vector multiplier followed by an array of SPDs (Figure 2a-c), and in which the optical powers used are sufficiently low that in each execution of the layer, each SPD has at most only a few photons impinging on it, leading to stochastic measurement outcomes of _no click_ or _click_. In our setting, we aim to perform _inference_ using the SPDNN--with its implementation in physical hardware--(Figure 2d) and to perform _training_ of the SPDNN _in silico_ (Figure 2e-f). That is, training is performed entirely using standard digital electronic computing.3 Footnote 3: It is not required that the training be done _in silico_ for it to succeed; it is just a choice we made in this work. _Hardware-in-the-loop_ training, such as used in Ref. [24], is a natural alternative to purely _in silico_ training that can even make training easier by relaxing the requirements on how accurate the _in silico_ model of the physical hardware process needs to be. ### Physics-aware stochastic training To train an SPDNN, we perform gradient descent using backpropagation, which involves a forward pass, to compute the current error (or loss) of the network, and a backward pass, which is used to compute the gradient of the loss with respect to the network parameters; our procedure is inspired by backpropagation-based training of stochastic and binary neural networks [39; 42]. We model the forward pass (upper part of Figure 2e) through the network as a stochastic process that captures the key physics of SPD of optical signals having Poissonian photon statistics [55]: the measurement outcome of SPD is a binary random variable (_no click_ or _click_) that is drawn from the Bernoulli distribution with a probability that depends on the mean photon number of the light impinging on the detector. However, during the backward pass (lower part of Figure 2e), we employ a deterministic mean-field estimator to compute the gradients. This approach avoids the stochasticity and binarization of the SPD process, which typically pose difficulties for gradient estimation. We now give a brief technical description of our forward and backward passes for training; for full details see Methods and Supplementary Notes 1A and 2A. We denote the neuron pre-activations of the \(l\)th stochastic layer of an SPDNN as \(\mathbf{z}^{(l)}=W^{(l)}\mathbf{a}^{(l-1)}\), where \(\mathbf{a}^{(l-1)}\) is the activation vector from the previous layer (\(\mathbf{a}^{(0)}\) denotes the input vector \(\mathbf{x}\) of the data to be classified). In the physical realization of an SPDNN, \(\mathbf{z}^{(l)}\) is encoded optically (for example, in optical intensity) following an optical matrix-vector multiplier (optical MVM, which computes the product between the matrix \(W^{(l)}\) and the vector \(\mathbf{a}^{(l-1)}\)) but before the light impinges on an array of SPDs. We model the action of an SPD with a stochastic activation function, \(f_{\rm SPD}\) (Figure 2b; Eq. 1). The stochastic output of the \(l\)th layer is then \(\mathbf{a}^{(l)}=f_{\rm SPD}(\mathbf{z}^{(l)})\).
For an optical signal having mean photon number \(\lambda\) and that obeys Poissonian photon statistics, the probability of a _click_ event by an SPD is \(P_{\rm SPD}(\lambda)=1-e^{-\lambda}\) (Figure 2c). We define the stochastic activation function \(f_{\rm SPD}\) as follows: \[f_{\rm SPD}(z)\coloneqq\begin{cases}1&\text{with probability }p=P_{\rm SPD}(\lambda(z)),\\ 0&\text{with probability }1-p,\end{cases} \tag{1}\] where \(\lambda(z)\) is a function mapping a single neuron's pre-activation value to a mean photon number. For an incoherent optical setup where the information is directly encoded in intensity, \(\lambda(z)=z\); for a coherent optical setup where the information is encoded in field amplitude and the SPD directly measures the intensity, \(\lambda(z)=|z|^{2}\). In general, the form of \(\lambda(z)\) is determined by the signal encoding used in the optical MVM, and the detection scheme following the MVM. Figure 2: **Single-photon-detection neural networks (SPDNNs):**_physics-aware stochastic training_ **and inference.** **a**, A single layer of an SPDNN, comprising an optical matrix-vector multiplier (optical MVM, in grey) and single-photon detectors (SPDs; in red), which perform stochastic nonlinear activations. Each output neuron’s value is computed by the physical system as \(a_{i}\;=\;f_{\rm SPD}(z_{i})\), where \(z_{i}\) is the weighted sum (shown in green) of the input neurons to the \(i\)th output neuron computed as part of the optical MVM, and \(a_{i}\) is the stochastic binary output from a single-photon detector. **b**, Forward and backward propagation through the SPD activation function. The optical energy (\(\lambda\)) incident on an SPD is a function of \(z_{i}\) that depends on the encoding scheme used. Forward propagation uses the stochastic binary activation function \(f_{\rm SPD}\), while backpropagation involves the mean-field function of the probability \(P_{\rm SPD}\). **c**, Probability of an SPD detecting a click (output \(a\;=\;1\)) or not (output \(a\;=\;0\)), as a function of the incident light energy \(\lambda\). **d**, Optical inference using an SPDNN with \(L\) layers. The activation values from the SPD array of each layer are passed to light emitters for the optical MVM of the next layer. The last layer uses a conventional photodetector (PD) array instead of an SPD array, and is operated with enough optical energy that the output of this layer has high SNR. **e**, _In silico_ training of an SPDNN with \(L\) layers. Each forward propagation is stochastic, and during backpropagation, the error vector is passed to the hidden layers using the mean-field probability function \(P_{\rm SPD}\) instead of the stochastic activation function \(f_{\rm SPD}\). In this figure, \(\partial x\) is shorthand for \(\partial C/\partial x\), where \(C\) is the cost function. We use \(f_{\text{SPD}}\) in modeling the stochastic behavior of an SPDNN layer in the forward pass. However, during the backward pass, we make a deterministic mean-field approximation of the network: instead of evaluating the stochastic function \(f_{\text{SPD}}\), we evaluate \(P_{\text{SPD}}(\lambda(z))\) when computing the activations of a layer: \(\mathbf{a}^{(l)}=P_{\text{SPD}}(\lambda(\mathbf{z}^{(l)}))\) (Figure 2b). This is an adaptation of a standard machine-learning method for computing gradients of stochastic neural networks [39].
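To make this procedure concrete, the following PyTorch-style sketch implements such an activation: the forward pass draws a Bernoulli click/no-click sample with probability \(P_{\rm SPD}(\lambda)=1-e^{-\lambda}\), while the backward pass uses the gradient of the mean-field function \(P_{\rm SPD}\). It assumes the incoherent encoding \(\lambda(z)=z\) with non-negative pre-activations, and is only an illustrative sketch rather than the authors' implementation.

```python
import torch

class SPDActivation(torch.autograd.Function):
    """Stochastic single-photon-detection activation.

    Forward pass: sample a binary click/no-click outcome with probability
    P_SPD(lambda) = 1 - exp(-lambda).  Backward pass: use the gradient of the
    deterministic mean-field function P_SPD instead of the sampled output."""

    @staticmethod
    def forward(ctx, z):
        # z is assumed non-negative (non-negative weights and inputs), so lambda(z) = z
        p_click = 1.0 - torch.exp(-z)
        ctx.save_for_backward(z)
        return torch.bernoulli(p_click)   # stochastic binary activation a in {0, 1}

    @staticmethod
    def backward(ctx, grad_output):
        (z,) = ctx.saved_tensors
        # d P_SPD / d lambda = exp(-lambda); with lambda(z) = z this is also d/dz
        return grad_output * torch.exp(-z)

spd_activation = SPDActivation.apply      # use as: a = spd_activation(W @ x)
```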
### Inference

When performing inference (Figure 2d), we can run just a single shot of a stochastic layer or we can choose to take the average of multiple shots--trading greater energy and/or time usage for reduced stochasticity. For a single shot, a neuron activation takes on the value \(a^{[1]}=a\in\{0,1\}\); for \(K\) shots, \(a^{[K]}=\frac{1}{K}\sum_{k=1}^{K}a_{k}\in\{0,1/K,2/K,\ldots,1\}\). In the limit of infinitely many shots, \(K\rightarrow\infty\), the activation \(a^{[\infty]}\) would converge to the expectation value, \(a^{[\infty]}=\mathbb{E}[a]=P_{\text{SPD}}(\lambda(z))\). In this work we focus on the single-shot (\(K=1\)) and few-shot (\(K\leq 5\)) regimes, since the high-shot (\(K\gg 100\)) regime is very similar to the high-photon-count-per-shot regime that has already been studied in the ONN literature (e.g., in Ref. [13]). An important practical point is that averaging for \(K>1\) shots can be achieved by counting the clicks from each SPD, which is what we did in the experiments we report. We can think of \(K\) as a discrete integration time, so averaging need not involve any data reloading or sophisticated control.

## III MNIST Handwritten-Digit Classification with a Single-Photon-Detection Multilayer Perceptron

We evaluated the performance--both in numerical simulations and in optical experiments--of SPDNNs on the MNIST handwritten-digit-classification benchmark task with a simple \(784\to N\to 10\) multilayer perceptron (MLP) architecture (Figure 3a). The activation values in the hidden layer were computed by SPDs. The optical power was chosen so that the SNR of the SPD measurements was \(\sim 1\), falling in the low-SNR regime (Figure 1b). The output layer was implemented either with full numerical precision on a digital electronic computer, or optically with an integration time set so that the measured signal comprised enough photons that a high SNR (Figure 1b) was achieved, as in conventional ONNs. Our use of a full-precision output layer is consistent with other works on binary neural networks [56; 42; 57]. In a shallow neural network, executing the output layer at high SNR substantially limits the overall energy efficiency gains from using small photon budgets in earlier layers, but in larger models, the relatively high energy cost of a high-SNR output layer is amortized. Nevertheless, as we will see, even with just a single-hidden-layer network, efficiency gains of \(>\)40\(\times\) are possible by performing the hidden layer in the low-SNR regime.

The models we report on in this section used non-negative weights in the hidden layers and real-valued weights in the output layers. This allows the hidden layers to be straightforwardly realized with optical MVMs using incoherent light.4 In Section IV and Supplementary Note 2, we report on extensions to the case of real-valued weights in coherent optical processors.

Footnote 4: A high-SNR layer with real-valued weights can be realized with an incoherent optical MVM if some digital-electronic postprocessing is allowed [58; 13]—which is the approach we take for the optical output layer executions in our experiments. However, the postprocessing strategy doesn't directly apply in the low-SNR regime because readout becomes inseparable from the application of a nonlinear activation function, so we are constrained to non-negative weights and activations in the hidden layers.

### Simulation results

First, we digitally simulated the SPDNN models shown in Figure 3a.
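As a minimal illustration of what these simulations compute (names and shapes are illustrative, not those of our actual simulation code), a single inference of the \(784\to N\to 10\) MLP-SPDNN with \(K\)-shot-averaged SPD activations can be sketched as:

```python
import torch

def spd_activations(z, K=1):
    """Average K stochastic SPD readouts of the pre-activations z (incoherent encoding, lambda = z)."""
    p_click = 1.0 - torch.exp(-z)
    clicks = torch.stack([torch.bernoulli(p_click) for _ in range(K)])
    return clicks.mean(dim=0)                # a^[K] takes values in {0, 1/K, ..., 1}

def mlp_spdnn_inference(x, W1, W2, K=1):
    """Single inference of the 784 -> N -> 10 MLP-SPDNN of Figure 3a."""
    z_hidden = x @ W1.t()                    # incoherent optical MVM: non-negative inputs and weights
    a_hidden = spd_activations(z_hidden, K)  # stochastic hidden-layer activations (K-shot averaged)
    return a_hidden @ W2.t()                 # output layer: full precision (digital) or high-SNR optical

# Repeating inference on the same input gives run-to-run variation in the output,
# which is how the accuracy distributions below (means and standard deviations) are estimated.
```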
We report the simulated test accuracies in Figure 3b for the full test dataset of 10,000 images, as a function of the number of hidden neurons \(N\) and the number of shots \(K\) of binary SPD measurements integrated to compute each activation. Due to the stochastic nature of the model, the classification output for a fixed input varies from run to run. We repeated inferences on fixed inputs from the test set 100 times; we report the mean and standard deviation of the test accuracy as data points and error bars, respectively. The standard deviations of the test accuracies are around 0.1%. The accuracy achieved by the SPDNN is substantially higher than for linear models (\(<93\%\) classification accuracy on MNIST [59]). This both shows that despite the hidden layer being stochastic, high-accuracy deterministic classification is possible, and that the SPD activation function serves as a suitable nonlinearity in the neural network.

Figure 3: **Performance of a single-photon-detection neural network (SPDNN) on MNIST handwritten-digit classification.** **a**, An SPDNN realizing a multilayer perceptron (MLP) architecture of \(N\) neurons in the hidden layer. The hidden layer (\(784\,\rightarrow\,N\)) was computed using an incoherent optical matrix-vector-multiplier (MVM) followed by a single-photon-detector (SPD) array. Each SPD realized a stochastic activation function for a single hidden-layer neuron. During a single inference, the hidden layer was executed a small number of times (\(1\,\leq\,K\,\leq\,5\)), yielding averaged activation values. The output layer (\(N\,\rightarrow\,10\)) was realized either optically (using an optical MVM and a high photon budget to achieve high readout SNR, as in conventional ONNs) or with a digital electronic processor, yielding a result with full numerical precision. **b**, Simulated test accuracy of MNIST handwritten-digit classification for models with different numbers of hidden neurons \(N\) and shots per activation \(K\). Each activation value is obtained by averaging \(K\) shots of stochastic binary SPD readouts. When \(K\,\rightarrow\,\infty\), the stochastic activations \(a_{i}\) become the expectations \(\mathbb{E}[a_{i}]\), which are deterministic. The test accuracy with few shots is close to the accuracy achieved in the deterministic limit. **c**, Experimental evaluation of the SPDNN, with the output layer performed with full numerical precision on a digital computer. Results are presented for both \(K\,=\,1\) (single-shot, i.e., no averaging; top) and \(K\,=\,2\) (bottom) shots per activation. **d**, Experimental evaluation of the SPDNN, with both the hidden and the output layer executed using the optical experimental apparatus. The average number of detected photons used per inference in the hidden layer was kept fixed and the number used per inference in the output layer was varied (see main text for numbers). The number of detected photons per inference is reported both as an aggregate optical energy (top axis) and as a per-MAC quantity (bottom axis), which we obtained by dividing the number of photons per inference by the number of MACs performed in a single inference. The mean and standard deviation of the test accuracy were estimated using 100 repetitions of inference for each image in the test set.

The
sizes of the models we simulated (in number of neurons \(N\)) are similar to those of traditional deterministic neural networks for MNIST classification [60], so the high accuracies achieved are not a simple consequence of averaging over many noisy neurons [61]. If we integrated an infinite number of SPD measurements for each activation (\(K\rightarrow\infty\))--which is infeasible in experiment, but can be simulated--then the SPDNN output would become deterministic. The test accuracy achieved in this limit can be considered as an upper bound, as the classification accuracy improves monotonically with \(K\). Notably, even with just a single SPD measurement (\(K=1\)) for each activation, the mean test accuracy is around \(97\%\). The accuracy is substantially improved with just a few more shots of averaging, and approaches the deterministic upper bound when \(K\gtrsim 5\). The mean single-photon-detection probability, averaged over all neurons, is \(\approx 0.5\), so the simulated number of detected photons per shot is very small: \(\approx 0.5N\). As we will quantify in the next section reporting the results of optical experiments, this means high accuracy can be achieved using much less optical energy than in conventional ONNs. ### Optical experimental results In our experimental demonstrations, we based our SPDNN on a free-space optical matrix-vector multiplier (MVM) that we had previously constructed for high-SNR experiments [13], and replaced the detectors with SPDs so that we could operate it with ultra-low photon budgets (see Methods). The experiments we report were, in part, enabled by the availability of cameras comprising large arrays of pixels capable of detecting single photons with low noise [62]. We encoded neuron values in the intensity of incoherent light; as a result, the weights and input vectors were constrained to be non-negative. However, this is not a fundamental feature of SPDNNs--in the next section (Section IV), we present simulations of coherent implementations that lift this restriction. A single-photon-detecting camera measured the photons transmitted through the optical MVM, producing the stochastic activations as electronic signals that were input to the following neural-network layer (see Methods and Supplementary Note 3 and 4). In our first set of optical experiments, the hidden layer was realized optically and the output layer was realized _in silico_ (Figure 3c): the output of the SPD measurements after the optical MVM was passed through a linear classifier executed with full numerical precision on a digital electronic computer. We tested using both \(K=1\) (no averaging) and \(K=2\) shots of averaging the stochastic binary activations in the hidden layer. The results agree well with simulations, which differ from the simulation results shown in Figure 3b because they additionally modeled imperfections in our experimental optical-MVM setup (see Methods, Supplementary Note 7). The test accuracies were calculated using 100 test images, with inference for each image repeated 30 times. The hidden layer (the one computed optically in these experiments) used approximately 0.0008 detected photons per MAC, which is \(\geq 6\) orders of magnitude lower than is typical in ONN implementations [11; 12; 16; 30] and \(\geq 3\) orders of magnitude lower than the lowest photons-per-MAC numbers reported to date [13; 16]. We then performed experiments in which both the hidden layer and the output layer were computed optically (Figure 3d). 
In these experiments, we implemented a neural network with 400 hidden neurons and used 5 shots per inference (\(N=400\), \(K=5\)). The total optical energy was varied by changing the number of photons used in the output layer; the number of photons used in the hidden layer was kept fixed. The average value of the stochastic binary activations \(a_{i}\) in the hidden layer was \(\approx 0.522\). This corresponds to a total of \(0.522\times N\times K=1044\) photons being detected in the hidden layer per inference. The total detected optical energy per inference comprises the sum of the detected optical energy in the hidden (\(784\to 400\)) layer and in the output (\(400\to 10\)) layer (see Methods, Supplementary Table 6 and Supplementary Note 9). The results show that even though the output layer was operated in the high-SNR regime (Figure 1b), the full inference computation achieved high accuracy yet used only a few femtojoules of optical energy in total (equivalent to a few thousand photons). By dividing the optical energy by the number of MACs performed in a single inference, we can infer the per-MAC optical energy efficiency achieved: with an average detected optical energy per MAC of approximately 0.001 attojoules (0.003 attojoules), equivalent to 0.003 photons (0.008 photons), the test accuracy was \(92.0\pm 2.3\%\) (\(98.0\pm 1.3\%\)).

## IV Simulation study of possible future deeper, coherent single-photon-detection neural networks

We have successfully experimentally demonstrated a two-layer SPDNN, but can SPDNNs be used to implement deeper and more sophisticated models? One of the limitations of our experimental apparatus was that it used an intensity encoding with incoherent light and as a result could natively only perform operations with non-negative numbers. In this section we will show that SPDNNs capable of implementing signed numbers can be used to realize multilayer models (with up to 6 layers), including models with more sophisticated architectures than multilayer perceptrons--such as models with convolutional layers. ONNs based on coherent light can naturally encode sign information in the phase of the light and have been realized in many different physical platforms [64, 65, 66, 7, 10, 11, 6]. We propose--and study in simulation--SPDNNs using coherent light. Neuron values are encoded in optical amplitudes that are constrained to have phases that are either 0 (positive values) or \(\pi\) (negative values). With this encoding, detection by an SPD--which measures intensity and is hence insensitive to phase--results in a stochastic nonlinear activation function that is symmetric about zero (Figure 4a; see Methods). Alternative detection schemes could be employed that would modify the activation function, but we have focused on demonstrating the capabilities of this straightforward case, avoiding introducing additional experimental complexity. We performed two sets of simulation experiments: one on coherent SPDNNs trained to perform MNIST handwritten-digit classification, and one on coherent SPDNNs trained to perform CIFAR-10 image classification. Figure 4d shows the architectures tested and simulation results for the MNIST benchmark (see Methods, Supplementary Note 2B).
The accuracy achieved by MLPs with either one or two hidden layers was higher than that of the single-hidden-layer MLP simulated for the incoherent case (Figure 3b), and an architecture with a single convolutional layer followed by two linear layers achieved \(>\)99% accuracy even in the single-shot (\(K=1\)) regime.

Figure 4: **Simulation study predicting the performance of proposed _coherent_ single-photon-detection neural networks (SPDNNs).** **a**, The probability of detecting a photon as a function of the input light amplitude in a coherent SPDNN. Real-valued numbers are encoded in coherent light with either 0 phase (positive numbers) or \(\pi\) phase (negative numbers). Measurement by a single-photon detector (SPD) results in the detection of a photon with a probability that depends on the square of the encoded value \(z\), in contrast to the intensity encoding used with incoherent light. **b**, Structure of a convolutional SPDNN with a kernel size of \(5\times 5\). Single-shot SPD measurements (\(K=1\)) are performed after each layer (by an SPD array), except for the output layer. Average \(2\times 2\) pooling is applied after each convolutional operation. A digital rectified linear unit (ReLU) [63] activation function can also be used in the linear layer as an alternative. **c**, Schematic of a convolutional layer with SPD activations. **d**, Simulated test accuracy of coherent SPDNNs with varying architecture performing MNIST handwritten-digit classification. The multilayer perceptron (MLP) models had 400 neurons in each hidden layer. The convolutional model consisted of a convolutional layer with 16 output channels, followed by two linear layers with an SPD activation in between. **e**, Simulated test accuracy of coherent SPDNNs with varying architecture performing CIFAR-10 image classification. The models have four convolutional layers, each followed by SPD activation functions. The two linear layers can either be implemented at full precision with a ReLU activation function (in purple) or using the SPD activation function. The number of output channels for each convolutional layer is indicated above the corresponding data point.

Figure 4e shows the results of simulating variants of a 6-layer convolutional SPDNN (comprising 4 convolutional layers and 2 fully connected, linear layers) on CIFAR-10 image classification. All these simulation results were obtained in the single-shot (\(K=1\)) regime. The number of channels in each convolution layer was varied, which affects the total number of MACs used to perform an inference. We observed that the test accuracy increased with the size of the SPDNN, with accuracies approaching those of conventional convolutional neural networks of comparable size [67], as well as of binarized convolutional neural networks [68; 42; 69]. In the models we simulated that only used SPD as the activation function (i.e., the ones in which there are no 'Digital ReLU' blocks), the high-SNR linear output layer had only 4000 MAC operations, so the number of MACs in the high-SNR layer comprises less than 0.01% of the total MACs performed during an inference. The models we simulated are thus sufficiently large that the total optical energy cost would be dominated by the (low-SNR) layers prior to the (high-SNR) output layer. Equivalently, the optical energy cost per MAC would be predominantly determined by the cost of the low-SNR layers.
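As an illustrative sketch of a coherent convolutional SPDNN of this kind for inference (the channel counts, layer count, and the exact ordering of pooling and SPD readout are placeholders, not the exact configurations plotted in Figure 4e), the model structure might be expressed as:

```python
import torch
import torch.nn as nn

def coherent_spd(z):
    # Single-shot SPD readout of a coherent amplitude: P(click) = 1 - exp(-z^2),
    # symmetric about z = 0 because the detector is insensitive to optical phase.
    return torch.bernoulli(1.0 - torch.exp(-z ** 2))

class ConvSPDNN(nn.Module):
    """Coherent convolutional SPDNN sketch: 5x5 convolutions, 2x2 average pooling, and
    single-shot SPD activations after every layer except the high-SNR output layer."""

    def __init__(self, channels=(3, 32, 64), n_hidden=400, n_classes=10):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(c_in, c_out, kernel_size=5, padding=2, bias=False)
            for c_in, c_out in zip(channels[:-1], channels[1:])
        )
        self.pool = nn.AvgPool2d(2)
        feat = channels[-1] * (32 // 2 ** len(self.convs)) ** 2   # assumes 32x32 CIFAR-10 inputs
        self.fc1 = nn.Linear(feat, n_hidden, bias=False)
        self.fc2 = nn.Linear(n_hidden, n_classes, bias=False)     # high-SNR (or digital) output layer

    def forward(self, x):
        for conv in self.convs:
            x = coherent_spd(self.pool(conv(x)))
        x = coherent_spd(self.fc1(x.flatten(1)))
        return self.fc2(x)
```

During training, the Bernoulli sampling in `coherent_spd` would be bypassed in the backward pass by the mean-field probability, as described in Methods.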
These simulation results illustrate the ability of SPDNNs to scale to larger and deeper models, enabling them to perform more challenging tasks. The symmetric stochastic activation function that is realized by SPD of coherently encoded real values yields good accuracies on both MNIST and CIFAR-10 benchmarks and is straightforward to implement experimentally.

## V Discussion

In this paper we have shown that it is possible to construct an optical neural network (ONN) in which one or more layers use single-photon detection (SPD) of weak optical signals to perform stochastic neuron activation, and--despite the exceptionally low signal-to-noise ratio (SNR) of around 1 in the low-optical-power layers--such single-photon-detection neural networks (SPDNNs) can achieve high accuracy in deterministic classification tasks. This is enabled by physics-aware stochastic training, in which an ONN is trained as a stochastic neural network using a model that incorporates knowledge of the physics of photodetection of optical signals with average photon number around 1 that are subject to Poissonian photon statistics. We experimentally demonstrated a two-layer ONN in which the (large) hidden layer was operated in the low-optical-power, quantum-noise-limited, highly stochastic regime (SNR \(\sim 1\)) and the (small) output layer was operated in the higher-optical-power, low-noise regime (SNR \(\gtrsim 10\)). This ONN (when run with \(N=50\) hidden neurons and \(K=5\) shots of SPD measurements per activation; see Supplementary Figure 20) achieved a test accuracy of 90.6% on MNIST handwritten-digit recognition while using only an average of 1390 detected photons per inference (corresponding to \(\sim\)0.5 fJ of detected optical energy per inference), which is a large improvement over recent state-of-the-art low-optical-power ONN experiments: 1390 photons per inference is \(>\)40\(\times\) fewer than were used by the ONNs in Refs. [13; 16] to achieve the same accuracy (\(>\)90%) on the same task (MNIST classification). 5

Footnote 5: We could also very favorably compare the number of photons per MAC used in our experiments versus in the experiments reported in Refs. [13; 16], but we don't wish to emphasize this metric here for two reasons. Firstly, and most importantly, we see energy per inference as a more important metric to focus on than energy per MAC, even though picking metrics is not necessarily straightforward [70]. Secondly, dot products computed for the hidden layer in our optical experiments are read out stochastically by single-photon detectors that output just 1 bit of information, whereas the dot products computed in the experiments reported by Refs. [13; 16] are read out with more bits of precision. This difference in the nature and precision of the readout means a MAC operation in our experiments is arguably not quite the same as a MAC operation in the experiments of Refs. [13; 16], and so careful interpretation is needed when comparing their costs.

While we have demonstrated a fundamental point--that ONNs can be successfully operated in the few-photon-per-activation regime in which quantum shot noise causes very low SNR--an important practical consideration for the construction of ONNs is that the energy used by optical signals within the ONN is only part of the ONN's total energy consumption, and it is the total energy per inference that one generally wants to optimize [28; 60; 71].
A practical limitation of our experiments is that they were conducted with a relatively slow6 single-photon-detector array, limiting the speed at which a single execution of a layer could be carried out, and the detector array was not optimized for energy efficiency. For our fundamental approach and methods to be applied to make ONNs that offer a practical advantage over state-of-the-art electronic processors as generic neural-network accelerators, there remains important work to be done in engineering an overall system that operates sufficiently fast while minimizing total energy cost. Recent progress in the development of large, fast arrays of single-photon detectors coupled with digital logic [72] suggests that there is a path towards this goal. Ref. [73] has also pointed out the possibility of using fast superconducting-nanowire single-photon detectors for realizing spiking neural networks. Furthermore, there is a complementary path toward utility in the nearer term: if instead of aiming to use ONNs to entirely replace electronic processors, one uses an ONN as a pre-processor for input data that is already optical [9; 74; 75], operating the ONN with single-photon detectors is a natural match with scenarios in which the optical input is very weak--for example, in low-light-imaging applications.

Footnote 6: 19.8 kHz maximum frame rate.

Our approach is not tied to a specific architecture of ONN--the free-space matrix-vector multiplier used in our experiments is just one of many possible choices of architecture. Other ONNs could be adapted to use our approach by replacing the photodetectors typically used for readout of neurons at the end of a layer with single-photon detectors. ONNs based on diffractive optics [7; 12; 64], Mach-Zehnder interferometer (MZI) meshes [76; 6; 77], and other on-chip approaches to matrix-vector multiplication [78; 10; 11] all appear compatible. In our optical experiments, we used single-photon detectors that output an electronic signal when a photon is detected. However, in multilayer ONNs, the input to each layer is optical. One can convert an electronic detector output to an optical input by modulating an optical source--which is what we did and what is often done in ONNs more generally [9]--but an alternative is to construct a device that performs SPD with high efficiency and gives the measurement result as an _optical_ signal that can be directly used as an input to the next layer in the ONN. Designing and demonstrating such a device is an interesting potential avenue for future work in applied quantum nonlinear optics [79; 80; 81; 82; 83; 84], and could lead to both lower electronic energy consumption and higher speed for single-photon-detection ONNs. We trained our demonstration SPDNN _in silico_ using backpropagation, but if SPDNNs with high overall energy efficiency are built, it would be a boon to use this efficient hardware not only for inference but also for training. To this end, it could be interesting to study how to adapt _in situ_ training [85; 86; 87; 17], including backpropagation-free methods (e.g., Refs. [88; 89; 90; 91]), for SPDNNs. An open question related to training is whether it is possible to make SPDNNs that do not involve a final high-SNR layer while preserving task accuracy; this could help to reduce the overall energy per inference.
Other future work could explore the extension of our research to neural networks with larger sizes (wider and more layers, which could both improve the capability of the neural network and further amortize the energy cost of the final, high-SNR layer, if used), more sophisticated classification tasks (beyond MNIST and CIFAR-10 image classification--such as has been shown with conventional binary neural networks [92; 56; 93]), and generative or other probabilistic tasks--for which the stochasticity can be harnessed rather than merely tolerated. Beyond machine-learning tasks, an SPDNN layer could be used as the core of a single-photon-regime photonic Ising machine [94] for heuristically solving combinatorial-optimization problems, realizing an optical version of p-bit computing [48]. Our research is an example of realizing a neural network using a stochastic physical system. Beyond optics, our work is related and complementary to recent investigations in electronic, spintronic, and quantum neuromorphic computing [95; 96; 97; 98; 99; 100; 4], including in training physical systems to perform neural-network inference [102; 103; 104; 105; 106]. Noise is a fundamental feature and the ultimate limit to energy efficiency in computing with all analog physical systems. It has long been realized that noise is not always detrimental: not only does it not necessarily prevent accurate computation, but it can in some cases even enable fundamentally new and more efficient algorithms or types of computation. Our work shows that using a quantum physical model of a particular hardware's noise at the software level can enable surprisingly large gains in energy efficiency. The phenomenon observed in our work seemingly relies on two key physical ingredients. First, the system's available states are effectively quantized, as in the photonic quantization of energy in our ONN demonstration, or the binarization that occurs in low-barrier, stochastic magnetic tunnel junctions [96]. Second, the noise in the system results in the quantized outputs of the system being stochastic. This suggests that ultra-low-SNR physical neural networks should be possible in many physical hardware platforms beyond photonics. Systems in which shot noise dominates are natural matches with our approach and methods. Our approach could also be relevant to systems in which thermal (Johnson) noise dominates--as is typically the case in room-temperature electronics--but this will depend not just on the noise but also on the system's dynamics. Which hardware platforms and system architectures can yield an overall energy benefit by being operated in a stochastic regime while maintaining computational accuracy is an important open question. While there are many reasons computer science has traditionally favored the abstraction of hardware from software, our work is part of a broad trend, spanning many different physical platforms [107; 108; 5], in which researchers engineer computations in a physics-aware manner. By short-circuiting the abstraction hierarchy--in our case, going from a physics-aware software description of a stochastic neural network directly to a physical optical realization of the constituent operations--it is possible to achieve orders-of-magnitude improvements in energy efficiency [28; 9] versus conventional CMOS computing.
_Physics-aware software_, in which software directly incorporates knowledge of the physics of the underlying computing hardware--such as in the _physics-aware stochastic training_ we used in this work--is understudied compared to purely software-level or hardware-level innovations (i.e., "at the top" or "at the bottom" of the hierarchy [109]). It is thus ripe for exploration: within the domain of neural networks, there are a multitude of emerging physical platforms that could be more fully harnessed if the physical devices were not forced to conform to the standard abstractions in modern computer architecture [24]. Beyond neural-network accelerators, communities such as computational imaging [110] have embraced the opportunity to improve system performance through co-optimizing hardware and software in a physics-aware manner. We believe there is an opportunity to make gains in even more areas and applications of computing technology by collapsing abstractions and implementing physics-aware software with physical hardware that could be orders of magnitude faster or more energy efficient than current digital CMOS approaches but that doesn't admit a clean, digital, deterministic abstraction. ## Data and Code Availability All the simulation and experimental data presented in the paper, demonstration data for data gathering, as well as training data for the SPDNN models, are available at [https://doi.org/10.5281/zenodo.8188270](https://doi.org/10.5281/zenodo.8188270). An expandable demonstration code to train SPDNNs as well as other stochastic physical systems is available at [https://github.com/mcmahon-lab/Single-Photon-Detection-Neural-Networks](https://github.com/mcmahon-lab/Single-Photon-Detection-Neural-Networks). ## Acknowledgements We wish to thank NTT Research for their financial and technical support (S.-Y.M., P.L.M., T.W. and L.G.W.). Portions of this work were supported by the National Science Foundation (award no. CCF-1918549; J.L., P.L.M. and T.W.), a Kavli Institute at Cornell instrumentation grant (P.L.M. and T.W.), and a David and Lucile Packard Foundation Fellowship (P.L.M.). P.L.M. acknowledges membership of the CIFAR Quantum Information Science Program as an Azrieli Global Scholar. T.W. acknowledges partial support from an Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship. We acknowledge valuable discussions with M. Anderson, F. Chen, R. Hamerly, T. Onodera, S. Prabhu, M. M. Sohoni and R. Yanagimoto. We also acknowledge Z. Eslami, V. Kremenetski, F. Presutti, C. Wan and F. Wu for helpful suggestions regarding the manuscript. ## Author Contributions S.-Y.M., L.G.W., T.W., and P.L.M. conceived the project. S.-Y.M. and T.W. designed the experiments and built the experimental setup. S.-Y.M. and J.L. performed the neural-network training. S.-Y.M. performed the experiments, the data analysis, and the numerical simulations. All authors contributed to preparing the manuscript. T.W., L.G.W. and P.L.M. supervised the project. ## References * [1] Y. LeCun, Y. Bengio, and G. Hinton, Deep learning. _Nature_**521**, 436-444 (2015). * [2] A. Canziani, A. Paszke, and E. Culurciello, An analysis of deep neural network models for practical applications. _arXiv:1605.07678_ (2016). * [3] N. C. Thompson, K. Greenewald, K. Lee, and G. F. Manso, The computational limits of deep learning. _arXiv:2007.05558_ (2020). * [4] D. Markovic, A. Mizrahi, D. Querlioz, and J. Grollier, Physics for neuromorphic computing. _Nature Reviews Physics_**2**, 499-510 (2020). * [5] D. V. Christensen, R. 
Dittmann, B. Linares-Barranco, A. Sebastian, M. Le Gallo, A. Redaelli, S. Slesazeck, T. Mikolajick, S. Spiga, S. Menzel et al. 2022 roadmap on neuromorphic computing and engineering. _Neuromorphic Computing and Engineering_**2**, 022501 (2022). * [6] Y. Shen, N. C. Harris, S. Skirlo, M. Prabhu, T. Baehr-Jones, M. Hochberg, X. Sun, S. Zhao, H. Larochelle, D. Englund et al. Deep learning with coherent nanophotonic circuits. _Nature Photonics_**11**, 441 (2017). * [7] X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, Y. Luo, M. Jarrahi, and A. Ozcan, All-optical machine learning using diffractive deep neural networks. _Science_**361**, 1004-1008 (2018). * [8] C. Rios, N. Youngblood, Z. Cheng, M. Le Gallo, W. H. Pernice, C. D. Wright, A. Sebastian, and H. Bhaskaran, In-memory computing on a photonic platform. _Science Advances_**5**, eaau5759 (2019). * [9] G. Wetzstein, A. Ozcan, S. Gigan, S. Fan, D. Englund, M. Soljacic, C. Denz, D. A. Miller, and D. Psaltis, Inference in artificial intelligence with deep optics and photonics. _Nature_**588**, 39-47 (2020). * [10] X. Xu, M. Tan, B. Corcoran, J. Wu, A. Boes, T. G. Nguyen, S. T. Chu, B. E. Little, D. G. Hicks, R. Morandotti, A. Mitchell, and D. J. Moss, 11 TOPS photonic convolutional accelerator for optical neural networks. _Nature_**589**, 44-51 (2021). * [11] J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja et al. Parallel convolutional processing using an integrated photonic tensor core. _Nature_**589**, 52-58 (2021). * [12] T. Zhou, X. Lin, J. Wu, Y. Chen, H. Xie, Y. Li, J. Fan, H. Wu, L. Fang, and Q. Dai, Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit. _Nature Photonics_**15**, 367-373 (2021). * [13] T. Wang, S.-Y. Ma, L. G. Wright, T. Onodera, B. C. Richard, and P. L. McMahon, An optical neural network using less than 1 photon per multiplication. _Nature Communications_**13**, 1-8 (2022). * [14] R. Davis III, Z. Chen, R. Hamerly, and D. Englund, Frequency-encoded deep learning with speed-of-light dominated latency. _arXiv:2207.06883_ (2022). * [15] F. Ashtiani, A. J. Geers, and F. Aflatouni, An on-chip photonic deep neural network for image classification. _Nature_**606**, 501-506 (2022). * [16] A. Sludds, S. Bandyopadhyay, Z. Chen, Z. Zhong, J. Cochrane, L. Bernstein, D. Bunandar, P. B. Dixon, S. A. Hamilton, M. Streshinsky et al. Delocalized photonic deep learning on the internet's edge. _Science_**378**, 270-276 (2022). * [17] S. Bandyopadhyay, A. Sludds, S. Krastanov, R. Hamerly, N. Harris, D. Bunandar, M. Streshinsky, M. Hochberg, and D. Englund, Single chip photonic deep neural network with accelerated training. _arXiv:2208.01623_ (2022). * [18] S. Moon, K. Shin, and D. Jeon, Enhancing reliability of analog neural network processors. _IEEE Transactions on Very Large Scale Integration (VLSI) Systems_**27**, 1455-1459 (2019). * [19] V. Joshi, M. Le Gallo, S. Haefeli, I. Boybat, S. R. Nandakumar, C. Piveteau, M. Dazzi, B. Rajendran, A. Sebastian, and E. Eleftheriou, Accurate deep neural network inference using computational phase-change memory. _Nature Communications_**11**, 1-13 (2020). * [20] N. Semenova, L. Larger, and D. Brunner, Understanding and mitigating noise in trained deep neural networks. _Neural Networks_**146**, 151-160 (2022). * [21] M. Klachko, M. R. Mahmoodi, and D. Strukov, Improving noise tolerance of mixed-signal neural networks. 
In _2019 International Joint Conference on Neural Networks (IJCNN)_, 1-8 (2019). * [22] C. Zhou, P. Kadambi, M. Mattina, and P. N. Whatmough, Noisy machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation. _arXiv:2001.04974_ (2020). * [23] X. Yang, C. Wu, M. Li, and Y. Chen, Tolerating Noise Effects in Processing-in-Memory Systems for Neural Networks: A Hardware-Software Codesign Perspective. _Advanced Intelligent Systems_**4**, 2200029 (2022). * [24] L. G. Wright, T. Onodera, M. M. Stein, T. Wang, D. T. Schachter, Z. Hu, and P. L. McMahon, Deep physical neural networks trained with backpropagation. _Nature_**601**, 549-555 (2022). * [25] C. Wu, X. Yang, H. Yu, R. Peng, I. Takeuchi, Y. Chen, and M. Li, Harnessing optoelectronic noises in a photonic generative network. _Science Advances_**8**, eabm2956 (2022). * [26] H. Borras, B. Klein, and H. Froning, Walking Noise: Understanding Implications of Noisy Computations on Classification Tasks. _arXiv:2212.10430_ (2022). * [27] N. Semenova and D. Brunner, Noise-mitigation strategies in physical feedforward neural networks. _Chaos: An Interdisciplinary Journal of Nonlinear Science_**32**, 061106 (2022). * [28] M. G. Anderson, S.-Y. Ma, T. Wang, L. G. Wright, and P. L. McMahon, Optical transformers. _arXiv:2302.10360_ (2023). * [29] Y. Jiang, W. Zhang, X. Liu, W. Zhu, J. Du, and Z. He, Physical Layer-aware Digital-Analog Co-Design for Photonic Convolution Neural Network. _IEEE Journal of Selected Topics in Quantum Electronics_ (2023). * [30] L. Bernstein, A. Sludds, C. Panuski, S. Trajtenberg-Mills, R. Hamerly, and D. Englund, Single-shot optical neural network. _Science Advances_**9**, eadg7904 (2023). * [31] C. Beenakker and C. Schonenberger, Quantum shot noise. _Physics Today_**56**, 37-42 (2003). * [32] G. S. Agarwal, (2012) _Quantum Optics_. (Cambridge University Press). * [33] S. Machida, Y. Yamamoto, and Y. Itaya, Observation of amplitude squeezing in a constant-current-driven semiconductor laser. _Physical Review Letters_**58**, 1000 (1987). * [34] R. H. Hadfield, Single-photon detectors for optical quantum information applications. _Nature Photonics_**3**, 696-705 (2009). * [35] A. Alaghi and J. P. Hayes, Survey of stochastic computing. _ACM Transactions on Embedded Computing Systems (TECS)_**12**, 1-19 (2013). * [36] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, A learning algorithm for Boltzmann machines. _Cognitive Science_**9**, 147-169 (1985). * [37] R. M. Neal, Learning stochastic feedforward networks. _Department of Computer Science, University of Toronto_**64**, 1577 (1990). * [38] R. M. Neal, Connectionist learning of belief networks. _Artificial Intelligence_**56**, 71-113 (1992). * [39] Y. Bengio, N. Leonard, and A. Courville, Estimating or propagating gradients through stochastic neurons for conditional computation. _arXiv:1308.3432_ (2013). * [40] C. Tang and R. R. Salakhutdinov, Learning stochastic feedforward neural networks. _Advances in Neural Information Processing Systems_**26** (2013). * [41] T. Raiko, M. Berglund, G. Alain, and L. Dinh, Techniques for learning binary stochastic feedforward neural networks. _arXiv:1406.2989_ (2014). * [42] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, Binarized neural networks. _Advances in Neural Information Processing Systems_**29** (2016). * [43] Y. Ji, F. Ran, C. Ma, and D. J. Lilja, A hardware implementation of a radial basis function neural network using stochastic logic. 
In _2015 Design, Automation & Test in Europe Conference & Exhibition (DATE)_, 880-883 (2015). * [44] V. T. Lee, A. Alaghi, J. P. Hayes, V. Sathe, and L. Ceze, Energy-efficient hybrid stochastic-binary neural networks for near-sensor computing. In _Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017_, 13-18 (2017). * [45] Y. Liu, S. Liu, Y. Wang, F. Lombardi, and J. Han, A survey of stochastic computing neural networks for machine learning applications. _IEEE Transactions on Neural Networks and Learning Systems_**32**, 2809-2824 (2020). * [46] D. Vodenicarevic, N. Locatelli, A. Mizrahi, J. S. Friedman, A. F. Vincent, M. Romera, A. Fukushima, K. Yakushiji, H. Kubota, S. Yuasa et al. Low-energy truly random number generation with superparamagnetic tunnel junctions for unconventional computing. _Physical Review Applied_**8**, 054045 (2017). * Hassan et al. [2019] O. Hassan, R. Faria, K. Y. Camsari, J. Z. Sun, and S. Datta, Low-barrier magnet design for efficient hardware binary stochastic neurons. _IEEE Magnetics Letters_**10**, 1-5 (2019). * Chowdhury et al. [2023] S. Chowdhury, A. Grimaldi, N. A. Aadit, S. Niazi, M. Mohseni, S. Kanai, H. Ohno, S. Fukami, L. Theogarajan, G. Finocchio et al. A full-stack view of probabilistic computing with p-bits: devices, architectures and algorithms. _IEEE Journal on Exploratory Solid-State Computational Devices and Circuits_ (2023). * Hylton et al. [2021] T. Hylton, T. M. Conte, and M. D. Hill, A vision to compute like nature: Thermodynamically. _Communications of the ACM_**64**, 35-38 (2021). * Coles et al. [2023] P. J. Coles, C. Szczepanski, D. Melanson, K. Donatella, A. J. Martinez, and F. Sbahi, Thermodynamic AI and the fluctuation frontier. _arXiv:2302.06584_ (2023). * Wu et al. [2022] C. Wu, X. Yang, Y. Chen, and M. Li, Photonic Bayesian neural network using programmed optical noises. _IEEE Journal of Selected Topics in Quantum Electronics_**29**, 1-6 (2022). * Ma et al. [2023] B. Ma, J. Zhang, X. Li, and W. Zou, Stochastic photonic spiking neuron for Bayesian inference with unsupervised learning. _Optics Letters_**48**, 1411-1414 (2023). * Gu et al. [2015] S. Gu, S. Levine, I. Sutskever, and A. Mnih, Muprop: Unbiased backpropagation for stochastic neural networks. _arXiv:1511.05176_ (2015). * Liu et al. [2018] Y. Liu, S. Liu, Y. Wang, F. Lombardi, and J. Han, A stochastic computational multi-layer perceptron with backward propagation. _IEEE Transactions on Computers_**67**, 1273-1286 (2018). * Gerry and Knight [2005] C. Gerry and P. L. Knight, (2005) _Introductory Quantum Optics_. (Cambridge University Press). * Rastegari et al. [2016] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, Xnor-net: Imagenet classification using binary convolutional neural networks. In _Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV_, 525-542 (2016). * Zhou et al. [2016] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou, Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. _arXiv:1606.06160_ (2016). * Hayasaki et al. [1992] Y. Hayasaki, I. Tohyama, T. Yatagai, M. Mori, and S. Ishihara, Optical learning neural network using Selfoc microlens array. _Japanese Journal of Applied Physics_**31**, 1689 (1992). * LeCun et al. [1998] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition. _Proceedings of the IEEE_**86**, 2278-2324 (1998). * Hamerly et al. [2019] R. Hamerly, L. Bernstein, A. 
Sludds, M. Soljacic, and D. Englund, Large-scale optical neural networks based on photoelectric multiplication. _Physical Review X_**9**, 021032 (2019). * Laydevant et al. [2021] J. Laydevant, M. Ernoult, D. Querlioz, and J. Grollier, Training dynamical binary neural networks with equilibrium propagation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 4640-4649 (2021). * Dhimitri et al. [2022] K. Dhimitri, S. M. Fullerton, B. Coyle, K. E. Bennett, T. Miura, T. Higuchi, and T. Maruno, Scientific CMOS (sCMOS) camera capabilities with a focus on quantum applications. In _Photonics for Quantum 2022_, PC122430L (2022). * Agarap [2018] A. F. Agarap, Deep learning using rectified linear units (ReLU). _arXiv:1803.08375_ (2018). * Chang et al. [2018] J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification. _Scientific Reports_**8**, 12324 (2018). * Spall et al. [2020] J. Spall, X. Guo, T. D. Barrett, and A. Lvovsky, Fully reconfigurable coherent optical vector-matrix multiplication. _Optics Letters_**45**, 5752-5755 (2020). * Miscuglio et al. [2020] M. Miscuglio, Z. Hu, S. Li, J. K. George, R. Capanna, H. Dalir, P. M. Bardet, P. Gupta, and V. J. Sorger, Massively parallel amplitude-only Fourier neural network. _Optica_**7**, 1812-1819 (2020). * Lee et al. [2016] C.-Y. Lee, P. W. Gallagher, and Z. Tu, Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In _Artificial Intelligence and Statistics_, 464-472 (2016). * Esser et al. [2016] S. K. Esser, P. A. Merolla, J. V. Arthur, A. S. Cassidy, R. Appuswamy, A. Andreopoulos, D. J. Berg, J. L. McKinstry, T. Melano, D. R. Barch, C. di Nolfo, P. Datta, A. Amir, B. Taba, M. D. Flickner, and D. S. Modha, Convolutional networks for fast, energy-efficient neuromorphic computing. _Proceedings of the National Academy of Sciences_**113**, 11441-11446 (2016). * Qin et al. [2020] H. Qin, R. Gong, X. Liu, X. Bai, J. Song, and N. Sebe, Binary neural networks: A survey. _Pattern Recognition_**105**, 107281 (2020). * Sze et al. [2020] V. Sze, Y.-H. Chen, T.-J. Yang, and J. S. Emer, How to evaluate deep neural network processors: TOPS/W (alone) considered harmful. _IEEE Solid-State Circuits Magazine_**12**, 28-41 (2020). * Nahmias et al. [2019] M. A. Nahmias, T. F. De Lima, A. N. Tait, H.-T. Peng, B. J. Shastri, and P. R. Prucnal, Photonic multiply-accumulate operations for neural networks. _IEEE Journal of Selected Topics in Quantum Electronics_**26**, 1-18 (2019). * Bruschini et al. [2023] C. Bruschini, S. Burri, E. Bernasconi, T. Milanese, A. C. Ulku, H. Homulle, and E. Charbon, LinoSPAD2: A 512x1 linear SPAD camera with system-level 135-ps SPTR and a reconfigurable computational engine for time-resolved single-photon imaging. In _Quantum Sensing and Nano Electronics and Photonics XIX_ Vol. 12430, 126-135 (2023). * Shainline et al. [2017] J. M. Shainline, S. M. Buckley, R. P. Mirin, and S. W. Nam, Superconducting optoelectronic circuits for neuromorphic computing. _Physical Review Applied_**7**, 034013 (2017). * Wang et al. [2023] T. Wang, M. M. Sohoni, L. G. Wright, M. M. Stein, S.-Y. Ma, T. Onodera, M. G. Anderson, and P. L. McMahon, Image sensing with multilayer nonlinear optical neural networks. _Nature Photonics_**17**, 408-415 (2023). * Huang et al. [2023] L. Huang, Q. A. Tanguy, J. E. Froch, S. Mukherjee, K. F. Bohringer, and A. 
Majumdar, Photonic Advantage of Optical Encoders. _arXiv:2305.01743_ (2023). * Carolan et al. [2015] J. Carolan, C. Harrold, C. Sparrow, E. Martin-Lopez, N. J. Russell, J. W. Silverstone, P. J. Shadbolt, N. Matsuda, M. Oguma, M. Itoh et al. Universal linear optics. _Science_**349**, 711-716 (2015). * Bogaerts _et al._ [2020]W. Bogaerts, D. Perez, J. Capmany, D. A. B. Miller, J. Poon, D. Englund, F. Morichetti, and A. Melloni, Programmable photonic circuits. _Nature_**586**, 207-216 (2020). * Tait _et al._ [2015]A. N. Tait, J. Chang, B. J. Shastri, M. A. Nahmias, and P. R. Prucnal, Demonstration of WDM weighted addition for principal component analysis. _Optics Express_**23**, 12758-12765 (2015). * Mazets and Kurizki [2007]I. Mazets and G. Kurizki, Multiatom cooperative emission following single-photon absorption: Dicke-state dynamics. _Journal of Physics B: Atomic, Molecular and Optical Physics_**40**, F105 (2007). * Pinotsi and Imamoglu [2008]D. Pinotsi and A. Imamoglu, Single photon absorption by a single quantum emitter. _Physical Review Letters_**100**, 093603 (2008). * Sotier _et al._ [2009]F. Sotier, T. Thomay, T. Hanke, J. Korger, S. Mahapatra, A. Frey, K. Brunner, R. Bratschitsch, and A. Leitenstorfer, Femtosecond few-fermion dynamics and deterministic single-photon gain in a quantum dot. _Nature Physics_**5**, 352-356 (2009). * Kiilerich and Molmer [2019]A. H. Kiilerich and K. Molmer, Input-output theory with quantum pulses. _Physical Review Letters_**123**, 123604 (2019). * Li _et al._ [2023]Q. Li, K. Orcutt, R. L. Cook, J. Sabines-Chesterking, A. L. Tong, G. S. Schlau-Cohen, X. Zhang, G. R. Fleming, and K. B. Whaley, Single-photon absorption and emission from a natural photosynthetic complex. _Nature_ pp. 1-5 (2023). * Roques-Carmes _et al._ [2023]C. Roques-Carmes, Y. Salamin, J. Sloan, S. Choi, G. Velez, E. Koskas, N. Rivera, S. E. Kooi, J. D. Joannopoulos, and M. Soljacic, Biasing the quantum vacuum to control macroscopic probability distributions. _arXiv:2303.03455_ (2023). * Zhou _et al._ [2020]T. Zhou, L. Fang, T. Yan, J. Wu, Y. Li, J. Fan, H. Wu, X. Lin, and Q. Dai, In situ optical backpropagation training of diffractive optical neural networks. _Photonics Research_**8**, 940-953 (2020). * Guo _et al._ [2021]X. Guo, T. D. Barrett, Z. M. Wang, and A. Lvovsky, Backpropagation through nonlinear units for the all-optical training of neural networks. _Photonics Research_**9**, B71-B80 (2021). * Pai _et al._ [2023]S. Pai, Z. Sun, T. W. Hughes, T. Park, B. Bartlett, I. A. Williamson, M. Minkov, M. Milanizadeh, N. Abebe, F. Morichetti et al. Experimentally realized in situ backpropagation for deep learning in photonic neural networks. _Science_**380**, 398-404 (2023). * Bengio _et al._ [2015]Y. Bengio, D.-H. Lee, J. Bornschein, T. Mesnard, and Z. Lin, Towards biologically plausible deep learning. _arXiv:1502.04156_ (2015). * Lillicrap _et al._ [2020]T. P. Lillicrap, A. Santoro, L. Marris, C. J. Akerman, and G. Hinton, Backpropagation and the brain. _Nature Reviews Neuroscience_**21**, 335-346 (2020). * Hinton [2023]G. Hinton, The concept of mortal computation. Keynote address presented at the Neural Information Processing Systems conference, New Orleans (2023). * Stern and Murugan [2023]M. Stern and A. Murugan, Learning without neurons in physical systems. _Annual Review of Condensed Matter Physics_**14**, 417-441 (2023). * Bulat and Tzimiropoulos [2019]A. Bulat and G. Tzimiropoulos, Xnor-net++: Improved binary neural networks. _arXiv:1909.13863_ (2019). * Bulat _et al._ [2019]A. 
Bulat, J. Kossaifi, G. Tzimiropoulos, and M. Pantic, Matrix and tensor decompositions for training binary neural networks. _arXiv:1904.07852_ (2019). * Mohseni _et al._ [2022]N. Mohseni, P. L. McMahon, and T. Byrnes, Ising machines as hardware solvers of combinatorial optimization problems. _Nature Reviews Physics_**4**, 363-379 (2022). * Torrejon _et al._ [2020]J. Torrejon, M. Riou, F. A. Araujo, S. Tsunegi, G. Khalsa, D. Querlioz, P. Bortolotti, V. Cros, K. Yakushiji, A. Fukushima et al. Neuromorphic computing with nanoscale spintronic oscillators. _Nature_**547**, 428-431 (2017). * Grollier _et al._ [2020]J. Grollier, D. Querlioz, K. Camsari, K. Everschor-Sitte, S. Fukami, and M. D. Stiles, Neuromorphic spintronics. _Nature Electronics_**3**, 360-370 (2020). * Cai _et al._ [2020]F. Cai, S. Kumar, T. Van Vaerenbergh, X. Sheng, R. Liu, C. Li, Z. Liu, M. Foltin, S. Yu, Q. Xia et al. Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks. _Nature Electronics_**3**, 409-418 (2020). * Harabi _et al._ [2023]K.-E. Harabi, T. Hirtzlin, C. Turck, E. Vianello, R. Laurent, J. Droulez, P. Bessiere, J.-M. Portal, M. Bocquet, and D. Querlioz, A memristor-based Bayesian machine. _Nature Electronics_**6**, 52-63 (2023). * Islam _et al._ [2023]A. N. M. N. Islam, K. Yang, A. K. Shukla, P. Khanal, B. Zhou, W.-G. Wang, and A. Sengupta, Hardware in Loop Learning with Spin Stochastic Neurons. _arXiv:2305.03235_ (2023). * Markovic and Grollier [2020]D. Markovic and J. Grollier, Quantum neuromorphic computing. _Applied Physics Letters_**117** (2020). * Cerezo _et al._ [2022]M. Cerezo, G. Verdon, H.-Y. Huang, L. Cincio, and P. J. Coles, Challenges and opportunities in quantum machine learning. _Nature Computational Science_**2**, 567-576 (2022). * Prezioso _et al._ [2015]M. Prezioso, F. Merrikh-Bayat, B. D. Hoskins, G. C. Adam, K. K. Likharev, and D. B. Strukov, Training and operation of an integrated neuromorphic network based on metal-oxide memristors. _Nature_**521**, 61-64 (2015). * Hughes _et al._ [2019]T. W. Hughes, I. A. Williamson, M. Minkov, and S. Fan, Wave physics as an analog recurrent neural network. _Science Advances_**5**, eaay6946 (2019). * Mitarai _et al._ [2018]K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, Quantum circuit learning. _Physical Review A_**98**, 032309 (2018). * Cramer _et al._ [2022]B. Cramer, S. Billaudelle, S. Kanya, A. Leibfried, A. Grubl, V. Karasenko, C. Pehle, K. Schreiber, Y. Stradmann, J. Weis et al. Surrogate gradients for analog neuromorphic computing. _Proceedings of the National Academy of Sciences_**119**, e2109194119 (2022). * Chen _et al._ [2020]T. Chen, J. van Gelder, B. van de Ven, S. V. Amitonov, B. De Wilde, H.-C. Ruiz Euler, H. Broersma, P. A. Bobbert, F. A. Zwanenburg, and W. G. van der Wiel, Classification with a disordered dopant-atom network in silicon. _Nature_**577**, 341-345 (2020). * Berggren _et al._ [2020]K. Berggren, Q. Xia, K. K. Likharev, D. B. Strukov, H. Jiang, T. Mikolajick, D. Querlioz, M. Salinga, J. R. Erickson, S. Pi et al. Roadmap on emerging hardware and technology for machine learning. _Nanotechnology_**32**, 012002 (2020). * [108] G. Finocchio, S. Bandyopadhyay, P. Lin, G. Pan, J. J. Yang, R. Tomasello, C. Panagopoulos, M. Carpentieri, V. Puliafito, J. Akerman et al. Roadmap for unconventional computing with nanotechnology. _arXiv:2301.06727_ (2023). * [109] C. E. Leiserson, N. C. Thompson, J. S. Emer, B. C. Kuszmaul, B. W. Lampson, D. Sanchez, and T. B. 
Schardl, There's plenty of room at the Top: What will drive computer performance after Moore's law? _Science_**368**, eaam9744 (2020). * [110] M. Kellman, M. Lustig, and L. Waller, How to do physics-based learning. _arXiv:2005.13531_ (2020). * [111] G. Hinton, _Neural networks for machine learning_. Coursera, Video Lectures (2012). * [112] P. Yin, J. Lyu, S. Zhang, S. Osher, Y. Qi, and J. Xin, Understanding straight-through estimator in training activation quantized neural nets. _arXiv:1903.05662_ (2019). * [113] R. J. Williams, Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine Learning_**8**, 229-256 (1992). * [114] L. Bottou, Stochastic gradient descent tricks. _Neural Networks: Tricks of the Trade: Second Edition_ pp. 421-436 (2012). * [115] I. Loshchilov and F. Hutter, Decoupled weight decay regularization. _arXiv:1711.05101_ (2017). * [116] P. De Chazal, J. Tapson, and A. Van Schaik, A comparison of extreme learning machines and back-propagation trained feed-forward networks processing the MNIST database. In _2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, 2165-2168 (2015). * [117] A. Krizhevsky, Learning multiple layers of features from tiny images. (2009).

## Methods

### Stochastic optical neural networks using single-photon detection as the activation function

In single-photon-detection neural networks (SPDNNs), the activation function is directly determined by the stochastic physical process of single-photon detection (SPD). The specific form of the activation function is dictated by the detection process on a single-photon detector. Each SPD measurement produces a binary output of either 0 or 1, with probabilities determined by the incident light intensity. Consequently, each SPD neuron activation, which corresponds to an SPD measurement in experiments, is considered a binary stochastic process [37; 44; 54]. Following the Poisson distribution, the probability of an SPD detecting a photon click is given by \(P_{\text{SPD}}(\lambda)=1-e^{-\lambda}\) when exposed to an incident intensity of \(\lambda\) photons per detection. Note that the photon statistics may vary based on the state of the light (e.g., squeezed light), but here we only consider Poissonian light. Therefore, the SPD process can be viewed as a Bernoulli sampling of that probability, expressed as \(f_{\text{SPD}}(z)=\mathbf{1}_{t<P_{\text{SPD}}(\lambda(z))}\), where \(t\) is a uniform random variable \(t\sim U[0,1]\) and \(\mathbf{1}_{x}\) is the indicator function that evaluates to 1 if \(x\) is true. This derivation leads to Equation 1 in the main text.

In our approach, the pre-activation value \(z\) is the direct output from an optical matrix-vector multiplier (MVM), encoding the result of a dot product. For the \(i\)th pre-activation value in layer \(l\), denoted as \(z_{i}^{(l)}\), the expression is given by:

\[z_{i}^{(l)}=\sum_{j=1}^{N_{l-1}}w_{ij}^{(l)}\cdot a_{j}^{(l-1)}, \tag{1}\]

where \(N_{l-1}\) is the number of neurons in layer \(l-1\), \(w_{ij}^{(l)}\) is the weight between the \(i\)th neuron in layer \(l\) and the \(j\)th neuron in layer \(l-1\), and \(a_{j}^{(l-1)}\) is the activation of the \(j\)th neuron in layer \(l-1\). The intensity \(\lambda(z)\) is a function of \(z\) that depends on the detection scheme employed in the optical MVM. In optical setups using incoherent light, the information is directly encoded in the intensity, resulting in \(\lambda=z\).
If coherent light were used in a setup, where 0 and \(\pi\) phases represent the sign of the amplitude, the intensity is determined by squaring the real-number amplitude if directly measured, resulting in \(\lambda=z^{2}\). While more sophisticated detection schemes can be designed to modify the function of \(\lambda(z)\), we focused on the simplest cases to illustrate the versatility of SPDNNs. During the inference of a trained model, in order to regulate the level of uncertainty inherent in stochastic neural networks, we can opt to conduct multiple shots of SPD measurements during a single forward propagation. In the case of a \(K\)-shot inference, each SPD measurement is repeated \(K\) times, with the neuron's final activation value \(a^{[K]}\) being derived from the average of these \(K\) independent stochastic binary values. Consequently, for a single shot, \(a^{[1]}=a\in\{0,1\}\); for \(K\) shots, \(a^{[K]}=\frac{1}{K}\sum_{k=1}^{K}a_{k}\in\{0,1/K,2/K,\ldots,1\}\). By utilizing this method, we can mitigate the model's stochasticity, enhancing the precision of output values. Ideally, with an infinite number of shots (\(K\rightarrow\infty\)), the activation \(a^{[\infty]}\) would equate to the expected value without any stochasticity, that is, \(a^{[\infty]}=\mathbb{E}[a]=P_{\text{SPD}}(\lambda(z))\). The detailed process of an inference of SPDNNs is described in Algorithm 2 in Supplementary Note 1A. The training of our stochastic neuron models takes inspiration from recent developments in training stochastic neural networks. We have created an effective estimator that trains our SPDNNs while accounting for the stochastic activation determined by the physical SPD process. To train our SPDNNs, we initially adopted the idea of the "straight-through estimator" (STE) [111; 112], which enables us to bypass the stochasticity and discretization during neural network training. However, directly applying STE to bypass the entire SPD process led to subpar training performance. To address this, we adopted a more nuanced approach by breaking down the activation function and treating different parts differently. The SPD process can be conceptually divided into two parts: the deterministic probability function \(P_{\text{SPD}}\) and the stochasticity introduced by the Bernoulli sampling. For a Bernoulli distribution, the expectation value is equal to the probability, making \(P_{\text{SPD}}\) the expectation of the activation. Instead of applying the "straight-through" method to the entire process, we chose to bypass only the Bernoulli sampling process. At the same time, we incorporate the gradients induced by the probability function, aligning them with the expectation values of the random variable. In this way, we obtained an unbiased estimator [113] for gradient estimation, thereby enhancing the training of our SPDNNs. 
In the backward propagation of the \(l\)th layer, the gradients of the pre-activation \(z^{(l)}\) can be computed as (the gradient with respect to any parameter \(x\) is defined as \(g_{x}=\partial C/\partial x\), where \(C\) is the cost function):

\[g_{z^{(l)}}=\frac{\partial a^{(l)}}{\partial\lambda^{(l)}}\circ\frac{\partial \lambda^{(l)}}{\partial z^{(l)}}\circ g_{a^{(l)}}=P_{\text{SPD}}^{\prime}( \lambda^{(l)})\circ\frac{\partial\lambda^{(l)}}{\partial z^{(l)}}\circ g_{a^{ (l)}}, \tag{2}\]

where \(a^{(l)}=f_{\text{SPD}}(z^{(l)})=\mathbf{1}_{t<P_{\text{SPD}}(\lambda(z^{(l)}))}\) and the gradient \(g_{a^{(l)}}\) is calculated from the next layer (the previous layer in the backward propagation). Using this equation, we can evaluate the gradients of the weights \(W^{(l)}\) as \(g_{W^{(l)}}=g_{z^{(l)}}^{\top}a^{(l-1)}\), where \(a^{(l-1)}\) are the activation values from the previous layer. By employing this approach, SPDNNs can be effectively trained using gradient-based algorithms (such as SGD [114] or AdamW [115]), regardless of the stochastic nature of the neuron activations. For detailed training procedures, please refer to Algorithms 1 and 3 in Supplementary Notes 1A and 2A, respectively.

### Simulation of incoherent SPDNNs for deterministic classification tasks

The benchmark MNIST (Modified National Institute of Standards and Technology database) [116] handwritten-digit dataset consists of 60,000 training images and 10,000 testing images. Each image is a grayscale image with \(28\times 28=784\) pixels. To adhere to the non-negative encoding required by incoherent light, the input images are normalized so that pixel values range from 0 to 1. To assess the performance of the SPD activation function, we investigated the training of MLP-SPDNN models with the structure \(784\xrightarrow{W^{(1)}}N\xrightarrow{W^{(2)}}10\), where \(N\) represents the number of neurons in the hidden layer and \(W^{(1)}\) (\(W^{(2)}\)) represents the weight matrix of the hidden (output) layer. The SPD activation function is applied to the \(N\) hidden neurons, and the resulting activations are passed to the output layer to generate output vectors (Figure 3a). To simplify the experimental implementation, biases within the linear operations were disabled, as the precise control of adding or subtracting a few photons poses significant experimental challenges. We have observed that this omission has minimal impact on the model's performance. In addition, after each weight update, we clamped the elements of \(W^{(1)}\) to the non-negative range in order to comply with the constraint of non-negative weights in an incoherent optical setup. Because SPD is not required at the output layer, the constraints on the last-layer operation are less stringent. Although our simulations indicate that the final performance is only marginally affected by whether the elements in the last layer are also restricted to be non-negative, we found that utilizing real-valued weights in the output layer provided increased robustness against noise and errors during optical implementation. As a result, we chose to use real-valued weights in \(W^{(2)}\). During the training process, we employed the LogSoftmax function on the output vectors and used cross-entropy loss to formulate the loss function. Gradients were estimated using the unbiased estimator described in the previous section and Algorithm 1.
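To make this procedure concrete, the following is a minimal PyTorch sketch of the incoherent SPD activation with the gradient estimator described above, wrapped in a \(784\to N\to 10\) MLP. This is our own illustration under stated assumptions, not the authors' code: the class and variable names (`SPDActivation`, `IncoherentMLPSPDNN`) are hypothetical, and the clamp value follows the \(\lambda_{\text{max}}=3\) photons used during training.

```python
import torch

LAMBDA_MAX = 3.0  # training-time clamp on the intensity, in photons (assumption: as in the paper)

class SPDActivation(torch.autograd.Function):
    """Stochastic SPD activation for incoherent light.

    Forward: Bernoulli sample with click probability P_SPD(z) = 1 - exp(-z).
    Backward: the Bernoulli sampling is bypassed, so the gradient is that of
    the expectation, dP_SPD/dz = exp(-z), i.e. the unbiased estimator above.
    """

    @staticmethod
    def forward(ctx, z):
        lam = z.clamp(min=0.0, max=LAMBDA_MAX)   # intensity in photons per detection
        ctx.save_for_backward(lam)
        p = 1.0 - torch.exp(-lam)                # click probability P_SPD
        return torch.bernoulli(p)                # binary SPD readout, 0 or 1

    @staticmethod
    def backward(ctx, grad_output):
        (lam,) = ctx.saved_tensors
        return torch.exp(-lam) * grad_output     # P'_SPD, with dlambda/dz = 1 for incoherent light


class IncoherentMLPSPDNN(torch.nn.Module):
    """784 -> N -> 10 MLP with SPD activations on the hidden layer and no biases."""

    def __init__(self, n_hidden=400):
        super().__init__()
        self.hidden = torch.nn.Linear(784, n_hidden, bias=False)  # W1, kept non-negative
        self.output = torch.nn.Linear(n_hidden, 10, bias=False)   # W2, real-valued

    def forward(self, x):
        a = SPDActivation.apply(self.hidden(x))  # stochastic binary hidden activations
        return self.output(a)                    # logits for cross-entropy loss

# After each optimizer.step(), the hidden-layer weights would be clamped to be non-negative:
#     model.hidden.weight.data.clamp_(min=0.0)
```

The backward pass returns \(P^{\prime}_{\text{SPD}}(\lambda)\circ g_{a}\) directly, mirroring Equation (2) with \(\partial\lambda/\partial z=1\) for incoherent light.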
For model optimization, we found that utilizing the SGD optimizer with small learning rates yields better accuracy than other optimizers such as AdamW, albeit at the cost of slower optimization; despite the longer total training time, SGD leads to a better-optimized model. The models were trained with a batch size of 128 and learning rates of 0.001 for the hidden layer and 0.01 for the output layer, over 10,000 epochs, to obtain optimized parameters. To prevent vanishing gradients in the plateau of the probability function \(P_{\text{SPD}}\), pre-activations were clamped at \(\lambda_{\text{max}}=3\) photons. It should be noted that, due to the inherent stochasticity of the neural networks, each forward propagation generates varying output values even with identical weights and inputs. However, we only used one forward propagation in each step. This approach effectively utilized the inherent stochasticity of each forward propagation as an additional source of random search for the optimizer. Given the small learning rate and the significant noise in the model, the number of epochs exceeded what is typically required for conventional neural network training. The training was performed on a GPU (Tesla V100-PCIE-32GB) and took approximately eight hours for each model.

We trained incoherent SPDNNs with a number of hidden neurons \(N\) ranging from 10 to 400. The test accuracy of the models improved as the number of hidden neurons increased (see Supplementary Note 1B for more details). During inference, we adjusted the number of shots per SPD activation, \(K\), to tune the SNR of the activations within the models. For each model configuration with \(N\) hidden neurons and \(K\) shots of SPD readouts per activation, we repeated the inference process 100 times to observe the distribution of stochastic output accuracies. Each repetition of inference on the test set, which comprises 10,000 images, yielded a different test accuracy. The mean values and standard deviations of these 100 repetitions of test accuracy are plotted in Figure 3b (see Supplementary Table 1 for more details). We observed that increasing either \(N\) or \(K\) led to higher mean test accuracy and reduced standard deviations.

### Experimental implementation of SPDNNs

The optical matrix-vector multiplier setup utilized in this work is based on the design presented in [13]. The setup comprises an array of light sources, a zoom-lens imaging system, a light-intensity modulator, and a photon-counting camera. For encoding input vectors, we employed an organic light-emitting diode (OLED) display from a commercial smartphone (Google Pixel, 2016 version). The OLED display features a \(1920\times 1080\) pixel array, with individually controllable intensity for each pixel. In our experiment, only the green pixels of the display were used, arranged in a square lattice with a pixel pitch of \(57.5\,\mathrm{\SIUnitSymbolMicro m}\). To perform intensity modulation as weight multiplication, we combined a reflective liquid-crystal spatial light modulator (SLM, P1920-500-1100-HDMI, Meadowlark Optics) with a half-wave plate (HWP, WPH10ME-532, Thorlabs) and a polarizing beamsplitter (PBS, CCM1-PBS251, Thorlabs). The SLM has a pixel array of dimensions \(1920\times 1152\), with individually controllable transmission for each pixel measuring \(9.2\times 9.2\,\mathrm{\SIUnitSymbolMicro m}\). The OLED display was imaged onto the SLM panel using a zoom lens system (Resolv4K, Navitar).
The intensity-modulated light field reflected from the SLM underwent further de-magnification and was focused onto the detector using a telescope formed by the rear adapter of the zoom lens (1-81102, Navitar) and an objective lens (XLFLUOR4x/340, Olympus). We decompose a matrix-vector multiplication into a batch of vector-vector dot products that are computed optically, either by spatial multiplexing (parallel processing) or by temporal multiplexing (sequential processing). To ensure a more accurate experimental implementation, we chose to perform the vector-vector dot products in sequence for most of the data collection. For the computation of an optical vector-vector dot product, the value of each element of the two vectors is encoded in the intensity of the light emitted by a pixel on the OLED and in the transmission of the corresponding SLM pixel. The imaging system aligned each pixel on the OLED display with its corresponding pixel on the SLM, where element-wise multiplication occurred via intensity modulation. The modulated light intensity from the pixels belonging to the same vector was then focused onto the detector to sum the element-wise products, yielding the vector-vector dot-product result. Since the light is incoherent, only non-negative values are allowed in both vectors. For more details on the incoherent optical MVM, please refer to Supplementary Note 3. The calibration of the vector-vector dot products on the optical MVM is detailed in Supplementary Note 5.

In this experiment, we used a scientific CMOS camera (Hamamatsu ORCA-Quest qCMOS Camera C15550-20UP) [62] to perform both conventional light-intensity measurements and SPD. This camera, with \(4096\times 2304\) effective pixels of \(4.6\times 4.6\,\mathrm{\SIUnitSymbolMicro m}\) each, can perform SPD with ultra-low readout noise in its photon-counting mode. When utilized as an SPD in the photon-counting mode, the camera exhibits an effective photon detection efficiency of 68% and a dark count rate of approximately 0.01 photoelectrons per second per pixel (Supplementary Note 4). We typically operate with an exposure time in the millisecond range for a single shot of SPD readout. For conventional intensity measurements that integrate higher optical energy, used for the output-layer implementation, we chose a different operation mode in which the device is used as a common CMOS camera. Further details on validating the stochastic SPD activation function measured on this camera are available in Supplementary Note 6.

We also adapted our SPDNN training methods to conform to the real-world constraints of our setup, ensuring successful experimental implementation (see Supplementary Note 7). First, we implemented the hidden layers optically and collected the SPD activations experimentally with the photon-counting camera acting as an SPD array. We then performed the output-layer operations digitally on a computer. This step verifies the fidelity of collecting SPD activations from the experimental setup. Supplementary Figure 16 provides a visual representation of the distribution of some of the output vectors. For the experiments with 1 shot per activation (\(K=1\)), we collected 30 camera frames from the setup for each fixed input image and weight matrix, which are regarded as 30 independent repetitions of inference. They were then used to compute 30 different test accuracies by performing the output linear layer on a digital computer.
For the experiments with 2 shots per activation (\(K=2\)), we divided the 30 camera frames into 15 groups, each containing 2 frames. The average value of the 2 frames within each group serves as the activations, which are used to compute 15 test accuracies. For additional results and details, please refer to Supplementary Note 8.

Second, to achieve a complete optical implementation of the entire neural network, we utilized our optical matrix-vector multiplier again to carry out the last-layer operations. For example, we first focused on the data from the model with 400 hidden neurons and 5 shots per activation. In this case, for the 30 binary SPD readouts obtained from 30 frames, we performed an averaging operation on every 5 frames, resulting in 6 independent repetitions of the inference. These activation values were then displayed on the SLM as the input for the last-layer implementation. For the 5-shot activations, the possible values are 0, 0.2, 0.4, 0.6, 0.8, and 1. When the linear operation was performed on a computer with full precision, the mean test accuracy was approximately 99.17%. To realize the linear operation with real-valued weight elements on our incoherent optical setup, we divided the weight elements into positive and negative parts. Subsequently, we projected these two parts of the weights onto the OLED display separately and performed them as two different operations. The final output value was obtained by subtracting the results of the negative weights from those of the positive weights. This approach at least doubles the photon requirement for the output layer and leaves room for optimization toward higher energy efficiency. Nevertheless, even with these non-optimized settings, we demonstrated a photon budget lower than that of any other ONN implementation known to us for the same task and accuracy. For additional data and details, please refer to Supplementary Note 9.

### Deeper SPDNNs operating with coherent light

Optical processors with coherent light can preserve the phase information of light and can, in principle, encode complex numbers using arbitrary phase values. In this work, we focused on coherent optical computing with real-number operations. In this approach, positive and negative values are encoded in light amplitudes with phases 0 and \(\pi\), respectively. As the intensity of light is the square of the amplitude, direct detection of the light amplitude, where the information is encoded, involves an additional square operation, i.e., \(\lambda(z)=|z|^{2}\). This leads to a "V-shaped" SPD probability function with respect to the pre-activation \(z\), as depicted in Figure 4a. We chose to focus on this most straightforward detection case to avoid any additional changes to the experimental setup. Our objective is to demonstrate the adaptability and scalability of SPDNN models in practical optical implementations without the need for complex modifications to the existing setup.

#### Coherent SPDNNs for MNIST classification

**MLP-SPDNN.** Classifying MNIST with coherent MLP-SPDNNs was simulated using configurations similar to those of the incoherent SPDNNs. The only differences were the use of the coherent SPD activation function and of real-valued weights. In contrast to the incoherent case, the input values and weights do not need to be non-negative.
The models were trained using the SGD optimizer [114] with a learning rate of 0.01 for the hidden layers and 0.001 for the last linear layer, over a period of 10,000 epochs.

**Convolutional SPDNNs.** The convolutional SPDNN model used for MNIST digit classification, illustrated in Figure 4b, consists of a convolutional layer with 16 output channels, a kernel size of \(5\times 5\), a stride of 1, and padding of 2. The SPD activation function was applied immediately after the convolutional layer, followed by \(2\times 2\) average pooling. The feature map of \(14\times 14\times 16=3136\) elements was then flattened into a vector of size 3136. The convolutional layer was followed by a linear model of \(3136\to 400\to 10\), with the SPD activation function applied at each of the 400 neurons in the first linear layer. The detailed simulation results for the MNIST test accuracies of the coherent SPDNNs, with varying model structures and shots per activation \(K\), can be found in Supplementary Table 2. For additional information, see Supplementary Note 2B.

#### Coherent convolutional SPDNNs for CIFAR-10 classification

The CIFAR-10 dataset [117] has 60,000 images, each with \(3\times 32\times 32\) pixels in 3 color channels, belonging to 10 categories: airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks. The dataset is partitioned into a training set of 50,000 images and a test set of 10,000 images. The pixel values were normalized using the mean values \((0.4914,0.4822,0.4465)\) and standard deviations \((0.2471,0.2435,0.2616)\) of the three color channels. To boost performance, data-augmentation techniques including random horizontal flips (50% probability) and random \(32\times 32\) crops (with 4-pixel padding) were applied during training.

The convolutional SPDNN models for CIFAR-10 classification have deeper structures. As with the convolutional models trained for MNIST, the convolutional layers use a kernel size of \(5\times 5\), a stride of 1, and padding of 2. Each convolutional layer is followed by the SPD activation function, \(2\times 2\) average pooling, and batch normalization. After \(N_{\text{conv}}\) convolutional layers (\(N_{\text{conv}}=4\) in Figure 4e), with the number of output channels of the last one being \(N_{\text{chan}}^{\text{last}}\), the feature map of size \((32/2^{N_{\text{conv}}})^{2}\times N_{\text{chan}}^{\text{last}}\) is flattened to a vector, followed by two linear layers of \((32/2^{N_{\text{conv}}})^{2}N_{\text{chan}}^{\text{last}}\to 400\to 10\). In the first linear layer, either the SPD or the ReLU [63] activation function was used for each of the 400 neurons, as depicted in Figure 4e. We varied the number of convolutional layers and their numbers of output channels to obtain models of different sizes (Figure 4e and Supplementary Figure 5). In these results, we only used a single shot of SPD measurement (\(K=1\)) to compute the SPD activations in the models, including in the convolutional and linear layers. For additional information, please refer to Supplementary Note 2C.
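As an illustration of the coherent case described above, the snippet below is a minimal PyTorch sketch (our own, with names of our choosing, not the authors' code) of the "V-shaped" single-shot SPD activation obtained from direct detection of a real-valued amplitude, \(\lambda(z)=z^{2}\), applied after one convolutional block of the MNIST conv-SPDNN (\(5\times 5\) kernel, stride 1, padding 2, 16 output channels, followed by \(2\times 2\) average pooling).

```python
import torch

def coherent_spd_activation(z):
    """Single-shot SPD activation for a coherent setup with direct detection.

    The real-valued amplitude z (sign encoded as a 0/pi phase) is detected as
    intensity lambda = z**2, giving the V-shaped click probability
    P_SPD(z) = 1 - exp(-z**2); the readout is a Bernoulli sample of it.
    """
    p = 1.0 - torch.exp(-z.pow(2))
    return torch.bernoulli(p)

# One convolutional block as used for the MNIST conv-SPDNN:
# conv (5x5, stride 1, padding 2, 16 channels) -> SPD activation -> 2x2 average pooling.
conv = torch.nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=2, bias=False)

x = torch.rand(8, 1, 28, 28)                   # a batch of MNIST-sized inputs
a = coherent_spd_activation(conv(x))           # stochastic binary feature map, 8 x 16 x 28 x 28
a = torch.nn.functional.avg_pool2d(a, 2)       # pooled feature map, 8 x 16 x 14 x 14
```

A trainable version would pair this forward pass with the same expectation-based gradient estimator used in the incoherent case, with \(\partial\lambda/\partial z=2z\) for direct detection.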
# Quantum-noise-limited optical neural networks operating at a few quanta per activation

Shi-Yuan Ma [email protected] School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853, USA

Tianyu Wang School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853, USA

Jeremie Laydevant School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853, USA; USRA Research Institute for Advanced Computer Science, Mountain View, CA 94035, USA

Logan G. Wright Department of Applied Physics, Yale University, New Haven, Connecticut 06511, USA; NTT Physics and Informatics Laboratories, NTT Research, Inc., Sunnyvale, CA 94085, USA

Peter L. McMahon [email protected] School of Applied and Engineering Physics, Cornell University, Ithaca, NY 14853, USA; Kavli Institute at Cornell for Nanoscale Science, Cornell University, Ithaca, NY 14853, USA
Discussion
Supplementary Note 10. Robustness tests of SPDNNs
Supplementary Note 11. Noise resilience compared to conventional models
Supplementary Note 12. Distribution of expectation values for SPD activations

## Part I Simulation results

In this part, we introduce the details of the simulation of single-photon-detection neural networks (SPDNNs). Each neuron activation in an SPDNN, corresponding to a readout on a single-photon detector (SPD) in experiment, is modelled as a binary stochastic process [1, 2, 3]. For each SPD measurement, the single-shot output is either 0 or 1, with probabilities determined by the incident optical energy. The exact form of the activation function is defined by the physical process of single-photon detection. For an incident beam with an optical energy of \(\lambda\) photons per detection, due to Poissonian photon statistics, the probability for an SPD to detect a click is \(P_{\mathrm{SPD}}(\lambda)=1-e^{-\lambda}\), as shown in Figure 2 in the main text. The detected binary results are used to compute the activation values. However, due to the stochasticity and discretization of the single-photon-detection process, estimating the gradients of the loss function is challenging, so conventional backpropagation algorithms fail to train these models.

Training stochastic neuron models has been investigated for many years. One major family of algorithms relying on such neurons is the Boltzmann machine [4, 5]. REINFORCE algorithms (RA) [6] update the weights along the direction of the gradients of the expected reinforcement without explicitly computing them; they have been investigated and applied to train stochastic neural networks effectively in different tasks [7, 8]. In [9], many methods of estimating gradients through stochastic neurons are studied. The authors found that the fastest training in their experiments was achieved by the "straight-through estimator" (STE), which was previously introduced in Hinton's course in lecture 15b [10].
In our simulation of SPDNNs, we took inspiration from both methods and found an estimator that trains our SPDNNs effectively, with the activation induced by the physical single-photon-detection process. When the STE is used in a binary neural network, the binarization process, whether deterministic or stochastic, is regarded as an identity function during backpropagation. However, if we directly use the STE to go "straight through" the whole SPD process, the training performance is not very good. That is because the STE is a biased estimator of the gradients [9, 11], which means that the expectation value of the estimator is not the same as the true expectation value of the real random variable. The biased estimation of gradients harms the training accuracy over a large number of epochs. We therefore looked for an unbiased estimator, inspired by RA [6, 9]. We can conceptually break the single-photon-detection process into two parts: a deterministic probability function \(P_{\mathrm{SPD}}\), and the Bernoulli sampling that introduces the stochasticity. For a Bernoulli distribution, the expectation value equals the probability of obtaining 1, so that \(P_{\mathrm{SPD}}\) is also the expectation value of the activation. Instead of going "straight through" the whole SPD process, we only skip the Bernoulli sampling process to avoid the uncertainty in backpropagation, and include the gradients induced by the probability function so that they match the expectation values of the random variable.

To enhance training effectiveness in certain cases, we introduced a slope variable, \(\eta\), which modifies the intensity value within the SPD activation function: \(P_{\mathrm{SPD}}^{\eta}(\lambda)=P_{\mathrm{SPD}}(\eta\lambda)\). The incorporation of a technique called "slope annealing" [12] allows controlled alteration of the gradients of the activation function, leading to more efficient navigation of the model's parameter space. Additionally, we impose an upper limit on the intensity by clamping it at a maximum value \(\lambda_{\mathrm{max}}\). This prevents vanishing gradients resulting from excessively large values and the plateauing probability function. Both the slope variable and the intensity clamping can be used exclusively during the training phase. In the optical implementation, the annealing factor can be absorbed into the mapping from the trained weights to the controlled parameters on the experimental setup. The details of the backpropagation and training process are shown in Algorithms 1 and 3, with the exact activation functions of the incoherent and coherent optical setups, respectively. In the following sections, we introduce the two SPDNN setups in detail and test their performance on different tasks and architectures.

## Supplementary Note 1. SPDNNs with incoherent optical setups

### Modelling and training

When an optical neural network (ONN) operates with incoherent light, the values of the vector elements are encoded in the intensity of light. The encoded values are non-negative, and the operations are performed by modulating the intensity of light. For an optical matrix-vector multiplier (optical MVM) operating with incoherent light, the values in an output vector \(z\) are directly the intensities to be measured by the detector, i.e. \(\lambda=z\). The probability of obtaining an SPD measurement of 1 is then \(P_{\mathrm{SPD}}(\lambda(z))=P_{\mathrm{SPD}}(z)\). This probability \(P_{\mathrm{SPD}}\) is determined by the pre-activation value \(z\).
Thus, the SPD activation is a Bernoulli sampling of the probability \(P_{\mathrm{SPD}}\), \(f_{\mathrm{SPD}}^{\mathrm{Incoh}}(z)=\mathbf{1}_{t<P_{\mathrm{SPD}}(z)}\), where \(t\) is a uniform random variable \(t\sim U[0,1]\) and \(\mathbf{1}_{x}\) is the indicator function on the true value of \(x\), i.e.

\[f_{\mathrm{SPD}}^{\mathrm{Incoh}}(z)=\begin{cases}1&\text{with probability }p=P_{\mathrm{SPD}}(z),\\ 0&\text{with probability }1-p,\end{cases} \tag{1}\]

where the probability function is \(P_{\mathrm{SPD}}(z)=1-e^{-z}\). The activation in the forward propagation is calculated by \(a=f_{\mathrm{SPD}}^{\mathrm{Incoh}}(z)\). The detailed training procedure is given in Algorithm 1. With \(L\) layers in the neural network, the SPD activation function is applied after every layer except for the output layer. In the \(l\)th layer (\(l\neq L\)), \(z^{(l)}=a^{(l-1)}W^{(l)}\) is the direct output of the optical MVM that encodes the information of the dot-product results. In an incoherent optical setup, the output values are directly encoded in light intensity, \(\lambda^{(l)}=z^{(l)}\). In the training process, we clamp the intensity at a maximum value \(\lambda_{\max}\) to avoid vanishing gradients at large values. Meanwhile, the clamped intensity vector \(\lambda^{(l)}\) is multiplied by the slope variable \(\eta\) to compute the probability of detecting a click on the SPDs according to \(P_{\text{SPD}}\), \(p^{(l)}=P_{\text{SPD}}(\eta\lambda^{(l)})\). Then the activation values are the Bernoulli sampling of the computed probabilities, \(a^{(l)}=\mathbf{1}_{t<p^{(l)}}\), which are sent to the next layer in the forward propagation.

In the backward propagation, our gradient estimator assumes that the gradient of the stochastic sampling process is 1, \(\partial a^{(l)}/\partial p^{(l)}=1\). Thus, during the backward pass of the \(l\)th layer, given the gradient with respect to \(a^{(l)}\), \(g_{a^{(l)}}=\partial C/\partial a^{(l)}\), calculated from the next layer (the previous layer in the backward propagation), the gradient with respect to the pre-activation \(z^{(l)}\) is calculated as

\[g_{z^{(l)}}=\frac{\partial a^{(l)}}{\partial z^{(l)}}\circ g_{a^{(l)}}=\frac{ \partial a^{(l)}}{\partial p^{(l)}}\circ\frac{\partial p^{(l)}}{\partial \lambda^{(l)}}\circ\frac{\partial\lambda^{(l)}}{\partial z^{(l)}}\circ g_{a^{ (l)}}=1\circ P^{\prime}_{\text{SPD}}(\lambda^{(l)})\circ 1\circ g_{a^{(l)}}=P^{ \prime}_{\text{SPD}}(z^{(l)})\circ g_{a^{(l)}}, \tag{2}\]

so that the gradients with respect to the weights \(W^{(l)}\) are \(g_{W^{(l)}}=g_{z^{(l)}}^{\top}a^{(l-1)}\). In this way, the gradients can be efficiently calculated to optimize the weights with a gradient-based optimizer and a learning rate. Note that for an incoherent optical setup, the elements of the weights (realized by intensity modulations) are also non-negative, so the updated weights need to be clamped to non-negative values after each optimization step. After each optimization step, the slope variable is also updated by multiplying it by a factor \(\theta\), following the "slope annealing" trick [12], to improve the training performance when necessary. During the inference of a trained model, the forward pass of test inputs is similar to the training process, with the exception that the maximum clamping \(\lambda_{\max}\) is not applied.
Additionally, to control the level of uncertainty in the stochastic neural networks, we can choose to use multiple shots of SPD measurements during each inference. In a "\(K\)-shot" inference, we use \(K\) shots of binary SPD readouts per neuron; the final activation value of the neuron, denoted \(a^{[K]}\), is the average of the \(K\) independent stochastic binary values. This process effectively integrates a few more photons on the SPD, as is commonly done in conventional ONN implementations [16]. For a single shot of SPD measurement per activation, \(a^{[1]}=a\in\{0,1\}\), while for \(K\) shots, \(a^{[K]}=\frac{1}{K}\sum_{k=1}^{K}a_{k}\in\{0,1/K,2/K,\ldots,1\}\). This approach reduces the uncertainty in the models, resulting in more precise output values. In the ideal case where an infinite number of shots are integrated (\(K\to\infty\)), the activation \(a^{[\infty]}\) would converge to the expectation value without stochasticity, denoted as \(a^{[\infty]}=\mathbb{E}[a]=P_{\text{SPD}}(z)\). As we will see in Supplementary Note 1.2, the SPDNN models have higher test accuracy as the number of shots per activation \(K\) increases. The detailed inference procedure is explained in Algorithm 2.

```
Input:  A batch of test inputs a^(0) (N_batch x N_0) and trained weights W^(l) (N_l x N_{l-1}, l in {0, 1, ..., L}), slope annealing factor η.
Output: The output a^(L).
 1: for l = 1 to L do
 2:   z^(l) <- a^(l-1) W^(l)^T          ▷ Linear operation to compute the pre-activation
 3:   λ^(l) <- z^(l)                    ▷ For incoherent light, intensity is directly modulated
 4:   if l < L then                     ▷ SPD activation process
 5:     p^(l) <- P_SPD(η · λ^(l))       ▷ The probability of detecting a click, with the slope η applied
 6:     for k = 1 to K do               ▷ K shots in one inference
 7:       a^(l),k <- Sampling(p^(l))    ▷ SPD output for each shot
 8:     end for
 9:     a^(l) <- (1/K) Σ_{k=1}^{K} a^(l),k   ▷ Average over all K shots for the activation values
10:   end if
11: end for
12: a^(L) <- λ^(L)                      ▷ Use the output intensity directly in the inference
```

### MNIST classification task

To illustrate the capability of the SPD activation function, we first use it in a simple multi-layer perceptron (MLP) architecture and train the models on the benchmark MNIST classification task. The models have the structure \(784\to N\to 10\) with \(N\) neurons in the hidden layer, as discussed in the main text. The MNIST dataset has 60,000 images for training and 10,000 images for testing. Each image is grayscale and has \(28\times 28=784\) pixels. To meet the non-negative encoding of incoherent light, the input images are normalized to have pixel values in the range 0 to 1. The models consist of two linear layers: the \(784\to N\) hidden layer has the weight matrix \(W^{(1)}\) with a shape of \(N\times 784\), and the \(N\to 10\) output layer has the weight matrix \(W^{(2)}\) with a shape of \(10\times N\). The SPD activation function is applied to each hidden neuron after the linear operation of \(W^{(1)}\) to compute the neuron activations, and the computed activation values are then passed to the output layer to produce the output vectors. The elements of the first linear operation, \(W^{(1)}\), are clamped to be non-negative to meet the requirement of an incoherent optical setup.
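As a concrete illustration of the \(K\)-shot inference of Algorithm 2 for this \(784\to N\to 10\) model, the following is a small PyTorch sketch under our own naming (not the authors' code); for simplicity the slope factor \(\eta\) is set to 1 and only the incoherent activation is shown.

```python
import torch

def k_shot_spd_inference(x, W1, W2, K=5):
    """K-shot inference of an incoherent 784 -> N -> 10 SPDNN.

    W1 is non-negative (hidden layer), W2 is real-valued (output layer);
    each hidden activation is the average of K binary SPD samples.
    """
    z = x @ W1.t()                              # pre-activations, in photons per detection
    p = 1.0 - torch.exp(-z)                     # click probability P_SPD(z)
    shots = torch.stack([torch.bernoulli(p) for _ in range(K)])
    a = shots.mean(dim=0)                       # a^[K]: average of K binary SPD readouts
    return a @ W2.t()                           # real-valued output layer

# Example: one stochastic inference with N = 400 hidden neurons and K = 5 shots
# (illustrative random inputs and weights, not trained parameters).
x = torch.rand(128, 784)                        # batch of normalized MNIST-like images
W1 = torch.rand(400, 784) * 0.01                # non-negative hidden-layer weights
W2 = torch.randn(10, 400)
logits = k_shot_spd_inference(x, W1, W2, K=5)
```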
In general, real-valued weights can be realized with an incoherent optical MVM if some digital-electronic postprocessing is available. In our case, where the activations are measured by SPDs, the activation function is directly applied in the single-photon-detection process, which makes digital postprocessing impossible. Similarly, biases of the linear operations are also disabled. If we instead wanted to avoid digital postprocessing by applying a bias term directly to the optical intensity, the approach would also be challenging in experiments at the level of a few photons. However, as the output layer is implemented with conventional optical computing at a higher signal-to-noise ratio (SNR), we can effectively implement real-valued weights in \(W^{(2)}\). In the optical implementation, this involves extra operations to map these values onto the incoherent setup.

During the training process, we apply the LogSoftmax function to the output vectors and use cross-entropy loss to construct the loss function. To avoid the issue of vanishing gradients, we clamp the pre-activations at \(\lambda_{\text{max}}=3\) photons. It is important to note that, due to the stochastic nature of the neural networks, each forward pass generates different output values even with the same weights and inputs. However, we only use a single forward pass in each training epoch, which we found to be the most efficient training approach. The stochasticity introduced in each forward propagation adds to the random search of the stochastic optimizer itself and thereby helps the training process. We have found that using the SGD optimizer [13] with small learning rates leads to better accuracy compared to other optimizers such as AdamW [15]. Although training with SGD takes longer overall, it helps us achieve a better-optimized model in the end. For our final results, we used a batch size of 128 and learning rates of 0.001 for the hidden layer and 0.01 for the output layer in the SGD optimizer. We trained each SPDNN model for 10,000 epochs to obtain optimized parameters; an even higher number of epochs may be needed to achieve better accuracy. Given the small learning rate and the significant amount of noise in the model, the number of epochs required is much larger than what is typically seen in common neural network training processes. The training and test errors for an incoherent MLP-SPDNN with a structure of \(784\to 400\to 10\) are shown in Supplementary Figure 1. The training process was performed on a GPU (Tesla V100-PCIE-32GB) and took approximately eight hours to complete.

**Supplementary Figure 1. Training curves of an incoherent SPDNN model for MNIST classification.** The plot illustrates the progression of test and training errors throughout the training process of an incoherent SPDNN model with an MLP architecture of \(784\to 400\to 10\). The optimization is conducted using an SGD optimizer with learning rates of 0.001 for the hidden layer and 0.01 for the output layer. The final trained model is obtained at 10,000 epochs.

The magnitude of the weight element values in the first linear layer, \(W^{(1)}\), is influenced by the range of the input vectors, \(a^{(0)}\), and the specific form of the SPD activation function, \(f_{\text{SPD}}^{\text{Incoh}}(z)\). In the forward pass of an incoherent SPDNN, the pre-activation values, \(z^{(1)}\), are computed as \(z^{(1)}=a^{(0)}W^{(1)\top}\).
The activation function, \(f_{\text{SPD}}^{\text{Incoh}}(z)\), is defined as \(\mathbf{1}_{t<P_{\text{SPD}}(z)}\), where \(t\) is a random variable uniformly distributed between 0 and 1, and \(P_{\text{SPD}}(z)\) represents the probability of photon detection. When the input vectors, \(a^{(0)}\), are normalized to the range of 0 to 1, the weight elements in \(W^{(1)}\) are optimized based on the specific form of \(P_{\text{SPD}}\), because it depends on the exact values of the pre-activations \(z\). In our simulation of an incoherent SPDNN, the elements of \(z^{(1)}\) are represented in terms of photon numbers, where the value 1 corresponds to 1 photon. When \(z\gtrsim 3\), \(P_{\text{SPD}}\) reaches the plateau part of the probability function. Thus, we have to keep the pre-activation values around 1 photon to ensure an effective forward pass. When considering a uniform bright image where each element has the maximum value of 1, and with an input vector size of \(28\times 28=784\), if we aim for an output value of approximately 1 photon, the average value of the weight elements in \(W^{(1)}\) should be around \(1/784\approx 0.0013\). The average pixel value in the MNIST dataset is approximately 0.13 (when each pixel value is normalized to the range of 0 to 1). Based on this, we can estimate that to achieve an output value of approximately 1 photon, the average weight element value should be around 0.01. Taking into account that both the input images and weight matrices tend to be sparse, this estimate may be slightly lower than the actual values. Supplementary Figure 2 illustrates the matrix elements of \(W^{(1)}\) for a model with \(N=100\) hidden neurons. The weight elements range from 0 to 5.18, with an average value of 0.07. Each block represents a row vector of size 784, rearranged in the form of \(28\times 28\). The average value of \(W^{(1)}\) may vary slightly in different network structures, ranging from 0.06 to 0.08. During the inference of SPDNNs, the pixel values of the test images are normalized to the range of 0 to 1 as well. This corresponds to the dynamic range of the optical setup. We trained incoherent MLP-SPDNN models with varying numbers of hidden neurons, \(N\), ranging from 10 to 400. As discussed in Supplementary Note 1 A, we can adjust the number of SPD measurements per activation, denoted as \(K\), to control the level of stochasticity in the models. The results of the MNIST test accuracy for different combinations of \(N\) and \(K\) are summarized in Supplementary Table 1. The values of \(N\) include 10, 20, 50, 100, 200, 300, and 400, while \(K\) takes on the values of 1, 2, 3, 5, 7, 10, and \(\infty\). In the case of \(K\rightarrow\infty\), we use the expectation of the activation values, \(P_{\text{SPD}}\), as the activation function, which is equivalent to integrating an infinite number of shots per SPD detection. This serves as an upper bound that is approached as \(K\) increases. Due to the stochastic nature of SPDNNs, the output vectors vary across different repetitions of inference. To capture the overall behavior of the models, we repeated the full inference process 100 times for each structure with \(N\) hidden neurons and \(K\) shots per activation. This allows us to calculate the mean test accuracy and standard deviation, representing the distribution of test accuracies. Each independent repetition of inference involves the MNIST test dataset, consisting of 10,000 images.
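The mean and standard deviation of the test accuracy reported in Supplementary Table 1 can be estimated with a short loop over repeated stochastic inferences; this is only an illustrative sketch that reuses the hypothetical `spdnn_forward` function defined earlier:

```python
import numpy as np

def repeated_accuracy(x_test, y_test, W1, W2, K, n_rep=100, seed=0):
    """Mean and standard deviation of the test accuracy over n_rep stochastic inferences."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_rep):
        logits = spdnn_forward(x_test, W1, W2, K=K, rng=rng)   # one full stochastic inference
        accs.append((logits.argmax(axis=1) == y_test).mean())  # accuracy of this repetition
    return float(np.mean(accs)), float(np.std(accs))
```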
We observe that as either \(N\) or \(K\) increases, the mean test accuracy tends to improve while the standard deviation decreases. \begin{tabular}{|c||c|c|c|c|c|c|c|} \hline Model & \(K=1\) & \(K=2\) & \(K=3\) & \(K=5\) & \(K=7\) & \(K=10\) & \(K\rightarrow\infty\) \\ \hline \hline 784–10–10 & \(78.03\pm 0.32\) & \(83.18\pm 0.26\) & \(84.79\pm 0.22\) & \(86.13\pm 0.17\) & \(86.65\pm 0.17\) & \(87.08\pm 0.16\) & \(87.91\pm 0.00\) \\ 784–20–10 & \(86.74\pm 0.24\) & \(89.98\pm 0.18\) & \(90.96\pm 0.15\) & \(91.71\pm 0.13\) & \(92.00\pm 0.13\) & \(92.22\pm 0.13\) & \(92.66\pm 0.00\) \\ 784–50–10 & \(93.04\pm 0.16\) & \(94.49\pm 0.15\) & \(94.92\pm 0.12\) & \(95.24\pm 0.11\) & \(95.38\pm 0.10\) & \(95.47\pm 0.09\) & \(95.73\pm 0.00\) \\ 784–100–10 & \(95.20\pm 0.16\) & \(96.24\pm 0.11\) & \(96.53\pm 0.10\) & \(96.75\pm 0.09\) & \(96.85\pm 0.07\) & \(96.91\pm 0.07\) & \(97.02\pm 0.00\) \\ 784–200–10 & \(96.62\pm 0.12\) & \(97.33\pm 0.10\) & \(97.54\pm 0.08\) & \(97.70\pm 0.08\) & \(97.75\pm 0.08\) & \(97.80\pm 0.06\) & \(97.98\pm 0.00\) \\ 784–300–10 & \(97.00\pm 0.12\) & \(97.61\pm 0.08\) & \(97.80\pm 0.08\) & \(97.93\pm 0.07\) & \(97.97\pm 0.06\) & \(98.01\pm 0.05\) & \(98.12\pm 0.00\) \\ 784–400–10 & \(97.31\pm 0.11\) & \(97.85\pm 0.10\) & \(98.01\pm 0.09\) & \(98.15\pm 0.06\) & \(98.20\pm 0.06\) & \(98.27\pm 0.05\) & \(98.41\pm 0.00\) \\ \hline \end{tabular} **Supplementary Table 1. Test accuracy (%) of incoherent MLP-SPDNNs on MNIST with varying hidden layer size \(N\) and shots per activation \(K\).** These models have an MLP structure of \(784\to N\to 10\), where \(N\) represents the number of hidden neurons. Each hidden neuron uses \(K\) shots of binary SPD readouts to compute its activation. The reported test accuracy values are obtained by averaging the mean value and standard deviation over 100 repetitions of inferences on the MNIST test set, which comprises 10,000 images. **Supplementary Figure 2. Visualization of weight elements in the first linear layer of an incoherent SPDNN.** The architecture of this model is \(784\to 100\to 10\), and we display the weight matrix \(W^{(1)}\) of the first layer (with dimensions \(100\times 784\)). Each block represents a row vector in \(W^{(1)}\) containing \(784\) elements. These column vectors are rearranged to form a 2D block with dimensions \(28\times 28\), matching the original shape of the MNIST input images. The 100 rows in \(W^{(1)}\), corresponding to the 100 hidden neurons in the neural network, are arranged in a \(10\times 10\) grid to be visualized. The average value of each block is indicated at the top, and the overall average value of the weight matrix is approximately \(\sim 0.07\). ## Supplementary Note 2 SPDNNs with coherent optical setups ### Modelling and training ``` 1:A batch of inputs \(a^{(0)}\) (\(N_{\text{batch}}\times N_{0}\)) with corresponding targets \(y\) (\(N_{L}\times 1\)), current weights \(W^{(l)}\) (\(N_{l}\times N_{l-1}\), \(l\in\{0,1,\dots,L\}\)), current slope variable \(\eta\), slope annealing factor \(\theta\), current learning rate \(\alpha\), learning rate decay coefficient \(\gamma\) and the clamped photon number \(\lambda_{\max}\). 2:Updated weights \(W^{(l)}\) (\(l\in\{0,1,\dots,L\}\)), slope \(\eta\) and learning rate \(\alpha\). 3:\(\boldsymbol{I}\). 
Forward pass 4:for\(l=1\) to \(L\)do 5:\(z^{(l)}\gets a^{(l-1)}W^{(l)\top}\)\(\triangleright\) Linear operation to compute the pre-activation 6:\(\lambda^{(l)}\leftarrow(z^{(l)})^{2}\)\(\triangleright\) For coherent light, intensity is the square of the amplitude 7:\(\lambda^{(l)}\leftarrow\min(\lambda^{(l)},\lambda_{\max})\)\(\triangleright\) Clamp the maximum intensity 8:if\(l<L\)then 9:\(p^{(l)}\gets P_{\text{SPD}}(\lambda^{(l)})\)\(\triangleright\) The probability of detecting a click on the SPDs 10:\(a^{(l)}\leftarrow\text{Sample}(p^{(l)})\)\(\triangleright\) SPD output for each shot 11:endif 12:endfor 13:\(a^{(L)}\leftarrow\text{Output}(\lambda^{(L)})\)\(\triangleright\) Final output function 14:\(\boldsymbol{II}\). Backward pass 15:Compute \(g_{a^{(L)}}=\frac{\partial C}{\partial a^{(L)}}\) knowing \(a^{(L)}\) and \(y\). 16:\(g_{z^{(L)}}\leftarrow\frac{\partial a^{(L)}}{\partial z^{(L)}}\circ g_{a^{(L)}}\) 17:for\(l=L\) to \(1\)do 18:if\(l<L\)then 19:\(g_{p^{(l)}}\gets g_{a^{(l)}}\)\(\triangleright\) "Straight-through" here, skip the Bernoulli process 20:\(g_{z^{(l)}}\gets 2z^{(l)}\circ P^{\prime}_{\text{SPD}}\left((z^{(l)})^{2} \right)\circ g_{p^{(l)}}\)\(\triangleright\)\(\frac{\partial p^{(l)}}{\partial z^{(l)}}=\frac{\partial p^{(l)}}{\partial\lambda^{(l)}} \circ\frac{\partial\lambda^{(l)}}{\partial z^{(l)}}=2z^{(l)}\circ P^{\prime}_{ \text{SPD}}\left((z^{(l)})^{2}\right)\) 21:endif 22:\(g_{a^{(l-1)}}\gets g_{z^{(l)}}W^{(l)}\)\(\triangleright\) Propagate the gradient to the previous layer 23:\(g_{W^{(l)}}\gets g_{z^{(l)}}^{\top}a^{(l-1)}\)\(\triangleright\) The gradients with respect to \(W^{(l)}\) 24:endfor 25:\(\boldsymbol{III}\). Parameter update 26:for\(l=1\) to \(L\)do 27:\(W^{(l)}\leftarrow\text{Update}(W^{(l)},g_{W^{(l)}},\alpha)\)\(\triangleright\) Update the weights 28:endfor 29:\(\eta\leftarrow\theta\eta\)\(\triangleright\) Update the slope 30:\(\alpha\leftarrow\gamma\alpha\)\(\triangleright\) Update the learning rate ``` **Algorithm 3** Physics-aware stochastic training of an SPDNN with coherent light. \(N_{\text{batch}}\) is the batch size, \(N_{l}\) denotes the number of neurons in layer \(l\) and \(N_{0}\) is the input size. \(C\) is the loss function. \(L\) is the number of layers. \(P_{\text{SPD}}(\lambda)\) is the function of the probability to detect a click on the single-photon detector (SPD) with respect to the incident light intensity \(\lambda\) (in number of photons). Sample() is a probabilistic sampling of the probability. In SPDNNs, it refers to Bernoulli sampling: Sample(\(p\)) has a probability of \(p\) to be \(1\) and a probability of \(1-p\) to be \(0\) (i.e. Sample(\(p\)) \(\equiv\)\(\mathbf{1}_{t<p}\), \(t\sim U[0,1]\)). For a coherent setup, \(\lambda=z^{2}\) where \(z\) is the pre-activation, the output of a matrix-vector multiplier. Output() determines the function applied to the pre-activation right before the final output, such as Softmax or LogSoftmax. Update() specifies how to update the parameters given the calculated gradients, using optimizers such as SGD [13], Adam [14] or AdamW [15].
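To make the forward sampling and the straight-through backward pass of Algorithm 3 concrete, a minimal PyTorch sketch of the coherent SPD activation is given below; the class name and the use of `torch.autograd.Function` are our own choices, not a quotation of the original training code:

```python
import torch

class CoherentSPDActivation(torch.autograd.Function):
    """Forward: Bernoulli sample with p = 1 - exp(-z^2).
    Backward: straight-through estimator using dp/dz = 2z * exp(-z^2)."""

    @staticmethod
    def forward(ctx, z):
        p = 1.0 - torch.exp(-z.pow(2))      # click probability for coherent encoding
        ctx.save_for_backward(z)
        return torch.bernoulli(p)           # binary SPD readout

    @staticmethod
    def backward(ctx, grad_out):
        (z,) = ctx.saved_tensors
        dp_dz = 2.0 * z * torch.exp(-z.pow(2))   # d/dz [1 - exp(-z^2)]
        return grad_out * dp_dz             # skip the Bernoulli sampling ("straight-through")
```

During training, `a = CoherentSPDActivation.apply(z)` would play the role of steps 9-10 of the forward pass, with the backward rule corresponding to steps 19-20 of the listing above.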
In coherent optical MVMs [17; 18; 19; 20; 21; 22; 23], the information is conveyed through both the amplitude and phase of light states. These multipliers have the potential to encode complex numbers using arbitrary phases, but in most applications only phases of \(0\) and \(\pi\) are used, for positive and negative real-number values, to align with conventional machine learning models. Our work focuses on real-valued coherent optical MVMs. Now that the information is encoded in the amplitude and phase instead of the intensity, the photon detection process involves measuring the square modulus of the complex number, which adds an extra square function to the pre-activation values. Thus, the coherent SPD activation function is \(f^{\text{Coh}}_{\text{SPD}}(z)=\mathbf{1}_{t<P_{\text{SPD}}(z^{2})}\), where \(t\) is a uniform random variable \(t\sim U[0,1]\) and \(\mathbf{1}_{x}\) is the indicator function on the true value of \(x\), i.e. \[f_{\text{SPD}}^{\text{Coh}}(z)=\begin{cases}1&\text{with probability }p=P_{\text{SPD}}(z^{2}),\\ 0&\text{with probability }1-p,\end{cases} \tag{3}\] where \(P_{\text{SPD}}(z^{2})=1-e^{-z^{2}}\). The activation in the forward propagation is calculated by \(a=f_{\text{SPD}}^{\text{Coh}}(z)\). The expectation of the coherent SPD activation is \(\mathbb{E}[f_{\text{SPD}}^{\text{Coh}}]=P_{\text{SPD}}(z^{2})\). The coherent activation function, depicted in Figure 4a of the main text, exhibits a distinct "V" shape that is symmetric about the y axis due to the additional square operation, which could be problematic as an activation function [24]. One possible solution is to modify the information encoding and detection scheme to alter the exact form of \(\lambda(z)\) (e.g. [21]). However, in this section, we have chosen to employ the most straightforward intensity-detection scenario, which does not necessitate modifications to conventional ONN implementations. Remarkably, despite its simplicity, this activation function delivers comparable performance and demonstrates impressive results. By adopting this approach, we alleviate experimental complexities while ensuring reliable inference in our SPDNN models. ### MNIST classification task The MNIST handwritten-digit classification task was performed using the same simulation configurations as with incoherent SPDNNs, but with the coherent SPD activation function and real-valued operations. Unlike the previous case, no clamping of the weights was necessary.
The models were trained using the SGD optimizer with a learning rate of 0.01 for the hidden layers and 0.001 for the last linear layer, over a period of 10,000 epochs. To evaluate the impact of model size, we trained models with both one and two hidden layers. The training curves of the model with the structure of \(784\to 400\to 400\to 10\) are shown in Supplementary Figure 3. The results for models with different structures and shots of SPD measurements per activation can be found in Supplementary Table 2, and the weights of a model with the structure of \(784\to 100\to 10\) are illustrated in Supplementary Figure 4. Furthermore, convolutional SPDNNs were also used for MNIST classification. The architecture included a convolutional layer with 16 output channels, a kernel size of \(5\times 5\), and a stride of 1. An SPD activation function was applied immediately after each convolution layer, without the use of batch normalization. Average pooling of \(2\times 2\) was performed after each of the SPD activations. After the convolution layer, the total number of features was 3136; the convolutional part was then followed by a linear model of \(3136\to 400\to 10\), with the SPD activation function applied at each of the 400 hidden neurons as well. This structure is depicted in Figure 4b in the main text. For optimization, we used an SGD optimizer with a learning rate of 0.01 for the entire model. The convolutional SPDNN model can be optimized easily without fine-tuning the training hyperparameters. After 200 epochs, the accuracy quickly reached 99.4%. **Supplementary Figure 4. Visualization of weight elements in the first linear layer of a coherent SPDNN.** The architecture of this model is \(784\to 100\to 10\), and we display the weight matrix \(W^{(1)}\) of the first layer (with dimensions \(100\times 784\)). Each block represents a row vector in \(W^{(1)}\) containing 784 elements. These row vectors are rearranged to form a 2D block with dimensions \(28\times 28\), matching the original shape of the MNIST input images. The 100 rows in \(W^{(1)}\), corresponding to the 100 hidden neurons in the neural network, are arranged in a \(10\times 10\) grid to be visualized. The average value and standard deviation of the elements in each block are indicated at the top. ### CIFAR-10 classification task The CIFAR-10 dataset [25] has 60,000 images, each having \(3\times 32\times 32\) pixels (3 color channels), that belong to 10 different categories, representing airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships and trucks. The dataset is partitioned into a training set with 50,000 images and a test set with 10,000 images. The pixel values have been normalized using the mean values of \((0.4914,0.4822,0.4465)\) and standard deviations of \((0.2471,0.2435,0.2616)\) for the color channels. To boost performance, data augmentation techniques including random horizontal flips (50% probability) and random \(32\times 32\) crops (with 4-pixel padding) were implemented during training. We used the AdamW optimizer [15] with a learning rate of 0.0005 and betas of (0.99, 0.98). The models were trained for thousands of epochs. The convolutional SPDNNs have a structure where the SPD activation function is applied after each convolution layer and before an average pooling of \(2\times 2\). The final architecture consists of a series of convolution layers followed by a linear layer of 400 neurons, and a last layer of \(400\to 10\) for the output.
As in the convolutional models trained for MNIST, the convolutional layers use a kernel size of \(5\times 5\), a stride of 1, and padding of 2. Batch normalization was used in the models after each convolutional layer. Either the SPD or the ReLU activation function was applied to each of the 400 neurons in the first linear layer, as depicted in Figure 4e in the main text. After \(N_{\text{conv}}\) convolutional layers (\(N_{\text{conv}}=2\), 3 or 4 in this case), with the number of output channels of the last one being \(N_{\text{chan}}^{\text{last}}\) (either 128 or 256 in this case), the feature map of size \((32/2^{N_{\text{conv}}})^{2}\times N_{\text{chan}}^{\text{last}}\) is flattened to a vector and followed by two linear layers of \((32/2^{N_{\text{conv}}})^{2}N_{\text{chan}}^{\text{last}}\to 400\to 10\). In addition to the results presented in Figure 4e in the main text, we experimented with more architectures ranging from 2 to 4 convolution layers, and the results are displayed in Supplementary Figure 5. In these models, only the SPD activation function was used. The x-axis of the plot represents the number of multiply-accumulate (MAC) operations in the convolutional layers. The layout of the number of channels for each convolution layer is noted around each data point for each model. For example, "64-128" indicates that there are two convolution layers with 64 and 128 output channels, respectively. The mean values (data points) and standard deviations (shaded area) of the test accuracies are obtained using 100 repeated inferences, and the activations only involved a single shot of SPD measurement (\(K=1\)) in all the neurons, including the convolutional and linear layers. We further investigated the effect of multiple shots of SPD measurements per activation in the convolutional (\(K_{\text{conv}}\)) and linear (\(K_{\text{lin}}\)) layers, respectively. We chose to test a model with four convolutional layers of 128, 256, 256, and 256 output channels and varied \(K_{\text{conv}}\) and \(K_{\text{lin}}\) to see the test accuracies. The results are summarized in Supplementary Table 3. In these SPDNNs, the number of operations in the output layer is negligible compared to the entire models. In terms of number of MAC operations (dot products, DPs), \(N_{\text{MAC}}^{\text{out}}=4000\) (\(N_{\text{DP}}^{\text{out}}=10\)). The number of dot products, or the activation size, is directly related to the number of optical detections in ONN implementations. The output layer is the only layer that needs to be implemented with a "high SNR" (see Figure 1 in the main text). The small portion of operations in this layer indicates the capability of low-SNR stochastic layers in a deeper model. This further suggests the potential to leverage stochastic physical systems with low SNRs to perform reliable neural-network inference. **Supplementary Table 3. Test accuracy (%) of the convolutional SPDNN on CIFAR-10 with varying shots per activation \(K\) in the convolutional and linear layers.** The SPDNN model in this table consists of four convolutional layers with 128, 256, 256, and 256 output channels, respectively. The convolutional layers are followed by a linear layer with 400 neurons and an output layer with 10 neurons. The SPD activation function is applied to each of the 400 neurons in the first linear layer.
\(K_{\text{conv}}\) represents the number of shots of SPD readouts per activation in the convolutional layers, while \(K_{\text{lin}}\) represents the shots per activation in the linear layer. The mean accuracy and standard deviation are calculated based on 100 repetitions of inferences using the CIFAR-10 test set of 10,000 images. ## Part II Experimental setup ### Supplementary Note 3. INCOHERENT OPTICAL MATRIX-VECTOR MULTIPLIER The optical matrix-vector multiplier (optical MVM) setup is based on the setup designed in [26]. It consists of an array of light sources, a zoom lens imaging system, an intensity modulator, and a photodetector. We used an organic light-emitting diode (OLED) display of a commercial smartphone (Google Pixel 2016 version) as the light source for encoding input vectors. The OLED display consists of a \(1920\times 1080\) pixel array, with individually controllable intensity for each pixel. The display has pixels of three colors; only the green pixels (light wavelength of \(\sim 532\) nm) are used in the experiment. The green pixels are arranged in a square lattice with a pixel pitch of \(57.5\)\(\upmu\)m. A reflective liquid-crystal spatial light modulator (SLM, P1920-500-1100-HDMI, Meadowlark Optics) was combined with a half-wave plate (HWP, WPH10ME-532, Thorlabs) and a polarizing beamsplitter (PBS, CCM1-PBS251, Thorlabs) to perform intensity modulation as weight multiplication. The SLM has a pixel array of dimensions \(1920\times 1152\), with individually controllable transmission for each pixel. Each pixel has a size of \(9.2\times 9.2\)\(\upmu\)m. A zoom lens system (Resolv4K, Navitar) was used to image the OLED display onto the SLM panel (Supplementary Figure 6). The intensity-modulated light field reflected from the SLM was further de-magnified and imaged onto the detector by a telescope formed by the rear adapter of the zoom lens (1-81102, Navitar) and an objective lens (XLFLUOR4x/340, Olympus). An additional band-pass filter (BPF, FF01-525/15-25, Semrock) and polarizer (LPVISE100-A, Thorlabs) were inserted into the telescope in order to reduce the bandwidth and purify the polarization of the light reflected by the PBS. A scientific CMOS camera (ORCA-Quest qCMOS Camera C15550-20UP) is used to measure the light intensity, as well as to perform single-photon detection. The qCMOS camera has 4096 effective pixels with a size of \(4.6\times 4.6\)\(\upmu\)m. During the computation of a vector-vector dot product, the value of each element of one vector is encoded in the intensity of the light emitted by a pixel on the OLED, and the value of the corresponding element of the other vector in the transmission of an SLM pixel. Via the imaging system, each pixel on the OLED display is aligned to a corresponding pixel on the SLM, where element-wise multiplication takes place through the intensity modulation. The modulated light intensity from pixels in the same vector is then focused onto the detector to sum the element-wise products into the dot-product result. Since the light is incoherent, only non-negative values can be represented. Matrix-vector multiplication is realized by performing a batch of such vector-vector multiplications in parallel, multiplexed either in space or in time. OLED pixels feature a high extinction ratio and high dynamic range in intensity, which are ideal for characterizing the accuracy of vector-vector dot products.
A commercial-grade integrated OLED panel with a high pixel count is readily available at a low cost, which made it possible to encode the very large vectors that were essential for demonstrating vector-vector dot products on our setup. The true darkness of OLED pixels allowed us to achieve high dynamic range in intensity modulation and to reduce noise caused by background light pollution. The intensity of each individual pixel can be controlled independently with 256 (8-bit) control levels. However, since the actual output intensity was not linear with the pixel control level, we calibrated a linear look-up table (LUT) that contains 124 distinct intensity levels (\(\sim\)7 bits, Supplementary Figure 7a). We converted a phase-only SLM into an intensity modulator with a half-wave plate (HWP) and a polarizing beam splitter (PBS). The SLM pixels are made of birefringent liquid crystal layers, whose refractive index can be tuned by applying voltage across them. By controlling the refractive index for extraordinary light, the SLM pixels introduce a phase difference between the extraordinary and ordinary light, whose polarizations are perpendicular to each other. When a PBS and HWP were placed in front of a reflective SLM, the light field passed through the components twice, once on the way towards the SLM and once after being reflected by the SLM (Supplementary Figure 6). One of the functions of the PBS was to separate the output from the input light: the input light (incident to the SLM) was horizontally polarized and transmitted by the PBS, while the output light (reflected from the SLM) was vertically polarized, and therefore reflected by the PBS. The other function of the PBS was to convert the polarization state of the output light to its amplitude: the light modulated by the SLM was in general elliptically polarized, controlled by the phase difference. The amplitude of the light field (and the intensity in this case too) was modulated by selecting only the vertical component of the SLM-modulated light at the output port of the PBS. The HWP was placed with its fast axis rotated 22.5 degrees from the extraordinary axis of the SLM such that the intensity transmission could be tuned from 0 to 100%. In the experiment, each of the SLM pixels can be independently controlled for intensity modulation with a 256 (8-bit) LUT (Supplementary Figure 7b). The maximum extinction ratio of the transmission intensity was measured to be \(\sim\)50. Alternatively, instead of using a phase-modulation SLM, the intensity modulator can be more compactly implemented with a monolithic LCD panel in a transmission geometry. Figure 6: **A photo of the experimental setup.** Input light source (OLED display) and detection parts are not included in this photo. PBS: polarizing beam splitter; HWP: half-wave plate; BPF: band-pass filter; SLM: spatial light modulator. ## Supplementary Note 4 Single-photon detection by a scientific CMOS camera Single-photon detection is at the core of implementing an SPDNN. In our experiment, we use a scientific CMOS camera, the Hamamatsu ORCA-Quest qCMOS Camera, to realize the function of single-photon detectors. CMOS cameras usually cannot detect single photons due to the relatively high readout noise compared to the signals induced by individual photons. The ORCA-Quest qCMOS camera, however, has well-controlled readout noise as low as 0.3 equivalent photoelectrons. This makes it possible to resolve the individual spikes of the photon response in the camera output.
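As a rough numerical illustration of why such low readout noise makes the single-photon peaks resolvable, one can simulate the pixel-value statistics; the gain, bias, and threshold values below are illustrative placeholders consistent with the calibration described next, not measured specifications:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative camera model: Poisson photoelectrons plus Gaussian readout noise
mean_photons = 1.0          # mean detected photoelectrons per pixel per frame (assumed)
read_noise_e = 0.3          # rms readout noise in equivalent photoelectrons
gain = 7.94                 # pixel values per photoelectron (assumed for illustration)
bias = 200.0                # digital offset of the pixel values (assumed for illustration)

n_e = rng.poisson(mean_photons, size=100_000)                     # true photoelectron counts
pixel_values = bias + gain * (n_e + rng.normal(0.0, read_noise_e, n_e.shape))

# Threshold halfway between the 0- and 1-photoelectron peaks to emulate an SPD click
threshold = bias + 0.5 * gain
clicks = pixel_values > threshold
print("click fraction:", clicks.mean(), " vs ideal 1 - exp(-1) =", 1 - np.exp(-1))
```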
An example of the distribution of pixel values from the camera is shown in Supplementary Figure 8a. These pixel values are from a sequence of frames collected with some intensity of input light. The output pixel values have a digital bias of \(\sim 200\), and the analog gain is \(\sim 7.94\) pixel values per photoelectron. We can see the individual spikes corresponding to different numbers of detected photons, with the first peak corresponding to no detected photons. Due to the readout noise of the camera, we still see a near-Gaussian distribution around the peak value of each detected photon number. To perform single-photon detection, a threshold can be set to decide whether a photon (or more photons) has been detected. If the pixel value is larger than the threshold, we record a click; otherwise, there is no click. In this way, although the camera has already completed analog-to-digital conversion (ADC) before thresholding, the qCMOS camera can still emulate the function of a single-photon detector. This camera has an overall quantum efficiency of \(\sim 86\%\) at our working wavelength of 532 nm and a dark count rate of 0.006 photoelectrons per second per pixel at the working temperature of -35\({}^{\circ}\)C with air cooling. Due to the readout noise, the thresholding process may add additional errors because of the overlap of the signal peaks. Supplementary Figure 8b shows the pixel value distribution of dark frames without input signals. We can see that, using the same threshold, there is a small tail of the distribution on the right side of the threshold. This small portion of the pixel values from dark frames would trigger a photon click as well, which further adds to the dark count rate. Similarly, the output pixel values from detected photons also have a small probability to fall in the "no click" region, which makes the effective photon detection efficiency slightly lower. In our experiment, we calibrated the qCMOS camera in the single-photon detection mode and found the effective dark count rate to be around 0.01 photoelectrons per second per pixel and the effective photon detection efficiency to be 68%, on average. Note that there are also variations among different pixels. In the experimental implementation, only a single pixel is used for each vector-vector dot product. We will see that the photon detection efficiency and dark counts do not significantly influence the results, as discussed in Supplementary Note 10. Figure 7: **Look-up tables (LUTs) of the components in the incoherent optical matrix-vector multiplier (optical MVM) setup.****a,** The 7-bit LUT of the OLED display to control the pixel intensity. **b,** The 8-bit LUT of the SLM for intensity modulation. The minimum transmission was measured to be \(\sim 2\%\) of the maximum transmission. ## Supplementary Note 5 Validation of the Optical Vector-Vector Multiplications The major part of the computation in the ONN implementation is the linear operations, and the accuracy of the matrix-vector multiplication is essential for a successful ONN inference. In this section, we calibrate the accuracy of our optical MVM. We use the setup both for single-photon detection and for conventional intensity measurement, which involves a much higher intensity. Focusing the light onto a single pixel of \(\sim 5\)\(\upmu\)m is challenging, which slightly reduces the dot-product precision. However, as we will see in Supplementary Note 10, the SPDNN is very robust to this level of error.
To generate a test dataset representative of general dot products, we randomly generated vector pairs \(\vec{x}\) and \(\vec{w}\) based on natural scene images from the STL10 dataset. Each vector was generated from a single color channel of one or more images patched together, depending on the target vector size (each image of size \(L\times L\) contributes \(N=L^{2}\) elements to the vector). We chose natural images since they are more representative of the inputs in image classification, with globally inhomogeneous and locally smooth features. To adjust the sparsity of the vectors, different thresholds were applied to the image pixel values such that the dot product results cover a wider range of possible values. This was achieved by shifting the original pixel values (floating-point numbers normalized to the range 0-1) in the entire image up or down by a certain amount, unless the value was already saturated at 1 (the maximum) or 0 (dark). For example, a shift of -1 would make the whole image dark. A shift of +0.2 would saturate all the pixel values that were originally larger than 0.8 and would increase all other pixel values by 0.2. This method allowed us to tune the overall intensity of the modulated images without losing the randomness of the distribution. Calibration curves of vector-vector dot product results are shown in Supplementary Figure 9. The results are averaged over a large number of repetitions to remove the photon noise and expose the systematic errors in the optical MVM. The vectors are randomly generated to cover the full range of the light intensity from the minimum to the maximum transmission, as discussed in [26]. The vector size is \(28\times 28\), which is equal to the input size of the first layer in MNIST classification. ## Supplementary Note 6 Validation of the SPD activation function To validate the SPD activation function in the SPDNN implementation, we need to consider not only the precision of the linear operations, but also the non-linear activation function. As the incident light onto the qCMOS camera is attenuated to just a few photons, the photon noise becomes significant and the measurement less accurate. To address this, we first measure a higher light intensity with long exposure times and estimate the exact light intensity at a shorter exposure time using the ratio of exposure times. We then use the shorter exposure time to perform single-photon detection to output a photon click (value 1) or no photon click (value 0). The probability of a photon click is estimated by averaging over a large number of repetitions. The intensity is tuned by adjusting both the exposure time and neutral density (ND) filters that attenuate the light. The expected theoretical curve is also plotted for comparison. The results are shown in Supplementary Figure 10. **Supplementary Figure 10. Validation of the SPD activation function.** The theory curve is the expected function \(f(\lambda)=1-e^{-\lambda}\) with the intensity \(\lambda\) in photon numbers. These experimental data were taken with the Hamamatsu ORCA-Quest qCMOS camera. ## Part III Implementation of SPDNNs ### Supplementary Note 7 Adaptation to experimental limitations The implementation of SPDNNs on an optical MVM can be challenged by experimental restrictions that affect the precision of the network inference. Some of these limitations include the non-negative encoding imposed by the incoherent light source and the limited precision of the setup.
In this section, we describe how these limitations can be addressed to successfully implement SPDNNs on our setup. As discussed in Supplementary Note 3, our incoherent optical MVM has systematic errors in the dot-product results, even in the absence of photon noise. Additionally, the SLM used in the system has a finite extinction ratio of approximately 50 (Supplementary Figure 7b). These limitations present a significant challenge in the implementation of the SPDNNs because, in the models, both the input vectors and weights have many small values close to 0. This is problematic because, within the full range of 0 to 1, having a minimum value of 0.02 instead of 0 has a non-trivial effect on the accuracy of the dot product calculation. These small values are accumulated over many elements, leading to a relatively large contribution compared to the final dot product result. As a result, the performance of the SPDNNs is severely impacted by these limitations. Supplementary Figure 11a demonstrates the results of implementing the neural network models using the real LUTs from our setup. The test accuracy drops significantly, making the experimental implementation a failure. To address this issue, we used error-aware training techniques (as discussed in [27]) to train our models with an understanding of these experimental restrictions. During the error-aware training process, the real LUTs were used in the implementation of the models. The results of this error-aware training are shown in the red curves in Supplementary Figure 11a. It can be seen that, with error-aware training, the SPDNN models are highly robust to changes in the input range, especially with a relatively large number of hidden neurons. **Supplementary Figure 11. Simulation of SPDNN performance with different experimental settings.** **a,** MNIST test accuracy of SPDNN models considering experimental restrictions. The models have a structure of \(784\to 400\to 10\) (\(N=400\), \(K=1\)). **b,** MNIST test accuracy as a function of input light intensity. The intensity was varied by adjusting the range of input values using a constant factor. Both panels show results obtained with an incoherent setup and a single shot of SPD readout per activation (\(K=1\)). Conventional ONN inferences can operate effectively at various light intensity levels, as long as the intensity is sufficiently high to suppress photon noise and maximize detection precision. These systems can in principle integrate arbitrarily high light intensities to enhance detection precision. However, in the optical implementation of SPDNN inferences, the SPD activation function relies on the precise number of photons detected. As a result, controlling the operating intensity in the setup becomes crucial to ensure accurate quantization of the detected optical energy. Calibrating the intensity to the appropriate level for the SPD activation function presents a challenge, especially considering the significant inherent noise in intensity measurements at low photon counts. Despite these challenges, our simulation results demonstrate the robust performance of SPDNNs even with variations in input intensities. We systematically varied the input intensity across a range from 0.1 to 100 times the original expected intensity used during training. Supplementary Figure 11b illustrates that the model's performance remains stable within a wide range of intensities.
The test accuracy remains nearly consistent, even when the input energy deviates significantly from the original training intensity. This observed stability highlights the resilience of SPDNNs to variations in input intensity levels. It further suggests that these SPDNN models can be successfully implemented with lower photon budgets, which is promising for practical applications where minimizing optical energy usage is desirable.
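A minimal sketch of how such non-idealities can be included in an error-aware forward pass, assuming a simple transmission-floor model for the finite extinction ratio (the function and parameter names are our own, not the original training code):

```python
import torch

def error_aware_preactivation(a_prev, W, t_min=0.02, intensity_scale=1.0):
    """Pre-activations of an incoherent layer including simple setup non-idealities.

    t_min:           minimum SLM transmission (an extinction ratio of ~50 gives ~0.02).
    intensity_scale: global factor modelling a mis-calibrated operating intensity.
    """
    W_eff = t_min + (1.0 - t_min) * W.clamp(0.0, 1.0)   # transmission floor of the SLM
    return intensity_scale * (a_prev @ W_eff.T)         # photons arriving at the detector
```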
**Supplementary Figure 12. An example of weight values to be displayed on the OLED display during the experiment.** This model has \(N=100\) hidden neurons, and we display the weight matrix \(W^{(1)}\) of the first layer (with dimensions \(100\times 784\)). Each block represents a row vector in \(W^{(1)}\) containing 784 elements. These row vectors are rearranged to form a 2D block with dimensions \(28\times 28\), matching the original shape of the MNIST input images. The 100 rows in \(W^{(1)}\), corresponding to the 100 hidden neurons in the neural network, are arranged in a \(10\times 10\) grid to be visualized. The weight values have been normalized to a range of 0 to 1 to fit the intensity range of the OLED display. The average value of each block is indicated at the top. The color map used in the plot has been selected to emulate the actual color on the OLED display, as only green pixels are utilized (\(\sim 532\) nm), thereby presenting what could be observed on the OLED display in the experimental setup. **Supplementary Figure 13. Test accuracy of individual images with 1 SPD measurement per activation (\(K=1\)).** The MLP-SPDNN models with a varying number of hidden neurons \(N\) from 50 to 400 (panels **a**-**e**) were evaluated using a single shot per SPD activation (\(K=1\)). Test accuracy was evaluated for each individual image, and each data point represents the average accuracy obtained from 30 inferences. We have not plotted error bars for experimental data, but the variance is directly related to the data shown: since the outcome of each inference is a Bernoulli random variable with success probability \(p\), the variance is \(p(1-p)\). The accuracy values in the legends are averaged over all test images. The simulation results used the same SPDNN models as in the experiment and considered the experimental restrictions. The error bars were calculated by repetitions of the whole process (average accuracy of 30 inferences). **Supplementary Figure 14.
Test accuracy of individual images with 2 SPD measurements per activation (\(K=2\)).** The MLP-SPDNN models with a varying number of hidden neurons \(N\) from 50 to 400 (panels **a**-**e**) were evaluated using 2 shots per SPD activation (\(K=2\)). Test accuracy was evaluated for each individual image, and each data point represents the average accuracy obtained from 15 inferences. We have not plotted error bars for experimental data, but the variance is directly related to the data shown: since the outcome of each inference is a Bernoulli random variable with success probability \(p\), the variance is \(p(1-p)\). The accuracy values in the legends are averaged over all test images. The simulation results used the same SPDNN models as in the experiment and considered the experimental restrictions. The error bars were calculated by repetitions of the whole process (average accuracy of 15 inferences). **Supplementary Figure 15. Test accuracy of individual images with 3 SPD measurements per activation (\(K=3\)).** The MLP-SPDNN models with a varying number of hidden neurons \(N\) from 50 to 400 (panels **a**-**e**) were evaluated using 3 shots per SPD activation (\(K=3\)). Test accuracy was evaluated for each individual image, and each data point represents the average accuracy obtained from 10 inferences. We have not plotted error bars for experimental data, but the variance is directly related to the data shown: since the outcome of each inference is a Bernoulli random variable with success probability \(p\), the variance is \(p(1-p)\). The accuracy values in the legends are averaged over all test images. The simulation results used the same SPDNN models as in the experiment and considered the experimental restrictions. The error bars were calculated by repetitions of the whole process (average accuracy of 10 inferences). In total, 30 repetitions were performed for each test image. For each repetition, if the prediction made was accurate, it was recorded as a 1; otherwise, it was recorded as a 0. To visualize the distribution of the output accuracy, we then calculated and plotted the mean values and standard deviations of the test accuracy based on the 30 repetitions. We conducted simulations of the same inference process on a digital computer using the same models and input images. To ensure a closer simulation of reality, we also incorporated realistic experimental restrictions, such as the limited extinction ratio of the SLM, the dynamic range and precision of the LUTs of both the SLM and the OLED display, and the systematic errors in the optical MVM. Similarly, we examined the results of the inferences with \(K=2\) (\(K=3\)) shots per activation, which are illustrated in Supplementary Figure 14 (15). In this setup, every 2 (3) frames are averaged to compute a neuron activation, and we repeated this process 15 (10) times to obtain the final results. By comparing the simulation results with the experimental results obtained from the collected SPD activation values, we aimed to validate the performance of the latter. The comparison revealed that, for the majority of input images, the predictions are highly resilient to the inherent stochasticity in the model. Interestingly, the results are not as unpredictable as one might expect, as a closer examination shows that most of the errors stem from a limited number of specific input images (see Supplementary Figures 13-15).
The close correspondence between the experimental and simulated results for these specific "problematic" input images further validates the reliability of our experimental implementation. Although the experimental results are slightly inferior to the simulation results, the distribution of accuracy per input image is highly comparable. In particular, input images that exhibit high sensitivity to the model's stochasticity tend to result in larger deviations in the experimental results, while input images that are robust to the model stochasticity exhibit high accuracy both in simulations and in experiments. These results provide strong evidence of the reliability of the experimental implementation and demonstrate the robustness and noise-resilience of SPDNN implementations. To further understand the characteristics of the stochastic neural-network inference, we examined the output vectors of each input test image. As depicted in Supplementary Figure 16, the 30 output vectors from different repetitions of each input image are plotted together to demonstrate the stochasticity in the neural network. These output vectors were computed from the experimentally measured SPD activations and a digitally implemented output layer, with \(N=400\) hidden neurons and \(K=1\) shot of SPD measurement per activation (Supplementary Figure 13e). No additional operations were performed after the linear operation of the output layer (see Algorithm 2). Each of the 10 values in the output vector corresponds to one of the classes in MNIST digit classification, ranging from 0 to 9, as indicated at the bottom. The curves of the 30 output vectors were plotted with 10% transparency to show the distribution density. As shown in Supplementary Figures 13-15, most of the test images have very high accuracy and are predicted correctly by the SPDNN with high certainty, such as image 0 of digit "7" (depicted in the upper left in Supplementary Figure 16). Despite the stochastic distribution of the output values among the 30 repetitions, the value of class "7" remains consistently higher than the other elements, resulting in a 100% test accuracy for this image (see Figure 1a in the main text). We also examined the "problematic" images, such as image 8 of digit "5" (lower left), which is predicted to be digit "6" nearly half of the time. This misclassification is not surprising to human observers, as the image shares features with both digits "5" and "6". Interestingly, the output values for class 8 in this case are relatively high but not the highest, which also aligns with human intuition. A similar phenomenon can be found for the other "problematic" images as well, indicating that the model has indeed learned meaningful features from the dataset. These findings confirm that stochastic neural networks can perform reliable deterministic classification tasks, and that the inherent stochasticity in the model does not compromise its ability to make accurate predictions. **Supplementary Figure 16. Visualization of the output vectors of the SPDNN with given input images.** In this figure, the SPDNN model has \(N=400\) hidden neurons and \(K=1\) shot of SPD measurement per activation (Supplementary Figure 13e). The prediction of a single inference of a particular test image is stochastic and is either correct or incorrect. For each test image, we performed 30 inferences and report the average accuracy.
We have not plotted error bars, but the variance is directly related to the data shown: since the outcome of each inference is a Bernoulli random variable with success probability \(p\), the variance is \(p(1-p)\). The output vectors from the 30 repetitions of inference with the corresponding fixed image are plotted together, with a 10% transparency on the curves to show the density. These output vectors are computed from the experimentally collected SPD activations by performing the output layer digitally on a computer (Supplementary Table 5). ## Supplementary Note 9 Full-optical implementation of the entire SPDNN models In this section, we showcase a full-optical implementation of a neural network by implementing the last linear layer optically as well, using the SPD activation values obtained from the inference of the first layer. This provides a comprehensive illustration of the feasibility of optical implementation of the entire network. It is important to note that, in conventional binarized neural networks, the last layer is usually implemented using full precision, as demonstrated in previous studies such as [28, 29, 30, 31]. Our results demonstrate that SPDNNs can be implemented entirely using optics with remarkably low energy requirements. This capability holds promise for further advancements, especially with the integration of coherent optical computing platforms, which will be discussed later. Similar to the first layer, we use the same setup to perform the optical matrix-vector multiplication. The difference is that we no longer need to perform single-photon detection, which requires keeping the light intensity at a few photons per detection. In fact, the inference of the last linear layer can be implemented just as in conventional ONNs, where a sufficiently high number of photons is accumulated to reach a high SNR for each detection. The collected SPD activation values, as described in Supplementary Note 8, are used as inputs to the last linear layer. In the experimental implementation, we choose the data from the model with \(N=400\) hidden neurons and \(K=5\) shots per activation. For the 30 frames of one-shot binary SPD activations, every 5 frames are averaged to obtain 6 independent repetitions of the inference. The input activation values to be displayed on the SLM are shown in Supplementary Figure 17. The possible values for the 5-shot activations are 0, 0.2, 0.4, 0.6, 0.8, and 1. If the linear operation were performed in full precision on a computer, the mean test accuracy would be approximately \(99.2\%\). To perform the linear operation with real-valued weight elements on our incoherent setup, we divide the weight elements into positive and negative parts. We perform the operation separately for each part, and finally obtain the output value by subtracting the results with negative weights from those with positive weights (a minimal sketch of this decomposition is given after Supplementary Figure 17 below). The two sets of weights to be projected onto the OLED display are shown in Supplementary Figure 18, where the ten blocks of weights correspond to the ten output nodes. This approach at least doubles the photon budget required for the last layer and has the potential to be optimized for greater energy efficiency. However, even with these non-optimized settings, our results demonstrate that the optical energy budget is already several orders of magnitude lower than that of state-of-the-art ONN implementations. **Supplementary Table 6.
Optical energy consumption in SPDNN inference with varying photon budgets in the optical implementation of the output layer.** The first column displays the exposure time of the camera, which determines the number of detected photons. The average photons per detection for both positive (pos.) and negative (neg.) outputs are calculated from the 6000 dot products derived from 100 input images, 6 repetitions in the first layer inference, and 10 output nodes. The total photons in the output layer are determined by averaging 600 inferences of the last layer, each computing 10 output values. The total detected photons in a full inference are the sum of the photons detected in both layers. The average photons per multiplication is calculated by dividing the total number of detected photons by the total number of multiplications. Standard deviations are calculated based on 30 repetitions of the last layer detection. The total optical energy of a full inference, along with the photon numbers, is displayed in the fifth and sixth columns, with standard deviations omitted for simplicity. These results include the \(\sim 1043.7\) photons used in the first layer. The last column shows the test accuracy of the inferences at each photon budget. In the implementation, we adjust the exposure time of the camera to control the optical energy per detection. In order to perform the inference on the 100 input images and 10 output nodes, along with 6 repetitions of the activation values and 2 sets of weights, we need to perform a total of \(100\times 6\times 10\times 2=12000\) vector-vector dot products, each with a size of 400. Each vector-vector dot product detection is repeated 100 times. The results are presented in Supplementary Table 6. The photons per detection of either the positive or the negative output are each averaged over \(100\times 6\times 10=6000\) dot products. The total photons detected in the last layer per inference are averaged over the 100 input images and 6 repetitions, totaling \(100\times 6=600\) inferences. The standard deviation of the photon numbers is calculated based on the 100 repeated detections for each dot product. The total detected photons in a full inference is the sum of those in the last layer and the first layer. The average value of the binary activations collected for the \(N=400\) model is \(\sim\)0.52186, resulting in a total of \(0.52186\times 400\times 5\approx 1043.7\) detected photons per inference in the first layer, with 5 shots per activation. This number is then combined with the total detected photons in the last layer to obtain the overall photon count for a full inference. We can see that the photon budget can be reduced fivefold if only one shot per activation is used. In a full inference with \(N=400\) hidden neurons and \(K=5\) shots per activation, the total number of vector-vector products in the first layer is \(400\times 5=2000\) and that in the last layer is 10 for the 10 output nodes. With dot products of size 784 in the first layer and 400 in the last layer, the total number of multiplications in one inference process is equal to 1,572,000 (\(2000\times 784+10\times 400\)). To calculate the number of detected photons per multiplication, we divide the total number of detected photons in a full inference by the total number of multiplications. The prediction of a given inference is made by directly evaluating the output values of each of the 10 output nodes. The output value is calculated as the difference between the positive and negative output intensities. The label of the node with the highest output value is then determined to be the predicted label. **Supplementary Figure 17. Visualization of activation values on the SLM during the last layer experiment.** This figure displays the activations obtained from the data collected for the model of \(N=400\) hidden neurons and \(K=5\) shots per activation. The possible values for the 5-shot activations are 0, 0.2, 0.4, 0.6, 0.8, and 1. The activations of size 400 are rearranged into a \(20\times 20\) shape, which corresponds to their physical layout on the SLM. Panels **a** to **i** display the activations of test images with indices 0, 1, 2, 25, 50, 75, 97, 98, and 99, respectively, each with 6 repetitions of inference. The average value of the activations in each block is indicated at the top. The overall average activation value of the 100 test images is \(\sim 0.5219\).
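The positive/negative weight decomposition described above can be sketched as follows (the function name and the NumPy emulation are our own; on the setup, the two terms correspond to two separate optical measurements):

```python
import numpy as np

def signed_linear_incoherent(a, W):
    """Emulate a real-valued linear layer on a non-negative (incoherent) optical MVM.

    The weights are split into non-negative positive and negative parts,
    each part is applied separately, and the two detected results are subtracted.
    """
    W_pos = np.clip(W, 0.0, None)      # positive part, displayed as one set of weights
    W_neg = np.clip(-W, 0.0, None)     # magnitude of the negative part
    return a @ W_pos.T - a @ W_neg.T   # difference of the two detected intensities
```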
Visualization of activation values on the SLM during the last layer experiment.** This figure displays the activations obtained from the data collected for the model of \(N=400\) hidden neurons and \(K=5\) shots per activation. The possible values for the 5-shot activations are 0, 0.2, 0.4, 0.6, 0.8, and 1. The activations of size 400 are rearranged into a \(20\times 20\) shape, which corresponds to their physical layout on the SLM. Panels **a** to **i** display the activations of test images with indices 0, 1, 2, 25, 50, 75, 97, 98, and 99, respectively, each with 6 repetitions of inference. The average value of the activations in each block is indicated at the top. The overall average activation value of the 100 test images is \(\sim 0.5219\). negative output intensity. The label of the node with the highest output value is then determined to be the predicted label. The test accuracy on the 100 test images is presented with its mean and standard deviation in the final column of Supplementary Table 6. The standard deviation is determined by considering both the 6 repetitions of the first layer's inference and the 100 repetitions of detections in the last layer. To visualize the impact of photon noise on accuracy in ONN inferences with a limited photon budget, the data collected from the last layer inference is depicted in Supplementary Figure 19. In each panel, 6000 data points are plotted for either positive or negative output, considering the 100 input images, 6 repetitions in the first layer inference, and 10 output nodes. The ground truth dot product values are computed with high-precision operations on a computer. Both the raw camera pixel values and the corresponding photon count are shown on the vertical axes. As the number of detected photons per detection increases, the detected values become less noisy, resulting in a test accuracy that is closer to the ground truth of 99.2% (Supplementary Table 5). Similar to conventional optical neural networks, the decrease in accuracy is primarily due to shot noise. In addition, we performed the output layer optically for other configurations as well. The results are presented in Supplementary Figure 20. The activation values collected in experiments with other choices of the number of hidden neurons \(N\) and shots of SPD readouts \(K\) are used as the input for the output layer. If the output layer is implemented with full numerical precision, the test accuracies are shown in Supplementary Table 5. These accuracies are the upper bound for the full-optical implementation in the presence of noise in the optical implementation. For these configurations of numbers of hidden neurons (\(N\)) and shots of SPD measurements per activation (\(K\)), one inference through the \(784\to N\) hidden layer involves \(N\times K\) SPD measurements to compute the activation vector in the hidden layer of size \(N\). The detected number of photons for the SPD activation computation in the hidden layer of each configuration is denoted in the corresponding panel in Supplementary Figure 20. The total number of detected photons per inference is the sum of this number and the total number of photons detected in the \(N\to 10\) output layer, similar to the procedure we discussed above for the configuration of \(N=400\) and \(K=5\). Similar to the plot in Figure 3d in the main text, the test accuracies increase with the detected optical energy in **Supplementary Figure 19. 
Calibration of collected experimental data of the last layer inference.** This figure shows the raw data of "high-SNR" optical measurement on the qCMOS camera with various exposure times, each depicted in a separate panel. For each exposure time, one output value was obtained by measuring the output from both positive and negative weights. Each plot includes 6,000 data points, representing 100 test images, 6 repetitions in the hidden layer activation, and 10 output nodes. The ground truth values were computed using full-precision operations on a digital computer. Both the raw camera pixel values and the corresponding detected photon numbers are displayed on the y-axis, with the average detected photon numbers for the 6,000 data points noted in each plot. a similar trend to that of the \(N=400\), \(K=5\) configuration we discussed in detail above. We can also see that with a smaller number of neurons \(N\), the model seems to be more noise-resilient for a similar number of photons in the output layer implementation. The models with smaller \(N\) and \(K\) do suffer from a lower noise-free accuracy due to the smaller network size and higher stochasticity, as shown in Supplementary Table 5; the final test accuracy is a combination of these two factors. Figure 20: **Experimental results of full-optical implementation with different SPDNN configurations.** In addition to Figure 3d in the main text, this figure shows the results obtained with different numbers of hidden neurons (\(N\)) and shots of SPD measurements per activation (\(K\)). Each model uses experimentally collected activation values as input for the optical implementation of the output layer. The number of detected photons in the first \(784\to N\) layer to compute the SPD activations in each configuration is denoted in the corresponding plot. The noise-free test accuracies with a full-precision output layer are shown in Supplementary Table 5. The number of detected photons in the \(N\to 10\) output layer is varied to control the noise in the optical implementation, which is reflected in the resultant test accuracy. The total number of detected photons per inference is the sum of the photon budgets in the two layers. ## Part IV Discussion ## Supplementary Note 10 Robustness tests of SPDNNs The first errors to check are those induced by the single-photon detectors themselves. The two key parameters to consider when choosing commercial SPDs are photon detection efficiency and dark count rate. Photon detection efficiency refers to the fraction of incident light that can be detected by the SPD. Although low photon detection efficiency is a common issue in many photon experiments, it does not add extra noise to our SPDNN models. This is because any attenuation of the light still follows a Poisson distribution and cannot be noisier than a single-photon detection. Hence, a low photon detection efficiency only adds to the overall transmission loss in the setup, and since the input light power is usually abundant, it does not affect the performance much. On the other hand, dark counts, or false clicks, could pose a greater challenge in experiments with SPDs. False clicks are hard to distinguish from real signals, and the output of the detection is binary. The dark count rate of a functional SPD is typically between \(10^{-5}\) and \(10^{-2}\) false clicks per signal, depending on the experimental configuration. 
In some extreme circumstances, such as when the exposure time is very long or when it is hard to remove ambient light, the dark count rate could be as high as one false click in tens of detections, ruining the results of the experiment. However, our SPDNN models are resilient to high dark count rates. As shown in Supplementary Figure 21a, even with a false click in fewer than 10 measurements, we still obtain relatively good accuracy. The common range of \(<10^{-2}\) barely affects the performance of the SPDNNs. As introduced in Supplementary Note 4, the dark count rate with our SPD setting is 0.01 per second per pixel. Given the exposure time of milliseconds, the effect of dark counts is negligible in the experimental implementation. In summary, the robustness of SPDNN models to noise obviates the need for selecting specialized SPDs for experimental realization. Cost-effective SPDs can be employed for implementing SPDNNs with high performance. Furthermore, considering the significant power consumption of cooling systems for state-of-the-art SPDs, relaxing the dark count requirement can greatly reduce the power consumption of the detection system. The precision of linear operations is a crucial factor in neural network inferences. As discussed in Supplementary Note 5, the accuracy of vector-vector multiplication may not be optimal when using a single-pixel camera for single-photon detection. To assess the effect of errors in dot product calculations on the performance, we conducted a simulation test by adding different levels of random noise to the dot product results in the first layer, which serve as the pre-activations to the SPD activation function. The results, shown in Supplementary Figure 21b, indicate that SPDNNs are robust to errors in linear operations, even with up to 20% relative noise. This robustness ensures the reliability of the experimental implementation. ## Supplementary Note 11 Noise Resilience Compared to Conventional Models Two key features set our SPD activation function apart from conventional neural networks: the quantization of activation values and the stochastic activation process. Both of these processes occur naturally through the detection of single photons. The intrinsic quantization of energy and the detection process result in a nonlinear response to the input light intensity, eliminating the need for additional nonlinear operations in the neural network. This nonlinearity is evident in the higher MNIST classification test accuracy of SPDNNs compared to linear models. Additionally, the intrinsic photon noise in the activation function makes the output values stochastic. With more averaging, the stochasticity is reduced, resulting in a more precise output, as seen in the implementation of SPD activations in the fully-connected layers. This may suggest that the noise is unwanted in neural network inference. However, stochastic inference is inevitable in many real-world tasks with a physical device, and our stochastic models demonstrate high noise resilience, yielding reliable outputs despite this stochasticity. To evaluate the noise resilience of our SPDNNs against conventional continuous-variable models, we conducted experiments to compare the test accuracy of the models under varying levels of photon noise. We adopted quantization-aware training (QAT) as a popular noise-aware training method, which involves quantizing the weights during training to make the model more noise-resilient; a minimal sketch of this comparison procedure is given below. 
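The comparison procedure is straightforward to emulate numerically. As a minimal, illustrative sketch (not the training code used for the reported results), the snippet below quantizes the weights of a \(784\to 400\to 10\) MLP to a given bit precision and injects Poisson shot noise into the hidden-layer pre-activations during inference. The uniform quantizer, the mapping from pre-activations to mean photon numbers, and the random placeholder weights are all assumptions made purely for illustration.

```python
import numpy as np

def quantize_weights(w, bits):
    # Uniform quantizer used as a stand-in for a QAT quantizer (illustrative only).
    levels = 2 ** bits - 1
    scale = np.abs(w).max() / (levels / 2)
    return np.round(w / scale) * scale

def noisy_inference(x, w1, w2, photons_per_neuron, rng):
    # Forward pass with shot noise added to the hidden-layer pre-activations,
    # mimicking detection of the hidden layer at a limited photon budget.
    z = np.clip(x @ w1, 0.0, None)                        # non-negative intensities
    lam = photons_per_neuron * z / (z.mean() + 1e-12)     # assumed mapping to mean photon counts
    z_detected = rng.poisson(lam) / photons_per_neuron    # shot-noise-corrupted activations
    return z_detected @ w2                                # output layer assumed noise-free here

rng = np.random.default_rng(0)
w1 = rng.normal(0.0, 784 ** -0.5, (784, 400))             # placeholder "trained" weights
w2 = rng.normal(0.0, 400 ** -0.5, (400, 10))
w1_q = quantize_weights(w1, bits=3)                       # e.g., a 3-bit QAT-style first layer
logits = noisy_inference(rng.random(784), w1_q, w2, photons_per_neuron=1.0, rng=rng)
print(logits.argmax())
```

Sweeping the bit precision and the photon budget in such a simulation exposes the same qualitative trade-off discussed below: harsher quantization buys some noise resilience at the cost of noise-free accuracy.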
We trained deterministic QAT models with the same multi-layer perceptron (MLP) structure of \(784\to 400\to 10\) and quantized the weight precision to a specific number of bits. We then compared the MNIST test accuracy of these models to SPDNNs with the same level of photon noise added during the neural network inference of the hidden layer. For the real-valued QAT models that are compared to the coherent SPDNNs, we chose to use the ReLU activation functions. The QAT models adopted a deterministic quantization function and quantized the weights to the corresponding precision. During inferences, we performed computations with full precision, with the photon noise added to the pre-activation values of the hidden neurons. Supplementary Figure 22a shows that the ReLU models exhibit high noise resilience, and harsh quantization does not significantly enhance the noise resilience but harms the overall precision. In fact, decreasing the quantization levels leads to decreased model performance at this photon noise level. The accuracy almost converges at a precision of 5 bits or higher. For the non-negative QAT models that are compared to the incoherent SPDNNs, the non-negativity of the weights renders ReLU activation functions less effective. Hence, we use the Sigmoid activation function, more rigorously, the positive half of it, to train the QAT models. However, the models are not as noise-resilient as with real-number operations, and stronger quantization is required to enhance the model robustness. As the simulation results show, the performance of models of precision 3 bits or more almost converges. It is worth noting that, despite having over 98% test accuracy without photon noise, the performance of these models with 3-bit precision or more is worse under such noise levels. Decreasing the quantized precision is a tradeoff between noise resilience and overall accuracy. We observed that the 2-bit QAT performs the best over other precisions. These results showed that all the QAT models are inferior to SPDNNs in terms of accuracy under the same or lower photon budget. This finding indicates that SPDNNs are more effective in achieving high accuracy in photon-starved environments. Our results suggest that natural quantization of optical energy enhances noise resilience in neural networks, and that stochasticity could aid in searching for more accurate and noise-resilient models. However, we do not claim that the SPD activation function is the best way to train a noisy neural network, and we are open to exploring other noise-aware training methods that could further improve resilience. Our findings demonstrate that with appropriate training that takes into account the stochastic and quantized nature of optical energy in the realistic physical computing system, ONNs can achieve high performance even at very high noise levels, which was not previously possible. What makes it more intriguing about our approach is that it exploits the natural single-photon detection process. ## Supplementary Note 12 Distribution of expectation values for SPD activations In this study, we explored the use of highly stochastic SPDNN models to achieve high performance in deterministic classification tasks. At first glance, this may seem counter-intuitive, as deterministic classification typically requires stable and reliable outputs, while stochastic models introduce inherent uncertainty. 
However, a closer examination of the characteristics of the activation values in SPDNN inferences provides a more intuitive understanding of how this approach can achieve such high accuracy. In Supplementary Figure 23a, we present the distribution of expectation values for hidden neuron activations. This distribution is obtained using a single shot of SPD readout (\(K=1\)). Since the activations are binary (either 0 or 1), the expectation value represents the probability of the activation being 1. We constructed this histogram by considering the inferences for all input images in the test set and all hidden neurons' activation values, so that the distribution is averaged over many different samples to show the overall picture of the general behavior of the network inference. For example, a layer with 400 hidden neurons and 10,000 test input images would yield \(400\times 10,000=4\times 10^{6}\) expectation values included in the histogram. We utilized an optimized SPDNN model with an MLP structure of \(784\to 400\to 400\to 10\) to generate this histogram, and we also found that this distribution is consistent across models with varying numbers of hidden neurons or layers, as well as coherent or incoherent SPD detection schemes. Interestingly, we observed that the majority of neuron activations exhibit more deterministic expectation values rather than pure randomness. While some models trained with experimental limitations cannot reach absolute zero values, the peak at zero value shifts to a less sharp bump close to zero, still distributing towards either end rather than the middle of value of 0.5. In Bernoulli sampling, an expectation value of 0.5 signifies that the probability of being 0 or 1 is equivalent, indicating that there is no useful information in the process, and the entropy is at its maximum. Noisy channels with such characteristics cannot carry valuable information for neural network inference. Consequently, during the training process, the model should strive to learn from the training set and update the neural network weights accordingly to capture the essential features. This process involves storing information in the trained model, which can be reflected by decreasing the entropy of each stochastic binary neuron. In Supplementary Figure 23b, we observe that as the model undergoes more training epochs, the expectation value distribution of activations becomes more concentrated towards 0 or 1. This indicates that the model retains more information and generates more reliable outputs. However, it is important to note that while the entropy of each individual neuron decreases, at the network level, the average activation still tends to be around 0.5 photons when considering all the neurons, denoting maximum entropy. This suggests that the neural network is effectively utilizing its capacity to extract information using all its neurons by increasing the overall network entropy. In fact, a network with all neurons having the same expectation value (entropy of 0) would not be able to learn any meaningful features. In summary, while SPDNNs are inherently stochastic, the distribution of expectation values for hidden neuron activations leans towards deterministic outcomes, allowing the model to effectively learn features and achieve high accuracy in deterministic classification tasks. The training process shapes the probabilistic distribution of the neurons and allocates different neurons close to either 0 or 1 to learn the patterns of input images and output reliable inferences. 
Remarkably, the implementation of this allocation is exceptionally efficient in optical energy, as each activation only involves a photon click.
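For readers who want to reproduce this behavior numerically, the following sketch illustrates a stochastic binary SPD activation and the resulting distribution of expectation values. It assumes the click probability follows the Poisson no-click statistics, \(p=1-e^{-\lambda}\), with \(\lambda\) the mean photon number at the detector, and it uses randomly drawn pre-activations in place of a trained model, so the printed numbers are purely illustrative; in the trained models discussed above, the distribution of expectation values concentrates near 0 and 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def spd_activation(mean_photons, shots, rng):
    # Each shot is a binary click drawn with probability 1 - exp(-lambda),
    # i.e., the probability of detecting at least one photon; the K-shot
    # activation is the average of the binary readouts.
    p_click = 1.0 - np.exp(-np.clip(mean_photons, 0.0, None))
    clicks = rng.random((shots,) + np.shape(mean_photons)) < p_click
    return clicks.mean(axis=0)

# Histogram of click probabilities (expectation values) for the assumed pre-activations.
pre_act = rng.normal(0.0, 1.5, size=(10000, 400))
p = 1.0 - np.exp(-np.clip(pre_act, 0.0, None))
hist, _ = np.histogram(p, bins=20, range=(0.0, 1.0))
print(hist / p.size)

print(spd_activation(np.array([0.1, 0.7, 3.0]), shots=1, rng=rng))   # single-shot binary readouts
print(spd_activation(np.array([0.1, 0.7, 3.0]), shots=5, rng=rng))   # 5-shot averaged activations
```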
2308.05384
Enhancing Deep Reinforcement Learning: A Tutorial on Generative Diffusion Models in Network Optimization
Generative Diffusion Models (GDMs) have emerged as a transformative force in the realm of Generative Artificial Intelligence (GenAI), demonstrating their versatility and efficacy across various applications. The ability to model complex data distributions and generate high-quality samples has made GDMs particularly effective in tasks such as image generation and reinforcement learning. Furthermore, their iterative nature, which involves a series of noise addition and denoising steps, is a powerful and unique approach to learning and generating data. This paper serves as a comprehensive tutorial on applying GDMs in network optimization tasks. We delve into the strengths of GDMs, emphasizing their wide applicability across various domains, such as vision, text, and audio generation. We detail how GDMs can be effectively harnessed to solve complex optimization problems inherent in networks. The paper first provides a basic background of GDMs and their applications in network optimization. This is followed by a series of case studies, showcasing the integration of GDMs with Deep Reinforcement Learning (DRL), incentive mechanism design, Semantic Communications (SemCom), Internet of Vehicles (IoV) networks, etc. These case studies underscore the practicality and efficacy of GDMs in real-world scenarios, offering insights into network design. We conclude with a discussion on potential future directions for GDM research and applications, providing major insights into how they can continue to shape the future of network optimization.
Hongyang Du, Ruichen Zhang, Yinqiu Liu, Jiacheng Wang, Yijing Lin, Zonghang Li, Dusit Niyato, Jiawen Kang, Zehui Xiong, Shuguang Cui, Bo Ai, Haibo Zhou, Dong In Kim
2023-08-10T07:02:24Z
http://arxiv.org/abs/2308.05384v2
Beyond Deep Reinforcement Learning: A Tutorial on Generative Diffusion Models in Network Optimization ###### Abstract Generative Diffusion Models (GDMs) have emerged as a transformative force in the realm of Generative Artificial Intelligence (GAI), demonstrating their versatility and efficacy across a variety of applications. The ability to model complex data distributions and generate high-quality samples has made GDMs particularly effective in tasks such as image generation and reinforcement learning. Furthermore, their iterative nature, which involves a series of noise addition and denoising steps, is a powerful and unique approach to learning and generating data. This paper serves as a comprehensive tutorial on applying GDMs in network optimization tasks. We delve into the strengths of GDMs, emphasizing their wide applicability across various domains, such as vision, text, and audio generation. We detail how GDMs can be effectively harnessed to solve complex optimization problems inherent in networks. The paper first provides a basic background of GDMs and their applications in network optimization. This is followed by a series of case studies, showcasing the integration of GDMs with Deep Reinforcement Learning (DRL), incentive mechanism design, Semantic Communications (SemCom), Internet of Vehicles (IoV) networks, etc. These case studies underscore the practicality and efficacy of GDMs in real-world scenarios, offering insights into network design. We conclude with a discussion on potential future directions for GDM research and applications, providing major insights into how they can continue to shape the future of network optimization. Diffusion model, deep reinforcement learning, generative AI, AI-generated content, network optimization ## I Introduction ### _Background_ The emergence of Generative Artificial Intelligence (GAI) has marked a significant milestone, offering a transformative potential that extends beyond the traditional boundaries of Artificial Intelligence (AI) [1]. Unlike conventional AI (also so-called discriminative AI) models that focus primarily on analyzing or classifying existing data, GAI can create new data, including text, image, audio, synthetic time-series data, and more [1]. This potential of GAI has far-reaching implications across diverse sectors, from business and science to society at large [2, 3]. For instance, in the business sector, GAI can power customer service bots or generate product designs, thereby maximizing efficiency and boosting competitive advantages [4]. According to Accenture's 2023 Technology Vision report [5], 97% of global executives agree that GAI will revolutionize how AI is used, enabling connections across data types and industries. In the natural science research community, GAI can aid in generating synthetic data for research, e.g., protein sequences for disease prediction models [6], and accelerating the pace of discoveries [3]. Furthermore, GAI can augment human creativity in our society, enabling the creation of new art, music, and literary work, thereby enriching our cultural heritage [7]. GAI is not a singular technique but a collection of various models and methods, each of which is with its unique strengths and applications. 
Each of these models has contributed to the advancement of AI in different ways, forming the backbone of the current GAI landscape, in which major examples include: * **Transformers:** Transformers [8] have revolutionized Natural Language Generation (NLG) tasks, as exemplified by OpenAI's ChatGPT [9]. They excel in applying context, a critical aspect of language understanding, and allow for greater parallelization of computing during training and inference. * **Generative Adversarial Networks (GANs):** GANs [10] have been instrumental in the field of image synthesis. They consist of a generative model and a discriminative model that interact and compete against each other, leading to continuous improvement in performance. * **Variational Autoencoders (VAEs):** VAEs [11] transform input data into a set of parameters in a latent space, which are then used to generate new data that closely aligns with the original distribution. * **Flow-based Generative Models:** Flow-based models [12] use probabilistic flows for data generation. They employ back-propagation for gradient computation, enhancing learning efficiency. Their ability to directly compute the probability density function during generation makes them computationally efficient, especially in mobile edge networks. * **Energy-based Generative Models:** Energy-based models [13] represent data using energy values. They define an energy function and optimize it to minimize the input data's energy value. These models are intuitive, flexible, and capable of capturing dependencies by associating an non-normalized probability scalar with each configuration of observed and latent variables. * **Generative Diffusion Models (GDMs):** Initially proposed in [14], the concept of GDMs drew inspiration from the thermodynamic diffusion process. This thermodynamic correlation not only sets GDMs apart from other generative models but also establishes intriguing associations with score-based models [15] and stochastic differential equations [16], thereby enabling unique avenues for further research and applications. Amidst these techniques, GDMs stand out due to their unique approach to data generation and their ability to model complex data distributions [17]. Recently, the versatility and potency of GDMs have been demonstrated in numerous applications, particularly in AI-generated Content (AIGC) domains. For instance, Stable Diffusion [18], a diffusion-model based image generation application, has amassed over 10 million daily users, showcasing the practical utility and popularity of diffusion models. Furthermore, GDMs have been leveraged in various fields. In Computer Vision (CV), they have been used to generate high-quality images from noise, with models such as Denoising Diffusion Probabilistic Models (DDPM) [19] and Denoising Diffusion Implicit Models (DDIM) [20]. They have also been employed in text generation tasks, enhancing the controllability and coherence of the generated text [21]. In the audio domain, GDMs have been used for tasks like symbolic music generation and text-to-speech conversion [22, 23]. Beyond traditional domains, GDMs have been utilized in graph generation [24, 25, 26], molecular and material generation [27, 28, 29], and in synthesizing tabular data to electrocardiogram signal synthesis [30, 31, 32]. The widespread adoption of GDMs can be attributed to several key advantages over other GAI methods. 
* **High-quality data generation ability.** GDMs employ a forward and reverse diffusion process [33], enabling them to accurately capture complex data distributions and embrace high-quality. This stands in contrast to GANs, which can suffer from mode collapse, and VAEs, which can yield blurry results due to their Gaussian assumption [34]. * **Flexibility.** GDMs are adaptable to various types of data and applications due to their reliance on stochastic differential equations [17]. This flexibility is a significant advantage over Transformer-based models, which, while powerful, are primarily designed for sequence data. * **Simplicity of Implementation.** GDMs' structure, featuring a fixed bottom-up path defined by a diffusion process and a top-down path parameterized by Deep Neural Networks (DNNs), simplifies their implementation [35, 36]. This is a notable advantage over GANs and VAEs, which often require complex architectures and training procedures [37]. ### _Motivations_ The successful applications of GDMs across a diverse range of domains have inspired us to support intelligent network optimization [38, 39]. However, future intelligent networks such as Integrated Sensing and Communications (ISAC) [40], Semantic Communications (SemCom) [41], and Internet of Vehicles (IoV) [42] are characterized by high-dimensional configurations, non-linear relationships, and intricate decision-making processes that are tightly linked with semantics and interpretations [43]. For example, SemCom networks require a deep understanding of semantic information to facilitate efficient and accurate communication [44], and IoV networks involve the interaction of numerous highly-mobile entities with heterogeneous communication capabilities [42, 45]. In all these cases, they exhibit complex dynamics with significant dependencies on prior and current states, as well as the environment, leading to high dimensional and multimodal state distributions [46]. This calls for sophisticated network management models, like GDMs. GDMs in this context are capable of capturing such high-dimensional and complex structures, and effectively dealing with numerous decision-making processes and optimization problems, understanding and capturing the nuances of the complex trade-offs involved in the operation and optimization of intelligent networks [47]. GDMs have been increasingly recognized for their potential in optimization tasks, particularly in enhancing _decision making_ and _Deep Reinforcement Learning (DRL)._ In decision-making scenarios, GDMs have been adapted to represent complex dynamics, incorporating additional conditioning variables such as constraints, and demonstrating scalability over long time horizons [48, 49]. In the realm of DRL, GDMs have been employed as policy representations, capturing multi-modal action distributions and improving performance in offline RL tasks [50]. GDMs have also been used to introduce diffusion-based generative behavior models, demonstrating superior performance compared to conventional offline RL methods [51]. These initial explorations highlight the versatility and potential of GDMs in complex optimization tasks, setting the stage for more detailed discussions in Section II and Section III. Despite the promising advantages of GDMs in network optimization, we acknowledge that GDMs also come with their own set of challenges, e.g., the computational complexity introduced by the iterative nature of GDMs. 
This complexity could potentially pose difficulties in large-scale DRL tasks, such as those involving the optimization of extensive communication networks [52]. Additionally, GDMs might face challenges when dealing with data distributions that are characterized by high levels of noise or irregularities. This is particularly relevant in the context of real-world network traffic data [33]. Nevertheless, these challenges should not overshadow the potential of GDMs in network optimization. Instead, the challenges should be viewed as areas of opportunity for further research and development. The refinement and adaptation of traditional GDMs to address these issues effectively could pave the way for significant advancements in the field of network optimization. ### _Contributions_ The continuous advancements of GDMs in addressing optimization problems have inspired researchers to use them in specific design challenges within intelligent networks, such as optimizing incentive mechanisms [38] and selecting service providers [64]. Despite these developments, we believe that the full potential of GDMs has yet to be explored, in which GDMs are expected to revolutionize the paradigm of AI-driven intelligent network management. While there are several surveys on GDMs, as shown in Table I, these works either provide a broad overview or focus on a specific area such as CV or Natural Language Processing (NLP), leaving a gap in the comprehensive understanding of GDMs in the context of network optimization. This tutorial bridges this gap by providing an extensive introduction to GDMs, emphasizing their applications in network optimization challenges. Crucially, we present specific case studies drawn from several significant intelligent network scenarios. The contributions of our tutorial are listed below: * We provide a comprehensive tutorial on the applications of GDMs, particularly in intelligent network optimization. This tutorial aims to offer a broad understanding of the origin, development, and major strength of GDMs, and to detail how the GDMs can be effectively implemented to solve complex optimization problems in the dynamic wireless environment. 
* We provide several case studies regarding the integra \begin{table} \begin{tabular}{p{56.9pt}|p{142.3pt}|p{142.3pt}} \hline \hline **Survey** & **Contributions** & **Emphasis** \\ \hline [17] & Discuss generative diffusion models and their applications in CV, speech, bioinformatics, and NLP & General review of GAMs \\ \hline [33] & Provide an overview of diffusion models research, categorized into efficient sampling, improved likelihood estimation, and handling data with special structures & \\ \hline [53] & Discuss use of diffusion models for medical image analysis and various applications & \\ \hline [54] & Discuss diffusion models in image generation from text and recent advancements in GAI models & Focus on the applications of GDMs on CV \\ \hline [55] & Survey efficient diffusion models for vision and their applications in CV tasks & \\ \hline [34] & Survey diffusion models in vision and their applications in various vision tasks & \\ \hline [56] & Provide an overview of diffusion models in NLP, discussing text generation, translation, and summarization & Focus on NLP \\ \hline [57] & Discuss diffusion models in non-autoregressive text generation for improving text generation efficiency & Focus on non-autoregressive text generation \\ \hline [58] & Analyze the applications of diffusion models for time series data crucial in finance, weather, and healthcare & Focus on time series data \\ \hline [59] & Discuss knowledge distillation in diffusion models, transferring complex knowledge to simplify models & Focuses on knowledge distillation \\ \hline [60] & Focuse on using diffusion models for generating molecules, proteins, and materials in drug discovery and materials science & Focus on several specific scientific applications \\ \hline [61] & Discuss audio diffusion models in speech synthesis and recent advancements in GAI models & Focus on audio and speech \\ \hline [62] & Provide an overview of diffusion models in bioinformatics, including key concepts and various applications & Focus on the applications in bioinformatics \\ \hline [63] & Present a survey on generative diffusion models on graphs, providing a state-of-the-art overview & Focus on the applications of GDMs on graphs \\ \hline \hline \end{tabular} \end{table} TABLE I: Overview of survey papers on GDMs: Supplementary reading for our tutorial tion of GDMs with future intelligent network scenarios, e.g., _DRL_, _Incentive Mechanism Design_, _ISAC_, _SemCom_, and _IoV Networks_. These case studies demonstrate the practicality and efficacy of GDMs in emerging network technologies. * We discuss potential directions for GDM research and applications, providing insights into how GDMs can evolve and continue to influence future intelligent network design. As shown in Fig. 1, the rest of the tutorial is structured as follows: We first study the applications of GDM in network optimization in Section II. The role of GDM in DRL is then explored in Section III. In Section IV, we present GDM's role in incentive mechanism design. SemCom enhanced by GDMs are discussed in Section V, and Section VI focuses on applying GDMs in IoV Networks. In Section VII, we discuss the applications of GDM to several other network issues, i.e., channel estimation, error correction coding, and channel denoising. Furthermore, we outline potential research directions in Section VIII. Section IX concludes this tutorial. 
## II Network Optimization via Generative Diffusion Models This section presents an overview of GDMs, their applications, principles, and extensions to facilitate network optimization. A step-by-step tutorial is provided, using a simple, yet representative, sum rate maximization problem as a demonstrative example, to illustrate the applications of GDMs in wireless environments. ### _Applications of GDMs_ The distinct capability of GDMs, combined with their theoretical elegance and the recent advancements in their training and sampling efficiency, has led to the widespread adoption of GDMs across a spectrum of domains. Specifically, #### Ii-A1 Computer Vision The evolution and applications of GDMs in the field of vision have been marked by a series of interconnected advancements. Beginning with the DDPM [19] and DDIM [20], the field has shifted towards dynamic and flexible frameworks that can generate high-quality images from noise. Building on this foundation, the reflected diffusion models [65] integrated constraints into the generative process, leading to more faithful samples and expanding the potential applications of GDMs. This concept of flexibility and adaptability was further extended by the DiffCollage model [66], which demonstrated the ability of GDMs to generate large-scale content in parallel. The latent flow diffusion models [67] then bridged the gap between image and video generation, synthesizing optical flow sequences [68] in the latent space to create videos with realistic spatial details and temporal motion. Furthermore, the video diffusion models [69] marked a significant milestone in generative modeling research, showcasing the potential of GDMs in generating temporally coherent, high-fidelity videos. #### Ii-A2 Text Unlike Transformer-based models such as GPT, which focus primarily on sequence data, GDMs offer a unique advantage in their ability to model complex data distributions, making them more versatile for various tasks. Integrating language models into the diffusion process by Diffusion-LM [21] has enhanced the controllability and coherence of the generated text, demonstrating the adaptability of GDMs to different text generation tasks. This adaptability was further evidenced by the latent diffusion energy-based model [70], which introduced an energy-based model into the diffusion process, thereby improving the interpretability and quality of text modeling. The versatility of GDMs was showcased by the DiffuSeq [71] and DiffuSum [72] models, which applied GDMs to diverse tasks such as sequence-to-sequence generation and extractive summarization. Lastly, the innovative approach of the DiffuSR model [73] in formulating text editing as a diffusion process further expanded the scope of GDM applications, demonstrating their potential in complex text editing tasks. #### Ii-A3 Audio GDMs have been leveraged to create a transformative shift in audio generation. The symbolic music generation model [22] demonstrated the potential of GDMs in generating complex symbolic music. The ProDiff model [23] further showcases the ability of GDMs to generate high-quality text-to-speech outputs rapidly. The MM-Diffusion model [74] further extended the versatility of GDMs, demonstrating their capability to generate joint audio and video content. The DiffWave model [75] and the DiffSinger model [76] enhanced audio synthesis by generating high-fidelity waveforms and expressive singing voices, respectively. 
Moreover, the CRASH model [77] used the GDM in raw audio synthesis, demonstrating GDMs' ability to generate high-resolution percussive sounds, offering a more flexible generation capability compared to traditional methods. #### Ii-A4 Others GDMs were also applied widely to other application domains. In graph generation, GDMs have been utilized to generate intricate graph structures, as demonstrated by the works in [24, 25, 26]. These models have effectively harnessed the power of GDMs to handle discrete data types, showcasing their adaptability in representing complex relationships and structures inherent in graph data. This adaptability extends to the field of molecular and material generation, where models like MolDiff [27], DiffDock-PP [28], and MDM [29] demonstrated how GDMs can be utilized to generate intricate molecular structures, such as proteins in the field of molecular biology and material science. GDMs have shown great potential in handling heterogeneous features and synthesizing diverse tabular and time-series data types. The models presented in CoDi [30], TabDDPM [31], and DiffECG [32] have demonstrated the versatility of GDMs in tasks ranging from synthesizing tabular data to ECG signal synthesis. The exceptional performance and broad applicability of GDMs can be attributed to their unique design. This has garnered significant attention, particularly in generating diverse high-resolution images, with large-scale models such as GLIDE [78], DALLE-2 [79], Imagen [80], and the fully open-source Stable Diffusion [18] being developed by leading organizations like OpenAI, Nvidia, and Google. Given the widespread use and success of GDMs in the CV domain, we introduce the principles and theory of GDMs in this context in Section II-B. This is a foundation for our subsequent discussion on how GDMs can be extended to facilitate network optimization in Section II-C. ### _Principles of the GDMs_ Unlike GANs that generate samples from a latent vector in a single forward pass through the Generator network [81], GDMs utilize a denoising network to iteratively converge to Fig. 1: Structure of Our Tutorial: We initiate our discussion with the foundational knowledge of GDM and the motivation behind their applications in network optimization. This is followed by exploring GDM’s wide applications and fundamental principles and a comprehensive tutorial outlining the steps for using GDM in network optimization. In the context of intelligent networks, we study the impact of GDM on algorithms, e.g., DRL, and its implications for key scenarios, e.g., incentive mechanism design, SemCom, IoV networks, channel estimation, error correction coding, and channel denoising. We conclude our tutorial by discussing potential future research directions and summarizing the key contributions. an approximation of a real sample \(x\sim q(x)\) over a series of estimation steps [82], where \(q(x)\) is the data distribution. This unique design has made GDMs emerge as a powerful tool in the field of generative modeling [54]. As shown in Fig. 2, the underlying principle of GDMs is simple. With an initial input, GDMs progressively introduce Gaussian noise through a series of steps, i.e., the forward diffusion process, which generates the targets for the denoising neural network. Subsequently, the neural network is trained to reverse the noise process and recover the data and content [19]. The reverse diffusion process allows for the generation of new data. 
We explain the detailed process with an example of image generation for illustration. #### II-B1 Forward Diffusion Process The forward diffusion process can be modeled as a Markov chain with \(T\) steps. Let \(\mathbf{x}_{0}\) denote the original image. At each step \(t\) in the Markov chain, Gaussian noise with variance \(\beta_{t}\) is added to \(\mathbf{x}_{t-1}\) to yield \(\mathbf{x}_{t}\) with the distribution \(q\left(\mathbf{x}_{t}|\mathbf{x}_{t-1}\right)\). This process is represented as \[q\left(\left.\mathbf{x}_{t}\right|\mathbf{x}_{t-1}\right)=\mathcal{N}\left(\mathbf{x}_{t};\boldsymbol{\mu}_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\boldsymbol{\Sigma}_{t}=\beta_{t}\mathbf{I}\right), \tag{1}\] where \(q\left(\left.\mathbf{x}_{t}\right|\mathbf{x}_{t-1}\right)\) is a normal distribution, characterized by the mean \(\boldsymbol{\mu}_{t}\) and the covariance \(\boldsymbol{\Sigma}_{t}\), and \(\mathbf{I}\) is the identity matrix indicating that each dimension has the same variance \(\beta_{t}\). Then, from the original data \(\mathbf{x}_{0}\) to the final \(\mathbf{x}_{T}\), the posterior probability can be expressed in a tractable form as \[q\left(\left.\mathbf{x}_{1:T}\right|\mathbf{x}_{0}\right)=\prod_{t=1}^{T}q\left(\left.\mathbf{x}_{t}\right|\mathbf{x}_{t-1}\right). \tag{2}\] However, according to (2), sampling \(\mathbf{x}_{t}\) (\(t\in\{0,1,\ldots,T\}\)) necessitates \(t\) sequential calculations, which becomes computationally intensive when \(t\) is large. To avoid this, we define \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{j=1}^{t}\alpha_{j}\), enabling us to express \(\mathbf{x}_{t}\) as \[\mathbf{x}_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}+\sqrt{\beta_{t}}\boldsymbol{\epsilon}_{t-1}=\sqrt{\alpha_{t}\alpha_{t-1}}\mathbf{x}_{t-2}+\sqrt{1-\alpha_{t}\alpha_{t-1}}\boldsymbol{\epsilon}_{t-2}=\cdots=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\boldsymbol{\epsilon}_{0}, \tag{3}\] where \(\boldsymbol{\epsilon}_{0},\ldots,\boldsymbol{\epsilon}_{t-1}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\) and the intermediate noise terms are merged into a single Gaussian at each step. Consequently, \(\mathbf{x}_{t}\) can be obtained using the following distribution: \[\mathbf{x}_{t}\sim q\left(\mathbf{x}_{t}\mid\mathbf{x}_{0}\right)=\mathcal{N}\left(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},\left(1-\bar{\alpha}_{t}\right)\mathbf{I}\right). \tag{4}\] Given that \(\beta_{t}\) is a hyperparameter, we can precompute \(\alpha_{t}\) and \(\bar{\alpha}_{t}\) for all timesteps. This allows us to sample noise at any timestep \(t\) and obtain \(\mathbf{x}_{t}\) directly, i.e., we can sample the latent variable \(\mathbf{x}_{t}\) at any arbitrary timestep. The variance parameter \(\beta_{t}\) can be fixed to a constant or chosen under a \(\beta_{t}\)-schedule [19] over \(T\) timesteps. #### II-B2 Reverse Diffusion Process When \(T\) is large, \(\mathbf{x}_{T}\) approximates an isotropic Gaussian distribution [19]. If we can learn the reverse distribution \(q\left(\left.\mathbf{x}_{t-1}\right|\mathbf{x}_{t}\right)\), we can sample \(\mathbf{x}_{T}\) from \(\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\), execute the reverse process, and obtain a sample from \(q\left(\mathbf{x}_{0}\right)\). However, statistical estimates of \(q\left(\left.\mathbf{x}_{t-1}\right|\mathbf{x}_{t}\right)\) require computations involving the data distribution, which is practically intractable. 
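Before specifying the learned reverse model, the following minimal NumPy sketch illustrates the closed-form forward sampling in (3)-(4), which is what makes training efficient: any \(\mathbf{x}_{t}\) can be drawn directly from \(\mathbf{x}_{0}\) without simulating the whole chain. The linear \(\beta_{t}\)-schedule and the toy 16-dimensional sample are illustrative choices, not prescriptions.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # an illustrative linear beta-schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # \bar{alpha}_t = prod_j alpha_j, precomputed once

def q_sample(x0, t, rng):
    # Closed-form forward sampling of x_t from x_0, cf. (3)-(4):
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(16)            # stand-in for a data sample
for t in (0, 99, 999):
    xt, _ = q_sample(x0, t, rng)
    print(t, round(float(np.std(xt)), 3))   # x_t approaches an isotropic Gaussian as t -> T
```

The pair \((\mathbf{x}_{t},\boldsymbol{\epsilon})\) returned by such a sampler is exactly what the denoising network introduced next is trained to invert.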
Therefore, our aim is to estimate \(q\left(\mathbf{x}_{t-1}\right|\mathbf{x}_{t})\) with a parameterized model \(p_{\theta}\) as follows: \[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}\left(\mathbf{x}_{t-1}; \boldsymbol{\mu}_{\theta}(\mathbf{x}_{t},t),\boldsymbol{\Sigma}_{\theta}( \mathbf{x}_{t},t)\right). \tag{5}\] Subsequently, we can obtain the trajectory from \(\mathbf{x}_{T}\) to \(\mathbf{x}_{0}\) as \[p_{\theta}\left(\mathbf{x}_{0:T}\right)=p_{\theta}\left(\mathbf{x}_{T}\right) \prod_{t=1}^{T}p_{\theta}\left(\mathbf{x}_{t-1}\mid\mathbf{x}_{t}\right). \tag{6}\] By conditioning the model on timestep \(t\), it can learn to predict the Gaussian parameters, i.e., the mean \(\boldsymbol{\mu}_{\theta}(\mathbf{x}_{t},t)\) and the covariance matrix \(\boldsymbol{\Sigma}_{\theta}(\mathbf{x}_{t},t)\) for each timestep. The training of the GDM involves an optimization of the negative log-likelihood of the training data. According to [19], adding the condition information, e.g., \(\boldsymbol{g}\), in the denoising process, \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t},\boldsymbol{g})\) can be modeled as a noise prediction model with the covariance matrix fixed as \[\boldsymbol{\Sigma}_{\theta}\left(\mathbf{x}_{t},\mathbf{g},\mathbf{t}\right)= \beta_{t}\mathbf{I}, \tag{7}\] and the mean is constructed as \[\boldsymbol{\mu}_{\theta}\left(\boldsymbol{x}_{t},\boldsymbol{g},t\right)= \frac{1}{\sqrt{\bar{\alpha}_{t}}}\left(\boldsymbol{x}_{t}-\frac{\beta_{t}}{ \sqrt{1-\bar{\alpha}_{t}}}\boldsymbol{\epsilon}_{\theta}\left(\boldsymbol{x}_{t}, \boldsymbol{g},t\right)\right). \tag{8}\] Fig. 2: Illustration of the forward and reverse diffusion processes. The forward diffusion process involves the addition of noise, typically Gaussian noise, to the existing training data. Subsequently, the reverse diffusion process, also referred to as “denoising,” aims to recover the original data from the noise-added version. We first sample \(\mathbf{x}^{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and then from the reverse diffusion chain parameterized by \(\theta\) as \[\mathbf{x}_{t-1}\mid\mathbf{x}_{t}=\frac{\mathbf{x}_{t}}{\sqrt{\alpha_{t}}}-\frac{\beta_{t} }{\sqrt{\alpha_{t}\left(1-\bar{\alpha}_{t}\right)}}\mathbf{\epsilon}_{\theta}\left( \mathbf{x}_{t},\mathbf{g},t\right)+\sqrt{\beta_{t}}\mathbf{\epsilon}, \tag{9}\] where \(\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(t=1,\dots,T\). Furthermore, the authors in [19] introduced simplifications to the original loss function by disregarding a specific weighting term: \[\mathcal{L}_{t}=\mathbb{E}_{\mathbf{x}_{0},t,\mathbf{\epsilon}}\left[\left\|\mathbf{ \epsilon}-\mathbf{\epsilon}_{\theta}\left(\sqrt{\bar{a}_{t}}\mathbf{x}_{0}+\sqrt{1 -\bar{a}_{t}}\mathbf{\epsilon},t\right)\right\|^{2}\right]. \tag{10}\] This effectively shows that instead of predicting the mean of the distribution, the model predicts the noise \(\mathbf{\epsilon}\) at each timestep \(t\). ### _Motivations of using GDMs in Network Optimization_ The motivation of using GDMs in network optimization, particularly in intelligent networks, stems from their unique characteristics and capabilities. _First, GDMs possess a robust generative capability, which is suitable in dynamic network optimization with or without expert datasets, i.e., labeled optimal solutions._ Unlike conventional applications of GDMs, such as in image or text domains, network optimization does not typically have access to large datasets suitable for offline training [83]. 
The lack of an expert dataset presents challenges when applying GDMs to facilitate network optimization. Fortunately, in addressing this challenge, the reverse diffusion process of GDMs, involving a denoising network, can be effectively utilized. Specifically, instead of relying on the standard loss function as illustrated in (10), the denoising network can be trained to maximize the _value_ of the final generated solution output [38]. Here, the _value_ is related to the optimization objective function, which is designed to either maximize or minimize a specific outcome based on the given application. In network optimization, the _value_ can be a performance metric like sum rate, latency, or energy efficiency. This training process can be achieved by executing the generated solution within the network environment, followed by network parameter adjustments based on the received feedback. Thus, the obstacle presented by the absence of a suitable dataset transmutes into an opportunity for dynamic online learning and optimization [64]. Notably, when expert datasets are accessible, adjustments can be made to minimize the loss between the expert and the generated solutions. These adjustments enable the GDM to continuously refine its output based on loss, leading to progressively more optimized network solutions with higher objective values. _Second, GDMs can easily incorporate conditioning information into the denoising process._ In intelligent networks, optimal solutions, e.g., power allocation schemes and incentive mechanism designs, typically change with the dynamic wireless environment [84]. Therefore, the wireless environment information, such as path loss and small-scale fading channel parameters, can be used as the conditioning information in the denoising process [85]. After sufficient training, the denoising network should be able to generate the optimal solution given any dynamic wireless environment condition [38]. This ability to adapt to dynamic environments and generate optimal solutions is valuable in wireless network optimization. _Furthermore, the relationship between GDMs and DRL in intelligent network optimization is not just the substitution or competition but rather a compliment and/or supplement of each other that allows for mutual enhancement and learning._ Specifically, training the denoising network in GDMs, which is guided by feedback from the external environment, embodies a reinforcement learning paradigm [38]. Thus, techniques such as Q-networks can facilitate more effective training of the denoising network [86]. Moreover, GDMs can be leveraged to enhance the performance of various DRL algorithms [64]. For instance, the robust generative capabilities of GDMs can be harnessed in imitation learning, thereby augmenting the performance of offline DRL [35, 52]. In addition, GDMs can substitute the action network in DRL algorithms, where actions are treated as the output of the denoising process [50]. ### _Tutorial with an Example_ In this part, we representatively formulate an optimization problem in a wireless network and show a step-by-step tutorial to solve it by using GDMs. We compare the solutions generated by GDMs with the traditional DRL methods, such as Soft Actor-Critic (SAC) [87] and Proximal Policy Optimization (PPO) [88]. The code is available at [https://github.com/HongyangDu/GDMOPT](https://github.com/HongyangDu/GDMOPT). 
#### Iii-D1 Problem Formulation Consider a wireless communication network where a base station with total power \(P_{T}\) serves a set of users over multiple orthogonal channels. The objective is to maximize the sum rate of all channels by optimally allocating power among the channels. Let \(g_{n}\) denote the channel gain for the \(n^{\text{th}}\) channel and \(p_{n}\) denote the power allocated to that channel. The sum rate of all \(M\) orthogonal channels is given by the sum of their individual rates [89], which can be expressed as \[\sum\limits_{m=1}^{M}\text{log}_{2}\left(1+g_{m}p_{m}/N_{0}\right), \tag{11}\] where \(N_{0}\) is the noise level that can be set as \(1\) without loss of generality for the analysis. The problem is to find the power allocation scheme \(\{p_{1},\dots,p_{M}\}\) that maximizes the capacity \(C\) under the power budget and the non-negativity constraints as \[\max\limits_{\{p_{1},\dots,p_{M}\}} C=\sum\limits_{m=1}^{M}\text{log}_{2}\left(1+g_{m}p_{m}\right) \tag{12}\] \[\text{s.t.}, \left\{\begin{array}{l}P_{m}\geq 0,\forall m,\\ \sum\limits_{m=1}^{M}p_{m}\leq P_{T}.\end{array}\right.\] The dynamic nature of the wireless environment presents a significant challenge, as the values of the channel gains, denoted as \(\{g_{1},\dots,g_{M}\}\), can fluctuate within a range. This variability is illustrated in Fig. 3, which depicts the sum rate values for different power allocation schemes and channel gains when \(M=3\). It is evident that changes in channel conditions can significantly impact the optimal power allocation scheme. While various solutions have been proposed to address this issue, the following problems exist: * Traditional mathematical solutions depend on accurate channel estimation [90]. However, even with precise estimation, the resources and energy consumed by pilot signals and the algorithm to perform the estimation are considerable and also introduce latency. * Heuristic algorithms [91] can achieve near-optimal solutions; but they involve multiple iterations in the solution process, leading to increased energy consumption and additional delays. * The water-filling algorithm [92], which can optimally solve this problem and provide an upper bound on the achievable sum rate, involves an iterative process to determine the correct number of channels for power allocation. The iteration stems from the fact that power is added to channels until the marginal increase in capacity is equal across all channels, or the power budget is consumed [92]. This process can be computationally intensive, particularly when dealing with a large number of channels. Given these challenges, AI-based solutions have been proposed. For example, despite requiring a certain overhead, DRL allows for direct model deployment once training is complete. The delay in inferring an optimal solution for a given wireless environment is minimal. However, as the performance of the DRL algorithms continues to improve, the model design becomes more complex. For example, the SAC [87], a state-of-the-art DRL method, involves five networks, including two Q-networks and their target networks and a policy network, which increases the complexity of the model. As discussed in Section II-C, GDMs are characterized by their simplicity, directness, and robustness. Furthermore, GDMs can easily incorporate the wireless environment as the condition in the denoising process, leveraging their strong generative capacity to generate optimal solutions. 
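For reference, the objective in (12) and the water-filling baseline mentioned above are simple to express in code. The following sketch, with \(N_{0}=1\) and a bisection search for the water level (both illustrative implementation choices), can serve as the upper bound against which generated allocations are compared.

```python
import numpy as np

def sum_rate(p, g):
    # Objective in (12): total capacity over M orthogonal channels (N0 = 1).
    return np.sum(np.log2(1.0 + g * p))

def water_filling(g, p_total, iters=100):
    # Classical water-filling: p_m = max(0, mu - 1/g_m), with the water level mu
    # found by bisection so that the power budget is met with equality.
    lo, hi = 0.0, p_total + 1.0 / g.min()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

rng = np.random.default_rng(0)
g = rng.uniform(0.5, 2.5, size=3)            # channel gains drawn as in Algorithm 2
p_wf = water_filling(g, p_total=10.0)        # upper-bound allocation
p_eq = np.full(3, 10.0 / 3.0)                # naive equal allocation for comparison
print(p_wf, sum_rate(p_wf, g), sum_rate(p_eq, g))
```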
#### Ii-D2 GDM as the Solution

Next, we demonstrate how to solve the problem using GDMs. The GDM is trained to generate a power allocation scheme that maximizes the sum rate. The steps to solve the problem using diffusion models are as follows:

1. **Solution Space Definition:** The first step in wireless network optimization is to define the solution space. The AI-generated solution represents the optimal power allocation scheme that maximizes the sum rate. This scheme is generated by the GDM through a series of denoising steps applied to Gaussian noise. As shown in Algorithm 1 line 2, in the considered problem, the dimension of the solution vector should be the number of channels, i.e., \(M\). The generated scheme is then performed in the wireless environment, as shown in Algorithm 1 lines 3-7.
2. **Objective Function Definition:** The next step is to define the objective function to be maximized or minimized. In this context, the training objective of the diffusion model is to maximize the sum rate achieved by the GDM-generated power allocation, as shown in Algorithm 1 line 8. The upper bound can be provided by the water-filling algorithm [92].
3. **Dynamic Environment Definition:** In wireless networks, the channel conditions can vary among different users, resulting in a dynamic and diverse environment. To accommodate this variability, the GDM is designed to generate the optimal power allocation scheme corresponding to a given set of channel conditions. Thus, we consider a general case in which each channel gain, e.g., \(g_{m}\) \((m=1,\ldots,M)\), changes randomly over a range, e.g., \((0.5,2.5)\), as shown in Algorithm 2. In practice, the uniform distribution can be replaced with a specific channel fading distribution, e.g., Rayleigh, Rician, or Nakagami-\(m\), and the upper and lower bounds of the channel gains can be chosen accordingly.
4. **Training and Inference:** The conditional GDM is proposed to generate the power allocation scheme. This approach diverges from neural network or DRL techniques that directly optimize model parameters to output a solution. Instead, the GDM generates the optimal power allocation scheme by denoising an initial noise distribution. The power allocation scheme designed in the given environment is denoted as \(\mathbf{p}\). The GDM that maps environment states to power allocation schemes is referred to as the _solution generation network_, i.e., \(\mathbf{\epsilon}_{\theta}\left(\mathbf{p}\left|\mathbf{g}\right.\right)\) with neural network parameters \(\theta\). The objective of \(\mathbf{\epsilon}_{\theta}\left(\mathbf{p}\left|\mathbf{g}\right.\right)\) is to output a deterministic power allocation scheme that maximizes the expected objective function value, as defined in Algorithm 1. The _solution generation network_ is represented via the reverse process of a conditional GDM, according to (9). The end sample of the reverse chain is the final chosen power allocation scheme.

Fig. 4: GDM training approaches with and without an expert dataset. **Part A** illustrates the GDM training scenario when an expert database is accessible. The process learns from the GDM applications in the image domain: the optimal solution is retrieved from the expert database upon observing an environmental condition, followed by the GDM learning to replicate this optimal solution through forward diffusion and reverse denoising processes. **Part B** presents the scenario where no expert database exists. In this case, the GDM, with the assistance of a jointly trained solution evaluation network, learns to generate the optimal solution for a given environmental condition by actively exploring the unknown environment.

According to whether the expert dataset, i.e., the optimal \(\mathbf{p}\) under given \(\mathbf{g}\), is available, there are two ways to train \(\mathbf{\epsilon_{\theta}}\):

1. _When there is no expert dataset:_ A _solution evaluation network_ \(Q_{v}\) is introduced, which can assign a Q-value that represents the expected objective function to an environment-power allocation pair, i.e., \(\mathbf{g}\) and \(\mathbf{p}\). Here, the \(Q_{v}\) network acts as a guidance tool for the training of the GDM network, i.e., the _solution generation network_ \(\mathbf{\epsilon}_{\theta}\). The optimal \(\mathbf{\epsilon}_{\theta}\) is the network that generates the power allocation scheme \(\mathbf{p}_{0}\) according to (9) with the highest expected Q-value. Thus, the optimal _solution generation network_ can be computed by
\[\operatorname*{arg\,min}_{\mathbf{\epsilon}_{\theta}}\mathcal{L}_{\mathbf{\epsilon}}(\theta)=-\mathbb{E}_{\mathbf{p}_{0}\sim\mathbf{\epsilon}_{\theta}}\left[Q_{v}\left(\mathbf{g},\mathbf{p}_{0}\right)\right]. \tag{13}\]
The training goal of the _solution evaluation network_ \(Q_{v}\) is to minimize the difference between the Q-value predicted by the current network and the real Q-value. Thus, the optimization of \(Q_{v}\) is
\[\operatorname*{arg\,min}_{Q_{v}}\mathcal{L}_{Q}(v)=\mathbb{E}_{\mathbf{p}_{0}\sim\pi_{\theta}}\left[\left\|r(\mathbf{g},\mathbf{p}_{0})-Q_{v}\left(\mathbf{g},\mathbf{p}_{0}\right)\right\|^{2}\right], \tag{14}\]
where \(r\) denotes the objective function value when the generated power allocation scheme \(\mathbf{p}_{0}\) is performed in the environment \(\mathbf{g}\). The network structure for training is shown in Part B of Fig. 4, and the overall algorithm of GDM in sum rate maximization is given in Algorithm 3.
2. _When an expert database is available:_ In some instances of intelligent network optimization, a dataset of expert solutions might already be available. For example, by applying traditional optimization schemes over time, it is feasible to obtain the optimal power allocation schemes corresponding to various channel conditions. Utilizing this expert dataset, the loss function can be designed to minimize the gap between the generated power allocation and the expert schemes as follows:
\[\operatorname*{arg\,min}_{\pi_{\theta}}\mathcal{L}(\theta)=\mathbb{E}_{\mathbf{p}_{0}\sim\pi_{\theta}}\left[\left\|r\left(\mathbf{g},\mathbf{p}_{0}\right)-r_{\exp}\left(\mathbf{g}\right)\right\|^{2}\right], \tag{15}\]
where \(r_{\exp}\left(\mathbf{g}\right)\) is the objective function value achieved by the expert scheme under the given \(\mathbf{g}\).
To achieve efficient training, we can use a similar process to that used for GDMs in the image domain. Let \(\mathbf{x}_{0}\) denote the expert solution, i.e., the expert power allocation scheme. As shown in Part A of Fig. 4, to train the GDM via the forward diffusion and reverse denoising processes, the optimization of the loss function of the GDM network can be expressed as

\[\operatorname*{arg\,min}_{\pi_{\theta}}\mathcal{L}(\theta)=\mathbb{E}\left[\left\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}\left(\sqrt{\bar{a}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{a}_{t}}\mathbf{\epsilon},t,\mathbf{g}\right)\right\|^{2}\right], \tag{16}\]

where \(\mathbf{\epsilon}\) is the added Gaussian noise, \(\sqrt{\bar{a}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{a}_{t}}\mathbf{\epsilon}\) denotes the expert solution after the forward diffusion process, and the network \(\mathbf{\epsilon}_{\theta}\) is trained to predict the added noise accurately from its inputs, which include the disrupted expert solution, the timestep information \(t\), and the environment condition \(\mathbf{g}\). After training, when the channel conditions change again, the GDM network \(\mathbf{\epsilon}_{\theta}\) is capable of efficiently generating the corresponding optimal solution according to (9).

**Remark 1**.: _Algorithm 3 is designed for scenarios where an optimal solution needs to be obtained under specific environmental conditions. However, in intelligent networking, there are many situations where the value of the objective function is not immediately obtained after executing a solution in the environment [93, 94]. A typical example of this is the service provider selection problem, where tasks from users are allocated across various servers, each with a unique computing capability [64, 95, 96]. The total utility of all users, which is designed as the objective function to be maximized, can only be calculated after a long period of the allocation process. As a result, a decision-making process, such as allocating user tasks to desired servers, has to be modeled by forming a Markov chain [97]. In such cases, our proposed Algorithm 3 remains useful with minor adjustments. Specifically, the reward part in Algorithm 3 (lines 7-13) needs to be adjusted to take into account the dynamics of the Markov chain and to include a discount factor in the loss function. More details on how to do this, along with examples, are discussed in Section III._

**Remark 2**.: _In situations where expert strategies are not available for guidance, GDM can leverage a solution evaluation network during the training phase. This is inspired by the Q-network commonly used in DRL [98, 99, 100]. The solution evaluation network estimates the quality of a given solution, e.g., the power allocation scheme in the discussed example, under specific environmental conditions. This quality assessment guides the GDM during its iterative denoising process. Moreover, other advanced techniques from the DRL field can be adopted to make GDM training even more efficient. For example, the double Q-learning technique [101], which aims at reducing over-estimation in Q-learning, can be adopted. This approach maintains two Q-networks, using the smaller Q-value for updates, thus offering a conservative estimate and mitigating over-optimistic solution assessments [101, 102]. Incorporating such methods can augment GDM training, promoting robustness and efficiency._
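To ground Remark 2 and the losses in (13) and (14), the following is a minimal, self-contained PyTorch-style sketch of how a _solution generation network_ and a _solution evaluation network_ could be trained jointly for the sum rate problem. The network sizes, the noise schedule, and the softmax projection onto the power budget are illustrative assumptions, not the exact design of Algorithm 3.

```python
import torch
import torch.nn as nn

T = 9                                            # denoising steps, as in the showcase
betas = torch.linspace(1e-4, 0.2, T)             # illustrative noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class SolutionGenerator(nn.Module):
    """epsilon_theta(p_t, t, g): predicts the noise to remove, conditioned on g."""
    def __init__(self, m_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * m_channels + 1, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, m_channels))
    def forward(self, p_t, t, g):
        t_embed = torch.full_like(p_t[:, :1], float(t) / T)
        return self.net(torch.cat([p_t, g, t_embed], dim=-1))

class SolutionEvaluator(nn.Module):
    """Q_v(g, p): predicted objective value of allocation p under condition g."""
    def __init__(self, m_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * m_channels, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, g, p):
        return self.net(torch.cat([g, p], dim=-1)).squeeze(-1)

def generate(eps_net, g, p_total=10.0):
    """Reverse denoising chain of (9); kept differentiable so (13) can be optimized."""
    p = torch.randn(g.shape[0], g.shape[1])
    for t in reversed(range(T)):
        eps = eps_net(p, t, g)
        p = (p - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            p = p + torch.sqrt(betas[t]) * torch.randn_like(p)
    return torch.softmax(p, dim=-1) * p_total    # simple projection onto the power budget

def objective(g, p, n0=1.0):                     # the sum rate in (12), used as the reward r
    return torch.sum(torch.log2(1.0 + g * p / n0), dim=-1)

M = 3
actor, critic = SolutionGenerator(M), SolutionEvaluator(M)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

for step in range(200):                          # illustrative training loop
    g = torch.rand(64, M) * 2.0 + 0.5            # channel gains drawn from (0.5, 2.5)
    p0 = generate(actor, g)
    r = objective(g, p0)
    loss_q = ((r.detach() - critic(g, p0.detach())) ** 2).mean()   # loss of (14)
    opt_c.zero_grad(); loss_q.backward(); opt_c.step()
    loss_eps = -critic(g, p0).mean()                               # loss of (13)
    opt_a.zero_grad(); loss_eps.backward(); opt_a.step()
```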
#### Ii-D3 Insights

To better understand the proposed GDM method, we implemented Algorithm 3 to solve the optimization problem in (12) and observed the results. We denote the sum rate obtained by performing the power allocation scheme generated by the GDM in the training process as the _test sum rate_ and use the water-filling algorithm [92] to obtain the upper bound, i.e., the _achievable sum rate_. The experimental platform for running our proposed algorithms was built on a generic Ubuntu 20.04 system with an AMD Ryzen Threadripper PRO 3975WX 32-core CPU and an NVIDIA RTX A5000 GPU.

First, we considered a scenario with \(M=3\) channels. The channel gain values were randomly selected from \(0.5\) to \(2.5\). Note that the upper and lower channel gain limits here can be changed depending on the actual channel conditions. The number of denoising steps, denoted by \(T\), was set to 9. We then investigated the impact of different learning rates and \(\beta\) schedulers on the algorithm's performance. Fig. 5 illustrates the gap between the achievable and test sum rates against the training epoch. We can observe that the conventional DRL method, i.e., PPO, exhibits more significant fluctuations and less effective convergence. These challenges stem from the problem's inherent complexity, the environmental variability, or the influence of specific hyperparameters. However, despite these challenges, both GDM methods outperform the PPO method, irrespective of their learning rates. In the first case, the GDM with a learning rate of 0.001 demonstrates the fastest convergence towards zero. This swift learning process underscores the efficiency of the GDM approach. In the second case, the GDM with a learning rate of 0.0005 converges more slowly but still approaches zero. This slower convergence indicates a more gradual learning process, partly because smaller adjustments are made per training iteration. Despite the slower convergence, the model still effectively learns the power allocation strategy, demonstrating the robustness of the GDM approach. This superior performance demonstrates the GDM's ability to capture complex patterns and relationships between observations, leading to more accurate action decisions. This ability is advantageous in network optimization problems requiring high-performance, time-efficient, fast-converging solutions.

Fig. 5: Test reward curves of GDM-aided and DRL-aided optimization methods under different learning rate values, where the number of channels is \(M=3\) and the channel gains vary between \(0.5\) and \(2.5\).

Fig. 6 further shows the robustness of the GDM methods, examining how varying random seeds influence the training performance. The figure delineates three distinct curves, each corresponding to a different random seed. While the random seed is known to significantly sway outcomes in image-related GDM applications such as Stable Diffusion [18], our findings reveal a contrasting scenario. After about 50 timesteps, all three cases stabilize, maintaining a gap to zero (where zero signifies the theoretical upper bound) within a margin of 0.05.

Fig. 6: Test reward curves of GDM-aided optimization methods under different random seed values, where the number of channels is \(M=3\) and the channel gains vary between \(0.5\) and \(2.5\).
This observation shows that, unlike in image-related applications where identical text prompts can yield vastly different images based on the seed, the random seed's impact on performance in this context is minimal. This insight highlights the GDM's resilience against varying initial conditions, suggesting its consistent ability to learn the power allocation scheme and achieve near-optimal performance, especially in similar network optimization problems.

Then, we consider a more complex case in which the number of channels is \(M=5\) and the channel gains vary between \(0.5\) and \(5\). We compare the performance of GDM and DRL algorithms and study the impact of denoising steps. In Fig. 7, we examine the performance of the GDM method compared to two DRL methods, i.e., SAC and PPO. All three methods demonstrate convergence, while the final gap values for GDM and SAC are closer to zero, indicating a better power allocation scheme. In contrast, PPO exhibits larger fluctuations and slower convergence. While the final results of GDM and SAC are similar, GDM converges faster, which is attributed to its ability to capture complex patterns and relationships more efficiently. This faster convergence of GDM is particularly beneficial in scenarios where time efficiency is crucial.

Fig. 7: Test reward curves of GDM-aided and DRL-aided optimization methods, where the number of channels is \(M=5\) and the channel gains vary between \(0.5\) and \(5\).

Furthermore, we study the impact of different denoising steps on the performance of the GDM in Fig. 8. The figure presents three curves, each corresponding to a different number of denoising steps. The first curve, representing \(6\) denoising steps, exhibits the fastest convergence. The second curve, corresponding to \(3\) denoising steps, converges more slowly. This slower convergence could be attributed to insufficient denoising when the number of steps is small, leading to greater uncertainty in the generated power allocation schemes. However, when the number of steps is too large, as in the third curve where the number of denoising steps is \(12\), the convergence is slowest. This could be due to the model losing its ability to explore the environment effectively, as excessive denoising might lead to overfitting the training data. This analysis underscores the importance of carefully selecting the number of denoising steps in the GDM, striking a balance between sufficient denoising and maintaining the GDM's ability to explore the environment.

Fig. 8: Test reward curves of GDM-aided optimization methods under different denoising steps, where the number of channels is \(M=5\) and the channel gains vary between \(0.5\) and \(5\).

Fig. 9 shows the test reward curves for GDM-aided optimization methods, both with and without access to an expert dataset, in a scenario with \(71\) channels, i.e., \(M=71\), and channel gains varying between \(2\) and \(25\). The figure further validates the efficacy of the GDM approaches, irrespective of the availability of the expert dataset. Using an expert dataset in GDM training significantly accelerates the convergence process. However, even without an expert dataset, the GDM approach can independently decrease the gap between the achieved sum rate and the upper bound. Furthermore, two straightforward power allocation schemes, namely average and random allocation, are also shown in Fig. 9 for comparison.
Fig. 9: Test reward curves of GDM-aided optimization methods with and without an expert dataset, where the number of channels is \(71\), i.e., \(M=71\), and the channel gains vary between \(2\) and \(25\).

Average allocation, which evenly distributes power among the channels, outperforms random allocation, which arbitrarily assigns power. However, GDM, with its advanced learning capability, outperforms both strategies.

Fig. 10 visualizes the process of the well-trained GDM generating the power allocation scheme from Gaussian noise. We consider \(71\) channels with a total transmission power of \(12\ \mathrm{W}\), where the specific channel gains of the \(71\) channels randomly vary between \((2,5)\), \((10,15)\), or \((20,25)\). Figs. 10 (a)-(e) show the progressive refinement of the power allocation scheme through the denoising process. Fig. 10 (f) presents the optimal power allocation scheme obtained by the water-filling algorithm [92]. This series of figures demonstrates the capability of GDM to generate near-optimal power allocation schemes through iterative denoising, even when confronted with complex and variable channel conditions. It also highlights the close agreement between the GDM-generated and water-filling algorithm-generated power allocation schemes, emphasizing the effectiveness of GDM in learning and imitating expert solutions. The gap between the sum rate under the power allocation scheme shown in Fig. 10 (e) and the upper bound is \(0.11\ \mathrm{bit/s/Hz}\).

Fig. 10: **Sub-figures (a) to (e) illustrate the process of \(5\)-step denoising of Gaussian noise into transmit power allocation schemes using a well-trained GDM. Here, we consider \(71\) channels with a total transmission power of \(12\ \mathrm{W}\). In these \(71\) channels, the channel gains differ randomly. Some channels fall within the range of \(2\) to \(5\), others between \(10\) and \(15\), and the remaining channels exhibit gains varying from \(20\) to \(25\). We simulate using a set of observations obtained by random sampling. Sub-figure (f) is the optimal power allocation scheme obtained by the water-filling algorithm [92].**

**Lesson Learned:** From the above showcase discussions, we glean several insights into the application of GDMs in network optimization. Firstly, the superior performance of GDMs over traditional DRL methods underscores the transformative potential of GDMs in complex optimization tasks. This is particularly notable in scenarios where rapid convergence and high performance are paramount. Secondly, the learning-related parameters in GDM, such as learning rates and denoising steps, facilitate a novel balance between exploration and exploitation. Notably, the denoising process, acting as a pivotal mechanism in GDMs, introduces a fresh perspective on this classic trade-off in RL, as discussed for Fig. 8. Thirdly, the resilience of GDMs to varying initial conditions and their consistent near-optimal performance, even in the absence of an expert dataset, demonstrate their robustness and adaptability. This robustness is particularly crucial in real-world applications where conditions can be unpredictable and data may be imperfect or incomplete. Lastly, the ability of GDMs to generate near-optimal power allocation schemes that are closely aligned with expert solutions underscores their capacity for sophisticated pattern recognition and imitation. This suggests that GDMs can be used as a powerful tool for learning from and leveraging expert knowledge in complex network optimization tasks.

## III Deep Reinforcement Learning

This section first discusses the DRL algorithms and their applications in network optimization. Then, the integration of GDMs within DRL and a case study on AIGC service provider selection in edge networks are studied.

### _Fundamentals of DRL_

DRL is a powerful approach that combines the strengths of both deep learning and reinforcement learning, enabling the development of algorithms capable of learning to make optimal decisions through interactions with their environment [103, 104]. The DRL framework comprises two main components: the agent and the environment [105].
The agent, a decision-making entity, learns to interact optimally with the environment to maximize a cumulative reward [106]. The environment provides feedback to the agent in the form of rewards based on the actions taken by the agent [107]. This interaction forms the basis of the learning process in DRL. We summarize several representative DRL algorithms as follows:

* **Deep Q-Network (DQN):** DQN uses a deep neural network for approximating the Q-value function, enabling it to handle high-dimensional state spaces. However, it struggles with high-dimensional or continuous action spaces [108].
* **Prioritized DQN:** This variant of DQN prioritizes experiences with high temporal-difference error, leading to faster learning but introducing additional complexity [109].
* **Deep Recurrent Q-Network (DRQN):** DRQN extends DQN with recurrent neural networks for tasks requiring memory of past information, which is, however, more challenging to train [110].
* **PPO:** PPO is a stable policy gradient method that constrains each policy update to remain close to the previous policy, which, however, may require more samples to learn effectively [88, 111].
* **REINFORCE:** REINFORCE directly optimizes the policy function, making it widely applicable but suffering from high variance [112].
* **SAC:** SAC maximizes both the expected return and the policy's entropy, leading to better performance in complex environments at the cost of computational complexity [87].
* **Rainbow:** Rainbow combines six DQN improvements, enhancing performance but increasing implementation complexity [113].

In the context of wireless communications, DRL offers several advantages. First, DRL is adept at handling complex network optimization problems, enabling network controllers to find optimal solutions even without complete and precise network information [114, 103]. This strength is further complemented by DRL's capacity to enable network entities to learn and accumulate knowledge about the communication and networking environment, which facilitates learning optimal policies without knowing the channel model and mobility pattern [115, 103]. Furthermore, DRL supports autonomous decision-making, reducing communication overheads and boosting network security and robustness [64, 116]. Given these advantages, DRL has found extensive applications in network optimization [117]. However, it is important to note that DRL also has its limitations, which may be mitigated by the introduction of GDMs:

* **Sample Inefficiency:** DRL often requires a large number of interactions with the environment to learn effectively, which can be computationally expensive and time-consuming [103]. GDMs, with their strong ability to model complex data distributions, could reduce the number of samples required.
* **Hyperparameter Sensitivity:** The performance of DRL algorithms can be significantly influenced by hyperparameters, demanding meticulous tuning for diverse tasks [118]. GDMs, with their flexible structure and adaptability to various data distributions, could provide a more robust solution. * **Difficulty in Modeling Complex Environments:** DRL algorithms may struggle with environments characterized by complex and high-dimensional state and action spaces. By accurately capturing the underlying data distributions, GDMs could provide a more efficient representation of the environment. * **Instability and Slow Convergence:** DRL algorithms may suffer from instability and slow convergence. The unique structure of GDMs involves a diffusion process, potentially offering a more stable and efficient learning process. ### _Applications of GDM in DRL_ The distinctive characteristics of GDMs have been effectively utilized to enhance DRL. These advantages include high expressiveness, the ability to capture multi-modal action distributions, and the potential to integrate with other RL strategies seamlessly. One notable application of GDMs in DRL is presented in [50], where the authors introduced Diffusion Q-learning (Diffusion-QL). This innovative method utilized a GDM as the policy representation, more specifically, a DDPM [19] based on a Multilayer Perceptron (MLP). The authors incorporated the Q-learning guidance into the reverse diffusion chain, facilitating optimal action selection. Through this integration, they demonstrated the expressiveness of GDMs in capturing multi-modal action distributions and showcased their effectiveness in enhancing behavior cloning and policy improvement processes. As a result, Diffusion-QL surpassed previous methods across several D4RL benchmark tasks [119] for offline RL. Complementarily, the work in [51] improves offline RL further by addressing the limitations of distributional expressivity in policy models. In contrast to the approach in [50], the authors in [51] decoupled the learned policy into a generative behavior model and an action evaluation model. This separation facilitated the introduction of a diffusion-based generative behavior model capable of modeling diverse behaviors such as agent's trajectories. The optimal selection of actions from this behavior model was achieved through importance sampling in concert with an action evaluation model. They also incorporated an in-sample planning technique to mitigate extrapolation error and enhance computational efficiency. The resulting methodology outperformed traditional offline RL methods on D4RL datasets [119] and showed proficiency in learning from heterogeneous datasets. These highlighted studies represent just a subset of the burgeoning body of work on GDMs in DRL. For an extended discussion, Table II reviews various key contributions and their impacts. In summary, the integration of GDMs into DRL, as demonstrated by these representative studies and further summarized in Table II, leverages several key advantages offered by GDMs. The key advantages that GDMs offer to address the disadvantages of DRL as we discussed in Section III-A are listed below: * **Expressiveness:** GDMs are capable of modeling complex data distributions, making them well-suited for representing policies in DRL [126]. For instance, in a dynamic traffic routing scenario, the policy needs to adapt to various traffic conditions, road structures, and vehicle behaviors [127]. GDMs can effectively model such a policy. 
* **Sample Quality:** GDMs are known for generating high-quality samples [128, 23]. In the context of DRL, this translates into the generation of high-quality actions or strategies [129]. For example, in a network resource allocation task, the quality of the generated allocation decisions directly impacts the network performance. GDMs can generate high-quality decisions, leading to improved network performance.
* **Flexibility:** The ability of GDMs to model diverse behaviors is particularly useful in DRL, where the agent needs to adapt to a variety of situations and tasks [130]. In a network management task, for instance, the network may need to adapt to various traffic conditions and user demands. GDMs can model a wide range of behaviors, enabling the network to adapt to these diverse conditions.
* **Planning Capability:** GDMs can be used for planning by iteratively denoising trajectories, providing a novel perspective on the decision-making processes in DRL [52]. For example, a DRL agent could use a GDM to plan the network operations, iteratively refining the plan to optimize the network efficiency [124, 125].

\begin{table} \begin{tabular}{p{56.9pt}|p{113.8pt}|p{113.8pt}} \hline \hline **Paper** & **Key Contributions** & **Results** \\ \hline
[120] & Leverage Language Augmented Diffusion (LAD) models for language-based skills in RL & Achieve an average success rate of 72\% on the CALVIN language robotics benchmark \\ \hline
[50] & Propose Diffusion Q-learning (Diffusion-QL) for offline RL and represent the policy as a GDM & Achieve state-of-the-art performance on the majority of D4RL benchmark tasks \\ \hline
[51] & Decouple policy learning into behavior learning and action evaluation and introduce a generative approach for offline RL & Achieve superior performance on complex tasks such as AntMaze on D4RL \\ \hline
[49] & Develop a diffusion probabilistic model for trajectory optimization and introduce a model directly amenable to trajectory optimization & Demonstrate effectiveness in control settings emphasizing long-horizon decision-making and test-time flexibility \\ \hline
[37] & Introduce Contrastive Energy Prediction (CEP) for learning the exact guidance in diffusion sampling & Demonstrate effectiveness in offline RL and image synthesis, outperforming existing state-of-the-art algorithms on D4RL benchmarks \\ \hline
[35] & Propose a robust version of the Diffusion Implicit Models (DIMs) for better generalization to unseen states in RL & Show the new approach provides more stable policy improvement and outperforms the baseline DIM methods on various complex tasks \\ \hline
[121] & Treat procedure planning as a distribution fitting problem, remove the expensive intermediate supervision and use task labels instead & Achieve state-of-the-art performance on three instructional video datasets across different prediction time horizons without task supervision \\ \hline
[122] & Introduce the Equivariant Diffuser for Generating Interactions (EDGI), an algorithm for MBRL and planning & Improve sample efficiency and generalization in 3D navigation and robotic object manipulation environments \\ \hline
[123] & Propose a general adversarial training framework for multi-agent systems using diffusion learning, enhancing robustness to adversarial attacks & Demonstrate enhanced robustness to adversarial attacks in simulations with FGM and DeepFool perturbations \\ \hline
[52] & Introduce a new imitation learning framework that leverages both conditional and joint probability of the expert distribution, and explore the use of different generative models in the framework & Outperform baselines in various continuous control tasks including navigation, robot arm manipulation, dexterous manipulation, and locomotion \\ \hline
[124] & Introduce a self-evolving method for diffusion-based planners in offline reinforcement learning, demonstrating an ability to improve planning performance for both known and unseen tasks & Outperform the previous state-of-the-art Diffuser by 20.8\% on Maze2D and 7.5\% on MuJoCo locomotion, and show better adaptation to new tasks, e.g., KUKA pick-and-place, by 27.9\% \\ \hline
[125] & Introduce innovations for diffusion models in sequential environments & Accurately model complex action distributions, outperform state-of-the-art methods on a simulated robotic benchmark, and scale to model human gameplay in complex 3D environments \\ \hline
[48] & Apply conditional generative modeling to the problem of sequential decision-making and investigate conditioning on constraints and skills & Outperform existing offline RL approaches and demonstrate the flexible combination of constraints and composition of skills at test time \\ \hline \hline \end{tabular} \end{table}
TABLE II: Extended summary of papers on GDM in DRL

While GDMs offer promising advantages in DRL, they also present certain challenges. The iterative nature of GDMs can lead to increased computational complexity, which could be a hurdle in large-scale DRL tasks such as optimizing city-wide communication networks [52]. Additionally, GDMs may struggle to accurately model certain data distributions, especially those with high noise levels or irregularities. This could pose challenges in DRL tasks involving real-world network traffic data, which may contain strong noise and outliers [33]. While these challenges underline the limitations of GDMs, they also present opportunities for innovative approaches that can effectively harness the benefits of GDMs while mitigating their shortcomings.

Leveraging GDMs within advanced DRL algorithms offers a promising solution to both computational complexity and modeling limitations. An example could be found in combining GDMs with SAC [64], a state-of-the-art DRL method known for its efficient learning and robustness. This combination capitalizes on the strength of GDMs in modeling complex action distributions while utilizing the optimization capabilities of SAC, yielding a hybrid model with the potential for enhanced performance and efficiency in complex network optimization tasks. To illustrate this, we delve into a case study, introducing an innovative combination of GDM and SAC.

### _Case Study: AIGC Service Provider Selection_

#### Iii-C1 System Model

The AIGC service provider selection problem, depicted in Fig. 11 and detailed in [64], can be regarded as an extension of the resource-constrained task assignment problem. This is a well-known challenge in wireless networks where resources are scarce and their efficient utilization is critical to achieving the desired performance [131]. Specifically, we consider a set of sequential tasks and available ASPs, each of which possesses a unique utility function. The objective is to assign users' AIGC tasks to ASPs in a way that maximizes the overall user utility. This user utility is a function of the required computing resource for each task and is related to the AIGC model that performs the task. In addition, we acknowledge that the computing resources of each ASP are limited.

Fig. 11: AIGC service provider selection problem. Following the paradigm of “AIGC-as-a-Service”, various ASPs deploy their AIGC models onto network edge servers. With user requests arriving, an optimal task scheduler should be designed for real-time user task allocation. The goal is to maximize total user QoE, considering the unique capabilities of each AIGC model and the computing resource constraints of edge servers [64].
From a mathematical perspective, the ASP selection problem can be modeled as an integer programming problem, with the decision variables representing the sequence of task assignments to available ASPs. The formulation also incorporates constraints that capture the limitations on available resources. Failing to meet these constraints can have severe consequences, such as the crash of an ASP and the subsequent termination and restart of its running tasks.

#### Iii-C2 GDM-based Optimal Decision Generation

The authors in [64] applied GDM to the actor-critic-based DRL paradigm and proposed the Deep Diffusion Soft Actor-Critic (D2SAC) algorithm as a deep diffusion reinforcement learning method. As shown in Fig. 12, the D2SAC algorithm incorporates several key components to optimize the policy, including an actor network, a double critic network, a target actor, a target critic, an experience replay memory, and the environment.

Fig. 12: The overall architecture of the D2SAC algorithm [64].

Here is a summary and explanation of these components and their roles:

* **Trajectory Collection:** The agent observes the environment and collects state transitions by executing actions in the environment. These transitions are regarded as experiences and are added to the experience replay memory. The actor network generates an action distribution over all possible actions given an environment observation and samples an action from this distribution. This action is performed, transitioning to a new state and returning an immediate reward as feedback.
* **GDM as the Policy:** The core of the actor network is the GDM, which effectively encodes the observation's representation and captures the dependencies between the observation and the action space.
* **Experience Replay Memory:** This is a method to handle the delay in receiving reward feedback. Experiences are stored, and the missing reward is filled in later before updating the GDM-based network. Off-policy training is used to improve the handling of delayed feedback [132].
* **Double Critic Network:** During the policy improvement process, the actor network is optimized by sampling mini-batches of transitions from the experience replay memory. The double critic network, composed of two separate critic networks, is used to reduce the overestimation bias by providing a conservative estimate of the Q-value function [101].
* **Policy Improvement:** The actor learns to maximize the expected cumulative reward for each action at the current state. The maximization problem is solved using the gradient ascent algorithm [133]. Specifically, gradients are calculated over a mini-batch of transitions sampled from the experience replay memory, and the actor network is updated by taking a gradient ascent step along them.
* **Action Entropy Regularization:** An entropy regularization term is introduced to prevent the policy from becoming overly confident in certain actions and converging prematurely to a suboptimal solution [134]. This encourages exploration.
* **Q-function Improvement:** The Q-function, used for estimating the future rewards of actions, must be accurately estimated for successful optimization. To achieve this, the Temporal Difference (TD) error of the two Q-networks is minimized during training [135].
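The following is a minimal PyTorch-style sketch of how the diffusion-based actor and the double critic described above fit together for a discrete ASP-selection action space. The dimensions, noise schedule, and entropy weight are illustrative assumptions, and the replay memory, target networks, and TD updates listed above are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T, N_ASP, OBS = 5, 4, 8                    # denoising steps, number of ASPs, observation size (illustrative)
betas = torch.linspace(1e-4, 0.2, T)
alphas, alpha_bars = 1.0 - betas, torch.cumprod(1.0 - betas, dim=0)

class DiffusionActor(nn.Module):
    """GDM policy: denoises Gaussian noise into a distribution over the discrete
    ASP-selection actions, conditioned on the environment observation."""
    def __init__(self):
        super().__init__()
        self.eps = nn.Sequential(nn.Linear(N_ASP + OBS + 1, 128), nn.ReLU(),
                                 nn.Linear(128, N_ASP))
    def action_probs(self, obs):
        x = torch.randn(obs.shape[0], N_ASP)
        for t in reversed(range(T)):
            t_emb = torch.full((obs.shape[0], 1), float(t) / T)
            eps = self.eps(torch.cat([x, obs, t_emb], dim=-1))
            x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
            if t > 0:
                x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
        return F.softmax(x, dim=-1)         # probability of selecting each ASP

class Critic(nn.Module):                    # one Q-network; the double critic keeps two of these
    def __init__(self):
        super().__init__()
        self.q = nn.Sequential(nn.Linear(OBS, 128), nn.ReLU(), nn.Linear(128, N_ASP))
    def forward(self, obs):
        return self.q(obs)                  # Q(obs, a) for every discrete action a

actor, q1, q2 = DiffusionActor(), Critic(), Critic()
opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
alpha_ent = 0.05                            # entropy-regularization weight

obs = torch.randn(32, OBS)                  # a mini-batch drawn from the replay memory
probs = actor.action_probs(obs)
q_min = torch.min(q1(obs), q2(obs))         # conservative Q from the double critic
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
actor_loss = -(probs * q_min.detach()).sum(dim=-1).mean() - alpha_ent * entropy.mean()
opt.zero_grad(); actor_loss.backward(); opt.step()   # one policy-improvement step
```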
Next, we discuss the performance of D2SAC and compare it with seven DRL algorithms as discussed in Section III-A. Furthermore, we demonstrate the efficacy of D2SAC across various benchmark tasks within the DRL domain.

#### Iii-C3 Numerical Results

Fig. 13 shows the performance of D2SAC compared to benchmark reinforcement learning algorithms: DQN, DRQN, Prioritized-DQN, and Rainbow (Fig. 13(a)); and REINFORCE, PPO, and SAC (Fig. 13(b)). Across both figures, D2SAC's reward acquisition over time demonstrates its superior ability to balance exploration and exploitation, resulting in better policy decisions.

Table III presents comparative performance metrics on various control tasks in the Gym environment [137]:

* **Acrobot-v1:** A two-link pendulum simulation, with the goal of maintaining an upright position. The reward system is designed to favor less negative values.
* **CartPole-v1:** A cart-pole system model, where the objective is to prevent a pole from falling. The performance measure here is the average reward, with higher values being desirable.
* **CoinRun-v0:** A platform game task where the agent's goal is to collect a coin while avoiding obstacles. The performance is gauged through the average reward per episode, aiming for higher values.
* **Maze-v0:** A maze navigation task, where reaching the goal while taking fewer steps is rewarded. Similar to the previous tasks, higher average reward values indicate better performance.

These benchmarks cover a diverse range of problems, including physics-based control (Acrobot-v1, CartPole-v1), strategy (CoinRun-v0), and pathfinding (Maze-v0). A closer examination of the table reveals that D2SAC significantly outperforms most of the compared policies on these tasks. Specifically, for the Acrobot-v1 task, D2SAC achieves the least negative reward, implying superior performance in the complex task of manipulating the two-link pendulum. In the CartPole-v1 and CoinRun-v0 tasks, D2SAC matches the top-performing algorithms with perfect average rewards of 500 and 10, respectively, indicating a consistent ability to keep the pole upright and successfully collect coins in the platform game. The performance on Maze-v0, although not the highest, is competitive and within the performance range of top-performing policies.

Fig. 13: Comparison of test reward curves of D2SAC and benchmarks in the AIGC service provider selection task [64].

\begin{table} \begin{tabular}{c|c|c c c c} \hline \multicolumn{2}{c|}{**Policy**} & \multicolumn{1}{c}{**Acrobot-v1**} & \multicolumn{1}{c}{**CartPole-v1**} & \multicolumn{1}{c}{**CoinRun-v0**} & \multicolumn{1}{c}{**Maze-v0**} \\ \hline \hline \multirow{8}{*}{DRL} & DQN & -81.81 \(\pm\) 17.19 & 499.80 \(\pm\) 0.14 & 6.00 \(\pm\) 4.90 & 3.00 \(\pm\) 4.58 \\ & Prioritized-DQN & -105.20 \(\pm\) 14.74 & 498.70 \(\pm\) 1.43 & 5.00 \(\pm\) 5.00 & 2.00 \(\pm\) 4.00 \\ & DRQN & -82.26 \(\pm\) 14.34 & 132.50 \(\pm\) 69.79 & \(-\) & \(-\) \\ & REINFORCE & -104.80 \(\pm\) 14.51 & 500.00 \(\pm\) 0.00 & 0.00 \(\pm\) 0.00 & 0.00 \(\pm\) 0.00 \\ & PPO & -77.22 \(\pm\) 8.45 & 499.90 \(\pm\) 0.33 & 0.00 \(\pm\) 0.00 & 2.00 \(\pm\) 4.00 \\ & Rainbow & -158.10 \(\pm\) 55.48 & 478.30 \(\pm\) 29.28 & 5.00 \(\pm\) 5.00 & 2.00 \(\pm\) 4.00 \\ & SAC & -121.00 \(\pm\) 35.31 & 500.00 \(\pm\) 0.00 & 10.00 \(\pm\) 0.00 & 3.00 \(\pm\) 4.58 \\ \hline \hline \multirow{8}{*}{Online[136, 137]} & A2C & -86.62 \(\pm\) 25.10 & 499.90 \(\pm\) 1.67 & \(-\) & \(-\) \\ & ACER & -90.85 \(\pm\) 32.80 & 498.62 \(\pm\) 23.86 & \(-\) & \(-\) \\ & ACKTR & -91.28 \(\pm\) 32.52 & 487.57 \(\pm\) 63.87 & \(-\) & \(-\) \\ & PPO2 & -85.14 \(\pm\) 26.27 & 500.00 \(\pm\) 0.00 & \(-\) & \(-\) \\ & DQN & -88.10 \(\pm\) 33.04 & 500.00 \(\pm\) 0.00 & \(-\) & \(-\) \\ & TRPO & -485.39 \(\pm\) 70.51 & \(-\) & \(-\) & \(-\) \\ & PPO + IMPALA & \(-\) & \(-\) & 8.95 & **9.88** \\ & Rainbow + IMPALA & \(-\) & \(-\) & 5.50 & 4.24 \\ \hline \hline **Ours** & **D2SAC** & **-70.77 \(\pm\) 4.12** & **500.00 \(\pm\) 0.00** & **10.00 \(\pm\) 0.00** & 7.00 \(\pm\) 4.58 \\ \hline \end{tabular} \end{table}
TABLE III: Performance Comparisons on General Benchmark Tasks.

## IV Incentive Mechanism Design

In this section, we investigate the applicability of GDM for shaping robust and efficient incentive mechanisms in network designs.

### _Fundamentals of Incentive Mechanisms_

Incentive mechanisms [64, 85] play an important role in network optimization, maintaining network operation and long-term economic sustainability. Specifically, the mechanism rewards the network participants who share computing, communication, and information resources and services. Take CrowdOut [138], a mobile crowdsourcing system for road safety, as an example. Drivers (using smartphones or vehicular sensors) can report road safety situations that they experience in their urban environments, e.g., speeding, illegal parking, and damaged roads, to the central management center. However, the drivers consume their computing and communication resources, e.g., battery power, CPU, and wireless bandwidth, to sense and report issues.
They might be discouraged from actively joining such cooperation without appropriate rewards, especially in the long term. Accordingly, the incentive mechanisms aim at answering the following series of questions: 1) how to encourage the network entities to behave in a certain way that is beneficial to the network, e.g., through the use of rewards, reputation, or credit [139], 2) how to motivate the contribution of resources, 3) how to discourage and prevent malicious behavior, and 4) how to ensure fairness. To do so, the incentive mechanisms should be designed to satisfy several properties, including but not limited to Individual Rationality (IR), Incentive Compatibility (IC), fairness, Pareto Efficiency (PE), Collusion Resistance (CR), and Budget Balance (BB) [140]. With years of research, various incentive mechanisms have been presented and widely adopted in network optimization. We consider the following representative techniques for developing incentive mechanisms, including the Stackelberg game, auction, contract theory, and Shapley value.

#### Iv-A1 Stackelberg Game

In game theory, the Stackelberg game refers to an iterative process in which a leader makes the first move and the remaining followers move sequentially, until an equilibrium is reached [141]. In the network context, the leader, typically a network operator, first determines the resource prices or service charges. Network users, i.e., followers, then determine their resource demands based on the given prices, with the goal of balancing their utility against the cost they pay for the resources. At the Stackelberg equilibrium, the followers cannot increase their utility by changing their demands, and the leader cannot increase its profit by altering the price. In this way, the network efficiency and the participants' utilities can be balanced, thereby promoting efficient cooperation. With wide adoption, the Stackelberg game provides a robust foundation for designing network incentive mechanisms.

#### Iv-A2 Auction

An auction mechanism is widely adopted for incentivizing resource trading [142]. Specifically, an auctioneer conducts an auction for trading network resources, e.g., bandwidth or computing power, that are subject to allocation among bidders. The auction process begins with the auctioneer announcing the resources to be traded and soliciting bids.
Each bidder evaluates its demand and willingness to pay, submitting a bid accordingly. The auctioneer then chooses a subset of bidders as the winners based on the bid amount or more complex rules. Finally, the auctioneer calculates the payment from each winner, which could be the bid amount or another value depending on the auction type, and performs the resource allocation. Auctions can foster competition among bidders, aiming to maximize the social welfare in terms of network utilities while satisfying certain constraints like budget balance, i.e., the auctioneer's revenue should be positive.

#### Iv-A3 Contract Theory

Contract-theoretic incentive mechanisms can effectively address network information asymmetry [143]. In this setup, an employer (typically the network operator or service provider) and an employee (the network user) engage in a contractual agreement. The employer designs contracts specifying service charges, Quality of Service (QoS) levels, and resource allocations. However, it may not have complete information about the employees' preferences and behaviors, which is called information asymmetry [143]. With contract theory, the employer can launch a series of contracts that ensure the IR property, i.e., the utility of each employee is higher than a given threshold, and the IC property, i.e., each employee acquires the highest utility by faithfully following the contract that it signs. Hence, the employees behave honestly, driven by utilities, circumventing the undesirable effects, such as selfish strategies, caused by the information asymmetry.
Contract-theoretic incentive mechanisms have been widely adopted in various network scenarios and have many variants to support high-dimensional resource allocation, heterogeneous employees, etc.

#### Iv-A4 Shapley Value

The Shapley Value (SV) is a solution from cooperative game theory, quantifying a player's marginal contribution across potential coalitions. In incentive mechanism design, the players contribute to the network and are subject to being rewarded. Hence, the SV for each player, denoted by \(i\), can be defined as

\[SV(i)=\sum_{\mathbb{S}\subseteq\mathbb{N}\setminus i}\frac{|\mathbb{S}|!(|\mathbb{N}|-|\mathbb{S}|-1)!}{|\mathbb{N}|!}[v(\mathbb{S}\cup i)-v(\mathbb{S})], \tag{17}\]

where \(\mathbb{S}\) represents a coalition without player \(i\), \(v\) is the value function, and \(\mathbb{N}\) is the set of all players. The SV can be used to allocate rewards, reputation, or credits, whereby players contributing more resources to the network obtain higher SVs, thereby encouraging cooperation and resource contribution to the network.

### _Applications of GDM in Incentive Mechanism Design_

From the above description, we can observe that the overall procedure of incentive mechanism design is to model the participants' utility and thus formulate an optimization problem under constraints. Hence, the problem becomes solving an optimization problem and finding the optimal incentive mechanism strategy that maximizes the utility. Traditionally, researchers find the optimal solutions by following optimization theory. Nonetheless, this method requires complete and accurate information about the network and, more importantly, is not applicable to complex network scenarios with complicated utility functions. Thanks to their strong ability to model complex environments, GDMs provide new possibilities for solving such optimization problems. A typical process of adopting GDMs to design incentive mechanisms contains the following steps:

* **Model the network states**: The first step is to model the network states. To do so, we typically use a vector, say **e**, which contains many factors, e.g., the upstream and downstream bandwidth, number of participants, bit error rate, and other scenario-specific factors, to depict the given network environment.
* **Formulate the utilities of participants**: Based on the factors in **e** and other hyperparameters, e.g., the weights of these factors, we can formulate the utility function, as well as the associated constraints. Generally, the incentive mechanism design problem is to maximize the utility while satisfying all the constraints.
* **Customize the GDM settings**: Thirdly, we customize the GDM settings according to the incentive mechanism design task. The _solution space_ is the universe of all the possible incentive mechanism strategies. For instance, the solution space contains all the possible contracts in a contract-theoretic incentive mechanism. The _objective function_ takes the value of the utility function acquired in Step 2 if all the constraints are satisfied. Otherwise, it takes a large negative value as the constraint violation punishment. The _dynamic environment_ is the vector **e**.
* **Train GDM and perform inference**: Finally, we can perform GDM training. The well-trained GDM can then be used for finding the optimal incentive mechanism design in any given network state **e**. The details of the training process are elaborated in Section II-D.
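As a concrete illustration of the Shapley value in (17) before moving to the case study, the sketch below computes exact SVs for a toy resource-contribution game; the value function and player set are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley value in (17): SV(i) = sum over coalitions S of N\\{i} of
    |S|!(|N|-|S|-1)!/|N|! * [v(S with i) - v(S)]."""
    n = len(players)
    sv = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(coalition) | {i}) - value(set(coalition)))
        sv[i] = total
    return sv

# Toy example: each player's contributed resource; the coalition value is the total
# contributed resource, so the Shapley value returns each player's own contribution.
resources = {"A": 5.0, "B": 3.0, "C": 2.0}
v = lambda coalition: sum(resources[p] for p in coalition)
print(shapley_values(list(resources), v))   # {'A': 5.0, 'B': 3.0, 'C': 2.0}
```

For large player sets, the exact sum over all coalitions becomes intractable and is usually replaced by Monte Carlo sampling of player permutations.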
### _Case Study: GDM-based Contract-Theoretic Incentive Mechanism_

#### Iv-C1 Background

In this part, we conduct a case study to illustrate how to apply GDMs to a practical incentive mechanism design problem. Specifically, we consider an emerging network scenario, namely mobile AIGC [85, 144]. The success of ChatGPT has ignited a boom in AIGC, while the substantial resource costs of large AIGC models prevent numerous end users from enjoying easily accessible AIGC services. To this end, researchers recently presented the concept of mobile AIGC, employing Mobile AIGC Service Providers (MASPs) to provide low-latency and customized AIGC inferences, leveraging mobile communications and edge computing capabilities. Hence, the mobile AIGC network is composed of users and MASPs. The former request AIGC services from the MASPs, and the latter operate local AIGC models to perform the inferences. Given that AIGC inferences are resource-intensive, we utilize contract theory to design an incentive mechanism that rewards the MASPs according to their contributed resources.

#### Iv-C2 System Model

Considering the diversity and heterogeneity of the current AIGC models, we divide all MASPs into \(\mathcal{Z}\) levels according to the complexity of their local models, i.e., from level-\(1\) to level-\(\mathcal{Z}\). The model complexity of each level of MASPs (denoted by \(\theta_{1}\),..., \(\theta_{\mathcal{Z}}\)) can be quantified from different aspects, such as the number of model parameters [145]. Typically, the higher the model complexity, the more powerful the model is, and simultaneously, the more computing resources are required during the inference [146]. In our system, we let the index of the level follow the ascending order of model complexity, i.e., the higher the model complexity, the higher the index. Finally, we use \(p_{z}\) to denote the proportion of level-\(z\) (\(z\in\{1,2,\ldots,\mathcal{Z}\}\)) MASPs in the entire mobile AIGC network.

#### Iv-C3 Utility Formulation

For simplicity, we assume users evaluate the AIGC services using the most fundamental metric, i.e., the service latency. Considering the heterogeneity of MASPs, the expected service quality and the required service fees for different levels of MASPs are different. Hence, the utility of users towards level-\(z\) (\(z\in\{1,2,\ldots,\mathcal{Z}\}\)) MASPs can be defined as [143]

\[U_{\mathrm{U}}^{z}=\big[\alpha_{1}(\theta_{z})^{\beta_{1}}-\alpha_{2}(\mathcal{L}_{z}/\mathcal{L}_{max})^{\beta_{2}}\big]-\mathcal{R}_{z}, \tag{18}\]

where \(\big[\alpha_{1}(\theta_{z})^{\beta_{1}}-\alpha_{2}(\mathcal{L}_{z}/\mathcal{L}_{max})^{\beta_{2}}\big]\) is a complexity-latency metric [143], indicating the revenue that the user can gain. \(\mathcal{L}_{z}\) is the latency requirement of users for level-\(z\) MASPs, while \(\mathcal{L}_{max}\) is the maximum expected latency. \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{1}\), and \(\beta_{2}\) are weighting factors. \(\mathcal{R}_{z}\) represents the reward that users need to pay to level-\(z\) MASPs. The MASPs, in turn, sell their computational resources by performing AIGC inferences for users.
Therefore, the utility of level-\(z\) MASPs can be defined as

\[U_{\mathrm{SP}}^{z}=\mathcal{R}_{z}-\Big[\frac{(\mathcal{L}_{max}-\mathcal{L}_{z})}{\mathcal{L}_{z}}\cdot\theta_{z}\Big], \tag{19}\]

where \(\Big[\frac{(\mathcal{L}_{max}-\mathcal{L}_{z})}{\mathcal{L}_{z}}\cdot\theta_{z}\Big]\) represents the costs of level-\(z\) MASPs, which are determined by two factors: the model complexity \(\theta_{z}\) and the latency \(\mathcal{L}_{z}\). First, with \(\theta_{z}\) fixed, the higher the \(\mathcal{L}_{z}\), i.e., the longer the latency that users can tolerate, the smaller the cost. Meanwhile, the larger the \(\theta_{z}\), the larger the cost of the MASPs, since, as mentioned, complex models typically consume more resources for inference.

#### Iv-C4 GDM-based Optimal Contract Generation

Based on the above descriptions, we design the following contract-theoretic incentive mechanism. Specifically, the users produce a specific contract, formed by \(\big\{\mathcal{L}_{z},\mathcal{R}_{z}\big\}\) (\(z\in\{1,2,\ldots,\mathcal{Z}\}\)), for each level of MASPs, which then decide whether to sign. The contract design should be optimal, maximizing the expected user utility while satisfying the IR and IC constraints, i.e.,

\[\max_{\{\mathcal{L}_{z},\mathcal{R}_{z}\}} \sum_{z=1}^{\mathcal{Z}}p_{z}U_{\mathrm{U}}^{z}\left(\mathcal{L}_{z},\mathcal{R}_{z},\theta_{z}\right), \tag{20}\]
\[\mathrm{s.t.}\ \mathrm{(IR):}\ U_{\mathrm{SP}}^{z}(\mathcal{L}_{z},\mathcal{R}_{z},\theta_{z})\geq U_{th},\quad z\in\{1,\ldots,\mathcal{Z}\},\]
\[\mathrm{(IC):}\ U_{\mathrm{SP}}^{z}(\mathcal{L}_{z},\mathcal{R}_{z},\theta_{z})\geq U_{\mathrm{SP}}^{z}(\mathcal{L}_{j},\mathcal{R}_{j},\theta_{z}),\quad z,j\in\{1,\ldots,\mathcal{Z}\},\ z\neq j,\]

where \(U_{th}\) is the utility lower bound for MASPs. Finally, we apply the aforementioned four-step procedure to formulate the GDM training paradigm and find the optimal contract design.

* **Model the network state**: For simplicity, we consider two types of MASPs in the mobile AIGC network. Hence, the network state vector in our case is defined as [\(n\), \(L_{max}\), \(p_{1}\), \(p_{2}\), \(\theta_{1}\), \(\theta_{2}\)].
* **Formulate the utility of participants**: There are two utility functions in our case, i.e., \(U_{\mathrm{U}}\) and \(U_{\mathrm{SP}}\). The former is the major utility that we intend to maximize. The latter is used in calculating the constraints, i.e., IR and IC.
* **Customize the GDM settings**: The solution space is the universe of possible contract designs, where each item takes the form {\(\mathcal{L}_{1}\), \(\mathcal{R}_{1}\), \(\mathcal{L}_{2}\), \(\mathcal{R}_{2}\)}. The hyperparameters \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{1}\), and \(\beta_{2}\) are set as 30, 5, 1, and 1, respectively.
* **Train GDM and perform inference**: We train the GDM for more than 50000 epochs. The numerical results are discussed below.

#### Iv-C5 Numerical Results

Fig. 14 shows the test reward curves of GDM and the baseline, i.e., PPO. We can observe that the convergence speeds of GDM and PPO are roughly the same. However, GDM outperforms PPO in terms of rewards. The reason is two-fold: 1) the _solution generation network_ is fine-tuned by the diffusion process, and 2) more policies can be tested thanks to GDM's high sample quality.

Fig. 14: Test reward curves of GDM and DRL, i.e., PPO, for the optimal contract finding task.

As shown in Fig. 15, the high positive rewards mean that GDM can stably ensure high utility for the users while satisfying the IR and IC constraints in any given network state.

Fig. 15: Generated contracts of GDM under different network states.
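To make the above concrete, the sketch below encodes the utilities in (18) and (19) and the IR/IC checks of (20) as a single reward of the kind used to guide GDM training in this case study. The value of \(U_{th}\), the normalization \(\mathcal{L}_{max}=1\), the penalty value, and the candidate contract are illustrative assumptions.

```python
import numpy as np

# Hyperparameters from the case study; U_TH, L_MAX, and PENALTY are illustrative assumptions.
A1, A2, B1, B2 = 30.0, 5.0, 1.0, 1.0
L_MAX, U_TH, PENALTY = 1.0, 0.0, -100.0

def u_user(L, R, theta):                       # user utility towards a level-z MASP, (18)
    return A1 * theta**B1 - A2 * (L / L_MAX)**B2 - R

def u_masp(L, R, theta):                       # MASP utility, (19)
    return R - (L_MAX - L) / L * theta

def contract_reward(L, R, theta, p):
    """Objective of (20): expected user utility if the contract menu {L_z, R_z}
    satisfies IR and IC for every MASP level; otherwise a constraint-violation penalty."""
    Z = len(theta)
    for z in range(Z):
        if u_masp(L[z], R[z], theta[z]) < U_TH:                                 # IR
            return PENALTY
        for j in range(Z):
            if j != z and u_masp(L[z], R[z], theta[z]) < u_masp(L[j], R[j], theta[z]):  # IC
                return PENALTY
    return float(sum(p[z] * u_user(L[z], R[z], theta[z]) for z in range(Z)))

# Two MASP levels, matching the network state [n, L_max, p_1, p_2, theta_1, theta_2].
theta, p = np.array([0.5, 1.0]), np.array([0.6, 0.4])
L, R = np.array([0.5, 0.8]), np.array([0.7, 0.3])      # a feasible candidate contract menu
print(contract_reward(L, R, theta, p))                  # expected user utility (about 17.36)
```

Infeasible menus simply receive the penalty, which is what discourages the GDM from exploring contract designs that violate IR or IC.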
## V Semantic Communications In this section, we consider the SemCom technique and discuss the role of GDM in SemCom. ### _Fundamentals of Semantic Communications_ SemCom [41] refers to extracting and transmitting the most relevant semantic information from raw data to the receivers using AI technology. It aims to lower network loads by selectively transmitting meaningful and contextually relevant information instead of transmitting the entire raw data [147]. SemCom consists of three main components: the semantic encoder, the wireless channel, and the semantic decoder. #### V-A1 Semantic Encoder It is responsible for extracting and transmitting relevant semantic information from the raw data provided by the transmitting users. This is typically achieved by utilizing neural networks, which encode the raw data into meaningful semantic representations. The semantic encoder employs various techniques such as feature extraction and dimensionality reduction to capture the essential semantic information [148]. #### V-A2 Wireless Channels However, during transmission, the semantic information is subject to physical noise introduced by the wireless channel [149]. Physical noise refers to external factors that interfere with the transmission of the message. It can result in noise-corrupted semantic information, which is then transmitted to the receivers for further processing. The channel component of SemCom handles the transmission of this noise-corrupted semantic information, taking into account the wireless channel characteristics and the potential effects of noise and interference. #### V-A3 Semantic Decoder The receivers employ a semantic decoder, e.g., implemented by neural networks, to decode the received noise-corrupted semantic information and reconstruct the distorted data. The semantic decoder utilizes its learning capabilities to reverse the encoding process and extract the intended semantic meaning from the received information [150]. Semantic noise arises from the use of symbols that are ambiguous to the receivers. It can also occur when there is a mismatch in understanding between the sender and receiver. By employing sophisticated neural network architectures, the semantic decoder aims to minimize the effects of semantic noise and accurately obtain the original semantic data. The ultimate objective of SemCom is to effectively convey the intended meaning of the transmitted symbols, rather than transmitting the raw bits directly, thereby reducing communication overhead and enhancing communication effectiveness [151]. ### _Case Study: GDM-based Resource allocation for SemCom-aided AIGC services_ #### V-B1 Motivation There are several examples of integrating GAI technologies in SemCom [152]. For instance, GANs have been employed to develop semantic decoders that tackle the out-of-distribution problem of SemCom [153]. GANs are used to generate realistic and meaningful semantic information based on the available data. Additionally, a variational autoencoder (VAE) is utilized to calculate the lower bound of semantic distortion and derive the corresponding loss function [154]. By incorporating GANs and VAEs, SemCom can enhance the accuracy and fidelity of semantic decoding, thereby improving the overall communication performance. To further explore the applications of GDMs in SemCom, we consider an AIGC service process as shown in Fig. 16, where edge devices initially collect raw data, such as photographs, and extract semantic information. 
This semantic information is then utilized by AIGC service providers, who apply GAI models to perform AIGC inference and generate meaningful content, such as stylized animations. Subsequently, multimedia service providers, like Metaverse service providers, use this content to create digital content for users, such as animated avatars [44]. We formulate a unified resource allocation problem, considering the limited computing and communication resources allocated to the semantic extraction, AIGC inference, and graphic rendering modules. The objective is to maximize the overall utility by efficiently allocating these resources. Fig. 16: Resource allocation problem in an AIGC service scenario. First, the edge devices collect raw data, e.g., photos, and extract semantic information. Then, the AIGC service providers use the received semantic information to perform the AIGC inference using GAI models to obtain meaningful content, e.g., animated style photos. These contents are further used by the multimedia service provider, e.g., Metaverse service provider, to render digital content for the users, e.g., animated style avatars [44].
#### V-B2 Problem Formulation The overall service latency includes the computing time for semantic extraction (\(T_{s}^{comp}\)), AIGC inference (\(T_{a}^{comp}\)), and graphic rendering (\(T_{m}^{comp}\)). These times are influenced by the available computing resources and the current computing resource congestion, introducing uncertainty to the utility optimization problem. Concurrently, the transmission time is associated with the transfer of semantic information (\(T_{a}^{comm}\)), AIGC content (\(T_{m,u}^{comm}\)), and rendering results (\(T_{m,d}^{comm}\)). These times are affected by the communication resources allocated to each part. Specifically, we consider the allocation of bandwidth resources with \(W_{a}^{m}\), \(W_{m}^{s}\), and \(W_{s}^{a}\) denoting the bandwidths for semantic information, AIGC content, and rendering results transmissions, respectively. The objective function is given by \(\ln\left(R_{a}^{m}\right)+\ln\left(R_{m}^{s}\right)+\ln\left(R_{s}^{a}\right)\), where \(R_{a}^{m}\), \(R_{m}^{s}\), and \(R_{s}^{a}\) are the data rates achieved over the bandwidths \(W_{a}^{m}\), \(W_{m}^{s}\), and \(W_{s}^{a}\) for the transmissions of semantic information, AIGC content, and rendering results, respectively. The logarithmic form is used as we assume that the subjective user experience follows a logarithmic law with respect to the objective performance metrics [155]. The objective function is considered as the reward in the GDM-based resource allocation scheme to find a near-optimal strategy. Following [156, 157], we construct the bandwidth allocation problem as follows: \[\begin{array}{cl}\max\limits_{W_{a}^{m},W_{m}^{s},W_{s}^{a}}&\ln\left(R_{a}^{m}\right)+\ln\left(R_{m}^{s}\right)+\ln\left(R_{s}^{a}\right),\\ \mathrm{s.t.}&T_{s}^{comp}+T_{a}^{comm}+T_{a}^{comp}\\ &+T_{m,u}^{comm}+T_{m,d}^{comm}+T_{m}^{comp}\leq T_{\max},\\ &W_{a}^{m}+W_{m}^{s}+W_{s}^{a}\leq W_{\max}.\end{array} \tag{21}\]
#### V-B3 GDM-based Resource Allocation Scheme Generation The optimal bandwidth resource allocation scheme can be generated according to the following steps: * **Step 1: Solution Space Definition:** The solution space in the proposed problem encompasses allocating available bandwidth for transmission among the semantic extraction, AIGC inference, and rendering modules. The goal is to optimize the utilization of bandwidth resources to ensure efficient communication and collaboration between these modules.
* **Step 2: Objective Function Definition:** The training objective of the proposed problem is to maximize the utility of the system, which is served as rewards that are obtained by dynamic resource allocation strategies. It should consider the total tolerable transmission time and available resources among these modules. * **Step 3: Dynamic Environment Definition:** GDMs are utilized to generate an optimal bandwidth allocation scheme based on a given set of wireless channel conditions and computing capabilities involved in the three modules, such as the semantic entropy and the transmit power. Semantic entropy is defined as the minimum expected number of semantic symbols about the data that is sufficient to predict the task [157]. The semantic entropy and the transmit power are randomly varied within a specific range associated with a given task. * **Step 4: Training and Inference:** The conditional GDM generates the optimal bandwidth allocation strategy by mapping different environments to bandwidth allocation designs. The optimal strategy is achieved through the reverse process, where the GDM trains and infers the corresponding allocation policies to maximize the expected cumulative utility. #### V-B4 Numerical Results As depicted in Fig. 17, Diffusion outperforms PPO regarding the obtained utilities. Fig. 18 is the generated strategies of Diffusion and PPO under dynamic environments \(\mathsf{GDM}_{1}\), \(\mathsf{GDM}_{2}\), \(\mathsf{PPO}_{1}\), and \(\mathsf{PPO}_{2}\)[44]. \(\mathsf{GDM}_{1}\) and \(\mathsf{GDM}_{2}\) are distinct environment definitions employed to generate utilities, which are aligned with \(\mathsf{PPO}_{1}\) and \(\mathsf{PPO}_{2}\). We can also learn from Fig. 18 that Diffusion outperforms PPO in terms of generated strategies under dynamic environments. This superiority can be attributed to the optimal bandwidth allocation mechanism inferred by GDMs, which enables fine-tuning output through denoising steps and facilitates exploration. Consequently, the proposed mechanism exhibits enhanced flexibility, mitigating the effects of uncertainty and noise encountered during the transmission and computing among semantic extraction, AIGC inference, and graphic rendering modules. ## VI Internet of Vehicles Networks In this section, we introduce the concept of IoV networks, discuss the role of GDM in IoV networks and give an example [158]. ### _Fundamentals of IoV Networks_ Drawing inspiration from the Internet of Things (IoT), the IoV network turns moving vehicles into information-gathering Fig. 17: Test reward curves of GDM and DRL, i.e., PPO, in SemCom [44] in bandwidth allocation task Fig. 18: Generated strategies of GDM and PPO under dynamic environments [44] nodes [159, 160]. Harnessing emerging information and communication technologies facilitates network connectivity between vehicles and other elements, i.e., other vehicles, users, infrastructure, and service platforms. For the IoV network, the goal is to enhance the overall intelligence of the vehicle, as well as improve the safety, fuel efficiency, and driving experience [161]. In the IoV network, vehicles are regarded as data agents for collecting and disseminating data such as traffic patterns, road conditions, and navigation guidance [162]. Managing large amounts of data in the IoV network is a very complex task. As a remedy, GAI is proposed. In particular, GAI performs the critical functions of organizing and restoring the data collected within the IoV. 
Additionally, it can generate synthetic data, enhancing the efficacy of machine learning model training within the network. Furthermore, the contributions of GAI go beyond simple data management. It utilizes the collected data to inform the real-time decision-making process. This includes predicting traffic conditions, identifying potential hazards, and determining the best route for the driver. ### _Applications of GDM in IoV Networks_ The field of GAI is composed of several models, and each model brings unique capabilities to various applications. The GDM has attracted much attention among these models due to its unique advantages. Applying the GDM model within IoV networks yields promising results. In particular, there are two specific applications as follow: #### V-B1 Recovery of Images sent by vehicles In IoV networks, vehicles usually transmit images to communicate information about their environment for safe driving. However, these images may be distorted or lose quality due to transmission errors, noise, or interference. The GDM, with its ability to generate high-quality images, can be employed to recover the original quality of these transmitted images. In particular, the vehicles adopt semantic technology to extract information from images, i.e., as a prompt at the transmitter, and recover it using GDM at the receiver. By doing so, the transmitted data and communication delays can be reduced in IoV. #### V-B2 Optimization Based on GDM The GDM iterative framework suits the IoV network optimization tasks, including path planning and resource allocation [163]. Using stochastic differential equations (SDEs), the model refines solutions progressively via a diffusion process. For example, in path planning, GDM begins with a random path, making iterative refinements based on performance criteria such as travel time and energy consumption. The model uses gradients of these metrics to guide the path updates toward an optimal or near-optimal solution, stopping iterations when updates become negligible. Therefore, thanks to the ability to recover high-quality images from transmitted data and iteratively optimize solutions, the GDM provides a powerful tool for enhancing the efficiency and robustness of IoV networks. ### _Case Study: A GAI-driven IoV network_ In this part, we conduct a case study to illustrate how to apply GDMs in IoV design. #### V-C1 System Model Under the 3GPP V2X standard [164], we consider a GAI-driven IoV network with multiple V2V links as shown in Fig. 19. We aim to ensure reliable, real-time information transmission in our considered network. The orthogonal frequency division multiplexing technology is adopted, where each V2V link can achieve dynamic transmission rates on different sub-channels. Moreover, a successful image transmission rate is introduced as a constraint. This rate is affected by different parameters such as achievable transmission rate, image similarity measure, channel coherence time, and generated image payload. #### V-C2 Problem Formulation In our considered work, We consider transmission rate and image similarity as the performance indicators, and hence they are combined into a unified QoE indicator and used as the optimization goal. 
As described in (22), an optimization problem is formulated to maximize the system QoE under the constraints of the transmission power budget and the probability of successful transmission for each vehicle, where the channel selection strategy, the transmission power for each vehicle, and the diffusion steps for inserting the skeleton are jointly optimized. \[\max_{\{P_{v},d_{v},c_{v}\}} \sum_{v\in V}\mathrm{QoE}(v) \tag{22a}\] \[\mathrm{s.t.} \sum_{v\in V}p_{v}\leq P_{\text{max}},\text{(Power Budget)}\] (22b) \[\mathrm{Pr}(v)\geq\mathrm{Pr}_{\text{min}},\text{(Transmission Constraint)}\] \[c_{v}\in C,\text{(Channel Selection Constraint)}\] (22d) \[d_{v}\in\mathbb{N}^{+},\text{(Diffusion Steps Constraint)}\] (22e) \[\forall v\in V.\] #### V-C3 GDM-based Joint Channel Selection and Power Allocation For the formulated problem, a GDM-based DDPG Fig. 19: GAI-enabled IoV network, where the semantic information extraction step, image skeleton extraction step, wireless transmission step, GAI-enabled image generation step and image reconstruction step are involved [158]. approach is proposed, where the corresponding three tuples of MDP and the network design are as follows. * **MDP design:** The state space consists of the current information and previously selected actions, where the current information includes the channel information of each V2V link, the transmission rate of each V2V link, and the generated image payload. The action space consists of the selectable channel, the transmit power, and the diffusion steps for inserting the skeleton. The reward function consists of an instant reward term and a penalty term. Accordingly, the agent can achieve high QoE while satisfying the corresponding constraints. * **GDM-based network design:** In our proposed approach, we adopt the GDM-based network instead of the traditional DRL neural network. Specifically, the GDM maps the state of the environment to a "solution generation network," representing the obtained resource allocation scheme. This approach can be fine-tuned to generate samples spanning multiple time steps, enhancing its ability to handle tasks with long-term dependencies. #### Vi-B4 Numerical Results Fig. 20 shows average cumulative rewards obtained by different types of schemes versus the number of training episodes. It shows that our proposed GDM-based approach always outperforms other baselines (i.e., DRL-DDPG, DRL-DQN, greedy, and random schemes) under the same parameter settings when all schemes converge. It can also be observed that although the proposed GDM-based DDPG approach and DDPG-based obtain roughly similar rewards during the training phase, the proposed GDM-based DDPG approach outperforms DDPG after convergence. The reason is that the GDM network can fine-tune the output through the denoising step, thus facilitating exploration for better actions. ## VII Miscellaneous Issues In this section, we discuss the applications of GDM to a number of other network issues, including channel estimation, error correction coding, and channel denoising. ### _Channel Estimation_ #### Vii-A1 Motivations In wireless communication systems, the wireless channel depends on a variety of factors such as fading, interference, and noise, which can lead to distortions in the received signal. Consequently, researchers introduce the channel estimation techniques to estimate the channel response, which can be used to mitigate the impacts caused by the aforementioned factors, thereby enhancing the quality of the received signal. 
As such, accurate channel estimation is crucial for reliable communication and efficient use of the available bandwidth [165]. So far, several kinds of channel estimation techniques have been proposed, including pilot-based, compressed sensing-based, etc. The pilot-based methods use known pilot symbols inserted in the transmitted signal to estimate the channel response. For instance, the minimum mean square error (MMSE) based method achieves channel estimation by multiplying the received signal with the conjugate of the transmitted signal, followed by division by the sum of the power of the transmitted signal and the noise variance. This method not only minimizes the mean square error between the received signal and the estimated signal but also considers the noise variance, which is important for determining the reliability of the estimated channel coefficients [166]. The compressed sensing-based methods exploit the sparsity of the channel response to estimate it from a small number of measurements. For example, the authors in [167] create a training signal using a random sequence with a known pilot sequence. At the receiver, first-order statistics and the compressed sensing method are applied to estimate the wireless channels with sparse impulse response. Unlike these two methods, data-driven methods employ machine learning algorithms to learn the channel response from the received signal without relying on any prior knowledge of the channel during the offline training phase. After trained, the data-driven methods can estimate the channel in an online phase. For instance, the authors in [168] first use the convolutional neural network (CNN) to extract channel response feature vectors, and then employs recurrent neural network (RNN) for channel estimation. Besides, there are some other techniques, such as optimization-based methods, which use mathematical optimization, such as convex optimization, to estimate the channel response, and hybrid methods that combine different techniques to improve the accuracy and efficiency of channel estimation. While effective, existing methods still faces several challenges. One of the main challenges is the dynamic nature of the channel, which means that the channel can change rapidly due to various factors such as mobility and interference. This requires channel estimation to be robust to test-time distributional shifts [169]. These shifts naturally occur when the test environment no longer matches the algorithm design conditions, especially at the user side (could be transmitter or receiver), where the propagation conditions may change from indoor to outdoor, whenever the user is moving. An effective solution to this challenge is to use GAI for robust channel estimation, because of the following main reasons. * The GAI model can extract complex patterns from large Fig. 20: Test reward curves of different solutions, i.e., GDM, DRL-DDPG, DRL-DQN, greedy, and random schemes, versus the training episodes in GDM-enabled IoV networks. amount of data and learn in a changing environment. This not only enhances the model's generalization ability but also enables it to adapt to the dynamic characteristics of the channel, thereby improving the robustness of the estimation. * The GAI model can directly learn the distribution of channel responses from the received signals and use the structure captured by the deep generative model as a prior for inference, eliminating the need for prior knowledge of the sparsifying basis. 
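Before turning to the diffusion-based case study, the pilot-based baseline sketched above can be made concrete in a few lines of Python. The single-tap flat-fading model, the BPSK pilots, and the exact normalization are illustrative assumptions and not the precise formulation of [166].

```python
import numpy as np

rng = np.random.default_rng(0)

def mmse_channel_estimate(y, x, noise_var):
    """Pilot-based estimate as described above: the received samples are
    multiplied with the conjugate of the known pilots and normalized by the
    pilot power plus the noise variance."""
    return np.vdot(x, y) / (np.vdot(x, x).real + noise_var)

# Illustrative single-tap (flat-fading) example; values are placeholders.
n_pilots, noise_var = 64, 0.1
x = (rng.integers(0, 2, n_pilots) * 2 - 1).astype(complex)     # BPSK pilots
h_true = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)       # true channel tap
noise = np.sqrt(noise_var / 2) * (rng.normal(size=n_pilots)
                                  + 1j * rng.normal(size=n_pilots))
y = h_true * x + noise                                         # received pilots

h_hat = mmse_channel_estimate(y, x, noise_var)
print(abs(h_hat - h_true))                                     # small residual error
```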
Next, we further illustrate applications of GAI in channel estimation, using MIMO channel estimation via GAI model as a case study [169]. #### V-A2 Case Study: MIMO Channel Estimation Utilizing Diffusion Model Channel estimation using diffusion model [169] primarily involves two phases: the training phase and the inference phase, as shown in Fig. 21. The training phase involves using a deep neural network to learn an underlying structure of the channel from a set of noisy channel estimates. The main steps include the following: * **Step 1**: Using the received pilot symbols to calculate the noisy channel estimation \(\mathbf{h}\). * **Step 2**: Adding the noise to the training channel \(\mathbf{h}\) to produce a perturbed channel \(\mathbf{\tilde{h}}\). * **Step 3**: Computing the gradient of \(\log_{p_{H}}\left(\mathbf{\tilde{h}}\right)\). * **Step 4**: Producing a regression target for the gradient using the diffusion model. * **Step 5**: Training the parameters of the deep neural network using back-propagation and the \(l_{2}\)-loss. The inference stage involves utilizing the trained model to estimate the channel based on a set of received pilot symbols. The primary steps are as follows: * **Step 1**: Updating the current channel estimation via the pilot consistency term, which enforces consistency between the received pilot symbols and the estimated channel. * **Step 2**: The diffusion update is applied to the channel estimate, which smooths out the estimate and helps to reduce noise. * **Step 3**: To prevent the model from converging to a sub-optimal solution, noise is added to the updated channel estimate at each step. * **Step 4**: The process is repeated until convergence, at which point the final estimate of the channel is produced. It is noteworthy that the iterative algorithm operates independently of the training phase and can accommodate other impairments such as interference scenarios or few-bit quantization of the received pilots. The proposed model is evaluated by training an NCNv2 model [170] on complex-valued channel matrices. The model architecture, RefineNet [171], comprises eight layers and approximately 5.2 million parameters. To accommodate complex-valued inputs, the real and imaginary components of the matrix are processed as two separate input channels. Training is performed on a dataset of \(20,000\) channel realizations, derived from the clustered delay line (CDL) channel model, with an equal distribution between two antenna spacings [169]. Fig. 2 in [169] presents the test results for in-distribution CDL channels in a blind SNR configuration with \(\alpha=0.4\). The top plot reveals that the comparison algorithm, WGAN [172], captures some aspects of the channel structure for very low antenna spacing. However, its performance peaks, about -26 dB, rapidly in high SNR conditions. Another comparative algorithm, i.e., Lasso [173], similarly exhibits a trend, with its peak value approximately at -22 dB. This effect is more pronounced with an antenna spacing of half wavelength and fewer structural components, indicating that neither baseline employs a suitable prior knowledge. In contrast, the diffusion-based approach exhibits a near-linear reduction in the normalized mean square error (NMSE), aligning with the theoretical findings in [174], without explicit learning of a prior. At an SNR level of 15 dB, the NMSE of the diffusion-based approach is over 12 dB lower than both baseline methods, underscoring the superiority of the diffusion-based approach. 
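The inference stage can be condensed into a compact sketch of the iterative loop. The score model, the step size, and the noise schedule below are placeholders chosen for illustration; they stand in for the trained network and the exact update rule of [169] and only mirror the alternation of pilot-consistency update, diffusion update, and noise injection described in Steps 1-4 above.

```python
import numpy as np

def estimate_channel(y_pilots, pilots, score_net, noise_levels,
                     steps_per_level=20, lr=1e-3):
    """Iterative channel estimation mirroring Steps 1-4 of the inference stage.

    y_pilots:     received pilot symbols, shape (n_pilots, n_rx).
    pilots:       known pilot matrix, shape (n_pilots, n_tx).
    score_net:    callable (H, sigma) -> estimate of the gradient of log p_H(H);
                  stands in for the trained network of the case study.
    noise_levels: decreasing noise scales used for annealing.
    """
    rng = np.random.default_rng(0)
    n_tx, n_rx = pilots.shape[1], y_pilots.shape[1]
    H = rng.normal(size=(n_tx, n_rx)) + 1j * rng.normal(size=(n_tx, n_rx))
    for sigma in noise_levels:
        for _ in range(steps_per_level):
            # Step 1: pilot consistency - gradient step on ||y - pilots @ H||^2.
            residual = y_pilots - pilots @ H
            H = H + lr * (pilots.conj().T @ residual)
            # Step 2: diffusion (score) update that smooths the estimate.
            H = H + lr * score_net(H, sigma)
            # Step 3: inject noise to avoid converging to a sub-optimal solution.
            H = H + np.sqrt(2.0 * lr) * sigma * (
                rng.normal(size=H.shape) + 1j * rng.normal(size=H.shape))
    # Step 4: the estimate after the smallest noise level is returned.
    return H

# Toy usage: identity pilots and a placeholder score model pulling towards zero.
toy_score = lambda H, sigma: -H / (1.0 + sigma ** 2)
P = np.eye(4)
H_hat = estimate_channel(P @ (np.ones((4, 2)) * (1 + 1j)), P, toy_score,
                         noise_levels=[1.0, 0.5, 0.1])
```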
### _Error Correction Coding_ #### V-B1 Motivations For wireless communications, it is crucial to design codes that can be decoded robustly in noisy channels. Fundamental decoding methods can be divided into hard decoding and soft decoding [175]. Specifically, hard decoding considers only the most probable value of the received signal, disregarding the information about signal quality. Conversely, soft decoding not only considers the most probable signal value, but also leverages additional information about signal quality to enhance decoding performance. Although these basic decoding strategies may be sufficiently effective in some cases, efficient decoding for more complex encoding systems, such as algebraic block codes, remains an unresolved issue [175]. In particular, decoding according to the maximum likelihood rule, that is, finding the codeword that makes the received signal most likely to appear, has been Fig. 21: During training, the noise is first added to \(\mathbf{h}\) to obtain \(\mathbf{\tilde{h}}\). Then a regression target for the gradient of \(\log_{p_{H}}\left(\mathbf{\tilde{h}}\right)\) is produced. After that, the \(l_{2}\)-loss is used to train the parameters of the deep neural network via back-propagation. After training, the current channel estimate is updated by a pilot consistency term, a diffusion update, and added noise, to achieve inference. proved to be a non-deterministic polynomial-time hard problem. In other words, an exponential search may be required to find the best decoding result, which is not feasible in practice. Recently, some works represented by model-free machine learning methods tried to solve this problem. In particular, in [176], a transformer-based decoder was proposed to embed the encoder into the considered architecture, where the results showed that it outperforms existing methods by a large margin at a fraction of the time complexity. However, this model-free approach suffers from several major drawbacks. First, it requires a lot of space and memory, which can be a problem on resource-constrained hardware. Second, since it does not employ an iterative solution, the same computationally intensive neural decoding process is required regardless of the degree of codeword corruption. To this end, the GDM is considered for decoding [177]. Specifically, GDM decodes the channel codewords in an iterative manner, which not only greatly reduces the computational complexity, but also adapts to different degrees of codeword corruption efficiently. In addition, GDM is able to regard the pollution of channel codewords as a forward diffusion process, and in this process, the channel corruption can be reversed by an adaptive denoising diffusion probability model. #### V-B2 Case Study _Denoising Diffusion Error Correction Codes:_ As shown in Fig. 22, the elements of the denoising diffusion used for decoding and the proposed architecture are summarized, where the training process is as follows. * **Decoding as a Reverse Diffusion Process** In this stage, a process of "forward diffusion" is used to process codewords sampled from a particular encoding distribution. Specifically, the process gradually transmits codewords by gradually adding a small amount of Gaussian noise, with the size of each step controlled by a specific variance table. Next, data transmission over a noisy communication channel is regarded as a modified iterative diffusion process that requires inversion at the receiving end to decode the original data. 
Finally, decoding is regarded as a reverse diffusion process, transforming the posterior probability into a Gaussian distribution as per the Bayesian theorem [176]. The goal of the decoder can be defined to predict the noise of the channel. * **Denoising via Parity Check Conditioning** In the decoding process, it is regarded as the reverse denoising process of the GDM, which relies on time steps and can reverse the entire diffusion process by sampling Gaussian noise corresponding to the final step. During training, a time step is randomly sampled, generating noise and a syndrome requiring correction. Owing to its invariance to the transmitted codeword, diffusion decoding can be trained using a single codeword. During inference, the denoising model predicts multiplicative noise, converts it into additive noise, and performs the gradient step in the original additive diffusion process. Fig. 4 in [178] shows BER obtained by three schemes in terms of the normalized SNR values, i.e., \(E_{b}/N_{0}\) (EbNo), over the Rayleigh fading channel environment. It shows that with the increment of the value of EbNo, the GDM-based scheme is superior to other benchmarks. In particular, when the EbNo is 4 dB, the BER obtained by GDM scheme is 50% of that obtained by binary phase (BP) scheme, and 11% of that obtained by error correction code transformer (ECCT) scheme [176]. The reason is that the GDM is able to learn to decode, even under some serious noisy fading channels. ### _Channel Denoising_ #### V-C1 Motivations GDM-based models are characterized by the ability to gradually add Gaussian noise to the training data, and then learn to restore the original data from the noise through a backsampling process. The process is similar to a receiver in a wireless communication system, which is required to recover the transmitted signal from the noisy received signal. Thus, in [179], a GDM-based wireless communication channel denoising model is designed, which can be used as a new module to predict and remove channel noise after channel equalization, thus improving the overall performance. In particular, it relies entirely on the forward diffusion process without any received signal. When applied to a semantic communication system based on joint source channel coding (JSCC), whether in Rayleigh fading channels or additive white Gaussian noise (AWGN) channels, the GDM-based channel denoising model can effectively reduce the distance between the transmitted signal and the received signal. #### V-C2 Case Study _GDM-based Channel Denoising Model:_ As shown in Fig. 23, the joint GDM and JSCC architecture is summarized, where the training process is as follows. * **Conditional Distribution of The Received Signals:** Real-valued and complex-valued symbols are transformed and transmitted in the wireless channel, where the transformation combines the effects of Rayleigh fading gain and additive white Gaussian noise. The received signal is then processed through an MMSE equalizer to produce an equalized complex signal. Study conditional Fig. 22: The denoising diffusion error correction codes architecture, where the decoding is performed via the reverse diffusion process [178]. distributions of real-valued vectors using known signal and channel state information. Based on the noise impact and channel state, the signal is reparameterized and a GDM-based channel denoising model is trained to obtain noise estimates. 
* **Training Algorithm of GDM:** In the training process of GDM, the original source signal is first represented in a new parameterized form. At the beginning of training, the Kullback-Leibler divergence [145] is mainly used to optimize the variational upper bound of the negative log-likelihood. During training, the optimal value of a key hyper-parameter is required to be determined. Next, the optimization objective for a series of loss functions is simplified by re-parameterization and re-weighting methods. Finally, the overall loss function is minimized, effectively recovering the original source signal. Figs. 5 and 6 in [179] show PSNR obtained by three schemes in terms of the SNR over the AWGN channel and Rayleigh fading channel environments. To achieve optimal performance, both GDM-based JSCC scheme and JSCC scheme are required to be retrained for a given SNR. It shows that for different values of SNR, the GDM-based JSCC scheme is superior to others. For example, over Rayleigh fading channel with SNR of 20 dB, compared with the JSCC scheme, the GDM-based JSCC scheme can obtain about 1.06 dB gain. ## VIII Future Directions This section elucidates potential research avenues warranting further examination. ### _Space-air-ground Integrated Network_ The Space-Air-Ground Integrated Network (SAGIN) is a promising paradigm for future wireless networks, characterized by its three-dimensional coverage, high capacity, and reliable communications [180, 181, 182]. However, the optimization of SAGIN is a complex task due to the high dimensionality of the network configuration, the heterogeneity of the network elements, and the dynamic nature of the network environment [183, 184]. GDMs, with their ability in complex data distribution modeling, could be a powerful tool for optimizing SAGIN. * **Dynamic Network Environment Modeling and Prediction:** The dynamic nature of the SAGIN environment poses a significant challenge for its optimization [181, 185]. GDMs can be used to model and predict these dynamic network environments. This would allow for more efficient resource allocation, network scheduling, and routing strategies, as the predictions could provide valuable insights into future network states [186]. * **Synthetic Network Scenario Generation:** Testing and validating network optimization algorithms require a variety of network scenarios [187]. GDMs can generate synthetic network scenarios that closely mimic real-world conditions, providing a robust platform for testing and validating these algorithms. * **Network Scheduling and Routing:** SAGIN involves a variety of network elements, each with its unique characteristics and requirements [188, 189]. GDMs can capture these unique characteristics and model the complex interactions between different network elements, facilitating more efficient network scheduling and routing strategies. ### _Extremely Large-Scale MIMO_ Extremely Large-Scale MIMO (XL-MIMO) is an emerging technology that is expected to play a pivotal role in the 6G of wireless mobile networks [190, 191]. XL-MIMO offers vast spatial degrees of freedom by deploying an extremely large number of antennas, leading to significant enhancements in spectral efficiency and spatial degrees of freedom. However, implementing XL-MIMO introduces new challenges, including the need for more flexible hardware designs, a much larger number of antennas, smaller antenna spacing, new electromagnetic characteristics, and near-field-based signal processing schemes [192, 193]. 
GDMs can be instrumental in addressing these challenges and optimizing the performance of XL-MIMO systems. Here are some potential research directions: * **Hybrid Channel Estimation and Modeling:** XL-MIMO systems involve a large number of antennas, leading to high-dimensional data [194], and also the co-existence of near-field and far-field channels within the coverage of cellular networks. Especially, in the near-field channel, the channel response vectors depend on both the distance and direction between the transceiver of each antenna element, unlike the far-field channel. Therefore, the increased "huge" complexity for near-field channel estimation may not be resolved with the conventional approaches. GDMs can be used to model and estimate such hybrid channel state information efficiently. They can exploit the inherent graph structure in the spatial domain, where antennas can be considered as nodes and the spatial correlation between antennas as edges. This can lead to more accurate and efficient channel estimation methods. Fig. 23: The joint GDM and JSCC system architecture, where GDM is trained using a specialized noise schedule [179]. * **Signal Processing:** The signal processing in XL-MIMO systems can be complex due to the large number of antennas and the near-field communication characteristics. Especially, in the latter case, the interference caused by multi-user transmissions can be effectively mitigated by utilizing the higher degree of freedom existing in the distance and direction of near-field channel response vectors. GDMs can be used to develop efficient signal processing algorithms that can handle high-dimensional data and exploit the spatial correlation in the antenna array. This can lead to improved performance in terms of data rate and reliability. * **Hardware Design and Implementation:** XL-MIMO systems involve different hardware designs, such as uniform linear array (ULA)-based, uniform planar array (UPA)-based, and continuous aperture phased (CAP)-based XL-MIMO. GDMs can be used to model and analyze these different designs, helping to understand their characteristics and interrelationships. This can guide the design and implementation of XL-MIMO systems. ### _Integrated Sensing and Communications_ The ISAC unifies wireless sensing and communication systems to efficiently employ limited resources for mutual benefits [195]. It is a key element in future wireless systems, supporting various applications like autonomous driving and indoor localization [40, 196]. The GDM can be utilized in ISAC systems for both data processing and generation. As a processing technique, it can classify and recover ISAC-related data. Moreover, it can generate synthetic ISAC data, a vital function for boosting the training efficiency of neural networks within the ISAC systems. Specifically, GDM has applications in various aspects of the ISAC system. * **ISAC Data Generation:** The GDM can be used to generate samples for ISAC network training. For example, in indoor localization based on received signal strength indication (RSSI), the authors in [197] proposed a GAN for RSSI data augmentation. This network generates fake RSSI based on a small set of real collected labeled data. Using these data, the experimental results show that overall localization accuracy of the system has improved by 15.36%. Compared to GAN, GDM has stronger inference capabilities, which enable it to generate better fake data, thereby further enhancing system performance. 
* **ISAC Data Processing:** Apart from data generation, GAI models are also commonly used to process ISAC data [198]. For instance, given that the GAN-based semi-supervised learning can handle unlabeled and labeled data, the authors in [199] introduced a complement generator that uses a limited amount of unlabeled data to generate samples for training the discriminator. Building on this, they further adjust the number of probability outputs and utilize manifold regularization to stabilize the learning process, enhancing the human activity recognition performance in both semi-supervised and supervised scenarios. ### _Movable Antenna System_ The future of wireless communication networks is expected to be shaped significantly by the integration of movable antennas [200, 201]. Movable or fluid antennas, unlike conventional fixed-position antennas, have the capability of flexible movement and can be deployed at positions with more favorable channel conditions to achieve higher spatial diversity gains [202]. This flexibility enables better coverage and adaptability to changing environmental conditions. By strategically relocating the antenna, it becomes possible to mitigate signal blockage or interference caused by various obstacles, including buildings and vegetation. Therefore, the movable antennas can reap the full diversity in the given spatial region [202]. The complex and dynamic nature of wireless environments, characterized by high-dimensional configurations and non-linear relationships, necessitates sophisticated models like GDMs that can capture such high-dimensional and complex structures. * **Optimization of Antenna Positioning:** GDMs can be used to optimize the positioning of movable antennas in real time. By modeling the wireless environment and the effects of different antenna positions, GDMs can generate optimal antenna positions that maximize signal strength and minimize interference. * **Dynamic Resource Allocation:** GDMs can be applied to the dynamic resource allocation problem in movable antennas. By modeling the resource demands and availability in the network, GDMs can generate optimal resource allocation strategies that balance the needs of different network users and maximize network efficiency [203]. * **Predictive Maintenance:** Based on historical data, GDMs can be used to predict potential failures in movable antennas. By modeling antenna performance and failure patterns, GDMs can generate predictions about future failures, allowing for proactive maintenance and minimizing network downtime. * **Integration with Reinforcement Learning:** As demonstrated in Section III, the integration of GDMs with reinforcement learning techniques can be further explored in the context of movable antennas. This can lead to more robust and efficient resource slicing and scheduling strategies, enhancing the performance of 5G networks [204] and autonomous vehicles [205]. ## IX Conclusions In this tutorial, the transformative potential of GDMs in the realm of intelligent network optimization has been thoroughly explored. The unique strengths of GDMs, including their broad applicability and capability to model complex data distributions, were studied. We highlighted their potential in enhancing the DRL algorithms and providing solutions in several key intelligent network scenarios, such as incentive mechanism design, SemCom, IoV networks, channel estimation, error correction coding, and channel denoising. 
These explorations demonstrated the practicality and efficacy of GDMs in real-world applications. The tutorial concluded by emphasizing the research directions of GDMs in shaping the future of intelligent network optimization and encouraging further exploration in this promising field.
2305.14513
Windscreen Optical Quality for AI Algorithms: Refractive Power and MTF not Sufficient
Windscreen optical quality is an important aspect of any advanced driver assistance system, and also for future autonomous driving, as today at least some cameras of the sensor suite are situated behind the windscreen. Automotive mass production processes require measurement systems that characterize the optical quality of the windscreens in a meaningful way, which for modern perception stacks implies meaningful for artificial intelligence (AI) algorithms. The measured optical quality needs to be linked to the performance of these algorithms, such that performance limits - and thus production tolerance limits - can be defined. In this article we demonstrate that the main metric established in the industry - refractive power - is fundamentally not capable of capturing relevant optical properties of windscreens. Further, as the industry is moving towards the modulation transfer function (MTF) as an alternative, we mathematically show that this metric cannot be used on windscreens alone, but that the windscreen forms a novel optical system together with the optics of the camera system. Hence, the required goal of a qualification system that is installed at the windscreen supplier and independently measures the optical quality cannot be achieved using MTF. We propose a novel concept to determine the optical quality of windscreens and to use simulation to link this optical quality to the performance of AI algorithms, which can hopefully lead to novel inspection systems.
Dominik Werner Wolf, Markus Ulrich, Alexander Braun
2023-05-23T20:41:04Z
http://arxiv.org/abs/2305.14513v1
# Windscreen Optical Quality for AI Algorithms: ###### Abstract Windscreen optical quality is an important aspect of any advanced driver assistance system, and also for future autonomous driving, as today at least some cameras of the sensor suite are situated behind the windscreen. Automotive mass production processes require measurement systems that characterize the optical quality of the windscreens in a meaningful way, which for modern perception stacks implies meaningful for artificial intelligence (AI) algorithms. The measured optical quality needs to be linked to the performance of these algorithms, such that performance limits - and thus production tolerance limits - can be defined. In this article we demonstrate that the main metric established in the industry - refractive power - is fundamentally not capable of capturing relevant optical properties of windscreens. Further, as the industry is moving towards the modulation transfer function (MTF) as an alternative, we mathematically show that this metric cannot be used on windscreens alone, but that the windscreens a novel optical system together with the optics of the camera system. Hence, the required goal of a qualification system that is installed at the windscreen supplier and independently measures the optical quality cannot be achieved using MTF. We propose a novel concept to determine the optical quality of windscreens and to use simulation to link this optical quality to the performance of AI algorithms, which can hopefully lead to novel inspection systems. windscreen optical quality, AI algorithms, computer vision, refractive power, MTF ## I Introduction Every car has a windscreen. The number of newly produced windscreens therefore ranges in the millions every year. Following quality processes for automotive mass production established since the 1960ies - like the outdated ISO/TS 16949 [18] or the more recent VDA6.3 [31] - these windscreens are tested end-of-line (EOL) at the suppliers (Tier 1) production line using well-defined optical measurements. Importantly, the windscreen quality is measured at the production site alone, independent of any production tolerances that may arise during assembly of the whole car at the site of the car manufacturer (original equipment manufacturer, OEM). Economically, this is mandatory, as a thorough testing of the whole windscreen after assembly by the OEM is prohibitively expensive. For several decades the optical quality of these windscreens has been judged acceptable if humans could look through it with low impact on the perception of the driver. With the rise of advanced driver assistance systems (ADAS) and the future promise of autonomous driving (AD) many cars nowadays are equipped with several camera systems, many of which are situated behind the windscreen. A camera is not a human observer, and it is now not enough to qualify a windscreen using human perception, especially as the quality and resolution of the cameras are steadily increasing. The influence of the optical quality on the image quality and further on the computer vision algorithms evaluating these images has to be precisely determined. In theory, the working limits of the computer vision algorithms are determined, and production tolerance limits are derived from these algorithmic working limits through a number of processes defined in the above mentioned quality norms. 
Opto-mechanical tolerance calculations, numerical simulations and test campaigns in the real world form three important pillars of these studies accompanied by environmental stress tests and aging simulations [7, 15]. In practice, though, modern camera-based ADAS applications are based on artificial intelligence (AI) and employ deep convolutional neural networks, due to their superior performance in comparison to traditional, rule-based computer vision algorithms. The difference in performance is such that currently there is no alternative to using AI algorithms. As these AI algorithms are 'black boxes' in nature [13], i.e. the output cannot be predicted, the link between optical quality and AI algorithm performance cannot be easily established [25]. And due to the lack of quantitative working limits w.r.t. the AI algorithms, production tolerance limits for the windscreens can not be straightforwardly deduced [4]. In this article we are evaluating the two main measurement processes that are currently used in the automotive industry to qualify windscreen optical quality: refractive power and the modulation transfer function (MTF). While refractive power is the established measurement method and has been standardized already in the 1990ies [6, 9], the MTF - or equivalently the spatial frequency response (SFR) - has gained recent attention as automotive researchers [32] look for alternatives to refractive power because of the increasing ADAS camera performances in terms of the number of pixels per field angle. Novel startups are even forming around the promise of using MTF to characterize windscreens. We find and mathematically demonstrate in this work that both refractive power and MTF are not sufficient to quantify windscreen quality for AI algorithm performance. This is a fundamental finding in that our results are derived from first principles of optics, and apply very generally. First, we recapitulate the optical basics in Sec. II. Importantly, the optical quality is described in terms of wavefront aberrations, using the Zernike formalism to mathematically decompose the nature of the optical perturbations. Then in Sec. III, using these basics we show how refractive power is fundamentally not capable of accounting for a distinct number of wavefront aberrations, while at the same time these aberrations have a demonstrable effect on AI algorithm performance [24, 26]. In Sec. IV we then show how the windscreen and the camera system form a joint optical system, that - again fundamentally - cannot be separated into two distinct optical systems. This separation, though, is a necessary requirement in linear system theory for the multiplicativity of the system MTF w.r.t. the individual optical elements [12]. Therefore, this prohibits any MTF measurement on the windscreen alone, and thus from using MTF as a qualifying measurement at the production site of the Tier 1. Optical quality has many different aspects. For this article, we concentrate solely on the'sharpness' of the camera image, which is deteriorated by optical path variations across the windshield plane and is typically quantified by the MTF in optical linear system theory. In general, lens distortions, which describe the failure of a lens to map lines into lines and represent a curvilinear mapping [28], might also deteriorate the performance of ADAS functionalities. Effects of optical distortions will not be considered in the following. 
In summary, we will show how the two only current measurement techniques in the automotive industry are not sufficient to measure the sharpness of the windscreen alone. These results have far reaching implications for the automotive industry, which needs to focus more effort on finding alternatives. We finally propose a concept on how to find a novel measurement process, combining optical modeling, numerical simulation and AI algorithms to link the optical quality of windscreens to the performance of AI algorithms. ## II Optical Quality and Mathematical Models Maxwell's equations are the fundamental physical model of electromagnetic radiation, and the wave equation forms the basis for the technological application of light. If all elements in the optical system are large compared to the wavelength of the light, geometrical optics may be used. It plays an important role in the development of optical systems as well, in the form of raytracing simulations. A windscreen is large in mechanical dimensions, both laterally as well as axially, but previous work has shown that the aberrations originating inside the windscreen cannot be neglected [5, 20]. Thus, it is not sufficient to take only the geometry of the windscreen into account - which would allow for a raytracing approach - but a comprehensive optical model needs to be based on the wave description of light. This is why in the following we use the fundamental Zernike approach [10] to model wavefront aberrations, where the optical path difference mathematically models the aberrations present in the windscreen. ### _Wavefront Modelling with Zernike Polynomials_ The optical path difference \(W\), defined on the principle plane, is usually expressed as a decomposition into Zernike polynomials \(Z_{n}\) with corresponding Zernike coefficients \(c_{n}\) (in units of meters) as [3]: \[W(\rho,\;\phi)=\sum_{n=0}^{\infty}c_{n}Z_{n}(\rho,\;\phi)\;\;\;,\;\;c_{n}: \stackrel{{(2)}}{{=}}\left\langle W,\;Z_{n}\right\rangle\;. \tag{1}\] Here, the domain of the principle plane of the optical element is parameterized by normalized polar coordinates with radius \(\rho\) and polar angle \(\phi\). There are different numbering schemes for Zernike polynomials, i.a. a linear numbering scheme according to the American National Standards Institute (ANSI) which has been adopted within this work. The Zernike polynomials reproduce the aberration pattern on the unit circle and correspond to different, independent optical perturbations like defocus or astigmatism. The independence of the perturbations is mathematically reflected by the orthogonality relation of the scalar product: \[\left\langle Z_{i},\;Z_{j}\right\rangle\coloneqq\int\limits_{0}^{2\pi}\int \limits_{0}^{1}Z_{i}(\rho,\;\phi)\cdot Z_{j}(\rho,\;\phi)\cdot\rho\;\mathrm{d} \rho\;\mathrm{d}\phi=\pi\cdot\delta_{ij}\;. \tag{2}\] This is important, because we will demonstrate how certain Zernike polynomials are simply not present in the refractive power measurement, and the orthogonality fundamentally implies that this information can not be recovered. Table I indicates the normalized Zernike polynomials defined by ISO 24157 [17] up to the third order. ### _Refractive Power_ Refractive power measures how much focusing power a lens has. It is given in units of diotters, i.e. in inverse distance of the focal length of the lens. 
A comprehensible way to visualize refractive power is two parallel light rays entering the optical system - here: the windscreen - and upon exit are not parallel anymore, but either divergent or convergent. In the convergent case, the focal length is the distance from the refractive element to the intersection of the two rays, and its inverse is the numerical value of the refractive power. For concave lenses, the diverging rays are extended in the negative direction until these two rays intersect, and the negative distance now forms the focal length. For windscreens, the refractive power is not a single number for the whole glass, but the measurement has a spatial resolution as depicted by Fig. 1. In the early days, two actual parallel laser beams were deflected, and the whole setup was laterally moved to achieve a certain spatial resolution [6]. More modern systems such as the one produced by ISRA use the Moire effect to spatially resolve the refractive power over a limited area by observing the location dependency of the perturbed grid spacing between Moire interferences [23]. In addition, new refractive power measurement systems like the one produced by LaVision [22] use the Background Oriented Schlieren (BOS) imaging method to overcome the resolution limitation of the Moire approach [33]. Importantly, the refractive power depends on the direction, as the two parallel rays form a plane together with the principal plane of the optical element. In principle, this direction can be rotated full circle by \(360^{\circ}\), but in practice, the refractive power is determined and specified only in the horizontal and vertical direction.
\begin{table} \begin{tabular}{c|l|l|l} \hline \(Z_{i}\) & \multicolumn{2}{c}{Zernike polynomial} & Harmonic \\ & Polar coordinates & Cartesian coordinates & \\ \hline \hline \(Z_{0}\) & 1 & 1 & ✓ \\ \(Z_{1}\) & \(2\rho\sin\phi\) & \(2y\) & ✓ \\ \(Z_{2}\) & \(2\rho\cos\phi\) & \(2x\) & ✓ \\ \(Z_{3}\) & \(\sqrt{6}\rho^{2}\sin 2\phi\) & \(2\sqrt{6}xy\) & ✓ \\ \(Z_{4}\) & \(\sqrt{3}(2\rho^{2}-1)\) & \(\sqrt{3}(2x^{2}+2y^{2}-1)\) & \(\times\) \\ \(Z_{5}\) & \(\sqrt{6}\rho^{2}\cos 2\phi\) & \(\sqrt{6}(x^{2}-y^{2})\) & ✓ \\ \(Z_{6}\) & \(\sqrt{8}\rho^{3}\sin 3\phi\) & \(\sqrt{8}(3x^{2}y-y^{3})\) & ✓ \\ \(Z_{7}\) & \(\sqrt{8}(3\rho^{3}-2\rho)\sin\phi\) & \(\sqrt{8}(3x^{2}y+3y^{3}-2y)\) & \(\times\) \\ \(Z_{8}\) & \(\sqrt{8}(3\rho^{3}-2\rho)\cos\phi\) & \(\sqrt{8}(3x^{3}+3xy^{2}-2x)\) & \(\times\) \\ \(Z_{9}\) & \(\sqrt{8}\rho^{3}\cos 3\phi\) & \(\sqrt{8}(x^{3}-3xy^{2})\) & ✓ \\ \hline \end{tabular} \end{table} TABLE I: Zernike polynomials up to the third order.
### _Modulation Transfer Function_ The modulation transfer function (MTF) - and its non-harmonic equivalent, the spatial frequency response (SFR) - are established metrics to characterize optical systems, based on linear system theory and scalar diffraction theory [2, 10]. In image space, the transfer function of the system under test is called the point spread function (PSF), and in frequency space, it is denoted as the optical transfer function (OTF). The MTF is given by the absolute value of the OTF and is of particular importance if the intensity distribution is the matter of interest. The PSF and the MTF are highly non-linear functions over the image field (radius, azimuth), and they also depend on the defocus \(\Delta z\), due to the refractions on different lens element surfaces. Hence, the input space of the PSF is in general three dimensional.
The MTF is measured by using either harmonic input signals (MTF, e.g. sinusoidal Siemens star) or a step function type input (SFR, e.g. slanted edge). ISO12233 defines a norm to measure the MTF [16], and IEEE P2020 is currently finalizing an automotive extension of this norm [14]. In this article, we will use slanted edge measurements. According to scalar diffraction theory, the MTF is proportional to the absolute value of the Fourier transform of the wavefront in the aperture plane of the lens (more general: the optical element). The wavefront is transformed, normalized, and the absolute value is taken to yield the MTF. This allows for an analytical relationship between the MTF and the wavefront aberrations, which can be parameterized by Zernike coefficients \(c_{n}\): \[\mathrm{MTF}(\vec{k}\,|\,\lambda)=\left|\frac{\int_{P_{+}\cap P_{-}}\exp\Big{[}\frac{2\pi\mathrm{i}}{\lambda}\Big{(}W\big{(}\vec{r}+\tfrac{\vec{s}}{2}\big{)}-W\big{(}\vec{r}-\tfrac{\vec{s}}{2}\big{)}\Big{)}\Big{]}\,\mathrm{d}^{2}r}{\int_{P}\mathrm{d}^{2}r}\right|\;, \tag{3}\] where \(P_{\pm}\) denotes the pupil area \(P\) shifted by \(\pm\vec{s}/2\), the shift \(\vec{s}\) being proportional to the wavelength \(\lambda\) and the spatial frequency \(\vec{k}\), and \(P_{+}\cap P_{-}\) is the overlap region of the two shifted pupils. If the Zernike polynomials of Table I are transformed into Cartesian coordinates it becomes obvious that the difference in Eq. (3) vanishes for Zernike polynomials of zeroth order. For the y-tilt \(Z_{1}\) and the x-tilt \(Z_{2}\), the integrand evaluates to a constant phasor. Hence, it holds that: \[\mathrm{MTF}(\vec{k}\,|\,\lambda)\Big{|}_{c_{0},\,c_{1},\,c_{2}}\stackrel{{(3)}}{{=}}\mathrm{MTF}(\vec{k}\,|\,\lambda)\Big{|}_{c_{0}=c_{1}=c_{2}=0}\;, \tag{4}\] i.e. the MTF is insensitive to the piston and tilt contributions \(c_{0}\), \(c_{1}\) and \(c_{2}\) of the wavefront.
of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike 
polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials of Zernike polynomials 
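The invariance stated in Eq. (4) is easy to verify numerically. The sketch below is our own illustration in arbitrary units; it computes the MTF as the normalized modulus of the Fourier transform of the PSF of the aberrated pupil, which is equivalent to the overlap integral of Eq. (3), and shows that adding a pure tilt term leaves the MTF unchanged to machine precision. The grid size, aperture scaling, and aberration values are assumptions for the demonstration only.

```python
import numpy as np

N = 512
x = np.linspace(-2, 2, N)                 # pupil-plane coordinates, aperture radius = 1 (a.u.)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= 1.0).astype(float)

def mtf(W):
    """MTF from a wavefront aberration map W (in waves) over the unit aperture."""
    pupil = aperture * np.exp(2j * np.pi * W)
    psf = np.abs(np.fft.fft2(pupil))**2     # point spread function
    otf = np.fft.fft2(psf)                  # optical transfer function
    return np.abs(otf) / np.abs(otf[0, 0])  # normalize to zero spatial frequency

W_defocus = 0.4 * (X**2 + Y**2)             # some non-trivial aberration
W_tilted = W_defocus + 3.0 * X + 1.5 * Y    # same aberration plus x- and y-tilt

# Difference is at machine precision: tilt does not change the MTF, cf. Eq. (4)
print(np.max(np.abs(mtf(W_defocus) - mtf(W_tilted))))
```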
The Zernike decomposition coefficients \(c_{i}\) are uniquely determined if \(|\mathbf{\mathcal{M}}^{T}\mathbf{\mathcal{M}}|\neq 0\). In other words, the Gramian matrix \(\mathbf{\mathcal{M}}^{T}\mathbf{\mathcal{M}}\) has to be invertible, wherefore \(\mathbf{\mathcal{M}}^{T}\mathbf{\mathcal{M}}\) needs to have full rank. If this condition is fulfilled, then the Zernike coefficient vector \(\vec{c}\) can be retrieved from the measured local wavefront gradient vector \(\vec{\beta}\) by: \[\vec{c}\overset{(8)}{=}\rho_{a}\cdot\left[\mathbf{\mathcal{M}}^{T}\mathbf{\mathcal{M}}\right]^{-1}\cdot\mathbf{\mathcal{M}}^{T}\cdot\vec{\beta}. \tag{9}\] ### _From Wavefront Aberration Maps to local Refractive Power_ From Sec. III-A we know how to determine the Zernike coefficients \(c_{i}\), wherefore we can reconstruct the wavefront aberration map according to Eq. (1). If the reference wavefront has been characterized by a plane wave, then the local refractive power of an optical element is given by the second derivative of the wavefront aberration map \(W\) with respect to the axis of interest [29, 30]. Hence, the refractive power \(D_{x_{i}}\) along the axis \(x_{i}\) is given by: \[D_{x_{i}}(\vec{x}_{a})=\frac{\partial^{2}}{\partial x_{i}^{2}}W(\vec{x}_{a}). \tag{10}\] Here, the input vector \(\vec{x}_{a}\in\mathbb{R}^{2}\) is restricted to the principal plane of the refractive element. The validity of Equation (10) can be proven for the special case of a spherical thin lens: \[f_{x_{a_{1}}}^{2}=x_{a_{1}}^{2}+\left(f_{x_{a_{1}}}-W(x_{a_{1}})\right)^{2},\ \text{w.l.o.g.:}\ x_{a_{2}}\overset{!}{=}0\] \[\Rightarrow W(x_{a_{1}})=f_{x_{a_{1}}}\Bigg(1-\sqrt{1-\left(\frac{x_{a_{1}}}{f_{x_{a_{1}}}}\right)^{2}}\Bigg)\] \[\Leftrightarrow W(x_{a_{1}})=f_{x_{a_{1}}}\Bigg(1-\Bigg(1-\frac{1}{2}\left(\frac{x_{a_{1}}}{f_{x_{a_{1}}}}\right)^{2}+\mathcal{O}\left\{\left(\frac{x_{a_{1}}}{f_{x_{a_{1}}}}\right)^{4}\right\}\Bigg)\Bigg)\] \[\Rightarrow W(x_{a_{1}})\approx\frac{x_{a_{1}}^{2}}{2f_{x_{a_{1}}}}=:\frac{D_{x_{a_{1}}}}{2}\cdot x_{a_{1}}^{2}\] \[\Rightarrow D_{x_{a_{1}}}\overset{(10)}{=}\frac{\partial^{2}}{\partial x_{a_{1}}^{2}}W(x_{a_{1}})=D_{x_{a_{1}}}.\qquad\blacksquare\]
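Equation (10) can also be checked numerically, mirroring the analytic argument above. The following sketch is our own illustration, not the evaluation code used for the measurements discussed below; the focal length is an assumed value chosen to give roughly 100 mdpt, similar in magnitude to the reference lens used later. It builds the optical path difference of an ideal spherical thin lens and recovers its refractive power from a central second difference, the same scheme applied to the measured data below.

```python
import numpy as np

f = 10.0                                    # assumed focal length in metres (~100 mdpt)
x = np.linspace(-0.04, 0.04, 2001)          # aperture coordinate (m), |x| << f
dx = x[1] - x[0]

W = f * (1.0 - np.sqrt(1.0 - (x / f)**2))   # OPD of an ideal spherical thin lens

# Local refractive power via a central second difference, cf. Eq. (10)
D = (W[2:] - 2.0 * W[1:-1] + W[:-2]) / dx**2

print(D.mean(), 1.0 / f)                    # both ~0.1 dpt, i.e. ~100 mdpt
```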
### _Information Content of Refractive Power Measurements_ Eq. (10) determines the relationship between refractive power measurements \(D\) and wavefront aberration measurements \(W\) via the curvature of the optical path difference map. In Sec. II-C we introduced the concept of the PSF as a Fourier optical merit function, which serves as the impulse response function or the Green's function of an optical system [21]. In addition to the Fourier optical approach there is also a ray optics approximation to describe the PSF in terms of the area of a blurring ellipse, which encloses a certain amount of light around the focusing spot in relation to the total amount of energy entering the system through the aperture stop. The area of this blurring ellipse is proportional to the Gaussian curvature [8] of the wavefront aberration map or, equivalently speaking, proportional to the determinant of the Hessian matrix of the wavefront aberration function [29]: \[\oiint_{\mathcal{C}}\mathrm{PSF}(\vec{x}_{o})\Big{|}_{\hat{z}_{o}}\mathrm{d}^{2}x_{o}\propto\left|\left(\begin{array}{cc}\frac{\partial^{2}}{\partial x_{1}^{2}}W(\vec{x}_{a})&\frac{\partial}{\partial x_{1}}\frac{\partial}{\partial x_{2}}W(\vec{x}_{a})\\ \frac{\partial}{\partial x_{1}}\frac{\partial}{\partial x_{2}}W(\vec{x}_{a})&\frac{\partial^{2}}{\partial x_{2}^{2}}W(\vec{x}_{a})\end{array}\right)\right|. \tag{12}\] Here, \(\mathcal{C}\) denotes the contour confining the domain of integration, which is given by the blurring ellipse. Due to the relationship presented in Eq. (10), this matrix is also known as the dioptric power matrix \(\mathbf{\mathcal{D}}\)[11]. The determinant can be rewritten in terms of the traces of the dioptric power matrix: \[\oiint_{\mathcal{C}}\mathrm{PSF}(\vec{x}_{o})\Big{|}_{\hat{z}_{o}}\mathrm{d}^{2}x_{o}\propto\frac{1}{2}\left[\left(\mathrm{tr}\left(\mathbf{\mathcal{D}}\right)\right)^{2}-\mathrm{tr}\left(\mathbf{\mathcal{D}}^{2}\right)\right]. \tag{13}\] So far, the automotive industry exclusively specifies requirements in terms of the refractive power w.r.t. the horizontal and vertical directions. Consequently, only the trace of \(\mathbf{\mathcal{D}}\) is measured and off-diagonal elements in the Hessian matrix are not investigated. This demonstrates that there is a blind spot in the quality assurance chain at the moment. This conclusion can be further underpinned by a mathematical argument. The trace of \(\mathbf{\mathcal{D}}\) is given by: \[\mathrm{tr}\left(\mathbf{\mathcal{D}}\right)=\sum_{i=1}^{d}D_{x_{i}}(\vec{x}_{a})=\triangle W(\vec{x}_{a}). \tag{14}\] Consequently, the trace of \(\mathbf{\mathcal{D}}\) is unaffected by wavefront aberration fields which fulfill the Laplace equation: \[\triangle\Gamma(\vec{x}_{a})\overset{!}{=}0. \tag{15}\] As a result, the trace of \(\mathbf{\mathcal{D}}\) is gauge invariant under aberration fields \(\Gamma(\vec{x}_{a})\) that are composed of harmonic functions. Hence, Zernike polynomials in Table I that are harmonic functions (like astigmatism or trefoil) will not alter the trace of \(\mathbf{\mathcal{D}}\). In a nutshell, refractive power measurements are not sensitive to optical distortions quantified by \(c_{1}\) and \(c_{2}\). Furthermore, the refractive power is invariant under oblique astigmatism given by \(c_{3}\) if the refractive power requirements are specified exclusively along the horizontal and vertical axis, as is the current governing standard in the automotive industry. Finally, those quality standards are insufficient for extracting more fundamental information about the optical system in terms of the \(\mathrm{PSF}\). Nonetheless, the aberrations associated with these polynomials have been shown to influence the performance of AI algorithms [24, 26].
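To make this blind spot concrete, the following minimal sketch (our own illustration, not code from any of the measurement systems discussed here; the Zernike normalization is simplified to \(W\propto x_{1}x_{2}\)) evaluates the dioptric power matrix for a wavefront containing only oblique astigmatism. Its trace, and hence any horizontal or vertical refractive power reading, vanishes by Eq. (14), while the determinant that controls the blurring ellipse of Eq. (12) does not.

```python
import sympy as sp

x1, x2, c3 = sp.symbols("x1 x2 c3", real=True)

# Oblique astigmatism in Cartesian form (normalization simplified)
W = 2 * c3 * x1 * x2

# Dioptric power matrix: Hessian of the wavefront aberration map, cf. Eq. (12)
D = sp.Matrix([[sp.diff(W, x1, 2), sp.diff(W, x1, x2)],
               [sp.diff(W, x1, x2), sp.diff(W, x2, 2)]])

print(D)            # Matrix([[0, 2*c3], [2*c3, 0]])
print(D.trace())    # 0       -> horizontal/vertical refractive power sees nothing, Eq. (14)
print(D.det())      # -4*c3**2 -> blurring-ellipse area is nonzero, Eqs. (12)-(13)
```

Only a measurement that also captures the mixed-derivative (off-diagonal) term, or the full wavefront, detects this aberration.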
### _Experimental Verification_ Since Eq. (10) is not well established in the automotive industry, we experimentally demonstrate the validity of the relationship by a Shack-Hartmann wavefront measurement of a calibration lens. The lens under test was produced by Zeiss and is traced back to national standards by an accredited calibration authority. The local wavefront gradients \(\beta_{i}\) are measured by a Shack-Hartmann sensor and the refractive power is retrieved by utilizing Eq. (10). As demonstrated by Eq. (8), the Shack-Hartmann measurement yields the first derivative of the wavefront. Here, we measure the lens and numerically determine the second derivative by a simple central difference scheme, which should result in the specified refractive power. Fig. 3 illustrates the outcome w.r.t. the refractive power map over the lens aperture. From the frequency distribution of the local refractive power across the entire principal plane, the expectation value for the global refractive power of the optical element in the \(x\)- and \(y\)-plane can be deduced. The expectation values meet the certified refractive power values of the calibration lens within the uncertainty intervals. Hence, the validity of Eq. (10) has also been experimentally confirmed. Summarizing, we have demonstrated that fundamentally several optical aberrations are not captured by a refractive power measurement. The image quality can be deteriorated even though the refractive power measurement indicates a compliant windscreen sample. From previous studies on the effect of oblique astigmatism (\(c_{3}\)) on road sign classification [24, 26] it becomes evident that refractive power measurements are insufficient for specifying the quality of a windshield in order to ensure reliable computer vision for autonomous driving vehicles. ## IV Modulation Transfer Function In this section, we will demonstrate why the windscreen and the camera form a joint optical system that cannot be separated into two independent constituents, such that the MTF cannot be determined for the two systems separately. First, we argue how the refractive power of the windscreen interacts with the focal length of the camera system. In a second step, this is experimentally verified using an MTF measurement with and without a windscreen. A discussion elaborates on several implications for the production and testing process. ### _Field Curvature_ The focal length of an imaging system varies over the field of view, with the so-called _field curvature_ being a prominent optimization goal for any lens designer. Semiconductor production processes yield completely flat image sensors, which is a challenge for the imaging optics, as the field curvature needs to be flat as well to minimize aberrations. This field curvature, as a design property of the lens, is given by the offset \(\Delta z_{\text{fc}}\) over field in units of length, typically in the micrometer range. A symbolic field curvature is visualized in Fig. 4. As explained above, the refractive power of the windscreen leads to parallel rays converging or diverging. Taking the two elements windscreen and lens together yields a second focus offset \(\Delta z_{\text{ws}}\) for the camera system, as the converging (diverging) rays will shorten (prolong) the focal length of the camera system. Fig. 4 depicts this situation. The two offsets are added for the system offset, such that: \[\Delta z=\Delta z_{\text{ws}}+\Delta z_{\text{fc}}. \tag{16}\] Importantly, both \(\Delta z_{\text{ws}}\) and \(\Delta z_{\text{fc}}\) can have positive or negative values, and thus the overall offset may vanish when these terms cancel. A vanishing offset value implies a sharpening of the system. Here, an MTF measurement of the camera alone would yield a certain number, while putting a windscreen in front of the camera would act like glasses and the image would become sharper. That this is indeed the case in practice is presented in the following section.
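As a rough plausibility check (our own back-of-the-envelope estimate, not a value from the measurements reported here), the focus shift introduced by a small added power \(D_{\text{ws}}\) placed in front of a thin lens of focal length \(f\) is approximately \(\Delta z_{\text{ws}}\approx-D_{\text{ws}}\,f^{2}\). For camera focal lengths of a few millimetres (an assumed typical value) and windscreen refractive powers of a few tens of millidioptres, this lands in the same micrometre range as typical field curvature offsets, which is why the two terms in Eq. (16) can realistically cancel or add.

```python
# Thin-lens estimate of the windscreen-induced focus offset. Assumed geometry:
# the windscreen power simply adds to the lens power; all numbers are illustrative.
f_lens = 6e-3                                   # camera focal length in metres (assumed)
dz_fc = 2e-6                                    # assumed field-curvature offset at one field point (m)
D_ws_values = [-0.06, -0.03, 0.0, 0.03, 0.06]   # windscreen refractive power in dioptres

for D_ws in D_ws_values:
    dz_ws = -D_ws * f_lens**2                   # focal shift from the added power
    dz_total = dz_ws + dz_fc                    # Eq. (16)
    print(f"D_ws = {1e3 * D_ws:+5.0f} mdpt -> dz_ws = {1e6 * dz_ws:+6.2f} um, "
          f"total = {1e6 * dz_total:+6.2f} um")
```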
### _Experimental Validation_ Fig. 5 depicts two slanted edge measurements, one without a windscreen (5a), and one with a windscreen placed in front of the camera system (5b). The insets indicate the MTF values derived from the numerical evaluation for all four edges, using an ISO12233-compliant algorithm [16]. There are two horizontal and two vertical values. The two vertical values (top and bottom) distinctly decrease from \(52\pm 1.5\,\%\,\,[95\,\%]\) to \(39\pm 1.7\,\%\,\,[95\,\%]\) when a windscreen is placed in front of the camera. However, for the horizontal direction (left and right) the MTF values both significantly increase from \(45/47\pm 1.5\,\%\,\,[95\,\%]\) to \(52/54\pm 1.6\,\%\,\,[95\,\%]\) when the windscreen is placed in front of the camera. The results experimentally confirm that the defocus \(\Delta z_{\text{ws}}\) and \(\Delta z_{\text{fc}}\) may cancel to a certain degree, increasing the sharpness like glasses would do for a myopic person. This conclusion has been well established in physics for decades [12], but its implications for the quality assurance testing procedure of ADAS systems in the automotive industry are not widely appreciated.

Fig. 4: Windscreen and lens form a joint optical system. \(H\) and \(H^{\prime}\) are the principal planes of the lens, \(f\) is the nominal focal length. The blue line visualizes the field curvature (not to scale). Normally, parallel rays are focused onto the field curvature (yellow line). Windscreen refractive power shortens or prolongs the effective focal length of the lens (red line). There are two different focus offsets \(\Delta z_{\text{fc}}\) and \(\Delta z_{\text{ws}}\) which may add or even cancel at different fields of view.

Fig. 3: Wavefront measurement performed on a \(\langle D\rangle=100.3\) mdpt reference lens. In order to cover the entire aperture of the lens, several Shack-Hartmann measurements have been stitched together. This procedure has introduced artifacts, which are visible in the measurement data by strongly pronounced vertical and horizontal lines. In total, 15 measurements have been performed over the calibration lens aperture of \(d=10\) cm.

### _Discussion_ Both the field curvature of the lens and the refractive power of the windscreen are spatially variant. The field curvature not only varies over field (radially), as the name implies, but due to production tolerances the rotation symmetry of the lens is usually broken to a certain degree. The field of view of the lens projected on the windscreen yields a trapezoidal cutout (cf. Fig. 1). I.e., the (almost) rotational symmetry of the lens projected on the windscreen combines with the local refractive power variation of the windscreen in this cutout. Not only that, but the windscreen also has distinctly different refractive power for the horizontal and the vertical direction, as given by the Kerkhof model [33]. Taken together, it is apparent that a windscreen cannot be qualified by an MTF measurement if both the windscreen and the camera are measured separately. The experimental sharpening unambiguously demonstrates a non-linear process, proving that the two elements cannot be separated using linear system theory (read: the individual MTFs cannot be multiplied). The way individual production tolerances will add is not predictable. In brief, it is not possible to determine individual MTF limits, as the combination of individual tolerances may hold both good and bad surprises, either sending a good system to scrap or a bad system to the field.
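The non-multiplicativity can be reproduced in a short numerical experiment. The sketch below is our own illustration with arbitrary units and a pure defocus aberration; it is not the evaluation pipeline used for Fig. 5, and the specific defocus values are assumptions. A windscreen contribution that removes part of the camera's residual defocus raises the joint MTF above the camera-only value, while the product of the two individually computed MTFs does not predict the joint result.

```python
import numpy as np

N = 512
x = np.linspace(-2, 2, N)                 # pupil plane, aperture radius = 1 (a.u.)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
aperture = (R2 <= 1.0).astype(float)

def mtf(defocus_waves):
    """MTF of a circular pupil carrying a pure defocus term (in waves)."""
    pupil = aperture * np.exp(2j * np.pi * defocus_waves * R2)
    psf = np.abs(np.fft.fft2(pupil))**2
    otf = np.fft.fft2(psf)
    return np.abs(otf) / np.abs(otf[0, 0])

cam_defocus = 0.5      # assumed residual camera defocus (waves)
ws_defocus = -0.3      # assumed windscreen contribution of opposite sign (waves)

m_cam = mtf(cam_defocus)
m_ws = mtf(ws_defocus)
m_joint = mtf(cam_defocus + ws_defocus)   # aberrations add in the joint pupil

k = 30                 # an arbitrary spatial-frequency bin for comparison
print("camera alone    :", round(m_cam[0, k], 3))
print("camera + screen :", round(m_joint[0, k], 3))                 # sharper than camera alone
print("product of MTFs :", round(m_cam[0, k] * m_ws[0, k], 3))      # misses the joint value
```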
Therefore, any solution using MTF would have to measure the MTF on the combined system of produced windscreen and camera system with their individual production tolerances. This could be either at the production site of the Tier 1 or the OEM. But there are still several important open questions that make this an unattractive proposal: if an assembly is non-compliant, is it worth finding a compliant combination, and does it make economic sense? How big are the assembly tolerances when fitting the windscreen into the car body? If the OEM wants the measurement system at the site of the Tier 1, one should be aware that the assembly of the windscreen into the car produces distinct mechanical tolerances, changing the shape and internal tension of the windscreen. As we are looking for subtle differences in optical quality, this may affect the pre-assembled camera system as well. Finally, from an automotive process view it is clear that an independent measurement of the camera and the windscreen is much preferred. Summarizing, the MTF is a measure of 'sharpness' based on linear system theory. The windscreen and the camera form a combined optical system that cannot be separated, which prohibits its use for windscreen characterization without the actual, produced camera system in place. Taken together with the possibility of finding a better metric, we are skeptical that the MTF should be prioritized for windscreen characterization going forward. ## V Simulating Optical Properties Having shown that basically no current measurement system in the automotive windscreen industry is capable of a meaningful characterization of the windscreen optical quality for downstream AI algorithm consumption, what could be a way forward? A comprehensive experimental study using thousands of actual cameras and windscreens is out of the question. Therefore, the AI performance needs to be linked to the windscreen optical quality by simulation, using physically realistic optic models. These simulations need to model the production tolerances of both windscreens and cameras. Then, the performance requirements of the AI-based ADAS functionalities can be translated to optical quality specifications for windscreen production. The wavefront description is fundamental and includes all optical effects and aberrations, and can be measured by a Shack-Hartmann sensor. Currently, this is not a viable approach to windscreen characterization at the site of the Tier 1, as it is too expensive and, more importantly, too slow for a 100 % part check. Nonetheless, we believe it is possible to use special laboratory-grade equipment to create the physically realistic optical models necessary for the simulations, and then derive from these simulations an understanding of the optical properties that are really necessary for the AI performance. Finally, from this we can deduce a simplified form of measurement that captures this newfound knowledge of the required optical properties. A first example of this process is published in [20]. Therefore, the challenge is to understand those optical properties that are really necessary for a robust AI algorithm performance. We believe that this is a necessary step, and without it, the move from ADAS to AD will be prohibitively difficult, as production tolerances combined with the complexity of the world create an unmanageable number of combinations.
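To indicate what such a simulation step can look like, the following sketch is our own minimal example; the Zernike term, grid, and test image are placeholders and it is not the simulation framework of [20]. It turns a wavefront aberration into a PSF and convolves it with an image, which is the basic operation needed to expose a computer vision model to realistic windscreen aberrations.

```python
import numpy as np
from scipy.signal import fftconvolve

N = 256
x = np.linspace(-2, 2, N)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= 1.0).astype(float)

# Wavefront with an assumed oblique-astigmatism term (in waves), cf. Sec. III
W = 0.25 * (2 * X * Y) * aperture
pupil = aperture * np.exp(2j * np.pi * W)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()

rng = np.random.default_rng(0)
image = rng.random((N, N))                       # placeholder for a road-scene image
degraded = fftconvolve(image, psf, mode="same")  # what the camera behind the screen sees
# 'degraded' would then be fed to the detection/classification model under test.
```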
Fig. 5: MTF measurement for an ADAS system based on the Slanted Edge method according to ISO 12233 [16].

## VI Summary In automotive mass production, the inspection systems at the suppliers need to measure the quality of windscreens in a meaningful way for the final device performance. Modern ADAS and future AD camera systems are based on AI algorithms, wherefore the windscreen quality needs to be linked to the performance of these algorithms. Currently, there are two types of measurements established in the industry to measure the optical quality of a windscreen: refractive power and MTF. In this article, we demonstrated how both these measurements are fundamentally not capable of capturing relevant optical properties of the windscreen. The refractive power measurement does not capture several aberrations - given by Zernike polynomials, e.g. oblique astigmatism - while these aberrations obviously affect the performance of the AI algorithms: oblique astigmatism causes a directional blurring of the scene, and blurring causes a degradation of the performance. Because of the orthogonality of the Zernike polynomials, it is clear that this information is simply lacking in refractive power measurements. MTF is based on linear system theory, where independent optical systems might be multiplied in frequency space to yield the system MTF. This is, for example, the case for the lens and the imager. Here, we demonstrated mathematically and experimentally that the windscreen forms a novel optical system together with the lens of the camera system, which cannot be separated into individual components. Therefore, measuring the MTF on the windscreen alone will not yield the performance of the combined system. Thus, the final assembly of the windscreen and camera system in the car may be either better or worse than the EOL measurement at the windscreen production site, either sending good parts to scrap or bad parts into the field. Every car has a windscreen. Using the knowledge presented in this article, we believe that the automotive industry needs to focus its efforts on finding novel measurement methods that qualify the optical quality of windscreens in a meaningful way for the downstream AI algorithms. We propose a concept using fundamental wave (and Fourier) optics to characterize the windscreens and combine wavefront measurements and physically realistic simulations to reach an understanding of what optical properties are really important for AI computer vision algorithms. We believe that it cannot be said generally that the optical quality of windscreens is too low - what is currently lacking is not optical quality, but understanding of the robustness of the algorithms against optical aberrations. We simply do not know what optical quality is needed exactly. Taking these elements together, we believe that a novel metric can be found that contains the relevant information, while at the same time the measurement is practical enough to be used stand-alone at the windscreen production site. This is the great windscreen challenge the automotive industry currently faces.
2306.02621
Spontaneous breaking of mirror symmetry beyond critical doping in Pb-Bi2212
Identifying ordered phases and their underlying symmetries is the first and most important step toward understanding the mechanism of high-temperature superconductivity; critical behaviors of ordered phases are expected to be correlated with superconductivity. Efforts to find such ordered phases have been focused on symmetry breaking in the pseudogap region while the Fermi liquid-like metal region beyond the so-called critical doping $p_{c}$ has been regarded as a trivial disordered state. Here, we used rotational anisotropy second harmonic generation and uncovered a broken mirror symmetry in the Fermi liquid-like phase in (Bi,Pb)$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\delta}$ with $p = 0.205 > p_{c}$. By tracking the temperature evolution of the symmetry-breaking response, we verify an order parameter-like behavior with the onset temperature $T_{up}$ at which the strange metal to Fermi liquid-like-metal crossover takes place. Complementary angle-resolved photoemission study showed that the quasiparticle coherence between $\mathrm{CuO_{2}}$ bilayers is enhanced in proportion to the symmetry-breaking response as a function of temperature, indicating that the change in metallicity and symmetry breaking are linked. These observations contradict the conventional quantum disordered scenario for over-critical-doped cuprates and provide new insight into the nature of the quantum critical point in cuprates.
Saegyeol Jung, Byeongjun Seok, Chang jae Roh, Donghan Kim, Yeonjae Lee, San Kang, Shigeyuki Ishida, Shik Shin, Hiroshi Eisaki, Tae Won Noh, Dongjoon Song, Changyoung Kim
2023-06-05T06:47:14Z
http://arxiv.org/abs/2306.02621v2
# Spontaneous breaking of mirror symmetry beyond critical doping in Pb-Bi2212 ###### Abstract Identifying ordered phases and their underlying symmetries is the first and most important step toward understanding the mechanism of high-temperature superconductivity; critical behaviors of ordered phases are expected to be correlated with superconductivity. Efforts to find such ordered phases have been focused on symmetry breaking in the pseudogap region while the Fermi liquid-like metal region beyond the so-called critical doping \(p_{c}\) has been regarded as a trivial disordered state. Here, we used rotational anisotropy second harmonic generation and uncovered a broken mirror symmetry in the Fermi liquid-like phase in (Bi,Pb)\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) with \(p=0.205>p_{c}\). By tracking the temperature evolution of the symmetry-breaking response, we verify an order parameter-like behavior with the onset temperature \(T_{up}\) at which the strange metal to Fermi liquid-like-metal crossover takes place. Complementary angle-resolved photoemission study showed that the quasiparticle coherence between CuO\({}_{2}\) bilayers is enhanced in proportion to the symmetry-breaking response as a function of temperature, indicating that the change in metallicity and symmetry breaking are linked. These observations contradict the conventional quantum disordered scenario for over-critical-doped cuprates and provide new insight into the nature of the quantum critical point in cuprates. ## I 1. Introduction A long-standing question in the study of cuprate superconductors is how the normal-state properties are related to the high-transition temperature (high-\(T_{c}\)) superconductivity [1; 2]. One of the most exotic normal-state characteristics is a strange metal behavior with linear temperature dependence of the resistivity \(\rho\sim T^{n}(n=1)\)[3]. In the experimentally established doping temperature phase diagram of cuprates (Fig. 1a), the strange metal phase with n = 1 is located above the regions of underdoped pseudogap and overdoped Fermi liquid (FL)-like metal phases with \(T\)-sublinear (\(n<1\) at \(T<T^{*}\)) and \(T\)-superlinear (\(n>1\) at \(T<T_{up}\)) resistivities, respectively [4; 5] (Fig. 1b, Supplementary Fig. 1). The prevailing view is that the \(T\)-linear resistivity persists down to the zero temperature near the critical doping \(p_{c}\) of the underlying pseudogap-FL phase boundary, resulting in a V-shaped strange metallic region from the putative pseudogap quantum critical point (QCP) [6; 7]. Therefore, it has been suggested that the strange metal behavior arises from the competition between the quantum and thermal fluctuations, i.e., the so-called "QCP scenario" [8]. In the generic QCP scenario, the under-critical-doped pseudogap and over-critical-doped FL-like metal phases play the roles of the quantum ordered and disordered states, respectively, indicating that the transition from the strange metal to the pseudogap phase at \(T^{*}\) is a phase transition with a corresponding symmetry breaking, while that to the FL-like metal phase at \(T_{up}\) is a smooth crossover. Indeed, various symmetry breaking across the pseudogap phase boundary has been observed, which is consistent with the QCP scenario [9; 10; 11; 12]. On the other hand, the over-critical-doped FL-like metal region has been investigated less systematically, and it is unclear whether it definitively represents a quantum disordered state. 
In contrast to the long-standing belief, ordered phenomena, such as charge order and ferromagnetism, have been observed in the heavily overdoped region [13; 14; 15]. The discovery of ordered phases is surprising not only because it is incompatible with the QCP scenario, but also because the collective bosonic mode attributed to the order parameter fluctuations can mediate the fermion-fermion interaction, as well as superconductivity. Yet, it still remains an open question whether the FL-like metal phase of cuprates is genuinely related to a symmetry broken state with a hidden order or not. To detect subtle symmetry breaking, rotational anisotropy second harmonic generation (RA-SHG) has been used to research various quantum materials [16; 17; 18]. Specifically, as it measures the high-rank (n\(>\)2) optical susceptibility tensor, it is useful in finding both electronically and magnetically driven symmetry breaking induced by multipolar orders. Indeed, previous RA-SHG studies on undoped Sr\({}_{2}\)CuO\({}_{2}\)Cl\({}_{2}\) and under-critical-doped YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{y}\) revealed hidden symmetry breaking inside antiferromagnetic and pseudogap regime, respectively [19; 20]. To explore the symmetry evolution in the over-critical-doped region using RA-SHG, (Bi,Pb)\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+\delta}\) (Pb-Bi2212) is suitable because its crystallographic structure has higher symmetry compared to other cuprates with monoclinic structures, such as LSCO, YBCO, and pristine Bi2212 [19; 21]. For example, Fig. 1e shows RA-SHG intensity \(I_{SS}^{2}\) of Pb-(\(T_{c}=79\) K) and pristine Bi2212 (\(T_{c}=92\) K) as a function of the \(\varphi\), angle of the incident plane rotated around the \(c\)-axis, measured in the S\({}_{\text{in}}\)-S\({}_{\text{out}}\) (SS) geometry at room temperature (Fig. 1d). While mirror symmetry along the crystal axis is broken in pristine Bi2212 due to monoclinic distortion [22], the SHG data of Pb-Bi2212 is quite isotropic, which is beneficial for identifying the emergence of small anisotropy in the SHG response. Here, we performed a polarization- and temperature-dependent RA-SHG study, along with complementary angle-resolved photoemission spectroscopy (ARPES) measurements of the over-critical-doped Pb-Bi2212 with \(p\sim 0.205\) and \(T_{up}\sim 225\) K. Our SHG investigation revealed the unexpected discovery of mirror symmetry breaking with onset temperature coincident with the strange metal to FL-like metal crossover. Moreover, we found that the symmetry breaking is accompanied by order parameter-like enhancement of not only the SHG response, but also the photoemission quasiparticle coherence with the same transition temperature \(T_{up}\). Our observations suggest that the ground state of the FL-like metal in the \(p>p_{c}\) region is a quantum ordered state with broken mirror symmetry, which contradicts the conventional QCP scenario. ## II 2. Result Fig. 2 shows experimental and simulated RA-SHG results (open circles and solid lines, respectively) obtained with four polarization geometries at room temperature and near 70 K: P\({}_{\text{in}}\)-P\({}_{\text{out}}\) (PP), P\({}_{\text{in}}\)-S\({}_{\text{out}}\) (PS), S\({}_{\text{in}}\)-P\({}_{\text{out}}\) (SP), and S\({}_{\text{in}}\)-S\({}_{\text{out}}\) (SS). As there is no consensus on the point group of Bi-based cuprates, we first determined the crystallographic point group of Pb-Bi2212 at room temperature [26]. 
Figure 1: **Orthorhombic crystallographic structure of Pb-Bi2212 with \(p>p_{c}\).** **(a)** Temperature versus doping phase diagram of pristine Bi2212 extracted from the temperature deviation from the linear behavior of in-plane resistivity \(\rho_{ab}\)(T) (green diamonds [23], red triangles [24], and blue squares [25]). The doping measured in this study is indicated by the grey line. **(b)** Temperature-dependent resistivity curve for Pb-Bi2212 with \(p\sim 0.205\). The grey circle marks \(T_{up}\), the onset of upturn deviation from linear resistivity. **(c)** Crystal structure of Pb-Bi2212. Pb substitution alleviates the structural modulation that exists along the \(a\)-axis. **(d)** Schematic illustration of the RA-SHG setup. Linearly polarized light with frequency \(\omega\), either parallel (P) or perpendicular (S) to the plane of incidence, is focused obliquely onto the (001) surface of the sample. \(\varphi\) is the angle of the incident plane rotated around the \(c\)-axis. **(e)** RA-SHG patterns of Pb-Bi2212 and pristine Bi2212 acquired in S\({}_{\text{in}}\)-S\({}_{\text{out}}\) (SS) polarization geometry at room temperature. Data were fitted to the bulk electric quadrupole induced SHG from \(mmm\) and \(2/m\), respectively.

For all of the RA-SHG polar data at room temperature, two mirror symmetries along the crystal axes \(a\) and \(b\) (two mirror planes of \(m_{ac}\) and \(m_{bc}\)) are clearly identified. We simulated RA patterns from bulk crystallographic point group candidates within these two mirror symmetries and twofold rotational symmetry: centrosymmetric \(mmm\) and noncentrosymmetric \(mm2\). We found that bulk electric quadrupole (EQ, \(Q_{ij}^{2\omega}\sim\chi_{ijkl}^{EQ}E_{k}E_{l}\)) SHG derived from the \(mmm\) point group, \(I_{SS}^{2\omega}\propto\sin^{2}2\varphi\,|\mathrm{A}\sin^{2}\varphi+\mathrm{B}\cos^{2}\varphi|^{2}\), fits the eight lobes in the SS channel well, where A and B are linear combinations of \(\chi_{ijkl}^{EQ}\) susceptibility tensor components. Moreover, the fitting results in Figs. 2a-d indicate excellent agreement in the other polarization channels with an EQ SHG response from the centrosymmetric \(mmm\) orthorhombic point group. In contrast, we excluded \(mm2\) from the point group candidates because the electric dipole (ED, \(P_{i}^{2\omega}\sim\chi_{ijk}^{ED}E_{j}E_{k}\)) SHG, which is the dominant contribution from noncentrosymmetric \(mm2\), is forbidden in the SS polarization channel data (Methods and Supplementary Section 2). Next, we turned our attention to the temperature-dependent symmetry change. Figs. 2e-h show the RA-SHG at low temperature \(T\sim 70\) K, below \(T_{up}\) as well as \(T_{c}\). Compared to the room temperature results, PP and SP channels show larger SHG intensities while retaining the symmetries. In contrast to PP and SP cases, PS and SS channels exhibit significant amplitude modulation of the lobes across the crystal axes, which is not observed at room temperature. This constitutes clear evidence that the mirror symmetry of the system is broken at low temperature, while the onset temperature of this symmetry breaking remains to be determined. We note that the signature of the mirror symmetry breaking is detected only in the polarization channels with S output, for which the room temperature data are mostly described by in-plane tensor elements (\(\chi^{EQ}_{ijkl}\) elements with \(i,j,k,l=x\) or \(y\)). This implies that the low-temperature SHG response mainly comes from the in-plane symmetry breaking.
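For reference, the quoted SS-channel angular dependence can be fitted directly. The snippet below is our own illustration on synthetic data; A and B are free parameters standing in for the combinations of \(\chi_{ijkl}^{EQ}\) elements, not the published fit values.

```python
import numpy as np
from scipy.optimize import curve_fit

def I_ss(phi, A, B):
    """RA-SHG intensity in the SS channel for the mmm EQ response (up to an overall scale)."""
    return np.sin(2 * phi)**2 * np.abs(A * np.sin(phi)**2 + B * np.cos(phi)**2)**2

phi = np.linspace(0, 2 * np.pi, 181)
rng = np.random.default_rng(1)
data = I_ss(phi, 1.0, 0.6) + 0.02 * rng.normal(size=phi.size)   # mock eight-lobe pattern

popt, _ = curve_fit(I_ss, phi, data, p0=[1.0, 0.5])
print("fitted A, B:", np.round(popt, 3))
```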
Figure 2: **Spontaneous mirror symmetry breaking in Pb-Bi2212.** RA-SHG data of Pb-Bi2212 collected under four polarization geometries: **(a,e)** PP, **(b,f)** SP, **(c,g)** PS, and **(d,h)** SS at \(T\sim 295\) K (\(T>T_{up}\)) (a–d) and \(T\sim 70\) K (\(T<T_{up}\)) (e–h). All data sets are presented with the same intensity scale, in which the maximum intensity in the PP geometry at \(T\sim 295\) K is set to 1. A data set of room and low temperatures for each polarization was obtained from the same samples (see Methods for details). The data taken at room temperature are fitted to bulk EQ-induced SHG from \(mmm\) (red lines), whereas the low-temperature data are fitted to the coherent superposition of bulk EQ-induced SHG from \(mmm\) and \(mm^{\prime}m^{\prime}\) (blue lines), as described in the main text.

To find the onset temperature of symmetry breaking, we fixed \(\varphi\) to the maximum lobe intensity angle and tracked the temperature dependence of the SHG intensity \(I^{2\omega}(T)\) in fine temperature steps. We obtained the \(\Delta I^{2\omega}(T)\) by subtracting the linear-in temperature background from the \(I^{2\omega}(T)\), where the linear-in temperature background is attributed to the linearly decreasing change in the crystallographic \(\chi^{EQ}_{ijkl}\) elements [20; 27; 28] (Supplementary Fig. 2). Figs. 3a-d show \(\Delta I^{2\omega}(T)\) for all four geometries normalized to their room temperature values before subtracting the background (Supplementary Fig. 3). With changing temperature, the SHG intensity in all polarization geometries abruptly increases below \(T\sim 225\) K, which is close to \(T_{up}\) observed in the resistivity data instead of \(T_{c}\). This feature is a strong indicator of spontaneous symmetry breaking across the border from a strange metal to FL-like metal phase, in contrast to the long-standing conjecture that \(T_{up}\) is associated with a smooth crossover to the disordered state. Moreover, except for the \(\text{S}_{\text{in}}\)-\(\text{P}_{\text{out}}\) data, \(\Delta I^{2\omega}(T)\) shows obvious order parameter-like behavior below \(T_{up}\), indicating that there is an order parameter lowering the symmetry of the system. An important open question is whether \(T_{up}\) coincidentally meets the onset temperature of the symmetry breaking, or whether they are mutually connected. As \(T_{up}\) is an indication of a change in dynamics of charge carriers, the possible emergence of the order parameter is expected to impact the quasiparticle self-energy, which is reflected in the photoemission spectral function. Therefore, we performed complementary ARPES measurements with fine temperature steps to observe the effects of symmetry breaking on the quasiparticles. Figs. 4a-d show ARPES spectra and the second energy derivatives, respectively, above \(T_{up}\) at 280 K and below \(T_{up}\) at 140 K, along the Brillouin zone boundary. Compared to the blurred spectral distribution in the 280 K data, the spectrum at 140 K showed clear bilayer splitting with distinct features of the bonding and anti-bonding bands. We traced the temperature evolution of the split peaks through the energy distribution curve (EDC) at the anti-nodal point (Fig. 4e, inset). In Fig. 4e, while the EDC near room temperature shows a shoulder-like feature around -50 meV, the split peaks become clearly discernible with decreasing temperature, which implies that the doped holes on the two CuO\({}_{2}\) bilayers in the unit cell propagate more coherently at lower temperature.
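The background subtraction used to extract \(\Delta I^{2\omega}(T)\) is straightforward to reproduce. A minimal sketch on synthetic data (our own illustration; the fit window above \(T_{up}\) and the mock temperature dependence are assumptions): fit a straight line to the high-temperature points and subtract it, so that any order-parameter-like upturn below \(T_{up}\) stands out.

```python
import numpy as np

T = np.arange(80, 301, 5.0)              # temperature grid (K)
T_up = 225.0

# Mock data: linear-in-T background plus an order-parameter-like rise below T_up
background = 1.0 + 8e-4 * (295.0 - T)
order = np.where(T < T_up, 0.15 * np.sqrt(1.0 - T / T_up), 0.0)
I = background + order

# Fit the linear background on the high-temperature side only (T > T_up)
mask = T > T_up
slope, intercept = np.polyfit(T[mask], I[mask], 1)
delta_I = I - (slope * T + intercept)    # order-parameter-like residual

print(delta_I[T >= T_up].max())          # ~0 above T_up
print(delta_I[0])                        # finite residual at the lowest temperature
```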
To quantify the quasiparticle coherence and track its temperature evolution, we focus on the temperature dependence of the spectral weight transfer to the coherent split peaks, \(W_{coh}(T)\) (Inset of Fig. 4f). In detail, the \(\Delta W_{coh}(T)\) is obtained by integrating the spectral weight difference between EDC of 300 K and each temperature for the energy range of -75 meV\(<E\)-\(E_{F}<\)0 with subsequent subtraction of the linear-in temperature background, which is considered to be the simple impurity scattering factor (Supplementary Fig. 3). In Fig. 4f, the \(\Delta W_{coh}\) is plotted along with \(\rho-\rho_{Linear}\) and \(\Delta I_{SS}^{2\omega}\), where \(\rho_{Linear}\) is the linear component of resistivity obtained by fitting the \(\rho(T)\) above \(T_{up}\) (dashed line in Fig. 1b); thus, \(\rho-\rho_{Linear}\) is a measure of the superlinear component. Remarkably, the \(\Delta W_{coh}\) and \(\Delta I_{SS}^{2\omega}\) overlap with each other, showing almost the same order parameter-like temperature dependence with the transition temperature \(T=T_{up}\), where the superlinear component of resistivity emerges. A previous ARPES study proposed an association between the sudden enhancement of quasiparticle coherence and the strange metal to FL-like metal crossover, while the FL-like metal was still assumed to be disordered [25]. In addition to this long-standing view, our precise quantification of the spectral weight transfer and comparative study with SHG show that the sudden enhancement of coherence is intertwined with the order parameter of the broken symmetry. Therefore, the results presented here suggest that the strange metal to FL-like metal transition at \(T_{up}\) in Pb-Bi2212 is a phase transition accompanied by mirror symmetry breaking. ## IV 4. Discussion We attempted to narrow down the point group symmetry candidates of the FL-like phase by surveying both crystallographic and magnetic subgroups of the \(mmm\) point group, and categorized possible sources of the low-temperature RA-SHG response to the ED contribution from noncentrosymmetric subgroups, and the MD or EQ contribution from centrosymmetric subgroups. Table 1 shows the highest-rank subgroups of the \(mmm\) point group and the possible SHG enhancement in each polarization geometry. As susceptibility tensors \(\chi_{ijk}^{ED}\), \(\chi_{ijk}^{MD}\), and \(\chi_{ijkl}^{EQ}\) are invariant under the corresponding symmetry operation of each subgroup, the SHG response can occur in certain polarization geometries (Supplementary Section II). Therefore, all noncentrosymmetric subgroup candidates can be excluded due to the forbidden ED SHG enhancement in SS geometry, which contradicts our observation in Fig. 3d. As the SHG enhancement at \(T_{up}\) occurs in all polarization geometries, we identified the subgroup candidates from Table 1: the EQ process from \(2/m\) crystallographic subgroup and EQ process from \(mm^{\prime}m^{\prime}\) magnetic subgroup, where \(m^{\prime}\) denotes the combination of mirror operation and time reversal. We further fitted the low-temperature polar data with the two subgroup candidates of \(2/m\) and \(mm^{\prime}m^{\prime}\), and crosschecked whether the subgroup candidates indeed reflect the symmetry of the system. 
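The tensor-symmetry argument behind Table 1 can be checked numerically with Neumann's principle: averaging a generic polar tensor over the operations of a point group leaves only the symmetry-allowed components. The sketch below is our own illustration, restricted to time-even rank-3 ED tensors; magnetic groups such as \(mm^{\prime}m^{\prime}\) would additionally require the time-reversal signs. It confirms that every ED component vanishes for the centrosymmetric \(mmm\) group, while some survive for \(mm2\).

```python
import numpy as np

def symmetrize(group):
    """Average a random rank-3 polar tensor over a point group (Neumann's principle)."""
    chi = np.random.default_rng(0).normal(size=(3, 3, 3))
    return sum(np.einsum('ia,jb,kc,abc->ijk', R, R, R, chi) for R in group) / len(group)

def diag(sx, sy, sz):
    return np.diag([sx, sy, sz]).astype(float)

# mmm (D2h): all eight sign combinations of the axes, including inversion
mmm = [diag(sx, sy, sz) for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
# mm2 (C2v, z polar axis): identity, C2z, and the two vertical mirrors
mm2 = [diag(1, 1, 1), diag(-1, -1, 1), diag(-1, 1, 1), diag(1, -1, 1)]

print("mmm: max |chi_ijk| =", np.abs(symmetrize(mmm)).max())            # ~0: ED SHG forbidden
print("mm2: allowed components:", np.argwhere(np.abs(symmetrize(mm2)) > 1e-12))
```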
For the crystallographic subgroup \(2/m\), all of the data with four different polarizations are well fitted with \(P_{i}^{2\omega}\sim\chi_{ijkl}^{EQ}\nabla_{j}E_{k}E_{l}\), where \(\chi_{ijkl}^{EQ}\) is the nonvanishing susceptibility tensor from \(2/m\) (Supplementary Fig. 4). In the case of \(mm^{\prime}m^{\prime}\), the \(c\)-type contribution \(\chi_{ijkl}^{EQ(c)}\) is allowed because time reversal symmetry is broken. Therefore, we fit the data with \(P_{i}^{2\omega}\sim(\chi_{ijkl}^{EQ(i)}+\chi_{ijkl}^{EQ(c)})\nabla_{j}E_{k}E_{l}\), where \(\chi_{ijkl}^{EQ(i)}\) is the time-invariant tensor from the crystallographic point group \(mmm\) and \(\chi_{ijkl}^{EQ(c)}\) is the time-noninvariant tensor from the magnetic point group \(mm^{\prime}m^{\prime}\). The solid lines in Figs. 2e-h show that a coherent superposition of \(\chi_{ijkl}^{EQ(i)}\) from \(mmm\) and \(\chi_{ijkl}^{EQ(c)}\) from \(mm^{\prime}m^{\prime}\) well reproduces two characteristic features of the low-temperature S-output data: the modulation of lobe amplitude, which is a marker of mirror symmetry breaking, and the shrinkage along the \(a\)-axis (Supplementary Fig. 5). These results showed that \(2/m\) and \(mm^{\prime}m^{\prime}\) are suitable candidates to explain both the temperature dependence of SHG intensity and the low-temperature polar data.

Figure 3: **Temperature dependent RA-SHG pattern amplitude** for **(a)** PP, **(b)** SP, **(c)** PS, and **(d)** SS in a heating cycle. Data were obtained with \(\varphi\) fixed to the maximum SHG lobe at low temperature. Data were first normalized to the values at \(T\sim 295\) K. Then, a high-temperature linear background was subtracted. The error bars represent the standard deviation of the intensity over 12 independent measurements in the PP, SP, and PS channels. The error bars in the SS channel denote the standard deviation of 60 measurements while heating by 5 K with a ramping rate of 2 K/min (Methods). Blue lines on the data are guides for the eye. The width of the shaded grey interval shows the uncertainty in the transition temperature.

Based on these two sub point groups, we discuss the potential physical origins of the order parameter. We first examined a monoclinic structural transition, which would lower the \(mmm\) symmetry to \(2/m\) [29, 30]. It is worth noting that, based on our ARPES results, the low-temperature ordering that reduces the symmetry is correlated with enhancement of quasiparticle coherence along the \(c\)-axis, at least over the distance between the CuO\({}_{2}\) bilayers. Therefore, if monoclinic distortion is the origin of the symmetry breaking, it would impact the \(c\)-axis lattice constant. However, our complementary X-ray diffraction (XRD) study, which focused on the \(d\)-spacing measurement along the \(c\)-axis, did not identify any significant anomalies in the temperature evolution of the \(c\)-axis lattice constant (Supplementary Fig. 4). Moreover, previous XRD and neutron scattering experiments with various Bi-based cuprates found no evidence of such a structural transition near \(T_{up}\)[31, 32]. Nevertheless, we still cannot fully rule out unidentified structural transitions. Therefore, further in-depth study of the structural transition is required. Another possibility is that the enhancement of SHG at \(T_{up}\) could be attributed to a hitherto unknown bulk order parameter described by the \(mm^{\prime}m^{\prime}\).
Indeed, the \(mm^{\prime}m^{\prime}\) point group incorporates \(A_{2g}\) inversion-symmetric magnetic order, which is compatible with ferroic order, for example. Intriguingly, several fingerprints of ferromagnetic fluctuations have been reported in the heavily overdoped regime of various cuprates [13, 33]. In addition, coupling of such magnetic fluctuations with optical phonons is theoretically able to generate the ferroic order with a quasistatic magnetoelectric quadrupole [34]. Detection of such multipolar orders via, for example, magnetic neutron scattering and muon spin measurements represents an important direction for further study [35]. Lastly, our comprehensive study with SHG, ARPES, and resistivity promotes a better understanding of the strange metal phase. We highlight that the \(T\)-superlinear component of resistivity \(\rho-\rho_{Linear}\) becomes zero as the coherence of the bilayer-split quasiparticle peaks and the multipolar order, which can be described by the \(z\)-axis component of the magnetization, are significantly suppressed above \(T_{up}\) (see Fig. 4). This implies that the decoherence of doped holes between the CuO\({}_{2}\) layers and the resulting two-dimensional confinement is an important ingredient of the strange metal behavior in the over-critical-doped regime [36]. These results provide experimental support for the extensive theoretical descriptions of non-Fermi liquid behaviors from QCP in two-dimensional metals including cuprates [37; 38].

Figure 4: **Temperature dependence of electronic structure in Pb-Bi2212 across \(T_{up}\)**. (a–b) ARPES spectra (left) and second energy derivatives (right) taken above \(T_{up}\) at 280 K and (c–d) below \(T_{up}\) at 140 K, along the Brillouin zone boundary, as denoted by the solid grey arrow in the inset of (e). All ARPES data were divided by the energy resolution-convoluted Fermi-Dirac distribution function, and the momentum-independent background was then subtracted (see Methods for details). (e) Temperature dependence of EDC at the anti-node (grey dot in the inset of (e)). Each EDC was normalized to the total area over the energy range -200 meV\(<E\)-\(E_{F}<\)0. Black rectangles denote the bilayer splitting of antibonding band (AB) and bonding band (BB). The inset shows a schematic of the Fermi surface of Pb-Bi2212 composed of the bilayer splitting. (f) \(\rho-\rho_{Linear}\) as a function of temperature, where \(\rho_{Linear}\) is the linear fitting of \(\rho(T)\) between 250 and 300 K (top). Temperature dependence of the integrated spectral weight difference over the energy range -75 meV\(<E\)-\(E_{F}<\)0 between 300 K and each temperature (\(W_{coh}\), orange area in the inset), after subtracting the linear-in-temperature background, in comparison with the normalized SHG intensity in SS geometry from Fig. 3d (bottom).

## Acknowledgements We are grateful to Y. S. Kim, R. Noguchi, S. S. Huh, H. Y. Choi and A. Hallas for their helpful discussions and useful comments. We appreciate the technical support on the fitting process from B. T. Fichera. This work was conducted under the ISSP-CCES Collaborative Programme and was supported by the Institute for Basic Science in Republic of Korea (Grant Nos. IBS-R009-G2 and IBS-R009-D1). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1A3B1077234) and the JSPS KAKENHI (No. JP19H0582*).
2305.04700
Lacunary maximal functions on homogeneous groups
We observe that classical arguments of Ricci--Stein can be used to prove $L^p$ bounds for maximal functions associated to lacunary dilates of a fixed measure in the setting of homogenous groups. This recovers some recent results on averages over Kor\'anyi spheres and horizontal spherical averages of a type introduced by Nevo--Thangavelu. Moreover, the main theorem applies much more broadly and we explore its consequences through a variety of explicit examples.
Aswin Govindan Sheri, Jonathan Hickman, James Wright
2023-05-08T13:32:26Z
http://arxiv.org/abs/2305.04700v1
# Lacunary maximal functions on homogeneous groups ###### Abstract. We observe that classical arguments of Ricci-Stein can be used to prove \(L^{p}\) bounds for maximal functions associated to lacunary dilates of a fixed measure in the setting of homogenous groups. This recovers some recent results on averages over Koranyi spheres and horizontal spherical averages of a type introduced by Nevo-Thangavelu. Moreover, the main theorem applies much more broadly and we explore its consequences through a variety of explicit examples. ## 1. Main result and examples ### Introduction Recently, there has been some interest in the \(L^{p}\) mapping properties of geometric maximal operators over homogeneous groups \(G\). Here we consider operators \(f\mapsto\sup_{k}|f*\sigma_{k}|\) acting on functions \(f\in C_{c}(G)\), where \(\sigma\) is a measure supported on some submanifold \(S\) of \(G\), the \(\sigma_{k}\) are (automorphic) lacunary dilates of \(\sigma\) and the convolution is taken with respect to the group operation. In the case of the Heisenberg groups \(\mathbb{H}^{m}\), prototypical examples arise by taking \(S\) (in exponential coordinates) to be: 1. The Koranyi sphere \(S:=\{(x,u)\in\mathbb{H}^{m}:|x|^{4}+|u|^{2}=1\}\): see [11, 20]; 2. The horizontal (euclidean) sphere \(S:=S^{2m-1}\times\{0\}\): see [2, 19]. We shall discuss these examples in more detail in SS1.3 below.1 Footnote 1: In many recent works, including those cited above, the authors were also (and often primarily) interested in proving sparse bounds for the associated maximal operators. Here we are only interested in the \(L^{p}\) theory. Here we observe that classical arguments of Ricci-Stein [18] lead to a very general and robust \(L^{p}\) theory for such lacunary maximal functions. These methods were developed to study boundedness properties of classes of singular integrals and also maximal operators along homogeneous submanifolds (see [18, Lemma 4.2]); it is therefore unsurprising that they are useful in the present context. Nevertheless, given the renewed interest in these problems, it appears timely to revisit the approach of [18] and give a clear presentation of its implications. ### Setup and main results Let \((G,\,\cdot\,)\) be a homogenous group with family of automorphic dilations \((\delta_{t})_{t>0}\); we recall the relevant definitions in SS2.1 below. Given a finite, compactly supported Borel measure \(\sigma\) on \(G\) and \(k\in\mathbb{Z}\), define the \(2^{k}\)-dilate \(\sigma_{k}\) of \(\sigma\) and the reflection \(\tilde{\sigma}\) of \(\sigma\) by \[\langle\sigma_{k},\phi\rangle:=\int_{G}\phi\circ\delta_{2^{k}}(x)\,\mathrm{d} \sigma(x)\quad\text{and}\quad\langle\tilde{\sigma},\phi\rangle:=\int_{G}\phi( x^{-1})\,\mathrm{d}\sigma(x)\qquad\text{for $\phi\in C_{c}(G)$.}\] For any such measure \(\sigma\) we consider the associated averaging operator \[A[\sigma]f(x):=f*\sigma(x)=\int_{G}f(x\cdot y^{-1})\,\mathrm{d}\sigma(y)\qquad \text{ for }x\in G,\] and lacunary maximal function \[M[\sigma]f(x):=\sup_{k\in\mathbb{Z}}|A[\sigma_{k}]f(x)|\qquad\text{for }x\in G,\] defined initially for \(f\in C_{c}(G)\). We are interested in establishing conditions on \(\sigma\) which ensure \(M[\sigma]\) is bounded on \(L^{p}(G)\) for all \(1<p\leqslant\infty\). Following Ricci-Stein [18] (and also [6]), we consider iterated convolution products of \(\sigma\). 
For \(N\in\mathbb{N}_{0}\), we define the \(N\)th convolution product \(\sigma^{(N)}\) recursively: starting with \(\sigma^{(0)}=\sigma\), we take \[\sigma^{(N)}:=\begin{cases}\sigma^{(N-1)}*\tilde{\sigma}&\text{if }N\text{ is odd,}\\ \sigma^{(N-1)}*\sigma&\text{if }N\geqslant 2\text{ is even.}\end{cases} \tag{1.1}\] Our main hypothesis is framed in terms of the regularity properties of the \(\sigma^{(N)}\) for large \(N\). **Definition 1.1** (Curvature assumption).: Let \(\sigma\) be a finite Borel measure on \(G\). We say that \(\sigma\) satisfies (CA) if the following hold: 1. There exists \(N>0\) such that \(\sigma^{(N)}\) is absolutely continuous with respect to the Haar measure on \(G\); 2. If \(h\) denotes the Radon-Nikodym derivative of \(\sigma^{(N)}\) with respect to the Haar measure, then there exists \(\gamma>0\) and \(C_{\sigma}>0\) such that \[\int_{G}|h(x\cdot y^{-1})-h(x)|\,\mathrm{d}x+\int_{G}|h(y^{-1}\cdot x)-h(x)| \,\mathrm{d}x\leqslant C_{\sigma}|y|_{G}^{\gamma}\quad\text{for }y\in G.\] Here \(|\,\cdot\,|_{G}\) denotes a choice of homogeneous norm on \(G\); see SS2.1 for the definition.2 Footnote 2: The use of homogeneous norm is for convenience; here one could equally replace \(|y|_{G}\) with the usual euclidean norm \(|y|\). With the above definition, our main theorem reads as follows. **Theorem 1.2**.: _Let \(G\) be a homogeneous group and \(\sigma\) be a finite, compactly supported Borel measure on \(G\). If \(\sigma\) satisfies (CA), then \(M[\sigma]\) is bounded on \(L^{p}(G)\) for all \(1<p\leqslant\infty\)._ Theorem 1.2 recovers many results in the literature. In the euclidean case it is equivalent to a classical result of Duoandikoetxea and Rubio de Francia [7] concerning maximal estimates under Fourier decay hypotheses on \(\sigma\). For Heisenberg groups, it implies the \(L^{p}\) boundedness of the spherical averages (over both Koranyi and horizontal spheres) discussed in SS1.1. It also implies [18, Lemma 4.2], which concerns maximal functions along homogeneous submanifolds of \(G\). However, the setup described above is very robust and there is a wealth of additional examples, some of which we describe in the following subsection. As mentioned previously, the proof of Theorem 1.2 is heavily based on the methods of [18], which in turn were partially inspired by [6]. Key ingredients are iterated \(T\)*\(T\) arguments, which allow us to access and exploit the condition (CA), and Calderon-Zygmund theory adapted to the homogeneous group setting. ### Examples The curvature assumption from Definition 1.1 applies to a broad class of measures and, consequently, Theorem 1.2 unifies and dramatically extends a number of results in the literature. Here we briefly describe some representative examples of such measures; we return to discuss these and other examples in more detail in SS3. _1) The Euclidean case._ Suppose \(\sigma\) is a finite, compactly supported Borel measure on \(\mathbb{R}^{n}\). It is not difficult to show (CA) holds in this case if and only if there exists some \(\kappa>0\) and a constant \(C_{\sigma}\geq 1\) such that \[|\widehat{\sigma}(\xi)|\leq C_{\sigma}(1+|\xi|)^{-\kappa}\quad\text{for all }\xi\in\widehat{\mathbb{R}}^{n},\quad\text{where}\quad\widehat{\sigma}(\xi) :=\int_{\mathbb{R}^{n}}e^{-2\pi ix\cdot\xi}\,\mathrm{d}\sigma(x). \tag{1.2}\] Thus, Theorem 1.2 recovers a classical result of [7]. 
Prototypical examples of measures satisfying (1.2) are given by smooth densities supported on finite-type submanifolds of \(\mathbb{R}^{n}\): see, for instance, [21, Chapter VIII, SS3.2]. There are also many interesting 'fractal' measures. The literature is too vast to survey here, but we mention some recent representative examples: measures arising from the theory of diophantine approximation [10], realisations of random processes [9] and also various non-constructive examples exhibiting interesting dimensional properties [12]. _2) The Koranyi sphere and extensions._ For \(m\geq 1\) let \(J\) denote the symplectic form on \(\mathbb{R}^{2m}\) given by \(x^{\top}Jy=\frac{1}{2}\sum_{j=1}^{m}(x_{j}y_{m+j}-x_{m+j}y_{j})\) for \(x,\,y\in\mathbb{R}^{2m}\). Consider the Heisenberg group \(\mathbb{H}^{m}\), which we identify with \(\mathbb{R}^{2m+1}\) endowed with the non-commutative group operation \((x,u)\cdot(y,v):=(x+y,u+v+x^{\top}Jy)\). The _Koranyi sphere_ is then defined to be the set \[S:=\{(x,u)\in\mathbb{R}^{2m}\times\mathbb{R}:|(x,u)|_{\mathbb{H}^{m}}=1\} \quad\text{where}\quad|(x,u)|_{\mathbb{H}^{m}}:=\big{(}|x|^{4}+|u|^{2}\big{)} ^{1/4};\] that is, \(S\) is the unit sphere with respect to the _Koranyi norm_\(|\,\cdot\,|_{\mathbb{H}^{m}}\). We take \(\sigma\) to be the normalised surface measure on \(S\) induced by the Lebesgue measure on the ambient space \(\mathbb{R}^{2m+1}\). It is not difficult to show \(\sigma\) satisfies (CA), and therefore Theorem 1.2 implies the associated lacunary maximal operator \(M[\sigma]\) is bounded for all \(1<p\leq\infty\). For \(n\geq 2\), this recovers Theorem 1.2 of [11] (see also [20]). However, here the setup can be generalised considerably. For instance, one may replace the Heisenberg group with any graded Lie group \(G\) and the Koranyi sphere \(S\) with any member of a broad class of convex sets lying in \(G\). More precisely, given any analytic submanifold \(\Sigma\) of the Lie algebra \(\mathfrak{g}\) formed by the boundary of an open, bounded convex set, we can consider averages over \(S:=\exp(\Sigma)\subset G\). We discuss the details of these extensions in SS3.3 below. _3) Horizontal spheres and extensions._ Returning to the Heisenberg group \(\mathbb{H}^{m}\), now consider the normalised surface measure \(\sigma\) on the (euclidean) sphere \(S^{2m-1}\times\{0\}\subseteq\mathbb{R}^{2m+1}\). In this case the operators \[A[\sigma_{k}]f(x,u)=\int_{S^{2m-1}}f\big{(}x-2^{k}y,u-2^{k}x^{\top}Jy\big{)} \,\mathrm{d}\sigma(y),\qquad(x,u)\in\mathbb{R}^{2m+1}, \tag{1.3}\] take averages over ellipsoids lying in the horizontal distribution on \(\mathbb{H}^{m}\). Averages of this kind were first considered by Nevo-Thangavelu [16] (in the context of the 'full' maximal function defined with respect to a continuum of dilates) and have been extensively studied: see, for instance, [14, 15, 1, 17, 4, 3, 19, 13]. As before \(\sigma\) satisfies (CA) and we obtain the boundedness of \(M[\sigma]\). This recovers Theorem 1.1 of [2] (see also [19], where the \(n=1\) case and extensions to Metivier groups are considered) although, as remarked in [19], such bounds can be directly deduced from earlier work such as [14]. However, once again the setup can be generalised considerably. In this case it is natural to work in a stratified group \(G\) (we recall the relevant definitions in SS3.1), so that the Lie algebra \(\mathfrak{g}=\bigoplus_{j=1}^{\infty}V_{j}\) is graded and \(V_{1}\) is a vector subspace of dimension \(d\) which generates \(\mathfrak{g}\). 
We consider the (euclidean) unit sphere \(S^{d-1}\) in \(V_{1}\), which we map into \(G\) via the exponential map. We then let \(\sigma\) be the normalised surface measure on this sphere in \(G\). The curvature assumption (CA) continues to hold at this level of generality. Moreover, we may further replace the sphere \(S^{d-1}\) in \(V_{1}\) with some other surface, possibly of lower dimension. We say that an analytic submanifold \(\Sigma\) of \(V_{1}\) is _non-degenerate_ if it is not contained in a proper affine subspace of \(V_{1}\). With this definition, (CA) continues to hold for any smooth, compactly supported density on \(S:=\exp(\Sigma)\) whenever \(\Sigma\subset V_{1}\) is a non-degenerate analytic submanifold. _4) Tilted spheres and extensions._ An interesting variant of the previous example arises from 'tilting' the sphere \(S^{2m-1}\) in (1.3). Let \(v\in\mathbb{R}^{2m}\) and now consider the averaging operators \[A[\sigma_{k}^{v}]f(x,u)=\int_{S^{2m-1}}f\big{(}x-2^{k}y,u-2^{2k}\langle v,y \rangle-2^{k}x^{\top}Jy\big{)}\,\mathrm{d}\sigma(y),\qquad(x,u)\in\mathbb{R}^ {2m+1},\] which correspond to Heisenberg convolution with a dilates of a suitably 'tilted' variant \(\sigma^{v}\) of the measure \(\sigma\) on \(S^{2m-1}\times\{0\}\). Averages of this form, along with natural extensions to the class of Metivier groups, have been considered in a number of works [14, 1, 19]. The tilted measures \(\sigma^{v}\) also satisfy (CA) and we again obtain the boundedness of \(M[\sigma]\). The results also extend to general stratified groups and analytic submanifolds \(\Sigma\subseteq V_{1}\) under the non-degeneracy hypothesis described in 3). However, we may go further. Indeed, we may replace the tilted sphere with any analytic submanifold \(\Sigma\subset\mathfrak{g}\) with the property that the projection \(\Pi_{1}(\Sigma)\) onto \(V_{1}\) is not contained in any proper affine subspace of \(V_{1}\). This includes the examples discussed here and in 3) above, but it also includes examples which are not associated to some \(d\)-plane distribution in \(G\), such as averages over group translates of the moment curve \((t,t^{2},\ldots,t^{n})\) in \(G\). We discuss the details of these extensions in SS3.4 below. ### Notational conventions Given a list of objects \(L\) and real numbers \(A\), \(B\geq 0\), we write \(A\lesssim_{L}B\) or \(B\gtrsim_{L}A\) to indicate \(A\leq C_{L}B\) for some constant \(C_{L}\) which depends only on items in the list \(L\) and the underlying choice of group \(G\).3 We write \(A\sim_{L}B\) to indicate \(A\lesssim_{L}B\) and \(B\lesssim_{L}A\). Footnote 3: We also suppress the dependence on various constructions associated to \(G\) such as a choice of homogeneous norm, basis \(\{X_{j}^{k}\}_{j=1}^{n}\) of right-invariant vector fields, and so on. ### Acknowledgements The first and second authors thank David Beltran and Leonardo Tolomeo for a variety of helpful comments and suggestions regarding this project. The first author was supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University, and the University of Edinburgh. ## 2. Proof of Theorem 1.2 ### Homogeneous groups We begin by reviewing some basic concepts from the theory of homogeneous groups relevant to the proof. Much of the material presented here can be found in Chapter 1 of the classical treatise [8]. 
Let \(G\) be a connected, simply connected nilpotent Lie group with Lie algebra \(\mathfrak{g}\); we shall always tacitly assume \(\mathfrak{g}\) is real and finite-dimensional. Under these hypotheses, the exponential map \(\exp\colon\mathfrak{g}\to G\) is a (globally defined) diffeomorphism. This allows us to identify \(G\) with \(\mathbb{R}^{n}\) endowed with a group operation \((x,y)\mapsto x\,\raisebox{0.0pt}{\scalebox{1.0}{$\bullet$}}\,y\). Furthermore, this group operation is a polynomial mapping and the usual Lebesgue measure on \(\mathbb{R}^{n}\) is a left and right-invariant Haar measure for \(G\). We refer to the dimension \(n\) as the _topological dimension_ of \(G\), and sometimes denote this by \(\dim G\). If \(\mathfrak{g}\) is a Lie algebra, then \((\delta_{t})_{t>0}\) is a _family of dilations on \(\mathfrak{g}\)_ if each \(\delta_{t}\colon\mathfrak{g}\to\mathfrak{g}\) is an algebra automorphism4 and there exists some diagonalisable linear operator \(\Lambda\) on \(\mathfrak{g}\) with positive eigenvalues such that \(\delta_{t}:=\exp(\Lambda\log t)\) for all \(t>0\). In particular, \(\delta_{st}=\delta_{s}\circ\delta_{t}\) for all \(s\), \(t>0\). In this case, \(\Lambda\) is called the _dilation matrix_. Footnote 4: So that \(\delta_{t}\colon\mathfrak{g}\to\mathfrak{g}\) is a linear map and \(\delta_{t}[X,Y]_{\mathfrak{g}}=[\delta_{t}X,\delta_{t}Y]_{\mathfrak{g}}\) for all \(X,Y\in\mathfrak{g}\), where \([\,\cdot\,,\,\cdot\,]_{\mathfrak{g}}\) denotes the Lie bracket on \(\mathfrak{g}\). A _homogeneous group_ is a connected, simply connected nilpotent Lie group \(G\) such that the Lie algebra \(\mathfrak{g}\) is endowed with a family of dilations \((\delta_{t})_{t>0}\). In this case, the maps \(\exp\circ\delta_{t}\circ\exp^{-1}\colon G\to G\) are group automorphisms, which are referred to as _dilations of \(G\)_ and are also denoted by \(\delta_{t}\). Henceforth, let \(G\) be a homogenous group \(G\). The _homogeneous dimension of \(G\)_ is the quantity \[Q:=\operatorname{trace}(\Lambda)=\sum_{j=1}^{n}\lambda_{j},\] where \(\Lambda\) is the dilation matrix and the \(\lambda_{j}>0\) are the eigenvalues of \(\Lambda\) listed with multiplicity. The Haar measure on \(G\) then satisfies the important scaling property \[t^{-Q}\int_{G}f\circ\delta_{t^{-1}}(x)\,\mathrm{d}x=\int_{G}f(x)\,\mathrm{d}x \qquad\text{for all $f\in L^{1}(G)$ and $t>0$.}\] A _homogeneous norm_ on \(G\) is a continuous map \(|\,\cdot\,|_{G}\colon G\to[0,\infty)\) which is \(C^{\infty}\) on \(G\backslash\{0\}\) and satisfies: 1. \(|x^{-1}|_{G}=|x|_{G}\) and \(|\delta_{t}x|_{G}=t|x|_{G}\) for all \(x\in G\) and \(t>0\); 2. \(|x|_{G}=0\) if and only if \(x=0\). Henceforth we shall assume \(|\,\cdot\,|_{G}\) is some fixed choice of homogeneous norm on \(G\); it is easy to see that there always exists at least one such norm. Furthermore, one can show that there exists a constant \(C_{G}\geq 1\) such that \[|x\,\raisebox{0.0pt}{\scalebox{1.0}{$\bullet$}}\,y|_{G}\leq C_{G}\big{(}|x|_{ G}+|y|_{G}\big{)}\qquad\text{for all $x,\,y\in G$.} \tag{2.1}\] Finally, we fix \(\{X_{1},\ldots,X_{n}\}\subset\mathfrak{g}\) an orthogonal basis of eigenvectors for the dilation matrix \(\Lambda\), so that \(X_{j}\) has eigenvalue \(\lambda_{j}\) and \(0<\lambda_{1}\leq\cdots\leq\lambda_{n}\). Thus, we have \(\delta_{t}X_{j}=t^{\lambda_{j}}X_{j}\) for \(1\leq j\leq n\). 
Furthermore, each \(X_{j}\) corresponds to a right-invariant vector field \(X_{j}^{R}\) on \(G\) which, in particular, satisfies \[\frac{\partial}{\partial t}g(\exp(tX_{j})\,\raisebox{0.0pt}{\scalebox{1.0}{$ \bullet$}}\,x)=(X_{j}^{R}g)(\exp(tX_{j})\,\raisebox{0.0pt}{\scalebox{1.0}{$ \bullet$}}\,x) \tag{2.2}\] for all \(g\in C^{1}(G)\) and \(x\in G\), \(t\in\mathbb{R}\). Finally, the map \[\Upsilon\colon\mathbb{R}^{n}\to G,\qquad\Upsilon\colon(t_{1},\ldots,t_{n})\mapsto \exp(t_{n}X_{n})\cdot\ldots\cdot\exp(t_{1}X_{1})\] is a global diffeomorphism (for a proof of this fact, see [8, Lemma 1.31]). ### Littlewood-Paley theory Henceforth, we fix a homogeneous group \(G\) with family of dilations \((\delta_{t})_{t>0}\). We shall use a variant of the classical Littlewood-Paley decomposition adapted to this setting. Consider a function \(\psi\in C^{\infty}_{c}(G)\) which is _mean zero_ in the sense that \[\int_{G}\psi=0\] and form the dilates \[\psi_{k}(x):=2^{-kQ}\psi\circ\delta_{2^{-k}}(x);\] here \(Q\) is the homogeneous dimension of \(G\), as defined in SS2.1. Consider the convolution operators \(f\mapsto f*\psi_{k}\) for \(f\in L^{1}(G)\). For a suitable choice of \(\psi\), these operators play the role of classical Littlewood-Paley frequency projections. **Proposition 2.1**.: _Let \(G\) be a homogeneous group. There exists \(\psi\in C^{\infty}_{c}(G)\) of mean zero such that_ \[f=\sum_{k\in\mathbb{Z}}f*\psi_{k}\qquad\text{for all $f\in C_{c}(G)$,} \tag{2.3}\] _where the convergence holds in the \(L^{p}\) sense for all \(1<p\leq\infty\)._ This result is well-known (for instance, it can be deduced from [5, Proposition 3.4]); however, for completeness, we present the straightforward proof in Appendix A. ### Standard reductions We apply a sequence of standard reductions to reduce the study of the maximal operator to proving certain \(L^{2}\) and weak-type \((1,1)\) bounds for a family _linear_ operators. #### Cancellation It is useful to introduce some additional cancellation in our operator. Let \(\phi\in C^{\infty}_{c}(G)\) be a non-negative function such that \[\int_{G}\phi=\sigma(G).\] Note that \(\nu:=\sigma-\phi\) is a compactly supported, signed Borel measure on \(G\) such that \(\|\nu\|<\infty\) and \(\nu\) is _mean zero_ in the sense that \[\nu(G)=0.\] Clearly, we have a pointwise inequality \[M[\sigma]f(x)\leq M[\nu]f(x)+M[\phi]f(x).\] Moreover, \(M[\phi]\) is a variant of the Hardy-Littlewood maximal function on \(G\) which, as is well-known (see, for instance, [8, Corollary 2.5]), is bounded on \(L^{p}(G)\) for all \(1<p\leq\infty\). It therefore suffices to estimate the maximal function \(M[\nu]\). _Domination by a square function._ Using Proposition 2.1, we can choose a mean zero function \(\psi\in C_{c}^{\infty}(G)\) which satisfies (2.3). Consider the family of maximal functions \[M_{\ell}[\nu]f:=M[\psi_{\ell}*\nu]f=\sup_{k\in\mathbb{Z}}|f*\psi_{k+\ell}*\nu_{k }|\qquad\text{for }f\in C_{c}(G),\] defined for all \(\ell\in\mathbb{Z}\), so that we may pointwise dominate \[M[\nu]f(x)\leq\sum_{\ell\in\mathbb{Z}}M_{\ell}[\nu]f(x)\leq\sum_{\ell\in \mathbb{Z}}S_{\ell}[\nu]f(x) \tag{2.4}\] where each \(S_{\ell}[\nu]f\) is the associated square function \[S_{\ell}[\nu]f:=\Big{(}\sum_{k\in\mathbb{Z}}|f*\psi_{k+\ell}*\nu_{k}|^{2}\Big{)} ^{1/2}.\] Theorem 1.2 is therefore a consequence of the following proposition. 
**Proposition 2.2**.: _For all \(1<p\leq 2\), there exists some \(\varepsilon(p)>0\) such that_ \[\|S_{\ell}[\nu]f\|_{L^{p}(G)}\lesssim_{\sigma,p}2^{-\varepsilon(p)|\ell|}\|f \|_{L^{p}(G)}\] _holds for all \(f\in C_{c}(G)\) and all \(\ell\in\mathbb{Z}\)._ By our earlier observations (and, in particular, (2.4)), Proposition 2.2 immediately implies Theorem 1.2 in the restricted range \(1<p\leq 2\); the remaining cases then follow via interpolation with the trivial bound for \(p=\infty\). _Linearised square function._ In order to prove Proposition 2.2, we apply a standard randomisation procedure to linearise the square function. Let \(r=(r_{k})_{k\in\mathbb{Z}}\) be a sequence of IID random signs, with each \(r_{k}\) taking the values \(+1\) and \(-1\) with equal probability. Consider the function \[T_{\ell,r}f:=\sum_{k\in\mathbb{Z}}r_{k}T_{\ell}^{k}f\qquad\text{where}\qquad T _{\ell}^{k}f:=f*\psi_{k+\ell}*\nu_{k}\] In view of Khintchine's inequality, to prove Proposition 2.2, it suffices to show that for all \(1<p\leq 2\) there exists some \(\varepsilon(p)>0\) such that the norm bound \[\|T_{\ell,r}\|_{L^{p}(G)\to L^{p}(G)}\lesssim_{p}2^{-\varepsilon(p)|\ell|} \tag{2.5}\] holds for all \(\ell\in\mathbb{Z}\) with a constant independent of realisation of \(r\). Furthermore, (2.5) in turn follows from a pair of endpoint estimates. **Lemma 2.3**.: _There exists \(\rho>0\) such that_ \[\|(T_{\ell}^{k})^{*}T_{\ell}^{j}\|_{L^{2}(G)\to L^{2}(G)}+\|T_{\ell}^{k}(T_{ \ell}^{j})^{*}\|_{L^{2}(G)\to L^{2}(G)}\lesssim_{\sigma}\min\{2^{-\rho|\ell|}, 2^{-\rho|j-k|}\} \tag{2.6}\] _holds for all \(j,k,\ell\in\mathbb{Z}\)._ **Lemma 2.4**.: _For all \(\varepsilon>0\), we have_ \[\|T_{\ell,r}\|_{L^{1}(G)\to L^{1,\infty}(G)}\lesssim_{\sigma,\varepsilon}2^{ \varepsilon|\ell|}, \tag{2.7}\] _where the implicit constant is independent of the realisation of \(r\) and \(\ell\in\mathbb{Z}\)._ Once Lemma 2.3 and Lemma 2.4 are established, Proposition 2.2 (and hence Theorem 1.2) follow in a straightforward matter. Proof of Proposition 2.2.: Fix \(\ell\in\mathbb{Z}\). By combining Lemma 2.3 with the classical Cotlar-Stein lemma (see, for instance, [21, Chapter VII, SS2]), we deduce that there exists \(\rho>0\) (not necessarily the same as that in the statement of Lemma 2.3) such that \[\|T_{\ell,r}\|_{L^{2}(G)\to L^{2}(G)}\lesssim_{\sigma}2^{-\rho|\ell|}, \tag{2.8}\] where the implicit constant is independent of the realisation of \(r\). Fix \(1<p\leqslant 2\). By interpolating between (2.8) and the weak \(L^{1}\) estimate from Lemma 2.4 (with \(\varepsilon\) chosen sufficiently small, depending on \(p\)), we obtain (2.5). As previously noted, a standard argument using Khintchine's inequality then yields the square function bound in Proposition 2.2. ### Almost orthogonality In this subsection we present the proof of Lemma 2.3. For this, it is convenient to introduce a slight generalisation of the curvature assumption. Let \(\sigma\) be a finite Borel measure on \(G\). We say that \(\sigma\) satisfies (\(\Sigma\)CA) if it can be written as a sum \(\sigma=\sigma_{1}+\cdots+\sigma_{m}\) where each \(\sigma_{j}\) is a finite Borel measure on \(G\) which satisfies (CA). Lemma 2.3 is then a fairly direct consequence of the following result. **Lemma 2.5**.: _Suppose \(\mu,\vartheta\) are finite, compactly supported Borel measures on \(G\) which are mean zero and satisfy (\(\Sigma\)CA). 
Then there exists \(\rho>0\) such that_ \[\|A[\mu_{a}*\vartheta_{b}]\|_{L^{2}(G)\to L^{2}(G)}\lesssim_{\mu, \vartheta}2^{-\rho|a-b|}\qquad\text{for all }a,b\in\mathbb{Z}. \tag{2.9}\] Proof (Lemma 2.5 \(\implies\) Lemma 2.3).: By unwinding definitions, we may write \[(T_{\ell}^{k})*(T_{\ell}^{j})=A[\psi_{j+\ell}*\nu_{j}*\tilde{\nu}_{k}*\tilde{ \psi}_{k+\ell}] \tag{2.10}\] and \[(T_{\ell}^{k})(T_{\ell}^{j})*=A[\tilde{\nu}_{j}*\tilde{\psi}_{j+ \ell}*\psi_{k+\ell}*\nu_{k}]. \tag{2.11}\] Recall that the measures \(\psi,\nu\) are mean zero. Furthermore, \(\phi\) satisfies (CA) and \(\nu=\sigma-\phi\) is a difference of measures satisfying (CA). We therefore have an estimate of the form (2.9) whenever the measures \(\mu\) and \(\vartheta\) are taken from the collection \(\{\psi,\nu,\tilde{\psi},\tilde{\nu}\}\). In order to bound the operators in (2.10) and (2.11), consider the pairs of measures listed in Table 1. Thus, by Lemma 2.5, we can find \(\rho>0\) such that \[\|A[\psi_{j+\ell}*\nu_{j}]\|_{L^{2}(G)\to L^{2}(G)}+\|A[\tilde{ \nu}_{j}*\tilde{\psi}_{j+\ell}]\|_{L^{2}(G)\to L^{2}(G)}\lesssim_{\sigma}2^{- \rho|\ell|}, \tag{2.12}\] and \[\|A[\nu_{j}*\tilde{\nu}_{k}]\|_{L^{2}(G)\to L^{2}(G)}+\|A[\tilde{ \psi}_{j+\ell}*\psi_{k+\ell}]\|_{L^{2}(G)\to L^{2}(G)}\lesssim_{\sigma}2^{- \rho|k-j|}. \tag{2.13}\] To deduce the statement of Lemma 2.3 from the above estimates, we repeatedly apply Young's inequality5 to deduce that Footnote 5: For Young’s inequality over homogeneous groups, see [8, Proposition 1.18]. The reference only treats the functional case; however, the version for measures follows in a similar manner. \[\|(T_{\ell}^{k})^{*}(T_{\ell}^{j})\|_{L^{2}(G)\to L^{2}(G)}\] \[\qquad\qquad\lesssim_{\sigma}\min\big{\{}\|A[\psi_{j+\ell}*\nu_ {j}]\|_{L^{2}(G)\to L^{2}(G)},\|A[\nu_{j}*\tilde{\nu}_{k}]\|_{L^{2}(G)\to L^{2} (G)}\big{\}}\] and \[\|(T^{k}_{\ell})(T^{j}_{\ell})^{*}\|_{L^{2}(G)\to L^{2}(G)}\] \[\qquad\lesssim_{\sigma}\min\left\{\|A[\tilde{\nu}_{j}*\tilde{\psi}_ {j+\ell}]\|_{L^{2}(G)\to L^{2}(G)},\|A[\tilde{\psi}_{j+\ell}*\psi_{k+\ell}]\|_{ L^{2}(G)\to L^{2}(G)}\right\}.\] Combining these inequalities with (2.12) and (2.13), we obtain (2.6), which completes the proof of Lemma 2.3. Proof (of Lemma 2.5).: Fix \(\mu\) and \(\vartheta\) as in the statement of the lemma and \(a\), \(b\in\mathbb{Z}\) with \(a\leq b\). The adjoint of \(A[\mu_{a}*\vartheta_{b}]\) is the operator \(A[\tilde{\vartheta}_{b}*\tilde{\mu}_{a}]\). Since the hypotheses on \(\mu\) and \(\vartheta\) are preserved under reflection, it suffices to show there exists some \(\rho>0\) such that \[\|A[\mu_{a}*\vartheta_{b}]\|_{L^{2}(G)\to L^{2}(G)}\lesssim_{\mu,\vartheta}2^ {\rho(a-b)}.\] By linearity, we may further assume without loss of generality that \(\vartheta\) satisfies (CA). By a simple scaling argument, \[\|A[\mu_{a}*\vartheta_{b}]\|_{L^{2}(G)\to L^{2}(G)}=\|A[\mu_{a-b}*\vartheta]\|_ {L^{2}(G)\to L^{2}(G)}. \tag{2.14}\] Let \(\vartheta^{(n)}\) denote the \(n\)-fold convolution product as introduced in (1.1) and, for notational convenience, write \[A^{(n)}:=A[\mu_{a-b}*\vartheta^{(n)}].\] We claim that \[\|A^{(n)}\|_{L^{2}(G)\to L^{2}(G)}\leq\|\mu\|^{1/2}\|\vartheta\|^{n/2}\|A^{(n+ 1)}\|_{L^{2}(G)\to L^{2}(G)}^{1/2} \tag{2.15}\] holds for any \(n\in\mathbb{N}_{0}\). 
To prove the claim, we use the Hilbert space identity \[\|A^{(n)}\|_{L^{2}(G)\to L^{2}(G)}=\|(A^{(n)})^{*}(A^{(n)})\|_{L^{2}(G)\to L^{ 2}(G)}^{1/2}.\] However, \[(A^{(n)})^{*}(A^{(n)})=A[\mu_{a-b}*\vartheta^{(n)}*\mathcal{R}(\vartheta^{(n) })*\widetilde{\mu}_{a-b}]\] where \(\mathcal{R}\) maps a measure \(\varrho\) to its reflection \(\tilde{\varrho}\). In view of (1.1), we deduce that \[\mu_{a-b}*\vartheta^{(n)}*\mathcal{R}(\vartheta^{(n)})*\tilde{\mu}_{a-b}=\mu_ {a-b}*\vartheta^{(n+1)}*\mathcal{R}(\vartheta^{(n-1)})*\tilde{\mu}_{a-b}.\] Thus, by repeated applications of Young's inequality, we obtain the claim (2.15) (note that the \((n-1)\)th convolution product, as defined by (1.1), involves the convolution of \(n\) measures). \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Operator & \(\mu\) & \(\vartheta\) & \(a\) & \(b\) & \(2^{-|a-b|}\) \\ \hline \hline \(A[\psi_{j+\ell}*\nu_{j}]\) & \(\psi\) & \(\nu\) & \(j+\ell\) & \(j\) & \(2^{-|\ell|}\) \\ \hline \(A[\nu_{j}*\tilde{\nu}_{k}]\) & \(\nu\) & \(\tilde{\nu}\) & \(j\) & \(k\) & \(2^{-|k-j|}\) \\ \hline \(A[\tilde{\nu}_{j}*\tilde{\psi}_{j+\ell}]\) & \(\tilde{\nu}\) & \(\tilde{\psi}\) & \(j\) & \(j+\ell\) & \(2^{-|\ell|}\) \\ \hline \(A[\tilde{\psi}_{j+\ell}*\psi_{k+\ell}]\) & \(\tilde{\psi}\) & \(\psi\) & \(j+\ell\) & \(k+\ell\) & \(2^{-|k-j|}\) \\ \hline \end{tabular} \end{table} Table 1. Applications of Lemma 2.5. By (2.14) and repeated application of (2.15), we deduce that \[\|A[\mu_{a}*\vartheta_{b}]\|_{L^{2}(G)\to L^{2}(G)}\lesssim_{\mu,\vartheta}\|A^{( n)}\|_{L^{2}(G)\to L^{2}(G)}^{1/2^{n}}\quad\text{for all $n\in\mathbb{N}_{0}$}. \tag{2.16}\] Since \(\vartheta\) satisfies (CA), there exists some \(N\in\mathbb{N}_{0}\) such that \(\vartheta^{(N)}\) is absolutely continuous with respect to the Haar measure on \(G\) with Radon-Nikodym derivative \(h\) and, furthermore, there exists some \(\gamma>0\) such that \[\int_{G}\left|h(y^{-1}\boldsymbol{\cdot}x)-h(x)\right|\,\mathrm{d}x\lesssim_{ \vartheta}|y|_{G}^{\gamma}\qquad\text{for all $y\in G$}. \tag{2.17}\] In view of (2.16), it suffices to estimate the operator norm of \(A^{(N)}\) for this choice of exponent \(N\). By Young's inequality, the operator bound for \(A^{(N)}\) follows from an estimate on \(\|\mu_{a-b}*\vartheta^{(N)}\|_{L^{1}(G)}\). Since \(\mu\) has mean zero, it follows that \[\|\mu_{a-b}*\vartheta^{(N)}\|_{L^{1}(G)}=\|\mu_{a-b}*h\|_{L^{1}(G)}=\int_{G} \left|\int_{G}\left(h(y^{-1}\boldsymbol{\cdot}x)-h(x)\right)\,\mathrm{d}\mu_{ a-b}(y)\right|\,\mathrm{d}x.\] Applying the triangle inequality, the Fubini-Tonelli theorem and (2.17), we have \[\|\mu*\vartheta^{(N)}\|_{L^{1}(G)}\lesssim_{\vartheta}\int_{G}|y|_{G}^{ \gamma}\,\mathrm{d}|\mu_{a-b}|(y)=2^{\gamma(a-b)}\int_{G}|y|_{G}^{\gamma}\, \mathrm{d}|\mu|(y).\] Consequently, \[\|A[\mu_{a}*\vartheta_{b}]\|_{L^{2}(G)\to L^{2}(G)}\lesssim_{\mu,\vartheta}2^ {\rho(a-b)}\qquad\text{for $\rho:=2^{-N}\gamma$}\] and this concludes the proof. ### Calderon-Zygmund estimates In this subsection we present the proof of Lemma 2.4. The argument is based on Calderon-Zygmund theory adapted to homogeneous groups. Proof (of Lemma 2.4).: Let \(K_{\ell}\) and \(K_{\ell}^{k}\) denote the kernels of \(T_{\ell,r}\) and \(T_{\ell}^{k}\), respectively, so that \[K_{\ell}=\sum_{k\in\mathbb{Z}}r_{k}K_{\ell}^{k}\qquad\text{and}\qquad K_{ \ell}^{k}=\psi_{k+\ell}*\nu_{k}=(\psi_{\ell}*\nu)_{k}. 
\tag{2.18}\] By Calderon-Zygmund theory adapted to the homogeneous group setting (see, for instance, [21, Chapter I, §5]), to prove (2.7) it suffices to verify the Hormander condition \[\sup_{y\in G}\int_{|x|_{G}\geq C_{0}|y|_{G}}|K_{\ell}(y^{-1}\boldsymbol{\cdot}x)-K_{\ell}(x)|\,\mathrm{d}x\lesssim_{\varepsilon}2^{\varepsilon|\ell|}\qquad\text{for all $\varepsilon>0$} \tag{2.19}\] for some fixed constant \(C_{0}>1\). In fact, we shall take \(C_{0}:=2C_{G}\) for \(C_{G}\geq 1\) the constant appearing in the quasi-triangle inequality (2.1). In view of (2.18), it is clear that (2.19) follows from the estimate \[\sup_{y\in G}\,\sum_{k\in\mathbb{Z}}I_{\ell}^{k}(y)\lesssim_{\varepsilon}2^{\varepsilon|\ell|}\qquad\text{for all $\varepsilon>0$} \tag{2.20}\] where \[I_{\ell}^{k}(y):=\int_{|x|_{G}\geq C_{0}|y|_{G}}|K_{\ell}^{k}(y^{-1}\boldsymbol{\cdot}x)-K_{\ell}^{k}(x)|\,\mathrm{d}x.\] To prove (2.20), we first identify \(y\in G\) for which \(I_{\ell}^{k}(y)=0\). By unwinding the definition of \(K_{\ell}^{k}\), we may write \[I_{\ell}^{k}(y)=\int_{|x|_{G}\geq C_{0}2^{-k}|y|_{G}}|(\psi_{\ell}*\nu)((\delta_{2^{-k}}y)^{-1}\boldsymbol{\cdot}x)-(\psi_{\ell}*\nu)(x)|\,\mathrm{d}x. \tag{2.21}\] As \(\nu\) and \(\psi\) are compactly supported, we can find some \(R>1\) such that the support of \(\psi_{\ell}*\nu\) is contained inside the ball \(B(0,R2^{\max\{\ell,0\}})\). Set \(C:=2R\). We claim that \[I_{\ell}^{k}(y)=0\qquad\text{whenever}\quad|y|_{G}\geq C2^{k+\max\{\ell,0\}}. \tag{2.22}\] To see this, fix \(y\in G\) such that \(|y|_{G}\geq C2^{k+\max\{\ell,0\}}\). In view of (2.21), it suffices to check that \[|(\delta_{2^{-k}}y)^{-1}\boldsymbol{\cdot}x|_{G},\,|x|_{G}\geq C2^{\max\{\ell,0\}}\qquad\text{whenever}\quad|x|_{G}\geq C_{0}|\delta_{2^{-k}}y|_{G}.\] The lower bound on \(|x|_{G}\) is immediate. On the other hand, by (2.1) and our choice of \(C_{0}\), we deduce that \[|(\delta_{2^{-k}}y)^{-1}\boldsymbol{\cdot}x|_{G}\geq|\delta_{2^{-k}}y|_{G}\geq C2^{\max\{\ell,0\}}.\] Thus, we have established (2.22). By Young's inequality, the \(L^{1}\) norm of the kernel \(K_{\ell}^{k}\) is uniformly bounded. Consequently, we have the uniform estimate \[I_{\ell}^{k}(y)\lesssim 1\qquad\text{for any }k,\ell\in\mathbb{Z}. \tag{2.23}\] In order to sum in \(k\), we shall improve over (2.23) for certain exponent pairs by establishing an estimate with geometric decay in \(k+\ell\). The key tool is a variant of the mean value theorem on \(G\).

**Lemma 2.6** (Mean value theorem).: _Let \(g\in C_{c}^{1}(G)\). For any \(z\in G\), we have_ \[\int_{G}|g(z\cdot x)-g(x)|\,\mathrm{d}x\lesssim\sum_{j=1}^{n}|z|_{G}^{\lambda_{j}}\|X_{j}^{R}g\|_{L^{1}(G)}.\]

Lemma 2.6 is a variant of [8, Theorem 1.33]; for completeness we present the proof at the end of the section. Here the \(\lambda_{j}\) are the eigenvalues of the dilation matrix \(\Lambda\) and the \(X_{j}^{R}\) are right-invariant vector fields as defined in §2.1. Fix \(k,\ell\in\mathbb{Z}\).
After relaxing the region of integration in (2.21), we see that \[I_{\ell}^{k}(y)\leq\int_{G}|(\psi_{\ell}*\nu)((\delta_{2^{-k}}y)^{-1}\boldsymbol{\cdot}x)-(\psi_{\ell}*\nu)(x)|\,\mathrm{d}x\] \[=\int_{G}\left|\int_{G}\psi_{\ell}((\delta_{2^{-k}}y)^{-1}\boldsymbol{\cdot}x\boldsymbol{\cdot}z^{-1})-\psi_{\ell}(x\boldsymbol{\cdot}z^{-1})\,\mathrm{d}\nu(z)\right|\,\mathrm{d}x\] \[\leq\|\nu\|\int_{G}|\psi_{\ell}((\delta_{2^{-k}}y)^{-1}\boldsymbol{\cdot}x)-\psi_{\ell}(x)|\,\mathrm{d}x.\] We apply Lemma 2.6 with \(g=\psi_{\ell}\) and \(z=(\delta_{2^{-k}}y)^{-1}\). As a consequence of (2.2) and the identity \(\delta_{2^{-\ell}}X_{j}=2^{-\ell\lambda_{j}}X_{j}\), we have \(X_{j}^{R}\psi_{\ell}=2^{-\ell\lambda_{j}}(X_{j}^{R}\psi)_{\ell}\). We therefore deduce that \[I_{\ell}^{k}(y)\lesssim_{\sigma}\sum_{j=1}^{n}(2^{-k}|y|_{G})^{\lambda_{j}}\|X_{j}^{R}(\psi_{\ell})\|_{L^{1}(G)}=\sum_{j=1}^{n}(2^{-(k+\ell)}|y|_{G})^{\lambda_{j}}\|X_{j}^{R}\psi\|_{L^{1}(G)}.\] Combining this with the trivial estimate (2.23), we have \[I_{\ell}^{k}(y)\lesssim_{\sigma}\min\Big{\{}1,\sum_{j=1}^{n}(2^{-(k+\ell)}|y|_{G})^{\lambda_{j}}\Big{\}}. \tag{2.24}\] To conclude the proof, we consider two separate cases.

_Case 1:_ \(\ell\geq 0\). By (2.22) and (2.24), we deduce that \[\sum_{k\in\mathbb{Z}}I_{\ell}^{k}(y)=\sum_{\begin{subarray}{c}k\in\mathbb{Z}\\ |y|_{G}<C2^{\ell+k}\end{subarray}}I_{\ell}^{k}(y)\lesssim_{\sigma}\sum_{j=1}^{n}(2^{-\ell}|y|_{G})^{\lambda_{j}}\Big{(}\sum_{\begin{subarray}{c}k\in\mathbb{Z}\\ C^{-1}2^{-\ell}|y|_{G}<2^{k}\end{subarray}}2^{-k\lambda_{j}}\Big{)}\lesssim 1.\] This is a strong version of the desired inequality (2.20).

_Case 2:_ \(\ell<0\). In view of (2.22), we may split the sum \[\sum_{k\in\mathbb{Z}}I_{\ell}^{k}(y)=\sum_{\begin{subarray}{c}k\in\mathbb{Z}\\ |y|_{G}<C2^{k}\end{subarray}}I_{\ell}^{k}(y)=\sum_{k\in\mathcal{A}_{\ell}(y)}I_{\ell}^{k}(y)+\sum_{k\in\mathcal{B}_{\ell}(y)}I_{\ell}^{k}(y)\] where \[\mathcal{A}_{\ell}(y):=\{k\in\mathbb{Z}:2^{-k}|y|_{G}\leq C2^{\ell}\},\qquad\mathcal{B}_{\ell}(y):=\{k\in\mathbb{Z}:C2^{\ell}<2^{-k}|y|_{G}\leq C\}.\] We apply (2.24) to each of the terms in the right-hand sums. For large values of \(k\), this leads to the estimate \[\sum_{k\in\mathcal{A}_{\ell}(y)}I_{\ell}^{k}(y)\lesssim_{\sigma}\sum_{j=1}^{n}(2^{-\ell}|y|_{G})^{\lambda_{j}}\Big{(}\sum_{\begin{subarray}{c}k\in\mathbb{Z}\\ C^{-1}2^{-\ell}|y|_{G}\leq 2^{k}\end{subarray}}2^{-k\lambda_{j}}\Big{)}\lesssim 1.\] On the other hand, for the small values of \(k\), we have \[\sum_{k\in\mathcal{B}_{\ell}(y)}I_{\ell}^{k}(y)\lesssim_{\sigma}\#\mathcal{B}_{\ell}(y)\lesssim|\ell|.\] Thus, we again have the desired inequality (2.20). In either case, we obtain (2.20), which completes the proof of Lemma 2.4.

### A mean value estimate

It remains to provide the details of the proof of Lemma 2.6.
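Before giving the proof, it may help to record the Euclidean model case (a standard observation added for orientation, not part of the original argument): when \(G=\mathbb{R}^{n}\) with the isotropic dilations \(\delta_{t}x=tx\), every exponent is \(\lambda_{j}=1\), the right-invariant vector fields are the coordinate derivatives \(\partial_{j}\), and Lemma 2.6 reduces to the familiar \(L^{1}\) mean value estimate \[\int_{\mathbb{R}^{n}}|g(x+z)-g(x)|\,\mathrm{d}x\lesssim|z|\sum_{j=1}^{n}\|\partial_{j}g\|_{L^{1}(\mathbb{R}^{n})},\] obtained by writing \(g(x+z)-g(x)\) as an integral of the gradient along the segment from \(x\) to \(x+z\) and applying Fubini. The proof below runs the same argument one coordinate direction at a time, with the fields \(X_{j}^{R}\) playing the role of the partial derivatives.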
Proof (of Lemma 2.6).: First consider an element of the form \(z_{j}=\exp(\pm t_{j}X_{j})\) for some \(1\leq j\leq n\) and \(t_{j}>0\). By the right-invariance property (2.2), we have \[|g(z_{j}\cdot x)-g(x)|=\Big{|}\int_{0}^{t_{j}}\frac{\partial}{\partial s}g(\exp(\pm sX_{j})\cdot x)\,\mathrm{d}s\Big{|}=\Big{|}\int_{0}^{t_{j}}(X_{j}^{R}g)(\exp(\pm sX_{j})\cdot x)\,\mathrm{d}s\Big{|}.\] Therefore, by changing the order of integration and using the translation invariance of the Haar measure, \[\int_{G}|g(z_{j}\cdot x)-g(x)|\,\mathrm{d}x\leq|t_{j}|\,\|X_{j}^{R}g\|_{L^{1}(G)}. \tag{2.25}\] Now consider an arbitrary element \(z\in G\backslash\{0\}\). Since, as discussed in §2.1, the map \[\Upsilon(t_{1},\dots,t_{n}):=\exp(t_{n}X_{n})\cdot\ldots\cdot\exp(t_{1}X_{1})\] is a global diffeomorphism, \(z\) can be written uniquely as \[z=z_{n}\cdot\ldots\cdot z_{1}\qquad\text{where}\qquad z_{j}:=\exp(t_{j}X_{j})\] for some \(t_{1},\ldots,t_{n}\in\mathbb{R}\). Define a sequence of group elements by taking \(\zeta_{0}:=0\) and \[\zeta_{j}:=z_{j}\cdot\ldots\cdot z_{1}\qquad\text{for $1\leqslant j\leqslant n$},\] so that \(z=\zeta_{n}\). We therefore have \[g(z\cdot x)-g(x)=\sum_{j=1}^{n}\big{(}g(\zeta_{j}\cdot x)-g(\zeta_{j-1}\cdot x)\big{)}=\sum_{j=1}^{n}\big{(}g(z_{j}\cdot x_{j})-g(x_{j})\big{)}\] where \(x_{j}:=\zeta_{j-1}\cdot x\). By repeated application of the inequality (2.25) from the case considered above, \[\int_{G}|g(z\cdot x)-g(x)|\,\mathrm{d}x\leqslant\sum_{j=1}^{n}|t_{j}|\,\|X_{j}^{R}g\|_{L^{1}(G)}\lesssim\sum_{j=1}^{n}|z|_{G}^{\lambda_{j}}\|X_{j}^{R}g\|_{L^{1}(G)}, \tag{2.26}\] as required. Note that the final step in (2.26) follows from the inequality \[|z|_{G}^{-1}\sum_{j=1}^{n}|t_{j}|^{1/\lambda_{j}}\leqslant\sup\Big{\{}\sum_{j=1}^{n}|s_{j}|^{1/\lambda_{j}}:|\exp(s_{n}X_{n})\cdot\ldots\cdot\exp(s_{1}X_{1})|_{G}\leqslant 1\Big{\}}\lesssim 1,\] which in turn is a simple consequence of the definition of the group dilations.

## 3. Analysis of the examples

### Graded and stratified groups

We investigate example applications of Theorem 1.2 in the setting of graded and stratified groups; the relevant definitions are recalled presently (see [8, Chapter 1] for further details).
A Lie algebra \(\mathfrak{g}\) with Lie bracket \([\,\cdot\,,\,\cdot\,]_{\mathfrak{g}}\) is _graded_ if there exists a vector space decomposition \[\mathfrak{g}=\bigoplus_{j=1}^{\infty}V_{j}\qquad\text{where}\qquad[V_{i},V_{j}]_{\mathfrak{g}}\subseteq V_{i+j}\quad\text{for $i,j\geqslant 1$}; \tag{3.1}\] here all but finitely many of the vector spaces \(V_{j}\) are equal to \(\{0\}\). If \(G\) is a connected, simply connected nilpotent Lie group with graded Lie algebra \(\mathfrak{g}\), then there is automatically a natural dilation structure on \(\mathfrak{g}\) induced by the grading and so \(G\) is a homogeneous group. A Lie algebra \(\mathfrak{g}\) is _stratified_ if it is graded and \(V_{1}\) generates \(\mathfrak{g}\) as an algebra; in this case, if \(\mathfrak{g}\) is nilpotent of step \(m\), then \[\mathfrak{g}=\bigoplus_{j=1}^{m}V_{j}\qquad\text{where}\qquad[V_{1},V_{j}]_{\mathfrak{g}}=V_{j+1}\quad\text{for $1\leqslant j\leqslant m$}. \tag{3.2}\] We say a Lie group \(G\) is a _graded_ (respectively, _stratified_) _group_ if it is a homogeneous group such that the Lie algebra \(\mathfrak{g}\) is graded (respectively, stratified). We can relate the Lie bracket on \(\mathfrak{g}\) to the group-theoretic commutator on \(G\) via the Baker-Campbell-Hausdorff formula. In particular, for \(x\), \(y\in G\) define \[[x,y]_{G}:=x\cdot y\cdot x^{-1}\cdot y^{-1}.\] Then, by the Baker-Campbell-Hausdorff formula, \[[\exp(X),\exp(Y)]_{G}=\exp\big{(}[X,Y]_{\mathfrak{g}}+e_{3}(X,Y)\big{)} \tag{3.3}\] for all \(X,Y\in\mathfrak{g}\), where \(e_{3}(X,Y)\) is a linear combination of Lie brackets of \(X\) and \(Y\) of order at least \(3\).

### Testing conditions for analytic submanifolds

Let \(G\) be a homogeneous group and \(S\) be a smooth submanifold of \(G\). We say a Borel measure \(\sigma\) on \(G\) is a _\(C_{c}^{\infty}\)-density on \(S\)_ if it is of the form \(\eta\,\mathrm{d}\sigma_{S}\) where \(\eta\in C_{c}^{\infty}(S)\) is a smooth, compactly supported function on \(S\) and \(\sigma_{S}\) is the natural surface measure on \(S\) induced by the Haar measure on \(G\). We consider testing conditions which ensure that any \(C_{c}^{\infty}\)-density on \(S\) satisfies (CA). For analytic submanifolds, a very simple sufficient condition is provided by Ricci-Stein [18].

**Proposition 3.1** (Corollary 2.3, [18]).: _Let \(S\) be a connected analytic submanifold of a homogeneous group \(G\). If \(S\) generates the group \(G\), then any \(C_{c}^{\infty}\)-density \(\sigma\) on \(S\) satisfies (CA)._

Here we say a set \(S\subseteq G\) _generates_ \(G\) if \(G=\langle S\rangle\) where \[\langle S\rangle:=\{s_{1}\cdot\ldots\cdot s_{N}:N\in\mathbb{N},\,s_{1},\ldots,s_{N}\in S\cup\tilde{S}\} \tag{3.4}\] for \(\tilde{S}:=\{s^{-1}:s\in S\}\). Ricci-Stein [18] work with the ostensibly weaker condition that \(G=\operatorname{clos}(\langle S\rangle)\); however, in all cases we consider (that is, for \(S\) a connected analytic submanifold) these conditions turn out to be equivalent. We remark that the result in [18, Corollary 2.3] is in fact somewhat more general.
There, the authors consider a family of connected analytic submanifolds \(S_{j}\) for \(1\leq j\leq N\) such that the iterated product set \(S_{1}\cdot\ldots\cdot S_{N}\) contains a non-trivial open subset of \(G\). For each \(j\), one fixes \(\sigma_{j}\) a smooth density on \(S_{j}\) and considers the convolution product \(\sigma_{1}*\cdots*\sigma_{N}\). To recover Proposition 3.1, we choose the \(S_{j}\) to alternate between \(S\) and the reflection \(\tilde{S}\) and, accordingly, the \(\sigma_{j}\) to alternate between \(\sigma\) and \(\tilde{\sigma}\). Using [18, Proposition 1.1], the hypothesis that \(S\) generates \(G\) implies the existence of some \(N\) such that \(S_{1}\cdot\ldots\cdot S_{N}\) contains a non-trivial open subset of \(G\), and so [18, Corollary 2.3] applies.6 Footnote 6: Alternatively, if \(e\in S\) (which we may always assume in applications to maximal functions), then the condition that \(S\) generates \(G\) is equivalent to \[G=\bigcup_{N=1}^{\infty}S_{1}\cdot\ldots\cdot S_{N}\] for \(S_{j}\) as defined above. It is then easy to show there exists some \(N\in\mathbb{N}\) such that \(S_{1}\cdot\ldots\cdot S_{N}\) contains a non-trivial open set using the Baire category theorem. ### The Koranyi sphere and extensions We return to the example of the Koranyi sphere and its extensions, as discussed in SS1.3, 2). Here we work in the setting of a graded Lie group \(G\) with \(\dim G\geq 2\). Using Proposition 3.1, we verify (CA) for a large class of measures which, in exponential coordinates, are supported on boundaries of convex domains. **Lemma 3.2**.: _Let \(G\) be a graded Lie group with \(\dim G\geq 2\) and suppose \(\Omega\) is an open convex domain in \(\mathfrak{g}\) with analytic boundary \(\Sigma:=\partial\Omega\). Then any \(C_{c}^{\infty}\)-density on \(\exp(\Sigma)\) satisfies (CA)._ Lemma 3.2 applies to the Koranyi sphere in the Heisenberg group and therefore, in view of Theorem 1.2, we obtain a significant extension of [11, Theorem 1.2]. Proof (of Lemma 3.2).: By Proposition 3.1, it suffices to show that \(\exp(\Sigma)\mathbin{\boldsymbol{\cdot}}\exp(\Sigma)\) contains an open ball in \(G\) (defined with respect to, say, the homogeneous norm). Indeed, if this is the case, then \(\exp(\Sigma)\) must generate an open ball \(B\) around the origin. Using the Baker-Campbell-Hausdorff formula, given \(x\in G\), there exists some \(y\in B\) and \(N\in\mathbb{N}\) such that \(x=y^{N}=y\mathbin{\boldsymbol{\cdot}}\ldots\mathbin{\boldsymbol{\cdot}}y\), and so \(\exp(\Sigma)\) generates \(G\). Let \(\mathfrak{g}\) be the Lie algebra associated to \(G\), which we assume admits a grading as in (3.1). For \(X\in\mathfrak{g}\), define the linear map \[\Phi_{X}\colon\mathfrak{g}\to\mathfrak{g},\qquad\Phi_{X}(Y)\colon Y\mapsto[X,Y ]_{\mathfrak{g}}.\] The key claim is that for any \(X\in\mathfrak{g}\), the kernel \(\ker\Phi_{X}\) has dimension at least \(2\). Temporarily assuming the claim, we argue as follows. Assume, without loss of generality, that \(B_{G}(0,1):=\{x\in G:|x|_{G}<1\}\subseteq\exp(\Omega)\). Choose \(x\in B_{G}(0,1)\) and let \(X:=\exp^{-1}(x)\). Let \(H_{X}\) be a subspace of \(\ker\Phi_{X}\) of dimension \(2\) and let \(S^{n-1}\) denotes the unit sphere in \(\mathfrak{g}\) with respect to the euclidean norm. By the convexity of \(\Omega\), for each \(W\in S^{n-1}\cap H_{X}\) there exist unique real numbers \(t(W)\), \(s(W)>0\) such that \[X+t(W)W,\quad X-s(W)W\in\Sigma.\] where \(t(W)W\) and \(s(W)W\) are the usual scalar multiples of \(W\). 
Furthermore, the mapping \[F\colon S^{n-1}\cap H_{X}\to\mathbb{R},\qquad F\colon W\mapsto t(W)-s(W)\] is continuous. Clearly, \(t(-W)=s(W)\) and \(s(-W)=t(W)\), so that \(F(-W)=-F(W)\). The set \(S^{n-1}\cap H_{X}\) is a \(1\)-dimensional (euclidean) sphere. By the intermediate value theorem, there exists some \(W_{x}\in S^{n-1}\cap H_{X}\) such that \(F(W_{x})=0\) or, equivalently, \(t(W_{x})=s(W_{x})\). Since \(W_{x}\in H_{X}\), by the Baker-Campbell-Hausdorff formula and bilinearity of the Lie bracket, \[x\mathbin{\boldsymbol{\cdot}}x=x\mathbin{\boldsymbol{\cdot}}w_{x}\mathbin{ \boldsymbol{\cdot}}w_{x}^{-1}\mathbin{\boldsymbol{\cdot}}x\in\exp(\Sigma) \mathbin{\boldsymbol{\cdot}}\exp(\Sigma),\qquad\text{where }w_{x}:=\exp(t(W_{x})W_{x}).\] As \(x\in B_{G}(0,1)\) was chosen arbitrarily, we conclude that \(\exp(\Sigma)\mathbin{\boldsymbol{\cdot}}\exp(\Sigma)\) contains an open ball in \(G\). It remains to verify the claim that \(\dim\ker\Phi_{X}\geq 2\) for all \(X\in\mathfrak{g}\). From the definition of the grading, the image of \(\Phi_{X}\) is always contained in \(\bigoplus_{j\geq 2}V_{j}\). Thus, if \(\dim(V_{1})\geq 2\), then the result immediately follows from the rank-nullity theorem. On the other hand, if \(\dim(V_{1})=1\), then we must have \([V_{1},V_{1}]=\{0\}\). Therefore, the image \(\Phi_{X}\) is contained in \(\bigoplus_{j\geq 3}V_{j}\) and we can again apply rank-nullity to deduce the desired result. ### Horizontal spheres and extensions We return to the example of the horizontal spheres and the various extensions discussed in SS1.3, 3) and 4). Here it is natural to work with a stratified Lie group \(G\), so that the Lie algebra \(\mathfrak{g}\) admits a stratification as in (3.2). Let \(\Pi_{1}\colon\mathfrak{g}\to V_{1}\) denote the subspace projection onto \(V_{1}\). The main result is as follows. **Lemma 3.3**.: _Let \(G\) be a stratified Lie group with \(\dim G\geq 2\) and \(\Sigma\subseteq\mathfrak{g}\) be a connected analytic submanifold such that \(\Pi_{1}(\Sigma)\) generates \(V_{1}\) (in terms of vector addition). Any \(C_{c}^{\infty}\)-density on \(\exp(\Sigma)\) satisfies_ (CA)_._ Clearly Lemma 3.3 applies to the horizontal and tilted sphere examples in Heisenberg (and Metivier) groups and therefore, combined with Theorem 1.2, we obtain a significant extension of [2, Theorem 1.1] (and also \(L^{p}\) boundedness results mentioned in passing in [19]). Of course, Lemma 3.3 has a much broader scope, and provides a rich class of examples of arbitrary dimension which need not be associated to any \(d\)-plane distribution in the group. In view of Proposition 3.1, the proof of Lemma 3.3 is reduced to showing the following lemma. **Lemma 3.4** (Generator test).: _Let \(G\) be a stratified Lie group and \(S\subseteq G\). Then \(S\) generates \(G\) if and only if \(\Pi_{1}\circ\exp^{-1}(S)\) generates \(V_{1}\) (in terms of vector addition)._ Before presenting the proof, we introduce some helpful notation and consequences of the Baker-Campbell-Hausdorff formula (3.3). Let \(G\) be a graded Lie group with Lie algebra \(\mathfrak{g}\). For \(1\leq\ell\leq m\), define the mapping \[\Phi_{\ell}\colon\mathfrak{g}^{\ell}\to\mathfrak{g},\qquad\Phi_{\ell}\colon(X_ {1},\ldots,X_{\ell})\mapsto[X_{1},[X_{2},\ldots,[X_{\ell-1},X_{\ell}]_{ \mathfrak{g}}\ldots]_{\mathfrak{g}}]_{\mathfrak{g}},\] which takes a nested sequence of Lie brackets of \(\ell\) algebra elements. Note that \(\Phi_{\ell}\) maps the subspace \(V_{1}^{\ell}\) into \(V_{\ell}\). 
If \(G\) is stratified, then this restricted mapping is a surjection. On the other hand, for \(1\leq\ell\leq m\), define the mapping \[\phi_{\ell}\colon G^{\ell}\to G,\qquad\phi_{\ell}\colon(x_{1},\ldots,x_{\ell} )\mapsto[x_{1},[x_{2},\ldots,[x_{\ell-1},x_{\ell}]_{G}\ldots]_{G}]_{G},\] which takes a nested sequence of commutators of \(\ell\) group elements. By iteratively applying the Baker-Campbell-Hausdorff formula (3.3), we have \[\phi_{\ell}(\exp(X_{1}),\ldots,\exp(X_{\ell}))=\exp\big{(}\Phi_{\ell}(X_{1}, \ldots,X_{\ell})+e_{\ell+1}(X_{1},\cdots,X_{\ell})\big{)}, \tag{3.5}\] for all \(X_{1},\ldots,X_{\ell}\in\mathfrak{g}\), where \(e_{\ell+1}(X_{1},\cdots,X_{\ell})\) is a linear combination of Lie brackets of \(X_{1},\ldots,X_{\ell}\) of order at least \(\ell+1\). Generalising the definition of \(\Pi_{1}\) introduced above, for \(j\in\mathbb{N}\), let \(\Pi_{j}\colon\mathfrak{g}\to V_{j}\) denote the subspace projection onto \(V_{j}\) and \[\pi_{j}:=\exp\circ\Pi_{j}\circ\exp^{-1}\colon G\to G\] the corresponding map in the Lie group. We may then reinterpret (3.5) as \[\pi_{i}\circ\phi_{\ell}(\exp(X_{1}),\ldots,\exp(X_{\ell}))=\exp\big{(}\Pi_{i} \circ\Phi_{\ell}(X_{1},\ldots,X_{\ell})\big{)}\quad\text{for $1\leq i\leq\ell$}. \tag{3.6}\] Furthermore, given any \(X_{1},\ldots,X_{\ell}\in\mathfrak{g}\), as a consequence of (3.1), we have \[\Pi_{i}\circ\Phi_{\ell}(X_{1},\ldots,X_{\ell})=0\qquad\text{for $1\leq i\leq \ell-1$} \tag{3.7}\] and \[\Pi_{\ell}\circ\Phi_{\ell}(X_{1},\ldots,X_{\ell})=\Phi_{\ell}\big{(}\Pi_{1}(X_ {1}),\ldots,\Pi_{1}(X_{\ell})\big{)}. \tag{3.8}\] On the other hand, given any \(x_{1},\ldots,x_{\ell}\in G\), by combining (3.6) with (3.7) and (3.8), we have \[\pi_{i}\circ\phi_{\ell}(x_{1},\ldots,x_{\ell})=\epsilon\qquad\text{for $1\leq i \leq\ell-1$} \tag{3.9}\] and \[\pi_{\ell}\circ\phi_{\ell}(x_{1},\ldots,x_{\ell})=\phi_{\ell}\big{(}\pi_{1}(x _{1}),\ldots,\pi_{1}(x_{\ell})\big{)}. \tag{3.10}\] Similarly, the map \(\Pi_{1}\circ\exp^{-1}\colon G\to V_{1}\) is a group homomorphism in the sense that \[\Pi_{1}\circ\exp^{-1}(x\boldsymbol{\cdot}y)=\Pi_{1}\circ\exp^{-1}(x)+\Pi_{1} \circ\exp^{-1}(y)\qquad\text{for all $x$, $y\in G$}, \tag{3.11}\] where the right-hand sum is in terms of vector addition. Proof (of Lemma 3.4).: One direction is clear and so we assume that \(\Pi_{1}\circ\exp^{-1}(S)\) generates \(V_{1}\). We aim to show that \(S\) generates \(G\). As in (3.4), let \(\langle S\rangle\) denote the subgroup of \(G\) generated by \(S\). We first claim that for any \(x=\exp(X)\in G\) with \(X\in V_{1}\), there exists some \(\mathbf{g}(x)\in G\) such that \[\mathbf{g}(x)\in\langle S\rangle\quad\text{and}\quad\pi_{1}(\mathbf{g}(x))=x. \tag{3.12}\] Indeed, from our hypothesis on \(S\) there exists a finite sequence of elements \[s_{1},\dots,s_{k}\in S\quad\text{such that}\quad X=\Pi_{1}\circ\exp^{-1}(s_{1})+ \dots+\Pi_{1}\circ\exp^{-1}(s_{k}).\] If we define \(\mathbf{g}(x):=s_{1}\mathbin{\raise 1.0pt\hbox{$\,\cdot\,$}}\dots \mathbin{\raise 1.0pt\hbox{$\,\cdot\,$}}s_{k}\), then clearly \(\mathbf{g}(x)\in\langle S\rangle\) whilst, by (3.11), we also have \[\Pi_{1}\circ\exp^{-1}(\mathbf{g}(x))=\Pi_{1}\circ\exp^{-1}(s_{1})+\dots+\Pi_{1} \circ\exp^{-1}(s_{k})=X,\] which immediately implies (3.12). We therefore obtain a function \(\mathbf{g}\colon\exp(V_{1})\to G\) satisfying (3.12). 
This function is not uniquely defined, but for our purposes it suffices to work with _some_ such \(\mathbf{g}\).7 Footnote 7: We could easily stipulate additional conditions to ensure \(\mathbf{g}\) is uniquely defined and thus avoid arbitrary choices in the definition. Assuming \(G\) is an \(m\)-step group, we now use induction to prove that \[G_{\ell}:=\Big{\{}\exp(Y):Y\in\bigoplus_{i=\ell}^{m}V_{i}\Big{\}}\subseteq \langle S\rangle \tag{3.13}\] for all \(1\leqslant\ell\leqslant m+1\), where \(G_{m+1}\) is interpreted as \(\{0\}\). For \(\ell=1\), the above statement becomes \(G=\langle S\rangle\), which is precisely the content of the lemma. We take \(\ell=m+1\) as the base of the induction, in which case (3.13) is trivial. Let \(2\leqslant\ell\leqslant m+1\) and suppose, by way of induction hypothesis, that \(G_{\ell}\subseteq\langle S\rangle\). To complete the argument, it suffices to show \(G_{\ell-1}\subseteq\langle S\rangle\). Fix \(y\in G_{\ell-1}\) so that \[y=\exp(\sum_{i=\ell-1}^{m}Y_{i})\qquad\text{for some $Y_{i}\in V_{i}$, $\ell-1\leqslant i\leqslant m$}.\] Since \(G\) is stratified, we can find \(X_{1},\dots,X_{\ell-1}\in V_{1}\) such that \[\Phi_{\ell-1}(X_{1},\dots,X_{\ell-1})=Y_{\ell-1}.\] Let \(x_{j}:=\exp(X_{j})\) for \(1\leqslant j\leqslant\ell-1\). It follows from (3.10) and (3.12) that \[\pi_{\ell-1}\big{(}\phi_{\ell-1}(\mathbf{g}(x_{1}),\dots,\mathbf{ g}(x_{\ell-1}))\big{)} =\phi_{\ell-1}\big{(}\pi_{1}(\mathbf{g}(x_{1})),\dots,\pi_{1}( \mathbf{g}(x_{\ell-1}))\big{)}\] \[=\phi_{\ell-1}(x_{1},\dots,x_{\ell-1})\] \[=\exp(Y_{\ell-1}),\] where the last step is due to (3.6) and (3.8). On the other hand, from (3.9) we have \[\pi_{i}\big{(}\phi_{\ell-1}(\mathbf{g}(x_{1}),\dots,\mathbf{g}(x_{\ell-1})) \big{)}=e\qquad\text{for $1\leqslant i\leqslant\ell-2$}.\] Consequently, we may write \[z:=\phi_{\ell-1}(\mathbf{g}(x_{1}),\dots,\mathbf{g}(x_{\ell-1}))=\exp\Big{(}Y _{\ell-1}+\sum_{i=\ell}^{m}Z_{i}\Big{)}\] for some \(Z_{i}\in V_{i}\) for \(\ell\leqslant i\leqslant m\). In view of (3.12) and the definition of \(z\) in terms of commutators of the \(\mathbf{g}(x_{j})\), we have \(z\in\langle S\rangle\). By the Baker-Campbell-Hausdorff formula, there exist polynomial mappings \[P_{z,i}\colon\bigoplus_{j=\ell}^{m}V_{j}\mapsto V_{i}\] such that if \(u=\exp(U):=\exp(\sum_{i=\ell}^{m}U_{i})\in G_{\ell}\) with \(U_{i}=\Pi_{i}U\in V_{i}\), then \[\pi_{i}(u\boldsymbol{\cdot}\,z)=\exp(U_{i}+P_{z,i}(U))\qquad\text{for $\ell\leq i \leq m$}\] where \(P_{z,i}\) depends only \(U_{\ell},\dots,U_{i-1}\) and \(z\). In particular, \(P_{z,i}(U)\) is independent of \(U_{i},\dots,U_{m}\) and so the polynomial \(P_{z,\ell}\) is constant as a function of \(U\) (in fact, \(P_{z,\ell}(U)=Z_{\ell}\)). On the other hand, the remaining projections are given by \[\pi_{i}(u\boldsymbol{\cdot}\,z)=e\quad\text{for $1\leq i\leq\ell-2$}\quad \text{and}\quad\pi_{\ell-1}(u\boldsymbol{\cdot}\,z)=\exp(Y_{\ell-1})\] For any \(u\in G_{\ell}\) as above, the induction hypothesis implies that \(u\boldsymbol{\cdot}\,z\in\langle S\rangle\). In view of the dependence properties of the \(P_{z,i}\), it is possible to inductively choose the \(U_{i}\) so that \[Y_{i}=U_{i}+P_{z,i}(U)\qquad\text{for $\ell\leq i\leq m$}.\] Thus, from the preceding observations, \(y=u\boldsymbol{\cdot}\,z\in\langle S\rangle\) and we conclude that \(G_{\ell-1}\subseteq\langle S\rangle\). This closes the induction and completes the proof. ## Appendix A Existence of the Littlewood-Paley decomposition Here we provide a proof of Proposition 2.1. 
For this, it is convenient to adopt slightly different notation from that used in the rest of the paper: given \(f\in C(G)\) and a continuous parameter \(t>0\) we shall write \(f_{t}:=t^{-Q}f\circ\delta_{t^{-1}}\), so that the notation \(f_{k}\) used previously in the paper corresponds to \(f_{2^{k}}\). Proposition 2.1 follows from a basic result on \(L^{p}\) approximate identities. Consider \(\phi\in C_{c}^{\infty}(G)\) satisfying \[\int_{G}\phi=1.\] (A.1) Given any \(f\in C_{c}(G)\), it follows that \[\|f\ast\phi_{t}-f\|_{L^{p}(G)}\to 0\quad\text{as $t\to 0_{+}$}\qquad\text{for all $1\leq p\leq\infty$}\] (A.2) and \[\|f\ast\phi_{t}\|_{L^{p}(G)}\to 0\quad\text{as $t\to\infty$}\qquad\text{for all $1<p\leq\infty$};\] (A.3) the standard proofs are left to the reader (also see [8, Proposition 1.20]). Proof (Proposition 2.1).: Suppose \(\phi\in C_{c}^{\infty}(G)\) satisfies (A.1) as above. By (A.2) and (A.3), we have \[f=\lim_{K\to\infty}f\ast\phi_{2^{-K}}-f\ast\phi_{2^{K}}\] and so, by the fundamental theorem of calculus, \[f=-\lim_{K\to\infty}\int_{2^{-K}}^{2^{K}}f\ast\Big{(}\frac{\partial\phi_{t}}{ \partial t}\Big{)}\,\mathrm{d}t=-\sum_{k\in\mathbb{Z}}f\ast\Big{(}\int_{2^{k}} ^{2^{k+1}}\frac{\partial\phi_{t}}{\partial t}\,\mathrm{d}t\Big{)},\] (A.4) where in each case the convergence holds in \(L^{p}(G)\) for \(1<p\leq\infty\). A computation shows \[\frac{\partial\phi_{t}}{\partial t}(x)=-t^{-1}h_{t}(x)\qquad\text{for some $h\in C_{c}^{\infty}(G)$}.\] Moreover, if we define \[\psi(x):=\int_{1}^{2}h_{t}(x)\,\frac{\mathrm{d}t}{t},\] (A.5) then, by a simple change of variables, \[-\int_{2^{k}}^{2^{k+1}}\frac{\partial\phi_{t}}{\partial t}(x)\,\mathrm{d}t=\int_{ 2^{k}}^{2^{k+1}}h_{t}(x)\,\frac{\mathrm{d}t}{t}=\psi_{2^{k}}(x).\] (A.6) Combining (A.4) and (A.6), we see that (2.3) holds for \(\psi\) as defined in (A.5). It remains to show \(\psi\) is of mean zero. Clearly, it suffices to show the same property hold for the function \(h\). However, since \[\int_{G}h(x)\,\mathrm{d}x=-t\frac{\partial}{\partial t}\int_{G}\phi_{t}(x)\, \mathrm{d}x\Big{|}_{t=1},\] the mean zero property for \(h\) is an immediate consequence of (A.1).
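One way to spell out the computation alluded to above is in exponential coordinates (a standard sketch, stated here for convenience and using the usual identification of Haar measure with Lebesgue measure): writing the dilations as \(\delta_{t}x=(t^{d_{1}}x_{1},\dots,t^{d_{n}}x_{n})\) with \(d_{1}+\dots+d_{n}=Q\), the chain rule gives \[\frac{\partial\phi_{t}}{\partial t}(x)=\frac{\partial}{\partial t}\Big(t^{-Q}\phi(\delta_{t^{-1}}x)\Big)=-t^{-1}\,t^{-Q}\Big(Q\phi+\sum_{i=1}^{n}d_{i}\,y_{i}\,\partial_{i}\phi\Big)\Big|_{y=\delta_{t^{-1}}x}=-t^{-1}h_{t}(x),\] so that \(h=Q\phi+\sum_{i}d_{i}y_{i}\partial_{i}\phi\in C_{c}^{\infty}(G)\). Integrating by parts in each variable then yields \[\int_{G}h=Q\int_{G}\phi-\sum_{i=1}^{n}d_{i}\int_{G}\phi=0,\] in agreement with the identity displayed above.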
2310.11805
GMC-Pos: Graph-Based Multi-Robot Coverage Positioning Method
Nowadays, several real-world tasks require adequate environment coverage for maintaining communication between multiple robots, for example, target search tasks, environmental monitoring, and post-disaster rescues. In this study, we look into a situation where there are a human operator and multiple robots, and we assume that each human or robot covers a certain range of areas. We want them to maximize their area of coverage collectively. Therefore, in this paper, we propose the Graph-Based Multi-Robot Coverage Positioning Method (GMC-Pos) to find strategic positions for robots that maximize the area coverage. Our novel approach consists of two main modules: graph generation and node selection. Firstly, graph generation represents the environment using a weighted connected graph. Then, we present a novel generalized graph-based distance and utilize it together with the graph degrees to be the conditions for node selection in a recursive manner. Our method is deployed in three environments with different settings. The results show that it outperforms the benchmark method by 15.13% to 24.88% regarding the area coverage percentage.
Khattiya Pongsirijinda, Zhiqiang Cao, Muhammad Shalihan, Benny Kai Kiat Ng, Billy Pik Lik Lau, Chau Yuen, U-Xuan Tan
2023-10-18T08:52:48Z
http://arxiv.org/abs/2310.11805v1
# GMC-Pos: Graph-Based Multi-Robot ###### Abstract Nowadays, several real-world tasks require adequate environment coverage for maintaining communication between multiple robots, for example, target search tasks, environmental monitoring, and post-disaster rescues. In this study, we look into a situation where there are a human operator and multiple robots, and we assume that each human or robot covers a certain range of areas. We want them to maximize their area of coverage collectively. Therefore, in this paper, we propose the Graph-Based Multi-Robot Coverage Positioning Method (GMC-Pos) to find strategic positions for robots that maximize the area coverage. Our novel approach consists of two main modules: graph generation and node selection. Firstly, graph generation represents the environment using a weighted connected graph. Then, we present a novel generalized graph-based distance and utilize it together with the graph degrees to be the conditions for node selection in a recursive manner. Our method is deployed in three environments with different settings. The results show that it outperforms the benchmark method by 15.13% to 24.88% regarding the area coverage percentage. ## I Introduction Area coverage has been playing a significant role in achieving various robotics-related tasks, such as path planning and area exploration. This topic is also critically important for search and rescue (SAR) operations [1], such as post-disaster monitoring, awareness of victims' conditions, and target searching. At the same time, utilizing the robot as relaying nodes for wireless ad hoc networks has also become mainstream in recent years. According to current research, there is room to enhance how robots should efficiently locate in order to maximize the area coverage as much as possible. According to the survey [2], currently, making the robots meet at their initial or some rendezvous positions is currently one of the most used assignments for rearranging the robot positions. Some strategies were proposed for different research objectives. There was a graph-based rendezvous [3] proposed to be used together with exploration. Bio-inspired techniques, such as the ant algorithm [4] and bacterial chemotaxis [5], were also applied for post-exploration meetings. Some works focused on connectivity-preserving [6, 7], and communication-limited rendezvous [8]. In spite of the fact that gathering robots by rendezvous techniques is systematic, it does not satisfy our primary goal, which is to cover the environment. Although previous studies have not directly treated it in much detail, map-grid-based and graph-based strategies are the most popular approaches due to the benefits associated with map representation. The area coverage problem is often studied together with other topics. There are applications in various domains, such as path and motion planning [9, 10, 11, 12, 13], pathfinding [14], multi-robot exploration [15, 16, 17], SAR tasks [18], or even strategic positioning for robot soccer teams [19, 20]. One of the related research matches our purposes and conditions. It is about the multi-robot coverage of a known environment [21]. Since, in our case, we have a map image as a prior before the positioning stage, it can also be considered a known environment. However, the existing methods that apply the map-grid-based approach [13, 20, 21] can have problems from unbalanced grid cell size and high computational time, especially when deploying a high number of robots. 
On the other hand, the existing graph-based methods utilize different types of graphs, for example, Voronoi diagrams [11, 12, 15, 19], Delaunay triangulation [14], and bipartite graphs [19], which are not aimed at maximum area coverage purposes. There have also been studies and applications of the Maximal Covering Location Problem (MCLP) [22, 23, 24, 25], which is known to be NP-hard [26]. That problem aims to locate a number of facilities to maximize the amount of covered demand. For the problem studied in this paper, if we solve it in the same sense, we may consider the environment area as the demand and the robots as the facilities. However, our problem setting has even more constraints than the standard MCLP [22]. Firstly, the demand nodes were originally conceived to be sparse discrete points, but in our case, all map grids representing the area must be taken into account since our goal is to maximize the coverage area. Secondly, the mobile robots are different from the classical facilities in MCLP. Since the desired robot positions are also selected from the same map grids, they are simultaneously demand nodes and facilities. Therefore, our unconventional problem requires a different, novel approach.

Fig. 1: Overview of the GMC-Pos framework

This paper proposes the **G**raph-Based **M**ulti-Robot **C**overage **P**ositioning Method (GMC-Pos) to generate balanced and high coverage for 2D maps, which consists of two modules: graph generation and node selection. Firstly, we present a graph generation method to construct a weighted connected graph that represents the environment. This graph behaves like a topological map of the environment, i.e., it follows the connectivity and structure of the environment. The generated graph nodes are placed in reachable areas and are not obstructed by obstacles and walls. This is further enhanced by the property of connected graphs that there is always at least one path between graph nodes. In addition, each edge of this map-representing graph is also assigned its length, which is beneficial for calculating the inter-position distances afterward. Secondly, we introduce a novel node selection strategy. In particular, the process proceeds in a recursive fashion with some graph-related requirements. Note that there are multiple robots and one human operator in our setting, and the operator can be treated like a robot as well, i.e., able to move around the environment. The robots will spread through the map according to the human position. In this module, we also construct a novel graph-based generalized distance combining the Euclidean distance and Dijkstra's shortest path length. This distance is used for all the relevant situations since it can realistically measure the distance between positions in the environment, whether or not they are graph nodes. Subsequently, as we aim to select the nodes as the robot positions for maximizing the area coverage, the nodes are chosen bidirectionally starting from the human position within the range of each robot's area coverage radius. We then focus on the nodes with the highest degree in the range since they mostly represent intersections or the centers of sub-areas. Finally, among these nodes, we select the one that is furthest from the previous nodes in order to spread through the environment as much as possible. 
The main contributions of this paper are as follows:
* To efficiently represent the environment for distance-calculation purposes, we propose a graph generation method that creates a map-representing connected graph with the edge length as the weight.
* To obtain the maximum area coverage possible in any given environment, we propose a new recursive node selection strategy, which is based on the novel graph-based generalized distance and the graph nodes' degrees.
* We implement the GMC-Pos and test it in six scenarios using three different maps. Our method is compared with a benchmark to show its excellent performance in area coverage percentage.

The remainder of this paper proceeds as follows: Firstly, in Section II, the GMC-Pos is described in two subsections, namely graph generation and node selection. Next, in Section III, the details about the simulation settings, evaluation metric, and benchmark method are provided. Then, the results are presented and discussed in Section IV. Finally, in Section V, the main findings are concluded, and the directions of future research are addressed. ## II GMC-Pos Positioning Method This section describes the details of our proposed method, GMC-Pos, which consists of two modules: graph generation and node selection. Given a fully explored map, we have the map information; we can then generate a connected graph and select appropriate nodes, based on our novel strategy, to be the robot positions. Each process is explained in the following subsections. ### _Graph Generation_ The representing graph for the fully explored map is generated in the form of a connected graph by the Voronoi distillation [9], which was implemented as a part of the ROS package \(tuw\_voronoi\_graph\) [27, 28]. We use the graph-based approach because considering all the positions on the map requires high computational time. Moreover, the main benefit of this graph is that it is connected and spans the map thoroughly. That means there is always at least one path between any pair of graph nodes. The nodes are also located only in the explored area and do not overlap with obstacles. Thus, the graph nodes and edges are already sufficient for representing the environment.

Fig. 2: An example of the generalized graph-based distance from \(A\) to \(B\): \(d_{G}(A,B)\)

The graph generation can be customized by changing the values of segment length, crossing optimization, and end segment optimization. However, since the unweighted graph created by the \(tuw\_voronoi\_graph\) always uses the bottom left corner of the map image as the position \((0.0,0.0)\), we adjust the graph to have proper node coordinates for any map origin. Otherwise, the coordinate of each node will not be its actual position on the map. Moreover, as the distance between nodes will be presented in Section II-B, we transform the graph into a weighted graph in the structure of a Python package, \(NetworkX\) [29]. For the sake of convenience, each node in the generated graph is represented by its coordinate, while the distance between connected nodes is assigned to be the weight of the corresponding edge. Therefore, let \(G_{0}=(V_{0},E_{0})\) be an unweighted graph generated by the \(tuw\_voronoi\_graph\), where \[V_{0}=\{(x_{v}^{0},y_{v}^{0})\in\mathbb{R}^{2}\}. \tag{1}\] We reconstruct \(G_{0}\) into a weighted graph \(G=(V,E)\). Let \((x_{\text{map}},y_{\text{map}})\) be the map origin from the map image. 
The set of nodes can be denoted as follows: \[V=\{(x_{v}^{0}+x_{\text{map}},y_{v}^{0}+y_{\text{map}})\in\mathbb{R}^{2}\} \text{ for all }(x_{v}^{0},y_{v}^{0})\in V_{0}. \tag{2}\] The edges in \(E\) are still based on \(E_{0}\), but are updated with the adjusted nodes in \(V\). The weight of each edge \(e=(u,v)\in E\) is assigned as follows: \[w(e)=d(u,v)\text{,} \tag{3}\] where \(d\) is the Euclidean distance. This graph \(G\) will be used afterward for node selection, which will be presented in Section II-B. ### _Node Selection_ Before looking into the selection process, we introduce a novel distance called the generalized graph-based distance. The generalized graph-based distance can be illustrated in Fig. 2. This distance is a better measurement than the Euclidean distance because it depends on the map-represent graph paths. So, it acts in accordance with the connectivity and structure of the map. We define it as follows: \[d_{G}(A,B)=d(A,v_{A})+\delta(v_{A},v_{B})+d(v_{B},B)\text{,} \tag{4}\] where \(d\) is the Euclidean distance, \(\delta\) is the length of Dijkstra's shortest path [30] calculated by a \(NetworkX\) function, and \(v_{A}\), \(v_{B}\) are the nodes that are closest to \(A\), \(B\) in the Euclidean manner, respectively. The generalized graph-based distance can realistically measure the distance between any point on the map for both the graph nodes and those that are not. For example, in the case of the distance between nodes \(u\) and \(w\), we have \(u=v_{u}\) and \(w=v_{w}\). Hence, \[d_{G}(u,w)=\delta(u,w)\text{,} \tag{5}\] which is just the length of Dijkstra's shortest path between \(u\) and \(w\). Moving to look at the node selection, the overall process is shown in Algorithm 1. We will choose the nodes as the robot positions based on the novel strategy in a balanced bidirectional manner to maximize the total area coverage. Let \(\Lambda=\{\lambda_{0},\lambda_{1},...,\lambda_{N-1}\}\subseteq V\) be the set of nodes that are selected as the positions for \(N\) robots. The human operator position \(P\) and the robot area coverage radius \(r\) are required as the input. The selection process is constructed using the recursion as follows: \[\lambda_{0}=\operatorname*{arg\,max}_{v\in V_{P}^{*}}\{d_{G}(v,P )\}\text{,}\quad\lambda_{1}=\operatorname*{arg\,max}_{v\in V_{P}^{*}}\{d_{G}(v, P)\}\text{,}\] \[\lambda_{i}=\operatorname*{arg\,max}_{v\in V_{X_{i-2}}^{*}}\{d_{G} (v,\lambda_{i-2})\}\quad\text{if }i=2,3,...,N-1\text{,} \tag{6}\] where \[V_{\eta}^{*}=\operatorname*{arg\,max}_{v\in V_{\eta}}\{\text{ deg}(v)\}\text{,} \tag{7}\] \[V_{\eta}=\{v\in V|d_{G}(v,\eta)<2r\wedge d_{G}(v,\lambda)\geq \alpha,\] \[\quad\quad\forall\lambda\in\Lambda\cup\{P\}\}\setminus\Lambda\}. \tag{8}\] Since the proposed selection strategy contains various novel components, it is important to clarify which each equation is used for which purposes. Starting from eq. (8), \(V_{\eta}\) contains the nodes within \(2r\). At the same time, the balance of nodes in \(V_{\eta}\) spread thoroughly from all previously selected nodes in \(\Lambda\) needs to be considered. Thus, we choose \(\alpha\) as \[\alpha=\frac{\max(\{H,W\})}{N}\text{,} \tag{9}\] where \(H=H_{0}\cdot Res\), \(W=W_{0}\cdot Res\), and \(H_{0}\), \(W_{0}\), \(Res\) are the height, width, and resolution of the map, respectively. Next, for eq. (7), \(V_{\eta}^{*}\) is a subset of \(V_{\eta}\) in eq. (8), but it will contain only the nodes with the highest degree. 
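The two ingredients just defined, the weighted graph of eqs. (2)-(3) and the generalized graph-based distance of eqs. (4)-(5), can be sketched in a few lines of Python with \(NetworkX\); the node list, edge list, and map origin are assumed to be parsed from the \(tuw\_voronoi\_graph\) output, and the function names below are illustrative, not those of any released code.

```python
import math
import networkx as nx

def build_weighted_graph(nodes, edges, map_origin):
    """Eqs. (2)-(3): shift every node by the map origin and weight each
    edge with the Euclidean distance between its (shifted) endpoints."""
    ox, oy = map_origin
    shifted = {v: (v[0] + ox, v[1] + oy) for v in nodes}          # eq. (2)
    G = nx.Graph()
    G.add_nodes_from(shifted.values())
    for u, v in edges:
        a, b = shifted[u], shifted[v]
        G.add_edge(a, b, weight=math.dist(a, b))                   # eq. (3)
    return G

def d_G(G, A, B):
    """Eq. (4): Euclidean hops to the nearest graph nodes plus the Dijkstra
    shortest-path length between those nodes; eq. (5) is the special case
    in which A and B are themselves graph nodes."""
    v_A = min(G.nodes, key=lambda v: math.dist(A, v))
    v_B = min(G.nodes, key=lambda v: math.dist(B, v))
    delta = nx.dijkstra_path_length(G, v_A, v_B, weight="weight")
    return math.dist(A, v_A) + delta + math.dist(v_B, B)
```

In what follows, the candidate sets of eqs. (7)-(8) are formed from the nodes of this graph, and only the nodes of the highest degree within range are kept.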
We prefer these nodes because we can infer they have many connections in the map. Therefore, in most cases, they are the intersections or the centers of separate rooms, which will consequently affect the area coverage. Finally, the selection process (6) is the last step after we have obtained the nodes with a maximum degree from eq. (7). Among those nodes, we select the one that is the furthest from \(\eta\) according to the bidirectional manner. ## III Simulation The details of the simulation and results of the GMC-Pos will be presented in this section. In the first subsection, we will describe how the simulations are conducted. Then, in the second subsection, we will introduce the evaluation metric. And finally, the method that we use as the benchmark will be explained in the third subsection. ### _Simulation Setup_ All simulations are conducted using ROS Melodic with Ubuntu 18.04 on a Desktop PC with Xeon(R) CPU E5-1680 v3 @ 3.20GHzx16 and 31.3 GB RAM, RViz is used for visualization. We perform multi-robot simulations with one human operator in three environments, as shown in Fig. 3. Map 1 is a loop corridor of size 12.20m\(\times\)12.20m, Map 2 is an actual indoor area of size 27.10m\(\times\)32.20m set up to be more complicated by using boxes and partitions, and Map 3 from [27] is a floor map of size 37.37m\(\times\)23.38m containing long narrow corridors and rooms. Subsequently, we have the scenarios as shown in Tab. I. The purpose of considering Scenarios 1A and 1B is to preliminarily determine if the GMC-Pos can accurately handle when the best robot positions are heuristically known. Meanwhile, the rest of the scenarios are mainly for testing the efficiency of GMC-Pos in various conditions, which have different numbers of robots and operator locations. ### _Evaluation Metric_ First of all, in this paper, we assume that robots and the operator have the same area coverage range \(r=6\)m. So, since our goal is strategically positioning robots to maximize the coverage area all over the map, we newly introduce a metric for evaluating this factor. Let \(\bar{O}\) be the set of occupancy grid cells of the map area scaled by the map resolution and origin. We define the total map area \(A\) and the area covered by all robots and the operator \(A_{\text{cover}}\) as follows: \begin{table} \begin{tabular}{|c|c|c|c|} \hline Scenario & Map & Number of Robots & Operator Location \\ \hline 1A & Map 1 & 3 & Bottom left \\ \hline 1B & & 3 & Top right \\ \hline 2A & Map 2 & 5 & Center \\ \hline 2B & & 6 & Right side \\ \hline 3A & & 5 & Right side \\ \hline 3B & & 6 & Left side \\ \hline \end{tabular} \end{table} TABLE I: Simulation scenarios Fig. 3: Environment maps \[A =\big{|}\bar{O}\big{|} \tag{10}\] \[A_{\text{cover}} =\Bigg{|}\bigcup_{\lambda\in\Lambda\cup\{P\}}B_{r}[\lambda]\Bigg{|}, \tag{11}\] where \[B_{r}[\lambda]=\{o\in\bar{O}|d(o,\lambda)\leq r\}. \tag{12}\] We can see from eq. (11) that \(A_{\text{cover}}\) is the union of the coverage ranges of all robots and the operator. Therefore, the area coverage percentage (\(ACP\)) is defined as \[ACP=\frac{A_{\text{cover}}}{A}\cdot 100. \tag{13}\] ### _Benchmark Method_ Turning now to the benchmark, we implement a method called Conditional Random, as presented in Algorithm 2, to compare with our GMC-Pos, in which by this method, the robot positions will be chosen based on the occupancy grid and the Euclidean distance. 
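Before detailing that benchmark, the selection recursion of eqs. (6)-(9) and the evaluation metric of eqs. (10)-(13) can be condensed into a short sketch that reuses the graph \(G\), the distance \(d_{G}\), and the `math` import from the previous listing; the bookkeeping is our simplified reading of Algorithm 1, not the authors' released implementation, and \(\alpha\) is assumed to be precomputed from the map size as in eq. (9).

```python
def select_positions(G, P, r, alpha, N):
    """Eqs. (6)-(9): recursively pick N robot positions from the graph nodes."""
    chosen = []                                      # the set Lambda
    def candidates(eta):
        # Eq. (8): nodes within 2r of eta and at least alpha away from the
        # operator P and from every previously chosen position.
        V_eta = [v for v in G.nodes if v not in chosen
                 and d_G(G, v, eta) < 2 * r
                 and all(d_G(G, v, lam) >= alpha for lam in chosen + [P])]
        if not V_eta:
            return []
        deg_max = max(G.degree(v) for v in V_eta)
        return [v for v in V_eta if G.degree(v) == deg_max]       # eq. (7)
    for i in range(N):
        eta = P if i < 2 else chosen[i - 2]          # bidirectional recursion, eq. (6)
        cand = candidates(eta)
        if not cand:
            break
        chosen.append(max(cand, key=lambda v: d_G(G, v, eta)))
    return chosen

def area_coverage_percentage(cells, positions, r):
    """Eqs. (10)-(13): percentage of map grid cells within r of some position."""
    covered = sum(1 for c in cells
                  if any(math.dist(c, p) <= r for p in positions))
    return 100.0 * covered / len(cells)
```

Here the positions passed to the metric would be \(\Lambda\cup\{P\}\), the cells would be the scaled grid cells \(\bar{O}\), and \(r=6\) m as stated in Section III-B.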
Let \(O\subseteq\bar{O}\) be the set of unoccupied occupancy grid cells of the map area scaled by the map resolution and origin, and let \(\Psi=\{\psi_{0},\psi_{1},...,\psi_{N-1}\}\subseteq O\) be the set of cells that are selected as the positions for \(N\) robots. The selection process is constructed using the recursion as follows: \[\psi_{0}=\operatorname{random.sample}(O_{P},1),\quad\psi_{1}=\operatorname{random.sample}(O_{P},1),\quad\psi_{i}=\operatorname{random.sample}(O_{\psi_{i-2}},1)\quad\text{if }i=2,3,...,N-1, \tag{14}\] where \[O_{\phi}=\{o\in O|\alpha\leq d(o,\phi)<2r\}\setminus\Psi. \tag{15}\] Recall that \(d\) is the Euclidean distance, \(\alpha\) is the same as in eq. (9), and \(\operatorname{random.sample}(S,k)\) is a function that chooses \(k\) random items from the set \(S\). Here the process (14) is random because of the high number of grid cells that need to be considered and filtered under the conditions. Therefore, for the Conditional Random method, we will use the average of 50 iterations of each scenario for comparison in the following section.

Fig. 4: Results of the GMC-Pos (the proposed method) and the Conditional Random. The green circles represent the robots, the red square represents the human operator, the blue translucent circles represent the areas covered by the corresponding robot, the yellow translucent circle represents the area covered by the human operator, and the red boxes indicate the areas that are not covered.

## IV Results and Discussion The simulations for the GMC-Pos and the Conditional Random are conducted in the six scenarios we mentioned previously, as presented in Fig. 4. Note that for the Conditional Random, the figures shown are from one iteration of each scenario setting. They are evaluated using the \(ACP\), as shown in Fig. 5. We can see that the GMC-Pos performs better than the Conditional Random in all the scenarios. Firstly, in Scenarios 1A and 1B, shown in Fig. 4(a) to 4(d), our purpose is to preliminarily check if the GMC-Pos works correctly on Map 1, which is a simple map. We can see that the robots using the GMC-Pos successfully cover the whole area, as the node selection does not fail in choosing the three apparently best robot positions. On the other hand, those using the Conditional Random still have some uncovered areas around one of the corners since the robot positions are not selected efficiently using a graph-based approach. So, the results show that the GMC-Pos has 15.13% and 16.23% higher \(ACP\) than the Conditional Random in Scenarios 1A and 1B, respectively. Secondly, in Scenario 2A, shown in Fig. 4(e) and 4(f), the robots using the Conditional Random have an unbalanced positioning between the left and right sides of the map. On the other hand, those using the GMC-Pos can spread equally and cover most of the whole area well, resulting in a 24.88% improvement of \(ACP\). Thirdly, in Scenario 2B, shown in Fig. 4(g) and 4(h), there is a gap and an unbalanced distribution among the robots using the Conditional Random. The main reason is that the Euclidean distance used in the Conditional Random can sometimes be suboptimal. However, GMC-Pos, which uses \(d_{G}\), still performs well in this scenario, resulting in a 16.54% improvement of \(ACP\). Fourthly, in Scenario 3A, shown in Fig. 4(i) and 4(j), although the robots using the Conditional Random can span the map, the selected robot positions are not good enough to cover the map. 
In contrast, the coverage by the robots using GMC-Pos is almost the whole map, as we can see from the 18.54% higher \(ACP\). Finally, in Scenario 3B, shown in Fig. 4(k) and 4(l), we can see that the distances between the robots using the Conditional Random are unbalanced. Consequently, the coverage is not as high as that of the robots using the GMC-Pos, which almost perfectly covers all parts of the map, resulting in a 21.29% higher \(ACP\). Moreover, we observe another advantage of the GMC-Pos: all the selected positions are located in accessible and practical areas, i.e., they are quite visible to the human operator and not too close to the walls.

Fig. 5: Bar plot for the \(ACP\) of the GMC-Pos and the Conditional Random in six scenarios.

## V Conclusion and Future Work This paper proposes the GMC-Pos, a novel positioning method for multiple robots to maximize the environment area coverage. Our approach consists of two modules. Firstly, the graph generation module is for representing the environment map in a practical structure using a connected graph. All the graph nodes are in the accessible area, and the weighted edges show the connectivity and structure of the environment. Secondly, the node selection module is for strategically choosing appropriate positions for robots. We newly introduce the generalized graph-based distance, which combines the Euclidean distance and Dijkstra's shortest path length. Also, our selection process is based on recursion with conditions built on the maximum node degree and the mentioned distance to ensure the chosen positions give the highest area coverage possible. For the simulation, we have six scenarios: multi-robot simulations in a simple map of size 12.20m\(\times\)12.20m and two challenging maps of size 27.10m\(\times\)32.20m and 37.37m\(\times\)23.38m. We compare the positioning performance between our proposed GMC-Pos and the Conditional Random method. The results show that our approach performs better than the Conditional Random regarding the area coverage percentage in all scenarios. There can be further studies to extend the GMC-Pos by making the robots reposition to maintain the area coverage according to the position of the moving human operator. It is also interesting to make physical obstructions such as dense walls and furniture exert influence on the coverage range. Moreover, applying the GMC-Pos to SAR is a useful and possible topic for future work.
2308.04924
A fundamental mechanism of solar eruption initiation in multipolar magnetic field
Recently we established a fundamental mechanism of solar eruption initiation, in which an eruption can be initiated from a bipolar field through magnetic reconnection in the current sheet (CS) that is formed slowly in the core field as driven by photospheric shearing motion. Here using a series of fully 3D MHD simulations with a range of different photospheric magnetic flux distributions, we extended this fundamental mechanism to the quadrupolar magnetic field containing a null point above the core field, which is the basic configuration of the classical breakout model. As is commonly believed, in such multipolar configuration, the reconnection triggered in the CS originated at the null point (namely, the breakout reconnection) plays the key role in eruption initiation by establishing a positive feedback-loop between the breakout reconnection and the expansion of the core field. However, our simulation showed that the key of eruption initiation in such multipolar configuration remains to be the slow formation of the CS in the sheared core rather than the onset of fast breakout reconnection. The breakout reconnection only helps the formation of the core CS by letting the core field expand faster, but the eruption cannot occur when the bottom surface driving is stopped well before the core CS is formed, even though the fast reconnection has already been triggered in the breakout CS. This study clarified the role of breakout reconnection and confirmed formation of the core CS as the key to the eruption initiation in a multipolar magnetic field.
Xinkai Bian, Chaowei Jiang, Xueshang Feng, Pingbing Zuo, Yi Wang
2023-08-09T12:46:41Z
http://arxiv.org/abs/2308.04924v1
# A fundamental mechanism of solar eruption initiation in multipolar magnetic field ###### Abstract Recently we established a fundamental mechanism of solar eruption initiation, in which an eruption can be initiated from a bipolar field through magnetic reconnection in the current sheet (CS) that is formed slowly in the core field as driven by photospheric shearing motion. Here using a series of fully 3D MHD simulations with a range of different photospheric magnetic flux distributions, we extended this fundamental mechanism to the quadrupolar magnetic field containing a null point above the core field, which is the basic configuration of the classical breakout model. As is commonly believed, in such multipolar configuration, the reconnection triggered in the CS originated at the null point (namely, the breakout reconnection) plays the key role in eruption initiation by establishing a positive feedback-loop between the breakout reconnection and the expansion of the core field. However, our simulation showed that the key of eruption initiation in such multipolar configuration remains to be the slow formation of the CS in the sheared core rather than the onset of fast breakout reconnection. The breakout reconnection only helps the formation of the core CS by letting the core field expand faster, but the eruption cannot occur when the bottom surface driving is stopped well before the core CS is formed, even though the fast reconnection has already been triggered in the breakout CS. This study clarified the role of breakout reconnection and confirmed formation of the core CS as the key to the eruption initiation in a multipolar magnetic field. Sun: coronal mass ejections (CMEs); Sun: Magnetic fields; Methods: numerical; Sun: corona; Magnetohydrodynamic (MHD) 0000-0002-4880-788X]Xinxai Bian 0000-0002-4883-0888]Chaowei Jiang 0000-0002-4883-0888]Xueshang Feng 0000-0002-4883-0888]Pingbing Zuo 0000-0002-4883-0888]Yi Wang ## 1 Introduction Coronal mass ejections (CMEs) are the most spectacular eruptive phenomenon in the solar atmosphere, and its energy is derived from the free magnetic energy stored in the coronal magnetic field. Due to photospheric line-tied effect, the coronal magnetic field is stressed and deviates from the potential field under the action of various photospheric motions (such as shear and rotational flows), during which free magnetic energy accumulates. This evolution process is quasi-static, and the coronal magnetic field is in an approximately force-free state, that is, the outward magnetic pressure of the low-lying flux is balanced with the inward magnetic tension force of the overlying flux. At a critical point, this force equilibrium is disrupted and the eruption begins suddenly, during which the free magnetic energy is rapidly converted into impulsive heating and fast acceleration within the plasma. However, how the balance of forces is disrupted, that is, the initiation mechanism of solar eruption, remains an open question. Earlier studies generally suggested that CMEs arise from regions where the magnetic field is closed, while the occurrence of CMEs needs these closed magnetic structures to fully open. However, according to the Aly-Sturrock conjecture (Aly, 1991; Sturrock, 1991), the energy of a fully open field is the upper limit of the energy of all possible force-free fields with a given magnetic flux distribution on the bottom and a simply connected topology, so CME cannot occur from the release of the magnetic energy, which is known as the Aly-Sturrock paradox. 
Therefore, all the theories of CME initiation need to avoid this paradox in some way (Forbes et al., 2006; Shibata & Magara, 2011; Chen, 2011; Schmieder et al., 2013; Aulanier, 2014; Janvier et al., 2015), such as that the magnetic field before eruption is not simply connected, i.e., there is a pre-existing magnetic flux rope (MFR), or that the magnetic structure after eruption is not fully open (i.e., partially open) and magnetic reconnection plays a key role in the eruption. For the models based on magnetic reconnection, many efforts have been made in early simulations of solar erup tion initiated from the simplest magnetic configuration, i.e., a bipolar magnetic field, in two-dimensional (2D) or transformation-invariant coordinate systems (Choe and Lee, 1996; Mikic and Linker, 1994; Amari et al., 1996). All these simulations show that by continuously applying shear to the bipolar field, the magnetic structure tends to approach the open field with a current sheet (CS) being formed over the polarity inversion line (PIL). Once the magnetic resistivity is applied, magnetic reconnection immediately sets in at the CS and initiates the eruption. Unfortunately, such a simple scenario failed to work in any fully three-dimensional (3D) simulation at that time. Moreover, in the 2D cases, the eruption requires full open of the overlying field, which is inconsistent with the observation. As such, Antiochos et al. (1999) proposed an alternative scenario, called as the breakout mechanism, based on a multipolar field, in which only the core flux erupts and neighboring flux is still closed. This mechanism requires a quadrupolar magnetic field consisting of a compact core bipolar field and a background bipolar field of opposite orientation to that of the core field. Then magnetic null point naturally forms above the core field. Through continuous shearing of the core field, which energizing the system, the core field expands and squeezes the magnetic null point to form a CS (referred to as the breakout CS). When the finite resistivity is considered, magnetic reconnection will occur and remove the un-sheared, overlying flux above the low-lying, sheared core flux, thus allowing the core field to further expand and enhance the breakout CS. This positive feedback loop eventually causes the eruption. There have been many developments since the breakout mechanism was proposed (MacNeice et al., 2004; Amari et al., 2007; Lynch et al., 2009; Pariat et al., 2010; Wyper et al., 2016; Dahlin et al., 2019; Kumar et al., 2021). For example, Lynch et al. (2008) reproduced the process of the breakout mechanism initiating eruption using 3D MHD simulation, and DeVore and Antiochos (2008) demonstrated that homologous eruptions could be triggered by the breakout mechanism, although without producing CMEs. Also, a well-observed eruption was found to be in good qualitative agreement with the topology and dynamical evolution of the breakout mechanism (Chen et al., 2016). This mechanism is also applicable to jet-related eruptions, suggesting that breakout mechanism may be a universal model for solar eruption from large scale CMEs to small-size eruptive activities (Wyper et al., 2017). In the ultra-high resolution 2D MHD simulation of the breakout mechanism (Karpen et al., 2012), it was further found that the start of fast breakout reconnection (indicated by Alfvenic outflows in the breakout reconnection), rather than the initial reconnection at the null point, corresponds to the eruption onset. 
Meanwhile, it is found that the explosive release of magnetic energy and the rapid acceleration of CME are caused by the reconnection in the flare CS which is the central CS formed in the core field during its dynamic expansion as driven by the breakout reconnection. In our recent works based on high-accuracy, fully 3D MHD simulations (Jiang et al., 2021; Bian et al., 2022, 2020, 2020), we for the first time found that in the absence of background bipolar field, that is, without the magnetic null point and the breakout reconnection, solar eruption can still occur in the simply connected bipolar field due to the reconnection of the core CS that is formed slowly as driven by photospheric shearing motion, and we suggest this model to be a fundamental mechanism of solar eruption due to its simplicity and efficacy. Then the question naturally arises, since applying shearing flow in a single bipolar field can form a core CS and produce an eruption, what effect does the addition of a background field (that forms a magnetic null point) have on the initiation of eruption? and whether the fast breakout reconnection can be used a sign of the eruption onset? In order to answer these two questions, here we conducted a series of fully 3D MHD simulations with the same core bipolar field but different background bipolar field, so they have different location of magnetic null point. Meanwhile, we also conducted simulation without the background field for comparison. We found that all simulations with continuously bottom driving (for energizing the core field) produced an eruption, and the onset of the eruption is related to the initial height of null point, namely, the smaller the height is, the earlier the eruption starts. This indicates that the breakout reconnection indeed helps the formation of the core CS by letting the core field expand faster. But the breakout reconnection releases very little of the magnetic energy, and the rapid release of magnetic energy only corresponds to the reconnection of the core CS. In addition, we selected some simulations and stopped driving at certain moments when the fast breakout reconnection has started while the core CS has not been formed yet. These experiments show that there exists a critical point in time which may correspond to the transition from a quasi-static evolution phase to a slow-rise phase prior to the eruption onset. Before the critical point, the system cannot produce eruption without the bottom driving, because the core CS fails to form, even though the fast breakout reconnection has been triggered. In such case, the super-Alfvenic velocity provided by the fast breakout reconnection will be dissipated with the relaxation of the system to a stable state, and cannot lead to a feedback loop between the breakout reconnection and the expansion of the core field. On the other hand, when the system has passed the critical point, the eruption is inevitable since the core CS can spontaneously form without the bottom driving. Such behavior exists in both the single bipolar field and the quadrupolar field, and thus is not dependent on the breakout reconnection. This shows that the formation of core CS is the key to eruption. Therefore, the fundamental mechanism can be extended to the multipolar magnetic fields. This paper is organized as follows. In Sect. 2, we define the magnetograms for the quadrupolar field with different background bipolar field, and briefly introduce our MHD model. 
Then we show the results of five simulations with continuously bottom driving and analyze the role of breakout reconnection in Sect. 3.1 and 3.2. The simulation and analysis revealing the key of the eruption are given in Sect. 3.3. Then, the analysis of the slow rise phase is presented in Sect. 3.4. Finally, our conclusion and discussion are given in Sect. 4. ## 2 MHD Model We numerically solve the full MHD equations in 3D Cartesian coordinate to study the dynamic evolution of solar corona. The MHD solver is based on the conservation element and solution element (CESE) method and uses adaptive mesh refinement (AMR) grid (Feng et al., 2010; Jiang et al., 2010, 2016, 2021). Since the controlling equations and the numerical code are the same as used in Jiang et al. (2021), we will not repeat them here, except some key settings and parameters as follows. Our model includes solar gravity and plasma pressure in the momentum equation, and the initial plasma density distribution satisfies hydro static equilibrium. Meanwhile, a small viscosity \(\nu=0.05\frac{(\Delta x)^{2}}{\Delta t}\) (where \(\Delta x\) and \(\Delta t\) are the local grid resolution and time step, respectively) is used in the momentum equation to keep numerical stability during the dynamic phase of the simulated eruptions. There is no explicit resistivity used in the magnetic induction equation, but magnetic reconnection can still occur due to numerical resistivity when the thickness of a current layer is close to the local grid resolution. ### Boundary Condition In order to comprehensively understand the role of the magnetic null point (breakout reconnection) in producing eruption, we conducted a series of MHD simulations using photospheric magnetograms with the same core bipolar field but with different background field, thus forming different quadrupolar configurations. At the bottom surface (i.e., \(z=0\) plane), the core bipolar field is defined as the composition of two Gaussian functions (Amari et al., 2003; Jiang et al., 2021), \[\begin{split} B_{z,\text{core}}(x,y,0)=B_{0}e^{-x^{2}/\sigma_{x} ^{2}}(e^{-(y-y_{c})^{2}/\sigma_{y}^{2}}\\ -e^{-(y+y_{c})^{2}/\sigma_{y}^{2}}),\end{split} \tag{1}\] and the background bipolar field is defined as, \[\begin{split} B_{z,\text{back}}(x,y,0)=-\epsilon B_{0}e^{-x^{2}/ \sigma_{x}^{2}}(e^{-(y-y_{b})^{2}/\sigma_{y}^{2}}\\ -e^{-(y+y_{b})^{2}/\sigma_{y}^{2}}),\end{split} \tag{2}\] where \(B_{0}\) is a constant such that the maximum value of photospheric \(B_{z}\) is \(37.2\) G. The parameters \(\sigma_{x}\) and \(\sigma_{y}\) are \(28.8\) Mm and \(14.4\) Mm, respectively, which control the magnetic flux distribution range in the \(x\) and \(y\) direction. The parameter \(y_{c}\) controls the distance between the positive and negative poles of the core field, with a value of \(11.52\) Mm, and the parameter \(y_{b}\) controls the distance between the positive and negative poles of the background field. The parameter \(\epsilon\) is the ratio of the maximum \(B_{z}\) of the background field to that of the core field on the bottom. By setting different \(y_{b}\) and \(\epsilon\), we can obtain a series of different distributions of background bipolar field while their core field are the same. As a result, the quadrupolar configurations have different locations of null point and different ratios of the background magnetic flux to the core flux. Table 1 gives the specific parameter settings. These magnetograms used in the simulation are shown in Figure 1, and will be referred to as M0 to M4. 
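Eqs. (1) and (2) are simple enough to reproduce directly; the following short sketch (ours, not the authors' code) evaluates \(B_{z}=B_{z,\text{core}}+B_{z,\text{back}}\) on a uniform bottom-boundary grid with the parameters quoted above, taking \(y_{b}\) and \(\epsilon\) for map M1 from Table 1 and fixing \(B_{0}\) numerically so that the peak of the core field is \(37.2\) G (whether the quoted maximum refers to the core field alone or to the combined field does not matter for the illustration).

```python
import numpy as np

# Parameters of Sect. 2.1 (lengths in Mm, field in G); y_b, eps for map M1 (Table 1).
sigma_x, sigma_y, y_c = 28.8, 14.4, 11.52
y_b, eps, Bz_peak = 57.6, 0.5, 37.2

x = np.linspace(-270.0, 270.0, 541)
y = np.linspace(-270.0, 270.0, 541)
X, Y = np.meshgrid(x, y, indexing="ij")

def bipole(y0):
    """Unit-amplitude Gaussian bipole pattern appearing in eqs. (1)-(2)."""
    return np.exp(-X**2 / sigma_x**2) * (np.exp(-(Y - y0)**2 / sigma_y**2)
                                         - np.exp(-(Y + y0)**2 / sigma_y**2))

core = bipole(y_c)                       # eq. (1), up to the factor B_0
B0 = Bz_peak / np.abs(core).max()        # normalize the core peak to 37.2 G
Bz = B0 * core - eps * B0 * bipole(y_b)  # eq. (1) plus eq. (2)
```

Varying y_b and eps according to Table 1 then reproduces the family M0 to M4 with the same core field but different null-point heights.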
Our series of MHD simulations all begin with the potential field which is computed using the Green's function method. Therefore, the initial potential field has a magnetic null point above the core field PIL, and its height can be obtained, as shown in Figure 1F and Table 1. Since the parameter \(\epsilon\) is set to be less than \(1\), the total unsigned flux of the background field is smaller than that of the core field, which is often the case in the realistic solar active regions. Meanwhile, the distance of the background bipolar field also affects the height of the magnetic null point, namely, the smaller the distance is, the lower the null point resides. To add free magnetic energy to the system, the simulations are driven by rotational flow applied at the footpoints of the core field, which creates magnetic shear along the core PIL and does not modify the flux distribution on the bottom, as shown in Figure 1A. The rotational flow is defined as \[v_{x}=\frac{\partial\psi(B_{z,\text{core}})}{\partial y};v_{y}=\frac{\partial \psi(B_{z,\text{core}})}{\partial x}, \tag{3}\] with \(\psi\) given by \[\psi=v_{0}B_{z,\text{core}}^{2}e^{-(B_{z,\text{core}}^{2}-B_{z,\text{max}}^{2} )/B_{z,\text{max}}^{2}}, \tag{4}\] \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Map} & \multicolumn{2}{c}{Parameters} & \multicolumn{2}{c}{\(\Phi_{\text{back}}\)} & Height of magnetic \\ \cline{2-5} & \(y_{b}\) (Mm) & \(\epsilon\) & \(\frac{\Phi_{\text{core}}}{\Phi_{\text{core}}}\) & null point (Mm) \\ \hline M0 & 0 & 0 & 0 & — \\ M1 & 57.6 & 0.5 & 0.67 & 53.3 \\ M2 & 86.4 & 0.5 & 0.67 & 65.2 \\ M3 & 115.2 & 0.5 & 0.67 & 76.9 \\ M4 & 115.2 & 0.2 & 0.27 & 145.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters \(y_{b}\) and \(\epsilon\) that defines the five magnetograms of M0 to M4. In the table, the ratio of the total unsigned flux of the background field to the core field and the height of the magnetic null point are also given. where \(B_{z,\max}\) is the maximum value of the photospheric vertical magnetic component \(B_{z}\), and \(v_{0}\) is a constant for scaling such that the maximum of the surface velocity is \(4.4\) km s\({}^{-1}\), close to the typical flow speed in the photosphere (\(\sim\)\(1\) km s\({}^{-1}\)). The flow speed is smaller than the sound speed (\(110\) km s\({}^{-1}\)) by two orders of magnitude and the local Alfven speed (the largest Alfven speed is \(2300\) km s\({}^{-1}\)) by three orders, respectively, thus representing a quasi-static stress of the coronal magnetic field. ### Grid Setting The computational domain is large enough for simulating the eruption initiation process, spanning a Cartesian box of \((-270,-270,0)\) Mm \(\leq(x,y,z)\leq(270,270,540)\) Mm. The full volume is resolved by a block-structured grid with AMR in which the base resolution is \(2.88\) Mm, and the highest resolution of \(360\) km is used to capture the formation of CSs and the subsequent reconnections. Besides capturing the CS and reconnection in the core field as done in Jiang et al. (2021), the grid refinement is controlled by the following criteria to ensure that the formation of the breakout CS and the subsequent reconnection were resolved using the highest resolution, \[\begin{cases}B<0.1,\\ \dfrac{J}{B}>\dfrac{0.1}{\delta},\\ \nabla\rho<0.0001,\end{cases} \tag{5}\] where \(\delta\) is the length of the local grid. 
The first two conditions are used to locate the signature of breakout CS, and the third condition is used to exclude unnecessary grid refinement of shock due to super-Alfvenic velocity. If all the three conditions in Eq. 5 are met, the grid will be refined. Note that all quantities used here are expressed in their normalized values with units of magnetic field as \(1.86\) G, length \(11.52\) Mm, and density \(2.29\times 10^{-15}\) g cm\({}^{-3}\), respectively. ## 3 Results We conducted a series of simulations using five different magnetograms, and carried out a comparative analysis of these different simulations. We first briefly described the evolution of a typical eruption and then analyzed the role of breakout reconnection by comparing the results of five continuously-driven simulations. Furthermore, we selected three simulations to stop the bottom driving at three different moments and let the system evolve spontaneously to find the key factor determining whether the eruption can occur. Finally, we use an artificial frictional force in some driving-stopped simulations to constrain the velocity of the system (thus can reduce the effect of inertia) and conduct analysis of the slow-rise phase. In total, twenty-two sets of simulations have been carried out in this study. ### A typical evolution Figure 2 (and its animation) show the evolution of 3D magnetic field lines, breakout CS, and the vertical cross section of the current density, velocity and Alfvenic Mach number in the continuously-driven simulation with magnetogram M1. After a period of surface rotational flow, the magnetic structure has a significant expansion, evolving from the initial potential field with magnetic null point to a configuration with strong shear above the core PIL and with the breakout CS formed. The 3D structure of breakout CS appears like a "paraglider" and in the 2D cross section it shows an arc on top of the core field. At \(t=70\) (the time unit is \(\tau=105\) s), with ongoing of the reconnection in the breakout CS, super Alfvenic velocity flow appeared around the breakout CS, and the maximum speed has reached \(47\) km s\({}^{-1}\) and the Alfven Mach number of \(1.6\). However, in the core field below the breakout CS, it is still a quasi-static evolution since the speed is far smaller than the local Alfven speed. Also in the core field region, there is no CS structure, but a volumetric current distribution. With the further injection of energy from the bottom surface, the system expands more and more, and the breakout CS is squeezed to grow continually and plasmoid instability is triggered there, which leads to the reconnection in a turbulent way. Meanwhile, the core volumetric current was squeezed to a current layer and gradually formed a vertical CS until \(t=93\). The continuous increase of the energy in the system shown in Figure 3A (red line) is consistent with the energy injected continuously from the bottom surface. During this process, the total kinetic energy remains at a very low level. Even the onset of the fast breakout reconnection does not change the magnetic and kinetic energies significantly. The key transition occurs at \(t=93\), when the thickness of the core current layer decreases to close to the grid resolution (Figure 4), then magnetic reconnection kicks in there, resulting in rapid release of magnetic energy and sharp increase of kinetic energy. Obviously, that moment defines the eruption onset. 
After the eruption starts, the sheared magnetic arcades of the core field form a complex MFR through the reconnection of the core CS, which rapidly expands and rises. Due to the sharp rise of the MFR, a fast magnetosonic shock is formed in front of it, sweeps through the breakout CS, and finally reaches the outer edge of the entire explosive structure, as shown in Figure 2 and its animation. The initial reconnection height of core CS is around \(20\) Mm above the core PIL (Figure 2A). By tracing the temporal evolution of the field line that roots in the center of the core polarities, as shown in Figure 5A, we find that the two groups of magnetic arcades with initial reconnection are lower than this magnetic field line, that is, the initial core reconnection occurs between the shear magnetic arcades of the core field. As the eruption con tinues, this magnetic field line first undergoes the breakout reconnection above and becomes part of the neighbor field, and then undergoes flare reconnection into a flare loop, as shown in animation of Figure 2. The simulation with magnetogram M1 show that overall the global evolution of the system agrees with the breakout model in respect of magnetic topology. ### The role of breakout reconnection in eruption All five simulations with the different magnetogram from M0 to M4 produce eruption under continuously bottom driving, as shown in Figure 3 (and its animation). The energy evolutions of these five simulations are similar, as shown in Figure 3A. The magnetic energies first increase almost linearly due to the continual injection of Poynting flux from the bottom surface, and then drop sharply at a certain time, that is, at the eruption onset. Note that the initial magnetic energy (i.e., the potential field energy) differs in the different simulations because it is directly related to the different distributions of magnetic flux. However, since all magnetograms have the same core field and driving speed, their energy increase rates are equal, that is, these curves of magnetic energy are nearly parallel to one another before the eruption onset. Even though the breakout reconnection occurred at different times in the four simulations (M1 to M4), the magnetic energy reduction was so small that it changes very limited in the energy evolution curve (compare with M0). Interestingly, the eruption onset time is clearly correlated with the initial height of the null point, as the lower the null point is, the earlier the eruption begins. The simulation with magnetogram M0 of the bipolar field can be regarded as with an infinitely far and small background field such that the null point is infinitely high, therefore it also satisfies the above relationship. Figure 4 shows the temporal evolution of the thickness of core current layer in these five simulations. Note that for a better comparison, the times of the different simulations are shifted such that all the eruption onset times are \(t=0\). All the simulations show that the thickness of core CS at the eruption onset is close to the grid resolution, of \(3\sim 4\) grids at which the reconnection is triggered. This result clearly indicates that the reconnection of core CS results in the violent eruption in all the simulations. The thinning speed of the current layer is no more than \(16\) km s\({}^{-1}\), and is far below the local Alfven speed (on the order of \(1500\) km s\({}^{-1}\)). 
Thus the formation of the CS is a quasi-static evolution, even though the outflow of fast breakout reconnection has reached \(160\) km s\({}^{-1}\) in simulation M1. It can also be seen that thinning speed is related to the initial height of the null point, that is, the lower the null point, the greater the speed (and thus the less time to form CS). Figure 5 shows the temporal evolution of the apex of the magnetic field line that is anchored at the positive polarity center of the core field in the five simulations. The rising of this field line indicates the expansion of the core field. As can be seen, the lower the null point is, the faster this magnetic field line rises, indicating that the whole core field expands faster. Through the analysis and comparison of these simulations, we found that the magnetic null point or breakout reconnection indeed helps the eruption to occur. The reason is that an earlier and stronger breakout reconnection (due to a lower height of null point) would make the core field expand faster, thus the core CS can form earlier, and the reconnection of core CS immediately initiates the eruption. ### The key factor for initiation of eruption To investigate the key factor determining whether an eruption can be initiated, we selected three simulations (M0, M1, and M3) to stop the bottom driving (i.e., the surface shearing motion) at three different moments before the eruption onset to see whether an eruption can still occur and analyze the reason. The three moments as selected to stop driving are, respectively, the first one (denoted by S1) is very close to the eruption onset (the core CS is almost formed), the second one (denoted by S2) is far away from the eruption onset (the core CS is just beginning to form), and the third one (denoted by S3) is even further ahead of the eruption onset. Note that for the simulations of quadrupolar field (i.e., M1 and M3), all the three moments of stopping driving are after the start of fast breakout reconnection. See Table 2 for specific selection of simulations and moments. Figure 6 show the temporal evolution of energies in these driving-stopped simulations, where the red, blue, and green curves show respectively the results with the three incremental earlier moments of stopping the bottom driving (as denoted by S1, S2, and S3, respectively). In all the three cases with different magnetograms, the S1 simulations produce eruption within a short time after stopping the driv \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Map} & \multicolumn{2}{c}{Eruption} & \multirow{2}{*}{Start of FBR} & \multirow{2}{*}{Simulation} & Time \\ & Onset & & & (stop driving) \\ \hline \multirow{3}{*}{M0} & \multirow{3}{*}{123} & & M0S1 & 120 \\ & & & M0S2 & 116 \\ & & & M0S3 & 112 \\ \hline \multirow{3}{*}{M1} & \multirow{3}{*}{93} & & M1S1 & 88 \\ & & & M1S2 & 86 \\ & & & M1S3 & 70 \\ \hline \multirow{3}{*}{M3} & \multirow{3}{*}{114} & & M3S1 & 110 \\ & & & M3S2 & 104 \\ \cline{1-1} & & & M3S3 & 100 \\ \hline \hline \end{tabular} \end{table} Table 2: The definition of nine driving-stopped simulations. The Start of FBR is the start time of fast breakout reconnection. ing, and the eruption starts only slightly later than that in the continuously-driven simulation. Similarly, all S2 simulations also produce eruption that starts somewhat later than those of original simulations. We also show the temporal evolution of thickness of core current layer in Figure 7. 
As can be seen, all the eruptions in S1 and S2 simulations start once the thickness of the core current layer decreases to close to the grid resolution. Figure 8 also shows the temporal evolution of the magnetic field line with same footpoint in the nine driving-stopped simulations, as shown in Figure 5. The rising speed of this magnetic field line in all the simulations is slower than that of the original simulation at the moment of stopping driving. In contrast, all the S3 simulations do not produce eruption, and here we study why they fail to erupt despite the fact that the fast breakout reconnection has already began. Figure 9 (and its attached animation) shows evolutions of the breakout CS, core current distribution and velocity in the M1S3 simulation (and M0S3, M3S3). After stopping the driving, the breakout CS gradually shrank, and meanwhile the fast reconnection outflow also decayed. Eventually the system relaxed to an equilibrium with residual velocity of around \(10\) km s\({}^{-1}\) at \(t=200\), and the total magnetic energy is slightly dissipated by only \(2.8\) percent in the long relaxation process without impulsive decrease (i.e., eruption). In the final equilibrium, the breakout CS is still present due to upward pressing effect of the already sheared arcade below, but the reconnection outflow velocity has dropped to very low levels, which indicates that this breakout reconnection cannot self maintain if without the driving. With relaxation of the system, the thickness of the core current layer slowly increases, i.e., a reversed process of CS formation. During this process, although the core field also shows expansion, its speed is much slower that in the simulations with successful eruption (Figure 8 and animation of Figure 9). Through the S3 simulations, we find that the eruption does not occur because the core CS in the system cannot form, although the fast breakout reconnection has already started. Therefore, our simulation suggests that the fast breakout reconnection cannot be used as a sign of the eruption onset. Moreover, the simulations M0S1 and M0S2 show that in the absence of breakout reconnection, the core CS can still form after stopping the driving, which indicates that breakout reconnection is not the necessary condition for triggering eruption. When the system is driven by the bottom surface flow, the core field expand upward continuously and its current distribution is squeezed gradually (i.e., the thickness of the core current layer decreases). However, when the driving stops, both the squeezing speed and the expansion speed gradually weaken with relaxation of the magnetic field as aided by the small momentum viscosity. When these two speeds are dissipated and die before the core CS is formed, the eruption does not occur, while on the other hand, if they can survive until the core CS is formed, then the eruption can be triggered. That is, the formation of core CS is the key to the eruption. The breakout reconnection does partly drive the formation of core CS, but the effect appears to be less important than that of the bottom driving in our simulations. ### The slow rise phase Although our simulations show that the formation of core CS is the key factor for initiation of eruption, it does not reveal at what stage or after what features the eruption is inevitable even without the bottom driving. All the S1 and S2 simulations form core CS and produce an eruption. 
Is it because the residual velocity, i.e., the inertia effect, dominates the evolution in those simulations and thus successfully builds up the core CS? Or is it because the system has passed a critical point after which it is no longer quasi-static in nature and the eruption is inevitable? To avoid the influence of the residual velocity, we use an artificial frictional force to constrain the velocity of the system in the driving-stopped simulations. Specifically, we add an artificial friction term to the momentum equation, which is given as: \[\mathbf{F}=-f\frac{\rho\mathbf{v}}{t_{A}} \tag{6}\] where \(f\) is an adjustable coefficient and \(t_{A}\) is the local Alfvenic time, namely, \(t_{A}=\frac{1}{v_{A}}=\frac{\sqrt{\rho}}{B}\). This frictional term means that the velocity is dissipated to zero within a time of \(\frac{1}{f}t_{A}\). We note that such friction does not exist in the real corona; here it is used only to examine the effect of the inertia of the system on the initiation of the eruption. If the MHD system evolves in a quasi-static way, the frictional force can help it relax to an equilibrium quickly, and it has therefore been frequently used in coronal force-free field reconstructions (Valori et al., 2007; Jiang & Feng, 2013; Guo et al., 2016). For comparison, we select two simulations, M0S2 and M1S2, and use three different values of \(f=0.1\), \(0.3\), and \(1\), which represent a small, medium, and large frictional force, respectively. Furthermore, the continuously-driven simulations M0 and M1 are also run with a frictional force of \(f=0.3\), in order to show how the frictional force affects the eruption behavior as compared to the cases with no frictional force. See Table 3 for the specific friction settings and eruption onsets. Figure 10 shows the evolution of energies in the eight simulations with friction, compared with the four cases without friction. With the friction, the residual kinetic energy in all the M0S2 and M1S2 simulations decays quickly and substantially, as shown clearly by the logarithmic scale of the kinetic energy in Figure 10B and D. Therefore the inertia effect in driving the evolution is largely removed. However, it can be seen that, although different values of friction give different results, all the simulations produce an eruption, i.e., a rapid release of magnetic energy, with formation and rise of the MFR driven by the reconnection (see the animation of Figure 10). Due to the applied friction, the onset of eruption is delayed, and the greater the friction coefficient, the later the onset of eruption. It is obvious that the application of friction reduces the velocity of the evolution in the system, including the thinning speed of the core current layer (Figure 11) and the expansion speed of the core field (Figure 12). For instance, in the simulation with the very large friction coefficient \(f=1\), the eruption also occurs, even though the rates of magnetic energy release and kinetic energy increase are much weaker than those with smaller friction. This is because the strong friction significantly slows down the eruptive process of the system, but it cannot prevent the occurrence of the eruption. In the evolution of the core current layer as shown in Figure 11 (green curves), it can still be seen that the thickness of the current layer continuously decreases to the grid resolution, though at a low thinning speed, and finally triggers magnetic reconnection. During this process, the core field also expands continuously at a low speed (Figure 12).
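As an illustration of how such a term acts, the sketch below applies the friction of Eq. (6) as an operator-split exponential damping of the velocity over one time step; the normalized variables, the array layout, and the splitting itself are illustrative assumptions rather than the actual scheme used in the simulations.

```python
# Sketch: velocity damping by the artificial friction F = -f * rho * v / t_A,
# with the local Alfven time t_A = sqrt(rho)/|B| (normalized units). Integrating
# dv/dt = -f v / t_A exactly over a step dt gives v <- v * exp(-f*dt/t_A).
import numpy as np

def apply_friction(v, rho, B, f, dt):
    t_A = np.sqrt(rho) / np.linalg.norm(B, axis=-1, keepdims=True)
    return v * np.exp(-f * dt / t_A)

# toy example on a few cells: rho = 1, |B| = 2 -> t_A = 0.5
rho = np.ones((4, 1))
B = np.tile([0.0, 0.0, 2.0], (4, 1))
v = np.tile([10.0, 0.0, 0.0], (4, 1))
print(apply_friction(v, rho, B, f=0.3, dt=0.5)[:, 0])   # 10 * exp(-0.3) ~ 7.4
```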
Combining these simulation results of M0S2 and M1S2 with friction, we find that the eruption becomes inevitable after the system evolves to a certain moment, regardless of whether it is a single bipolar field or a quadrupolar field. Certainly, this moment is close to the onset of eruption. We consider that the system at this moment has entered a slow rise phase, which is no longer quasi-static, since the friction cannot settle it down. The slow-rise phase is often observed in filament eruptions for a short period (of a few to tens of minutes) in which the filament shows a slow rise with a speed faster than that of the quasi-static phase but significantly slower than that of the eruption phase (e.g., Cheng et al., 2020). Once the system is driven to enter this stage, it will always evolve spontaneously to produce an eruption, regardless of whether the bottom driving continues. This is independent of the residual velocity in the system.

## 4 Conclusion

The fundamental mechanism of solar eruption initiation, in which a CS slowly forms within the sheared core and reconnection in the CS triggers and drives an eruption, can be applied to the bipolar magnetic field that exists universally on the Sun (Jiang et al., 2021; Bian et al., 2022, 2020), but most violent solar eruptions often originate from complex sunspot groups that consist of multipolar magnetic fields, in particular, the \(\delta\) sunspot groups (Kunzel, 1959; Guo et al., 2014; Yang et al., 2017; Toriumi and Wang, 2019; Toriumi, 2021). In this paper, we extended this fundamental mechanism to a typical multipolar magnetic field: a quadrupolar magnetic configuration containing a null point above the core field, which is the basic configuration of the classical breakout model. To this end, we have carried out a series of fully 3D MHD simulations with comparative analysis. These simulations all have the same core bipolar field but different background fields (and a reference case without background field), so they have magnetic null points at different heights. By continuously shearing (i.e., energizing) the core field at the bottom surface, all these simulations show magnetic evolution to eruption in a similar process, which, from the point of view of magnetic topology, is identical to the classical breakout model. Initially, the core field expands upwards, squeezing the null point into a horizontal CS, with breakout reconnection subsequently occurring. Meanwhile a vertical CS gradually forms within the sheared core field, and finally the eruption is triggered with a twisting flux rope expelled out from the core field. The evolutions of the magnetic and kinetic energies of the system indicate clearly that the eruption onset only begins when reconnection starts in the core CS. Comparison of the different simulations shows that the lower the height of the null point is, the earlier the breakout reconnection starts, and also the earlier the eruption onset is. This is because the breakout reconnection can help the formation of the core CS by letting the core field expand faster. Nevertheless, the fast breakout reconnection cannot lead the core field into a dynamically evolving phase, and the thinning of the core current layer still proceeds slowly until the eruption. To pin down the key factor in the initiation of the eruption, we carried out controlled experiments by stopping the bottom driving at particular moments that are after the beginning of fast breakout reconnection but prior to the formation of the core CS.
These experiments show that the system cannot produce an eruption if the core CS fails to form (as the bottom driving is stopped too early), even though the fast breakout reconnection has started. Furthermore, the fast breakout reconnection cannot be self-maintained without the driving, and is therefore not able to establish a positive feedback loop between the breakout reconnection and the expansion of the core field. Thus, our simulation suggests that the key to eruption initiation in such a multipolar configuration remains the slow formation of the CS in the sheared core rather than the onset of fast breakout reconnection.

\begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Map} & \multirow{2}{*}{\begin{tabular}{c} Eruption \\ Onset \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} Simulation \\ (with friction) \\ \end{tabular} } & \multicolumn{1}{c}{Eruption Onset} \\ & & & (with friction) \\ \hline \multirow{4}{*}{M0} & \multirow{4}{*}{123} & \multirow{4}{*}{\begin{tabular}{c} M0 (\(f=0.3\)) \\ M0S2 (\(f=0.1\)) \\ M0S2 (\(f=0.3\)) \\ M0S2 (\(f=1\)) \\ \end{tabular} } & \multirow{4}{*}{\begin{tabular}{c} M0 (\(f=0.3\)) \\ 128 \\ 135 \\ 140 \\ \end{tabular} } \\ & & & M0S2 (\(f=0.3\)) \\ \hline \multirow{4}{*}{M1} & \multirow{4}{*}{93} & \multirow{4}{*}{\begin{tabular}{c} M1 (\(f=0.3\)) \\ M1S2 (\(f=0.1\)) \\ M1S2 (\(f=0.3\)) \\ M1S2 (\(f=1\)) \\ \end{tabular} } & \multirow{4}{*}{ \begin{tabular}{c} M1 (\(f=0.3\)) \\ 105.5 \\ 111 \\ \end{tabular} } \\ & & & M1S2 (\(f=0.1\)) \\ \hline \hline \end{tabular} \end{table} Table 3: The definition of the eight simulations with friction.

A further set of experiments with frictional forces shows that the eruption is inevitable after the system enters a slow rise phase. The physical nature of the system at this phase is different from the quasi-static evolution since, even with a strong friction applied, the system still expands and forms a core CS, although with a reduced speed. Such behavior exists in both the single bipolar field and the quadrupolar field, and thus is not dependent on the breakout reconnection. Observations of filament eruptions seem to show that the slow rise phase can be clearly distinguished by fitting the height-time curve, as well as the velocity-time and acceleration-time curves (Zhang et al., 2001, 2004; Fan & Liu, 2019; Cheng et al., 2020; Liu & Su, 2021). A future study will be performed to investigate the physical nature of the system at the slow rise phase and to find suitable parameters to identify whether the system has entered this stage. In summary, through comparative analysis of a series of full 3D MHD simulations of eruptions initiated within a quadrupolar magnetic configuration, we show that the breakout reconnection indeed helps the formation of the core CS by letting the core field expand faster, but the eruption cannot occur when the bottom surface driving is stopped well before the core CS is formed, even though fast reconnection has already been triggered in the breakout CS. The breakout reconnection alone seems unable to establish a positive feedback loop between itself and the expansion of the core field (which would lead to a fast formation of the core CS). This study clarified the role of breakout reconnection, and confirmed the formation of the core CS as the key to the eruption initiation in a multipolar magnetic field, which is consistent with the fundamental mechanism of solar eruption in our previous series of bipolar field simulations (Jiang et al., 2021; Bian et al., 2022).
This work is jointly supported by the National Natural Science Foundation of China (NSFC 42174200), the Fundamental Research Funds for the Central Universities (HIT.OCEF.2021033), the Shenzhen Science and Technology Program (RCJC20210609104422048), and Shenzhen Technology Project JCYJ20190806142609035. The computational work was carried out on TianHe-1(A) at the National Supercomputer Center in Tianjin, China, and on the ISSAT Cluster computing system (HIT, Shenzhen).
2310.19281
Distinguishing Ion Dynamics from Muon diffusion in Muon Spin Relaxation
We propose a model to describe the fluctuations in the internal magnetic field due to ion dynamics observed in the muon spin relaxation ($\mu$SR) by an Edwards-Anderson type autocorrelation function that separates the quasi-static and dynamic components of the correlation by a parameter $Q$ (where $0\le Q\le1$). Our Monte Carlo simulations for this model showed that the time evolution of muon spin polarization deviates significantly from the Kubo-Toyabe (KT) function. To further validate the model, the results of simulations were compared with the $\mu$SR spectra observed in a hybrid organic-inorganic perovskite FAPbI$_3$ [with FA referring to HC(NH$_2)_2$], where local field fluctuations associated with the rotational motion of FA molecules and quasi-static fields from the PbI$_3$ lattice are presumed to coexist. The least-squares curve fitting showed reasonable agreement with the model with $Q=0.947(3)$, and the fluctuation frequency of the dynamical component was obtained. This result opens the door to the possibility of experimentally distinguishing fluctuations due to dynamics of ions around muons from those due to self-diffusion of muons. Meanwhile, it suggests the need to carefully consider the spin relaxation function when applying $\mu$SR to the issue of ion dynamics.
Takashi U. Ito, Ryosuke Kadono
2023-10-30T05:38:37Z
http://arxiv.org/abs/2310.19281v4
# Distinguishing Ion Dynamics from Muon diffusion in Muon Spin Relaxation

###### Abstract

We propose a model to describe the fluctuations in the internal magnetic field due to ion dynamics observed in the muon spin relaxation (\(\mu\)SR) by an Edwards-Anderson type autocorrelation function that separates the quasi-static and dynamic components of the correlation by a parameter \(Q\) (where \(0\leq Q\leq 1\)). Our Monte Carlo simulations for this model showed that the time evolution of muon spin polarization deviates significantly from the Kubo-Toyabe (KT) function. To further validate the model, the results of simulations were compared with the \(\mu\)SR spectra observed in a hybrid organic-inorganic perovskite FAPbI\({}_{3}\) [with FA referring to HC(NH\({}_{2}\))\({}_{2}\)], where local field fluctuations associated with the rotational motion of FA molecules and quasi-static fields from the PbI\({}_{3}\) lattice are presumed to coexist. The least-squares curve fitting showed reasonable agreement with the model with \(Q=0.947(3)\), and the fluctuation frequency of the dynamical component was obtained. This result opens the door to the possibility of experimentally distinguishing fluctuations due to dynamics of ions around muons from those due to self-diffusion of muons. Meanwhile, it suggests the need to carefully consider the spin relaxation function when applying \(\mu\)SR to the issue of ion dynamics.

## I Introduction

Muon spin rotation (\(\mu\)SR) is an experimental method to probe magnetic fields in matter, in which spin-polarized muons (\(\mu^{+}\)) stopped in the sample directly probe the magnitude of the local field at the interstitial site(s) via the frequency of their Larmor precession [1]. The muon gyromagnetic ratio \(\gamma_{\mu}\) (\(=2\pi\times 135.539\) MHz/T) is 3.18 times greater than that of protons (\(=2\pi\times 42.577\) MHz/T, which is the largest among the nuclei of all stable elements), and thus the muon is the most sensitive probe of the internal magnetic field upon implantation into materials. Muons are provided as a 100%-spin-polarized beam by proton accelerator facilities, which enables \(\mu\)SR measurements in zero magnetic field. In addition, \(\mu\)SR has various advantages: the muon implantation energy is high enough (\(\geq 4\) MeV, corresponding to a stopping range \(\geq 0.1\) g/cm\({}^{2}\)) to be surface-independent (bulk-sensitive), and the implanted muons (volume concentration \(\sim\)10\({}^{5}\) cm\({}^{-3}\) for the high-flux beams at J-PARC MLF) decay to positrons and neutrinos with an average lifetime of 2.198 \(\mu\)s, so they do not accumulate in the sample, unlike in other ion-beam irradiation methods. Most notably, the spatial distribution of the emitted high-energy decay positrons has a large asymmetry (1/3 when averaged over the positron energy) with respect to the muon spin polarization, and the time evolution of the spin polarization can be observed by measuring the time-dependent asymmetry (which is called the "\(\mu\)SR time spectrum"). Recently, attempts have been made to observe the diffusive motion of ions in battery materials and ionic conductors by \(\mu\)SR measurements under zero or longitudinal magnetic field (ZF/LF-\(\mu\)SR) [2], where the time-dependent fluctuations of the weak random local field \(\mathbf{H}(t)\) (\(\sim\)10\({}^{-4}\) T) exerted by the magnetic dipole moments of cation nuclei are monitored via the spin relaxation of muons implanted into the materials of interest.
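The orders of magnitude quoted above can be combined in a short calculation: a nuclear dipolar field of \(\sim\)10\({}^{-4}\) T corresponds to a muon precession frequency of only \(\sim\)14 kHz, so over the few-microsecond observation window set by the muon lifetime such fields appear as a slow depolarization rather than a visible rotation signal. The snippet below is a minimal illustration; the relaxation rate in the last lines is an arbitrary example, not a measured quantity.

```python
# Illustrative numbers for ZF-muSR on nuclear dipolar fields (values from the text;
# the relaxation rate lam below is an arbitrary example, not a measured quantity).
import numpy as np

GAMMA_MU = 2 * np.pi * 135.539e6      # muon gyromagnetic ratio, rad s^-1 T^-1
TAU_MU = 2.198e-6                     # muon lifetime, s
B_LOCAL = 1e-4                        # typical nuclear dipolar field, T

f_larmor = GAMMA_MU * B_LOCAL / (2 * np.pi)
print(f"precession frequency: {f_larmor/1e3:.1f} kHz, "
      f"period {1e6/f_larmor:.0f} us >> muon lifetime {TAU_MU*1e6:.1f} us")

t = np.linspace(0.0, 10e-6, 6)        # observation window, s
A0, lam = 1/3, 2e5                    # maximal asymmetry; illustrative relaxation rate, s^-1
print("asymmetry A(t) =", np.round(A0 * np.exp(-lam * t), 3))
```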
In these studies, it is often an issue whether the observed fluctuations are caused by cation diffusion or muon self-diffusion, but it has been believed that the distinction cannot be made solely from the \(\mu\)SR spectra. Therefore, the attribution of the origin of the fluctuations has been based on information from other experimental methods and, more recently, on inferences from ab initio density functional theory calculations. The muon spin relaxation induced by \(\mathbf{H}(t)\) under ZF/LF is considered to be well approximated by the Kubo-Toyabe (KT) function [3], and it has been routinely used in the analysis of \(\mu\)SR time spectra by curve fitting based on the least-squares method. In deriving the KT function, the effect of the fluctuations is incorporated by a strong-collision model (an approximation of the Markovian process), where \(\mathbf{H}(t)\) is assumed to have no correlation before and after a change. As illustrated in Fig. 1(a), this approximation is considered to hold well in the case of jump diffusion of muons, because the relative configuration of all nuclear magnetic moments is swapped simultaneously in a single jump. On the other hand, in the case of ion diffusion around muons, it is naturally expected to be a rare event that all ions move simultaneously (especially for slow diffusion), resulting in a strong autocorrelation in \(\mathbf{H}(t)\) before and after the change [see Fig. 1(b)]. This is even more so when multiple kinds of nuclei contribute to the internal magnetic field (e.g., \({}^{7}\)Li and \({}^{59}\)Co in Li\({}_{x}\)CoO\({}_{2}\)) while only some of them exhibit diffusion. Nevertheless, the KT function has traditionally been used in the study of ion diffusion. A similar situation has been observed in very recent \(\mu\)SR studies of hybrid organic-inorganic perovskites (HOIPs) [4; 5], where thermally-activated local rotational motion of the organic cation molecules contributes to the fluctuation of \(\mathbf{H}(t)\) while the internal fields from the PbI\({}_{3}\) lattice remain quasi-static [see Fig. 1(c)]. Moreover, it has been suggested that the observed \(\mu\)SR time spectra may not always be satisfactorily reproduced by the KT function upon the onset of molecular motion. These circumstances led us to examine the corresponding longitudinal spin relaxation function for such cases. In this paper, we propose a model in which the fluctuations of \(\mathbf{H}(t)\) are described by an Edwards-Anderson type autocorrelation function with a parameter \(Q\) that separates the quasi-static and dynamical components of the correlation. Our Monte Carlo simulations of the time evolution of the muon spin polarization for this model indicate that the relaxation lineshape can be significantly different from that of the KT function. Such a difference opens the door to the possibility of experimentally distinguishing between muon self-diffusion and cation dynamics solely from the \(\mu\)SR experiment. We show that the origin of the \(\mathbf{H}(t)\) fluctuations can indeed be inferred by comparing relaxation functions based on this model with the high-precision \(\mu\)SR time spectra observed in formamidinium lead iodide [FAPbI\({}_{3}\), where FA denotes HC(NH\({}_{2}\))\({}_{2}\)] [5]. Furthermore, based on this result, we will discuss the problems with the preceding studies of ion diffusion by \(\mu\)SR.
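The distinction drawn in Fig. 1 can be made concrete with a small numerical experiment: if the entire local field is resampled at each change (muon jump), the normalized field autocorrelation decays to zero, whereas if only a fraction \(Q\) of the field variance is resampled while the rest stays frozen (ion motion around a stationary muon), the autocorrelation saturates at \(1-Q\) [cf. Eq. (12) below]. The sketch here uses arbitrary illustrative values of \(\nu\) and \(Q\) and is meant only to reproduce this qualitative behavior.

```python
# Field autocorrelation for the two cartoons of Fig. 1 (illustrative values).
# "Muon-diffusion-like": the whole Gaussian field is resampled at strong-collision
# events -> <H(0)H(t)> ~ exp(-nu t). "Ion-dynamics-like": only a fraction q of the
# variance is resampled -> <H(0)H(t)> ~ (1-q) + q exp(-nu t).
import numpy as np

rng = np.random.default_rng(0)
nu, n = 1.0, 4000
tgrid = np.linspace(0.0, 4.0, 5)

def autocorr(q):
    frozen = rng.normal(scale=np.sqrt(1 - q), size=(n, 3))   # quasi-static part
    dyn0 = rng.normal(scale=np.sqrt(q), size=(n, 3))         # fluctuating part
    h0 = frozen + dyn0
    out = []
    for t in tgrid:
        changed = rng.random((n, 1)) > np.exp(-nu * t)       # collision happened by t
        dyn_t = np.where(changed, rng.normal(scale=np.sqrt(q), size=(n, 3)), dyn0)
        out.append(np.mean(np.sum(h0 * (frozen + dyn_t), axis=1)) / 3.0)
    return np.round(out, 2)

print("muon-diffusion-like (q=1.0):", autocorr(1.0))   # ~ exp(-nu t)
print("ion-dynamics-like   (q=0.8):", autocorr(0.8))   # ~ 0.2 + 0.8 exp(-nu t)
```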
## II Model

### Relaxation function for the static internal field

Since our goal is to investigate how different autocorrelations of \(\mathbf{H}(t)\) can change the spin relaxation function, we will introduce the KT function in a simple classical way in the following. When muons are initially polarized along the \(z\) direction and subjected to a homogeneous magnetic field \(\mathbf{H}=(H_{x},H_{y},H_{z})\), the time evolution of the muon polarization is described by \[\sigma_{z}(t) = \frac{H_{z}^{2}}{H^{2}}+\frac{H_{x}^{2}+H_{y}^{2}}{H^{2}}\cos( \gamma_{\mu}Ht) \tag{1}\] \[= \cos^{2}\theta+\sin^{2}\theta\cos(\gamma_{\mu}Ht), \tag{2}\] where \(H=|\mathbf{H}|\). Such single-frequency rotation is observed when muons are subjected to a uniform external magnetic field (not parallel to \(z\)). In the case of random local fields exerted from nuclear magnetic moments, \(H\) varies with the muon position \(\mathbf{r}_{\mu}\), so that the \(\mu\)SR spectrum corresponds to a random sampling of \(\mathbf{H}(\mathbf{r}_{\mu})=(H_{x}(\mathbf{r}_{\mu}),H_{y}(\mathbf{r}_{\mu}),H_{z}(\mathbf{r}_{\mu}))\), which is described by a density distribution, \[\mathbf{n}(\mathbf{H})=\int\delta(\mathbf{H}-\mathbf{H}(\mathbf{r}_{\mu}))d\mathbf{r}_{\mu}. \tag{3}\] The time evolution of the spin polarization projected to the \(z\) axis is then given by \[g_{z}(t)=\int\sigma_{z}(t)\mathbf{n}(\mathbf{H})d\mathbf{H}=\iiint_{-\infty}^{\infty} \sigma_{z}(t)\Pi_{\alpha}n_{\alpha}(H_{\alpha})dH_{\alpha}, \tag{4}\] where \(\alpha=x\), \(y\), and \(z\). Unless \(\mathbf{n}(\mathbf{H})\) is a delta function, \(g_{z}(t)\) exhibits relaxation due to the loss of phase coherence in the spin precession. Specifically, provided that the static distribution of \(\mathbf{H}(\mathbf{r}_{\mu})\) is approximated by an isotropic Gaussian distribution, \[n_{\alpha}(H)=\frac{\gamma_{\mu}}{\sqrt{2\pi}\Delta}\exp\left(-\frac{\gamma_{ \mu}^{2}H^{2}}{2\Delta^{2}}\right),\ (\alpha=x,y,z) \tag{5}\] the corresponding relaxation function is obtained analytically by executing the integral in Eq. (4) to yield the static Kubo-Toyabe function [3], \[g_{z}(t)=\frac{1}{3}+\frac{2}{3}(1-\Delta^{2}t^{2})e^{-\frac{1}{2}\Delta^{2}t ^{2}}, \tag{6}\] where the term \(1/3\) is derived from the spatial average of \(\cos^{2}\theta\) in Eq. (2), corresponding to the probability that \(\mathbf{H}(\mathbf{r}_{\mu})\) is parallel to \(z\). The linewidth \(\Delta\) is determined by the second moment of the dipole field from the nuclear magnetic moments of the neighboring atoms (including non-cationic ions), \[\Delta^{2}=\gamma_{\mu}^{2}\gamma_{i}^{2}\sum_{i}\sum_{\alpha=x,y}\sum_{ \beta=x,y,z}(\hat{A}_{i}^{\alpha\beta}\mathbf{I}_{i})^{2}, \tag{7}\] where \[(\hat{A}_{i})^{\alpha\beta}=\frac{1}{r_{i}^{3}}\left(\frac{3\alpha_{i}\beta_{i }}{r_{i}^{2}}-\delta_{\alpha\beta}\right)\ (\alpha,\beta=x,y,z), \tag{8}\] is the dipole tensor for the \(i\)th nuclear magnetic moment \(\gamma_{i}\mathbf{I}_{i}\) situated at \(\mathbf{r}_{i}\) from the muon.

Figure 1: A schematic illustration of the time-dependent fluctuation of the local field \(\mathbf{H}(t)\) at muon sites for (a) muon self-diffusion and (b) ion diffusion, where ions are located at corners of a square lattice. The differences in the dipolar field from each ion (with the nuclear magnetic moment orientation shown by an arrow) are visualized by different colors. While their combination at the muon site changes randomly with muon diffusion in (a), the contributions from the two ions at the \(k\) and \(l\) sites remain unchanged while those at the \(i\) and \(j\) sites migrate in (b). (c) Structure of formamidinium lead iodide [FAPbI\({}_{3}\), with FA denoting HC(NH\({}_{2}\))\({}_{2}\)] in the tetragonal phase, and associated muon sites suggested from first-principles calculations (yellow-hatched areas). Internal field fluctuations associated with the rotational motion of FA molecules and the quasi-static field from the PbI\({}_{3}\) lattice are presumed to coexist. (d) A schematic illustration of local fields consisting of static and fluctuating components [see Eq. (12) in the main text].

Since the magnetic dipolar field decays in proportion to \(1/r_{i}^{3}\), the magnitude of \(\Delta\) is approximately determined by the nuclear magnetic moments nearest neighboring (nn) to the muon. The Gaussian distribution for \(\mathbf{n}(\mathbf{H})\) works reasonably well when the number \(N\) of the nn nuclear magnetic moments satisfies the condition \(N\geq 4\). Meanwhile, it is a relatively poor approximation for \(N\leq 3\), and it is necessary to calculate \(g_{z}(t)\) by treating the muon-nuclei cluster as a few-quantum spin system using the density matrix method [6; 7]. With recent advances in the computational environment, such calculations have been performed for nuclear magnetic moments located farther from the muon, and good agreement with experiment has been obtained [8].

### Effect of fluctuating internal fields

In general, the effect of fluctuations in linear response theory is treated by a Gaussian-Markov process. In this model, the autocorrelation of the fluctuations is given by \[\frac{\langle\mathbf{H}(t)\mathbf{H}(0)\rangle}{\langle H(0)^{2}\rangle}=\frac{\langle \mathbf{H}(t)\mathbf{H}(0)\rangle}{\Delta^{2}/\gamma_{\mu}^{2}}=e^{-\nu t}, \tag{9}\] where \(\langle...\rangle\) denotes the thermal average over the canonical ensemble. (For the more rigorous treatment, see, for example, Ref. [9].) The strong-collision model (also called the random Markov process or random phase approximation) is a simplified version of this model, where it is assumed that the density distribution \(n_{\alpha}(H)\) is constant at any time \(t\), and that \(\mathbf{H}(t)\) changes with an average rate \(\nu=1/\tau\) without correlation before and after a change. The resulting relaxation function is known to be a good approximation of a Gaussian process except in the \(\nu t\ll 1\) region [10]. In this case, it is known that the relaxation function \(G_{z}(t)\), which incorporates the effect of fluctuations on the static relaxation function \(g_{z}(t)\), is derived by solving the following integral equation (a linear Volterra equation of the second kind), \[G_{z}(t)=e^{-\nu t}g_{z}(t)+\nu\int_{0}^{t}d\tau e^{-\nu(t-\tau)}g_{z}(t-\tau) G_{z}(\tau). \tag{10}\] This integral equation can be solved analytically by Laplace transform of \(g_{z}(t)\) and \(G_{z}(t)\) [3] or by direct numerical integration [11]. The dynamical KT function, \(G_{z}^{\rm KT}(t;\Delta,\nu)\), obtained in this way successfully reproduced the spin relaxation due to muon jump diffusion in a nonmagnetic metal [12; 13; 14]. This indicates that fluctuations in the internal magnetic field due to muon self-diffusion can be well described by the strong-collision model with Eq. (9). On the other hand, fluctuations in \(\mathbf{H}\) due to ion diffusion do not necessarily occur over its entire amplitude; let us consider the situation in Fig.
1(b), where part of \(\mathbf{H}\) is due to contributions from cations or anions that remain stationary while other ions exhibit local motion. In this case, it is clear that the autocorrelation of the fluctuation differs from Eq. (9), and it may be given by an equation \[\frac{\langle\mathbf{H}(0)\mathbf{H}(t)\rangle}{\Delta^{2}/\gamma_{\mu}^{2}}\approx(1 -Q)e^{-\nu_{\mu}t}+Qe^{-(\nu_{\rm i}+\nu_{\mu})t}, \tag{11}\] where \(Q\) is the parameter describing the fractional amplitude exhibiting fluctuations with the rate \(\nu_{\rm i}\), and \(\nu_{\mu}\) is the fluctuation rate due to muon diffusion. The equation with \(\nu_{\mu}=0\) corresponds to a model that was introduced to describe the slowing down of magnetic fluctuations (with a rate \(\nu_{\rm i}\)) in spin glasses, where \(1-Q\) is interpreted as an order parameter describing "ordering in time" [15; 16]. This model was also applied to derive relaxation functions for analyzing the \(\mu\)SR spectra observed in dilute-alloy spin glasses [17]. However, the corresponding \(G_{z}(t)\) has only been discussed in approximate forms obtained as analytical expressions when \(\nu_{\mu}\ll\Delta\ll\nu_{\rm i}\) [4; 17], and a prescription for the general case is still awaited. Therefore, in the following, we investigate the behavior of \(G_{z}(t)\) corresponding to arbitrary \(Q\) and \(\nu_{\rm i}\) by Monte Carlo simulations.

### Monte Carlo simulation

We performed numerical simulations of the spin relaxation function for \(Q>0\) and \(\nu_{\mu}=0\), which we call \(G_{z}^{\rm ID}(t;\Delta,\nu_{\rm i},Q)\), where the ion dynamics is the sole origin of fluctuations, with the amplitude \(Q\Delta^{2}\) and the frequency \(\nu_{\rm i}\), \[\gamma_{\mu}^{2}\langle\mathbf{H}(0)\mathbf{H}(t)\rangle=[\sqrt{1-Q}\Delta]^{2}+[ \sqrt{Q}\Delta]^{2}e^{-\nu_{\rm i}t}. \tag{12}\] Figure 1(d) schematically illustrates the situation in Eq. (12) under these conditions. In our Monte Carlo simulation, the local field that each muon feels was approximately expressed as a vector sum of static and dynamical fields. These fields were randomly generated according to the Gaussian probability distribution \(n_{\alpha}(H)\) with the widths of \(\sqrt{(1-Q)}\Delta/\gamma_{\mu}\) and \(\sqrt{Q}\Delta/\gamma_{\mu}\), respectively. Note that the assignment of \(1-Q\) and \(Q\) is opposite to that in Refs. [15; 16], where \(Q\) represents the static component; it is defined here so that the case \(Q=0\) reduces to the conventional KT function (with \(\nu_{\mu}>0\) for muon diffusion). The fluctuation of the dynamical component was implemented in accordance with the strong-collision model by regenerating the dynamical field at the average rate of \(\nu_{\rm i}\). The time evolution of a muon spin, initially oriented along the \(z\) direction, in the local field was calculated, and its \(z\)-component was averaged over \(10^{8}\) muons to obtain \(G_{z}^{\rm ID}(t)\) for ZF. For finite LFs, the calculation was performed for the vector sum of the local field and the LF along the \(z\) direction. As shown in Figs. 2a)-c), in the case of \(0<Q<1\), the \(\frac{1}{3}\) term first decays as \(\nu_{\rm i}\) increases, and then begins to recover, with the apparent linewidth decreasing to \(\sqrt{1-Q}\Delta\). Provided that \(\mu\)SR spectra showing such a temperature dependence are analyzed with the usual KT function (Fig. 2d), the apparent fluctuation rate would exhibit an increase followed by a decrease.
Thus, it is likely to lead to an incorrect interpretation that the cause of the fluctuations is different from a simple thermal excitation process. An important clue to avoiding this is whether the linewidth changes at both ends of the apparent increase and decrease in \(\nu\) derived from the KT function. In Figs. 2b) and 2c), it can be seen that as \(\nu_{\rm i}\) increases, the \(Q\Delta^{2}\) contribution disappears due to motional narrowing, and the relaxation rate decreases (converging to \(\sqrt{1-Q}\Delta\)) while maintaining the Gaussian lineshape. Such a behavior is actually observed in HOIP [4; 5], and the change is found to correspond approximately to the case of \(Q\simeq 0.9\). The LF dependence of \(G_{z}^{\rm ID}(t;\Delta,\nu_{\rm i},Q)\) is indispensable information for a more accurate determination of the fluctuation frequency in \(\mu\)SR measurements. Let us compare the field dependence of the time spectrum for \(Q=0.5\) and \(\Delta/\nu_{\rm i}=0.5\) shown in Fig. 2e) with that of the conventional KT function in detail. First, the KT function calculated for \(\Delta/\nu_{\rm i}=\sqrt{0.5}\) is shown in Fig. 2f); this is because the linewidth for the fluctuations in the former case is \(\sqrt{Q}\Delta\) rather than \(\Delta\). The oscillations around \(\Delta t\simeq 2\) [originating from the static component corresponding to \((1-Q)\)] seen in Fig. 2e) are absent, indicating qualitatively different behavior. Furthermore, assuming practical situations of curve fitting by the KT function, the results with freely varying \(\nu_{\rm i}\) to reproduce the behavior of Fig. 2e) are shown in Figs. 2g) and 2h). Fig. 2g) reproduces the low-field spectrum well, while the fit is poor on the high-field side, whereas Fig. 2h) shows the opposite trend. From these comparisons, we can conclude that curve fitting with the KT function does not accurately reproduce the field dependence of the spectrum when \(Q<1\). Now, let us discuss the behavior of the relaxation function for some extreme cases, for which we define \(\Delta_{\mu}=\sqrt{(1-Q)}\Delta\) and \(\Delta_{\rm i}=\sqrt{Q}\Delta\). When the ion dynamics is fast enough to satisfy \(\nu_{\rm i}\gg\Delta\), the spin relaxation for that part is approximated by \[G_{\rm i}(t)\simeq G_{z}^{\rm KT}(t;\Delta_{\rm i},\nu_{\rm i})=e^{-\frac{2 \Delta_{\rm i}^{2}t}{\nu_{\rm i}}}. \tag{13}\] Then, if the muon is quasi-static, the corresponding total relaxation function is \[G_{z}^{\rm ID}(t)\simeq G_{z}^{\rm KT}(t;\Delta_{\mu},0)G_{\rm i}(t)=[\frac{1}{ 3}+\frac{2}{3}(1-\Delta_{\mu}^{2}t^{2})e^{-\frac{1}{2}\Delta_{\mu}^{2}t^{2}}] e^{-\frac{2\Delta_{\rm i}^{2}t}{\nu_{\rm i}}}, \tag{14}\] indicating that the curve fit with the conventional KT function will yield a reduced linewidth \(\sqrt{(1-Q)}\Delta\). Conversely, in the limit of slow ion dynamics (\(\nu_{\rm i}\ll\Delta\)), the change in the relaxation function is independent of the linewidth, and the exponential relaxation of the \(\frac{1}{3}\) component dominates; \[G_{z}^{\rm ID}(t)\simeq\frac{1}{3}e^{-\frac{2}{3}Q\nu_{\rm i}t}+\frac{2}{3}(1-\Delta^{2}t^{2})e^{ -\frac{1}{2}\Delta^{2}t^{2}}, \tag{15}\] which is valid for \(Q\approx 1\). This suggests that it is difficult to distinguish the ion dynamics from muon diffusion in this regime of slow exponential damping.
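A compact version of the Monte Carlo procedure described in the preceding subsection is sketched below. The parameter values and the small ensemble size are illustrative only (the actual simulations averaged over \(10^{8}\) muons), and the local fields are expressed directly as precession rates \(\gamma_{\mu}\mathbf{H}\) so that \(\Delta\) enters as a frequency.

```python
# Monte Carlo sketch of G_z^ID(t; Delta, nu_i, Q) at zero field: each muon sees a
# frozen Gaussian field of width sqrt(1-Q)*Delta plus a fluctuating Gaussian field of
# width sqrt(Q)*Delta that is regenerated at Poisson-distributed collision times with
# mean rate nu_i (strong-collision model). Fields are in angular-frequency units.
import numpy as np

def rotate(s, omega, dt):
    """Precess spin s about the field vector omega for a time dt (Rodrigues formula)."""
    w = np.linalg.norm(omega)
    if w * dt < 1e-12:
        return s
    n = omega / w
    c, si = np.cos(w * dt), np.sin(w * dt)
    return s * c + np.cross(n, s) * si + n * np.dot(n, s) * (1 - c)

def gz_id(tgrid, Delta=0.2, nu_i=0.2, Q=0.9, n_muons=2000, seed=0):
    rng = np.random.default_rng(seed)
    pol = np.zeros_like(tgrid)
    for _ in range(n_muons):
        h_stat = rng.normal(0.0, np.sqrt(1 - Q) * Delta, 3)   # quasi-static component
        h_dyn = rng.normal(0.0, np.sqrt(Q) * Delta, 3)        # fluctuating component
        s = np.array([0.0, 0.0, 1.0])
        t_now, t_next = 0.0, rng.exponential(1.0 / nu_i)
        for k, t in enumerate(tgrid):
            while t_next < t:                    # apply collisions occurring before t
                s = rotate(s, h_stat + h_dyn, t_next - t_now)
                t_now = t_next
                h_dyn = rng.normal(0.0, np.sqrt(Q) * Delta, 3)
                t_next += rng.exponential(1.0 / nu_i)
            pol[k] += rotate(s, h_stat + h_dyn, t - t_now)[2]
    return pol / n_muons

tgrid = np.linspace(0.0, 20.0, 11)               # time in units where Delta*t ~ O(1)
print(np.round(gz_id(tgrid), 3))
```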
As demonstrated in the case of FAPbI\({}_{3}\) below, the difference from the conventional KT function becomes noticeable in the intermediate region of \(\nu_{\rm i}\gtrsim\Delta\), so the spectra in such a temperature region are important in data analysis to distinguish between ion dynamics and muon self-diffusion.

Figure 2: Some examples of the simulated muon spin relaxation function \(G_{z}^{\rm ID}(t;\Delta,\nu_{\rm i},Q)\) incorporating the effect of ion dynamics by the parameter \(Q\), where \(Q\Delta^{2}\) is the amplitude of the fluctuation: a)–d) under zero field with different \(Q\) and \(\nu_{\rm i}\), and e)–h) under a variety of longitudinal fields \(B_{\rm LF}\) with \(Q=0.5\) and \(\Delta/\nu_{\rm i}=0.5\). Note that \(Q=1\) (and 0) corresponds to the conventional Kubo-Toyabe function.

### Comparison with experimental result

It has been inferred from the previous \(\mu\)SR study that muons in FAPbI\({}_{3}\) are exposed to random local fields originating from the nuclear magnetic moments of both the FA molecules and the PbI\({}_{3}\) lattice [5]. Among these, the magnetic field from the FA molecule (mainly due to the proton magnetic moment) is expected to exhibit fluctuation due to rotational motion induced by thermal excitation with increasing temperature. The analysis of the \(\mu\)SR time spectra in the tetragonal phase of FAPbI\({}_{3}\) by the KT function showed that the linewidth \(\Delta\) appears to decrease monotonically above \(\sim\)50 K, while the fluctuation frequency \(\nu\) exhibits a weak peak around 100 K. Moreover, the global curve fit including the spectra under various LFs is relatively poor in the 50-120 K range where \(\Delta\) changes significantly. These observations point to the situation that the actual relaxation function deviates from the KT function, as demonstrated in Fig. 2. Therefore, we focused on the temperature variation of the ZF-\(\mu\)SR spectrum and investigated whether the spectral changes could be reproduced with appropriate \(Q\), \(\Delta\), and \(\nu_{\rm i}\) when the data were analyzed with the new model. Specifically, we prepared a multi-dimensional numerical table of \(G_{z}^{\rm ID}(t;\Delta,\nu_{\rm i},Q)\) for the parameters \(\Delta t\), \(Q\), and \(\nu_{\rm i}\) by Monte Carlo simulation, and performed least-squares curve fitting of the time-dependent asymmetry using the equation \[A(t)=A_{0}G_{z}^{\rm ID}(t;\Delta,\nu_{\rm i},Q)+A_{\rm b}, \tag{16}\] where \(A_{0}\) is the initial asymmetry depending on the instrument, and \(A_{\rm b}\) is the background from muons that missed the sample. In addition, a global fit for multiple spectra was performed, where \(A_{0}\), \(\Delta\), and \(Q\) were common for the spectra at all temperatures. As shown in Fig. 3a), the entire set of spectra was well reproduced (reduced net chi-square = 1.33), and the values of the two common parameters were obtained as \(Q=0.947(3)\) and \(\Delta=0.191(1)\) MHz, respectively. The values of \(\Delta\) and \(\sqrt{1-Q}\,\Delta\) (\(=0.044\) MHz) are in good agreement with those obtained from the KT function analysis near 50 K and 120 K [5]. Furthermore, as shown in Fig. 3b), \(\nu_{\rm i}\) shows a temperature dependence consistent with a thermally activated process at higher temperatures, which is also in line with the earlier suggestion [5].
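The structure of this global fit can be illustrated with a short script: the shared parameters \(A_{0}\), \(\Delta\), and \(Q\) and one \(\nu_{\rm i}\) per temperature are packed into a single parameter vector and refined against all spectra at once. The stand-in model below uses the fast-fluctuation form of Eq. (14) in place of the interpolated Monte Carlo table, and the data are synthetic, so it only illustrates the fitting scheme, not the actual analysis.

```python
# Sketch of the global least-squares fit of Eq. (16): A0, Delta, Q shared across
# temperatures, one nu_i per temperature. gz_id() is a stand-in for the interpolated
# Monte Carlo table (here: the fast-fluctuation form of Eq. (14)); data are synthetic.
import numpy as np
from scipy.optimize import least_squares

A_BG = 0.015                                    # background quoted in the text

def gz_id(t, Delta, nu, Q):
    d_mu2, d_i2 = (1 - Q) * Delta**2, Q * Delta**2
    kt = 1/3 + 2/3 * (1 - d_mu2 * t**2) * np.exp(-0.5 * d_mu2 * t**2)
    return kt * np.exp(-2 * d_i2 * t / nu)

def residuals(p, datasets):
    A0, Delta, Q, nus = p[0], p[1], p[2], p[3:]
    return np.concatenate([(a - (A0 * gz_id(t, Delta, nu, Q) + A_BG)) / da
                           for (t, a, da), nu in zip(datasets, nus)])

rng = np.random.default_rng(1)
t = np.linspace(0.1, 20.0, 200)                 # microseconds
true = dict(A0=0.22, Delta=0.191, Q=0.947, nus=[0.3, 1.0, 3.0])   # illustrative values
datasets = [(t, true["A0"] * gz_id(t, true["Delta"], nu, true["Q"]) + A_BG
             + rng.normal(0, 0.002, t.size), np.full_like(t, 0.002))
            for nu in true["nus"]]

p0 = [0.2, 0.2, 0.9] + [0.5] * len(datasets)
lo = [0.0, 0.0, 0.0] + [1e-3] * len(datasets)
hi = [1.0, 1.0, 1.0] + [np.inf] * len(datasets)
fit = least_squares(residuals, p0, bounds=(lo, hi), args=(datasets,))
print("A0, Delta, Q =", np.round(fit.x[:3], 3), "  nu_i =", np.round(fit.x[3:], 2))
```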
Curve fit using the Arrhenius function, \(\nu_{\rm i}=\nu_{0}\exp(-E_{\rm a}/k_{\rm B}T)+\nu_{\rm c}\), yields excellent agreement with \(\nu_{0}=3.6(6)\times 10^{8}\) s\({}^{-1}\), \(E_{\rm a}/k_{\rm B}=630(18)\) K, and \(\nu_{\rm c}=1.62(9)\times 10^{5}\) s\({}^{-1}\) (see the dashed line in Fig. 3b). \(A_{\rm b}\) is small (\(\simeq 0.015\ll A_{0}\)) and nearly independent of temperature as expected, in support of the present model. ## III Discussion We have shown that when the fluctuation of \(\mathbf{H}(t)\) is due to the ion dynamics corresponding to \(0<Q<1\) in Eq. (12), the analysis by the KT function would encounter a region of \(\nu\) where the \(\mu\)SR spectrum is not well reproduced by the KT function. Therefore, a careful analysis of the time spectrum in the temperature region that exhibits such \(\nu\) has the potential to distinguish the origin of fluctuation. Historically, the possibility of applying \(\mu\)SR to the study of ion dynamics was first discussed for the diffusion of Li\({}^{+}\) ions in Li\({}_{x}\)CoO\({}_{2}\), a typical cathode material for Li-ion batteries [2], and then its application to other transition metal oxides containing Li has been attempted [18; 19; 20; 21; 22; 23; 24; 25; 26]. In these studies, the \(\mu\)SR time spectra reflecting \(\mathbf{H}(t)\) originating from the \({}^{7}\)Li and other nuclear magnetic moments (e.g., \({}^{59}\)Co in Li\({}_{x}\)CoO\({}_{2}\)) was analyzed by the dynamical KT function, and it was commonly assumed that the obtained fluctuation rate \(\nu\) directly corresponds to the jumping frequency of Li\({}^{+}\) ions. However, the interpretation of \(\mu\)SR result has been significantly changed in a very recent paper reporting the results of "operando" \(\mu\)SR measurements in the charging and discharging of Li\({}_{x}\)CoO\({}_{2}\) incorporated in a half battery [27]. Namely, it is argued that the fluctuation rate \(\nu\) (derived from the analysis using the KT function) reflects both contributions of Li\({}^{+}\) and muon jump diffusion, and that \(\nu\) divided by the square root of the Li-muon mass ratio (\(\sqrt{m_{\rm Li}/m_{\mu}}\simeq 7.9\)) is the jump frequency of Li. This means that the observed fluctuations mainly reflect the muon jump frequency (\(\nu_{\mu}\gg\nu_{\rm i}\)). From the studies of muon diffusion in solids accumulated to date, it has been established that the thermally-activated tunneling process dominates muon diffusion at temperatures higher than about a fraction of the Debye temperature \(\Theta_{D}\)[30; 31]. The above interpretation is qualitatively predictable given the reported \(\Theta_{D}\approx 300\) K for LiCoO\({}_{2}\), since muon diffusion via the thermally-activated tunneling process is generally considered to prevail over ion diffusion in the relevant temperature range. Another important factor in terms of identifying the origin of fluctuation is the temperature dependence of \(\Delta\) derived from analysis by the KT function. While the decrease in apparent \(\Delta\) with increasing temperature may indicate a contribution of ion dynamics to spin relaxation, muon diffusion is also expected in the temperature region of interest. Our simulation results show that when the latter is dominant (\(\nu_{\mu}\gg\nu_{\rm i}\)), the spin relaxation can be well reproduced by the KT function. 
Therefore, when the spectra are well reproduced by the KT function even though \(\Delta\) exhibits a change, it is more likely that the cause of the change in \(\Delta\) is attributed to muon site issues rather than to ion dynamics; note that \(\Delta\) can vary with the muon position \(\mathbf{r}_{\mu}\) in inhomogeneous materials. For example, muons may prefer to occupy atomic vacancies or voids to reduce the zero-point energy, which has a significant impact on the behavior of the light particles [28]. Provided that \(\Delta=\Delta(\mathbf{r}_{\mu})\) is small for such defect sites, the mean \(\Delta\) will decrease as the probability of a muon reaching these sites increases with increasing temperature in a diffusion-limited process. Such behavior has actually been observed, for example, in InGaZnO [29].

Figure 3: a) Zero-field \(\mu\)SR spectra observed at low temperatures in FAPbI\({}_{3}\) fitted with a relaxation function obtained by Monte Carlo simulation (global fit assuming that the initial asymmetry, \(Q\), and \(\Delta\) are common for all temperatures). All the spectra are reproduced by \(Q=0.947(3)\) and \(\Delta=0.191(1)\) MHz. b) Arrhenius plot of the internal field fluctuation frequency \(\nu_{\rm i}\) attributed to FA molecular motion obtained in this analysis. The dashed line is the fit by the sum of the thermally activated function and the constant. Inset: temperature dependence of the constant background (see text for detail).

In examining the previous literature reporting \(\mu\)SR studies of Li ion diffusion, there are many cases in which \(\Delta\) decreases with increasing temperature, but so far there has been little discussion of such behavior from the aforementioned viewpoint. If the observed spin relaxation is well reproduced by the KT function, it is incompatible with ion diffusion for which the autocorrelation function is assumed to be given by Eq. (12) with \(Q<1\). To determine whether the observed fluctuations are due to ion diffusion, it will be necessary to examine the reproducibility of the \(\mu\)SR spectra by the model presented here with \(Q<1\).

## IV Conclusion

We have performed Monte Carlo simulations to investigate how the residual correlations in the fluctuations described by an Edwards-Anderson type parameter \(Q\) affect the lineshape of the relaxation function observed in the \(\mu\)SR experiment. As a result, we found a significant deviation from the Kubo-Toyabe function for \(0<Q<1\). We then compared the simulated relaxation function, \(G_{z}^{\rm ID}(t;\Delta,\nu_{\rm i},Q)\), with the \(\mu\)SR spectra observed in FAPbI\({}_{3}\), in which internal field fluctuations associated with the rotational motion of FA molecules and the quasi-static internal field from the PbI\({}_{3}\) lattice are presumed to coexist. The curve fits show that the model provides a proper description of the temperature-dependent lineshape with reasonable \(\Delta\), \(\nu_{\rm i}\) and \(Q\), which is in line with cation molecular motion. This result paves the way for the possibility of experimentally distinguishing fluctuations due to ion dynamics around muons from those due to muon self-diffusion, which has been considered difficult. It also calls into question the attribution of the internal field fluctuations entirely to ion diffusion for \(\mu\)SR time spectra that are well reproduced by the Kubo-Toyabe function, suggesting the need for a major reexamination of the results of previous analyses.

## Acknowledgment

We thank M. Hiraishi, H. Okabe, A. Koda, and K.
Fukutani for helpful discussion during the preparation of the manuscript. The \(\mu\)SR results on FAPbI\({}_{3}\) quoted in this paper are those published in Ref. [5]. The Monte Carlo simulation was conducted on the HPE SGI8600 supercomputer at the Japan Atomic Energy Agency. This work was partially supported by JSPS KAKENHI (Grant Nos. 23K11707, 21H05102, and 20H01864) and the MEXT Program: Data Creation and Utilization Type Material Research and Development Project (Grant No. JPMXP1122683430).
2304.05737
Multicomponent KP type hierarchies and their reductions, associated to conjugacy classes of Weyl groups of classical Lie algebras
This, to a large extent, expository paper, describes the theory of multicomponent hierarchies of evolution equations of XKP type, where X=A, B, C or D, and AKP=KP, and their reductions, associated to the conjugacy classes of the Weyl groups of classical Lie algebras of type X. As usual, the main tool is the multicomponent boson-fermion correspondence, which leads to the corresponding tau-functions, wave functions, dressing operators and Lax operators.
Victor Kac, Johan van de Leur
2023-04-12T09:46:07Z
http://arxiv.org/abs/2304.05737v1
###### Abstract ###### Abstract This, to a large extent, expository paper, describes the theory of multicomponent hierarchies of evolution equations of XKP type, where X=A, B, C or D, and AKP=KP, and their reductions, associated to the conjugacy classes of the Weyl groups of classical Lie algebras of type X. As usual, the main tool is the multicomponent boson-fermion correspondence, which leads to the corresponding tau-functions, wave functions, dressing operators and Lax operators. **Multicomponent KP type hierarchies and their reductions, associated to conjugacy classes of Weyl groups of classical Lie algebras** **Victor Kac** Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, U.S.A e-mail: [email protected] **Johan van de Leur** Mathematical Institute, Utrecht University, P.O. Box 80010, 3508 TA Utrecht, The Netherlands e-mail: [email protected] ###### Contents * 1 Introduction * 2 Lie algebras * 2.1 Classical Lie algebras * 2.2 Loop algebras and affine Lie algebras * 2.3 The classical infinite matrix algebras * 2.4 Affine Lie algebras as subalgebras of classical infinite matrix Lie algebras * 2.5 Twisted affine Lie algebras as subalgebras of classical infinite matrix Lie algebras * 3 Representations of classical infinite matrix Lie algebras * 3.1 The spin module for \(a_{\infty}\) * 3.2 The spin module for \(b_{\infty}\) and \(d_{\infty}\) * 3.3 The Weyl module for \(c_{\infty}\) (cf. [8]) * 3.4 Generating fermionic and bosonic fields * 3.5 Multicomponent realizations of \(F_{x}\) and \(x_{\infty}\) * 3.6 Vertex operator realizations of \(F_{x}\) * 3.6.1 Bosonization of type \(a\), \(b\) and \(d\) * 3.6.2 Bosonization of type \(c\) * 3.7 The bosonic realization of \(F_{x}\) * 3.7.1 The bosonic realization of \(F_{x}\), the Clifford algebra case * 3.7.2 The bosonic realization of \(F_{c}\), the Weyl algebra case * 4 * 4 Hierarchies of KP type * 4.1 The fermionic description of the KP, BKP and DKP hierarchies * 4.2 The symplectic bosons description of the CKP hierarchy * 4.3 Bosonic description of the multicomponent KP, BKP, CKP and DKP hierarchies * 4.4 The wave function, Sato-Wilson and Lax equations for the KP, BKP, CKP and DKP hierarchies * 5 Conjugacy classes of the Weyl group for classical Lie algebras and generating fields for \(\hat{\mathfrak{g}}\) and \(\hat{\mathfrak{g}}^{(2)}\) * 5.1 Weyl group elements and the twisted realizations of \(\hat{\mathfrak{g}}\) * 5.2 \(gl_{n}\) * 5.3 \(sp_{2n}\) * 5.4 \(so_{2n}\) * 5.5 \(so_{2n+1}\) * 5.6 Weyl group elements and the twisted realization of \(\hat{\mathfrak{g}}^{(2)}\) * 6 A map from the conjugacy classes of the Weyl group to nilpotent elements in \(\mathfrak{g}\) * 6.1 Construction based on Section 5 * 6.2 A slightly different map * 7 Hierarchies of KP type related to \(x_{\infty}\) and \(\hat{\mathfrak{g}}\) reductions * 7.1 \(\hat{gl}_{n}\) and the \(\lambda\)-KdV hierarchy * 7.2 \(\hat{so}_{2n}\) and the \((\lambda,\mu)\)-reduced DKP hierarchy * 7.3 \(\hat{so}_{2n+1}\) and the \((\lambda,\mu)\)-reduced DKP hierarchy * 7.4 \(\hat{so}_{2n+1}\) and the \((\lambda,\mu)\)-reduced BKP hierarchy * 7.5 \(\hat{sp}_{2n}\) and the \((\lambda,\mu)\)-reduced CKP hierarchy * 8 Hierarchies of KP type related to \(x_{\infty}\) and \(\hat{\mathfrak{g}}^{(2)}\) reductions * 8.1 \(\hat{gl}_{2m}^{(2)}\) and the \((\lambda,\mu)\)-reduced DKP hierarchy * 8.2 \(\hat{gl}_{2m+1}^{(2)}\) and \(\hat{so}_{2n}^{(2)}\) and the \((\lambda,\mu)\)-reduced BKP hierarchy Introduction Let \(V\) be a vector space over 
\(\mathbb{C}\) with a symmetric (resp. skewsymmetric) nondegenerate bilinear form \((\cdot,\cdot)\). Recall that the Clifford (resp. Weyl) algebra \(C\ell(V)\) (resp. \(W(V)\)) is the factor algebra of the tensor algebra over \(V\) by the \(2\)-sided ideal, generated by the relations \[uv+vu=(u,v)\quad(\text{resp. }uv-vu=(u,v)),\quad u,v\in V.\] The key construction of the paper is that of the _Casimir operator,_ acting on \(F\otimes F\), where \(F\) is an irreducible module over \(C\ell(V)\) (resp. \(W(V)\)). For this we need two sets of vectors \(\{v_{i}^{+}\}_{i\in I}\) and \(\{v_{i}^{-}\}_{i\in I}\) in \(V\) indexed by a set \(I\subset\mathbb{Q}\), invariant under sign change, satisfying the following properties: \[(i) \{v_{i}^{+}\}_{i\in I}\cup\{v_{i}^{-}\}_{i\in I}\quad\text{span }V,\] \[(ii) (v_{i}^{+},v_{j}^{-})=\delta_{i,-j},\quad(v_{i}^{\pm},v_{i}^{\pm })=0\quad\text{if }i\neq 0,\quad v_{0}^{+}=v_{0}^{-}:=v_{0}.\] Let \(\epsilon=1\) (resp. \(0\)) if \(0\in I\) (resp. \(\not\in I\)). The module \(F\) is the irreducible \(C\ell(V)\)- (resp. \(W(V)\)-) module, called the spin (resp. Weyl) module, admitting a non-zero vector \(|0\rangle\) (vacuum vector), such that \[v_{i}^{+}|0\rangle=v_{i}^{-}|0\rangle=0\quad\text{if }i>0,\quad v_{0}|0 \rangle=\frac{\epsilon}{\sqrt{2}}|0\rangle. \tag{1.1}\] Define the Casimir operator \(S\) by the following formula \[S=\epsilon v_{0}\otimes v_{0}+\sum_{i\in I\setminus\{0\}}v_{i}^{+}\otimes v_{ -i}^{-}. \tag{1.2}\] Due to (1.1), we have \[S(|0\rangle\otimes|0\rangle)=\frac{\epsilon}{2}|0\rangle\otimes|0\rangle. \tag{1.3}\] Let \(\mathfrak{g}\) be the Lie subalgebra of \(C\ell(V)\) (resp. \(W(V)\)), spanned by the elements of the form \(v^{+}v^{-}\), where \(v^{\pm}\in V^{\pm}:=\text{span}_{i\in I}\{v_{i}^{\pm}\}\). In the case of \(C\ell(V)\) we have also the corresponding group \(G\), generated by the elements of the form \(1+v^{+}v^{-}\), where \(v^{\pm}\in V^{\pm}\) and \(\text{span}\left\{v^{+},v^{-}\right\}\) is an isotropic subspace of \(V\) (its inverse being \(1-v^{+}v^{-}\)). It is straightforward to check that the operator \(S\) on \(F\otimes F\) commutes with the action of \(\mathfrak{g}\) by derivations of the tensor product, hence \(S\) commutes with the diagonal action of \(G\). It follows from (1.3) that all elements \(\tau\) from the orbit \(G\cdot|0\rangle\) satisfy the equation \[S(\tau\otimes\tau)=\frac{\epsilon}{2}\tau\otimes\tau. \tag{1.4}\] Equation(1.4) is called the _generalized KP equation_ in _fermionic picture_. It is easy to prove (along the lines of [16], Theorem 1.2) that, conversely, a non-zero \(\tau\in F\), satisfying equation (1.4), lies on the orbit \(G\cdot|0\rangle\). The most important examples of the above setup are the Clifford algebras \(C\ell_{x}\), \(x=a,\ b\), or \(d\), and the Weyl algebra, which is, for the sake of uniformity, denoted by \(C\ell_{c}\), constructed as follows. We denote the corresponding \(C\ell_{x}\)-module \(F\) by \(F_{x}\). \(\mathbf{C}\ell_{\mathbf{a}}\). \(I=\frac{1}{2}+\mathbb{Z}\); \(\psi_{i}^{+}:=v_{i}^{+}\), \(\psi_{i}^{-}:=v_{i}^{-}\), \(i\in I\), form a basis of \(V\); the symmetric bilinear form is \((\psi_{i}^{+},\psi_{j}^{-})_{a}=\delta_{i,-j}\), \((\psi_{i}^{\pm},\psi_{j}^{\pm})_{a}=0\); the action on the vacuum vector is \(\psi_{i}^{\pm}|0\rangle=0\) for \(i>0\). The generating series \[\psi^{\pm}(z)=\sum_{j\in\frac{1}{2}+\mathbb{Z}}\psi_{j}^{\pm}z^{-j-\frac{1}{2}},\] are called charged free fermions. 
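In terms of these generating series the Casimir operator (1.2) takes a residue form: since \(\psi^{\pm}(z)=\sum_{j}\psi^{\pm}_{j}z^{-j-\frac{1}{2}}\), extracting the coefficient of \(z^{-1}\) picks out exactly the pairs of modes with opposite indices, \[\mathrm{Res}_{z=0}\psi^{+}(z)\tau\otimes\psi^{-}(z)\tau\,dz=\sum_{j,k\in\frac{1}{2}+\mathbb{Z}}\psi^{+}_{j}\tau\otimes\psi^{-}_{k}\tau\,\mathrm{Res}_{z=0}z^{-j-k-1}dz=\sum_{j\in\frac{1}{2}+\mathbb{Z}}\psi^{+}_{j}\tau\otimes\psi^{-}_{-j}\tau=S(\tau\otimes\tau),\] which is the form in which equation (1.4) (here with \(\epsilon=0\), since \(0\notin I\)) will be rewritten below.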
The relations of \(C\ell_{a}\) become \[[\psi^{+}(z),\psi^{-}(w)]_{+}=\delta(z-w),\quad[\psi^{\pm}(z),\psi^{\pm}(w)]_{ +}=0,\] where \(\delta(z-w)=z^{-1}\sum_{n\in\mathbb{Z}}\left(\frac{w}{z}\right)^{n}\) is the formal delta function. The equation (1.4) becomes \[\mathrm{Res}_{z=0}\psi^{+}(z)\tau\otimes\psi^{-}(z)\tau dz=0,\quad\tau\in F_{ a}. \tag{1.5}\] It is called the _KP hierarchy_ in the _fermionic picture_. The Lie algebra \(\mathfrak{g}\) is the Lie algebra \(gl_{\infty}\) of matrices \((a_{i,j})_{i,j\in\mathbb{Z}}\) over \(\mathbb{C}\) with finitely many non-zero entries via the identification \[E_{ij}=\psi_{-i+\frac{1}{2}}^{+}\psi_{j-\frac{1}{2}}^{-},\quad i,j\in\mathbb{ Z}.\] \(\mathbf{C}\ell_{\mathbf{b}}\). \(I=\mathbb{Z}\), \(\tilde{\phi}_{i}:=v_{i}^{+}=(-1)^{i}v_{i}^{-}\), \(i\in I\), form a basis of \(V\), the symmetric bilinear form \((\cdot,\cdot)_{b}\) is \((\tilde{\phi}_{i},\tilde{\phi}_{j})_{b}=(-1)^{i}\delta_{i,-j}\); the action on the vacuum vector is \(\tilde{\phi}_{i}|0\rangle=0\) if \(i>0\), \(\tilde{\phi}_{0}|0\rangle=\frac{1}{\sqrt{2}}|0\rangle\). The generating series \[\tilde{\phi}(z)=\sum_{j\in\mathbb{Z}}\tilde{\phi}_{j}z^{-j}\] is called the twisted neutral free fermion. The relations of \(C\ell_{b}\) become \[[\tilde{\phi}(z),\tilde{\phi}(w)]_{+}=z\delta(z+w). \tag{1.6}\] The equation (1.4) becomes \[\mathrm{Res}_{z=0}\tilde{\phi}(z)\tau\otimes\tilde{\phi}(-z)\tau\frac{dz}{z}= \frac{1}{2}\tau\otimes\tau,\quad\tau\in F_{b}. \tag{1.7}\] It is called the _BKP hierarchy_ in the _fermionic picture_. The Lie algebra \(\mathfrak{g}\) is the subalgebra of \(gl_{\infty}\), consisting of matrices, skewadjoint with respect to the bilinear form \((\cdot,\cdot)_{b}\), which we denote by \(so_{\infty,odd}\), via the identification \[(-1)^{j}E_{-i,j}-(-1)^{i}E_{-j,i}=\tilde{\phi}_{i}\tilde{\phi}_{j}\quad\text{ for }i>j.\] Note, that we use a a slightly different basis of the Clifford algebra \(C\ell_{b}\) in this introduction, than in the other sections. This means that we also have a different embedding in \(gl_{\infty}\). \(\mathbf{C}\ell_{\mathbf{d}}\). \(I=\frac{1}{2}+\mathbb{Z}\), \(\phi_{i}:=v_{i}^{+}=v_{i}^{-}\), \(i\in I\), form a basis of \(V\), the symmetric bilinear form is \((\phi_{i},\phi_{j})_{d}=\delta_{i,-j}\), \(i,j\in I\); the action on the vacuum vector is \(\phi_{i}|0\rangle=0\) if \(i>0\). The generating series \[\phi(z)=\sum_{j\in\frac{1}{2}+\mathbb{Z}}\phi_{j}z^{-j-\frac{1}{2}}\] is called the neutral free fermion. The relations of \(C\ell_{d}\) become \[[\phi(z),\phi(w)]_{+}=\delta(z-w). \tag{1.8}\] The equation (1.4) becomes \[\mathrm{Res}_{z=0}\phi(z)\tau\otimes\phi(z)\tau dz=0,\quad\tau\in F_{d}. \tag{1.9}\] It is called the _DKP hierarchy_ in the _fermionic picture_. The Lie algebra \(\mathfrak{g}\) is the subalgebra of \(gl_{\infty}\), consisting of matrices, skewadjoint with respect to the bilinear form \((\cdot,\cdot)_{d}\), which we denote by \(so_{\infty,even}\), via the identification \[E_{-i+\frac{1}{2},j+\frac{1}{2}}-E_{-j+\frac{1}{2},i+\frac{1}{2}}=\phi_{i} \phi_{j}\quad\text{for $i>j$}.\] \(\mathbf{C}\ell_{\mathbf{c}}\). \(I=\frac{1}{2}+\mathbb{Z}\), \(b_{i}:=v_{i}^{+}=(-1)^{i+\frac{1}{2}}v_{i}^{-}\), \(i\in I\), form a basis of \(V\), the skewsymmetric bilinear form is \((b_{i},b_{j})_{c}=(-1)^{i-\frac{1}{2}}\delta_{i,-j}\), \(i,j\in I\); the action on the vacuum vector is \(b_{i}|0\rangle=0\) if \(i>0\). 
The generating series \[b(z)=\sum_{j\in\frac{1}{2}+\mathbb{Z}}b_{j}z^{-j-\frac{1}{2}} \tag{1.10}\] is called the neutral free symplectic boson. The relations of \(C\ell_{c}\) become \[[b(z),b(w)]_{-}=\delta(z+w). \tag{1.11}\] The equation (1.4) becomes \[\mathrm{Res}_{z=0}b(z)\tau\otimes b(-z)\tau dz=0,\quad\tau\in F_{c}. \tag{1.12}\] It is called the _CKP hierarchy_ in the _fermionic picture_. The Lie algebra \(\mathfrak{g}\) is the subalgebra of \(gl_{\infty}\), consisting of matrices, skewadjoint with respect to the bilinear form \((\cdot,\cdot)_{c}\), which we denote by \(sp_{\infty}\), via the identification \[(-1)^{j+1}E_{-i,j+1}+(-1)^{i+1}E_{-j,1+i}=b_{i+\frac{1}{2}}b_{j+\frac{1}{2}} \quad\text{for $i\geq j$}.\] Note that we use the subscripts \(a\),\(b\), \(c\) and \(d\) since the Lie algebra \(\mathfrak{g}\) in these cases has Dynkin diagram \(A_{\infty}\), \(B_{\infty}\), \(C_{\infty}\) and \(D_{\infty}\), respectively. Recall that the equation (1.4) describes the group orbit of the vacuum vector under the group, corresponding to the Lie algebra \(\mathfrak{g}\), in the space \(F_{x}\) for \(x=a,b,d\). The generating series \(\psi^{\pm}(z)\), \(\tilde{\phi}(z)\), \(\phi(z)\) and \(b(z)\) are quantum fields in the sense that, when applied to \(v\in F_{x}\), one obtains an element from \(F_{x}((z))\). An important operation on quantum fields \(\alpha(z)\) and \(\beta(z)\) is their normally ordered product, defined by \[:\alpha(z)\beta(z):=\alpha(z)_{-}\beta(z)\pm\beta(z)\alpha_{+}(z), \tag{1.13}\] where \(\alpha(z)=\alpha_{-}(z)+\alpha_{+}(z)\), \(\alpha_{+}(z)|0\rangle\in F[[z]]\), \(\alpha_{-}(z)|0\rangle\in z^{-1}F[[z^{-1}]]\) and we have in (1.13) a minus sign if \(\alpha(z)\) and \(\beta(z)\) are both fermions. The simple formalism outlined above is applied to the theory of integrable evolution PDE's by using _bosonization_. For example, the simplest, 1-component bosonization, in cases \(a\), \(b\), \(d\), \(c\) is, respectively, \[\alpha_{a}(z)=:\psi^{+}(z)\psi^{-}(z):, \tag{1.14}\] \[\alpha_{b}(z)=\frac{1}{2}:\tilde{\phi}(z)\tilde{\phi}(-z):,\] \[\alpha_{d}(z)=\frac{1}{2}:\phi(z)\phi(-z):,\] \[\alpha_{c}(z)=\frac{1}{2}:b(z)b(-z):.\] In all cases bosonization is expressed in terms of vertex operators from string theory. In order to introduce them we need the following notation, which will be used throughout the paper: \[z\cdot t=\sum_{j=1}^{\infty}z^{j}t_{j},\quad z^{-1}\cdot\tilde{\partial}_{t}= \sum_{j=1}^{\infty}z^{-j}\frac{1}{j}\frac{\partial}{\partial t_{j}}; \tag{1.15}\] also \(z\circ t\) and \(z^{-1}\circ\tilde{\partial}_{t}\) will denote the expressions in (1.15), where the summation is taken over positive odd integers. In the simplest \(x=a\) case, \(\alpha_{a}(z)=\sum_{n\in\mathbb{Z}}\alpha_{n}z^{-n-1}\), where \[[\alpha_{m},\alpha_{n}]=m\delta_{m,-n},\quad\alpha_{j}|0\rangle=0\quad\text{ for }j\geq 0. \tag{1.16}\] Hence we have a Heisenberg Lie algebra, with respect to which \(F_{a}\) decomposes in a direct sum of eigenspaces \(F_{a}=\oplus_{m\in\mathbb{Z}}F^{(m)}\) of \(\alpha_{0}\), so that each \(F^{(m)}\) is an irreducible Heisenberg Lie algebra module with a non-zero vector \(|m\rangle\), such that \(\alpha_{j}|m\rangle=\delta_{j,0}m|m\rangle\) for \(j\geq 0\). 
This allows one to construct an isomorphism \[\sigma_{1}:F_{a}\xrightarrow{\sim}B_{1}=\mathbb{C}[q,q^{-1},t_{1},t_{2},\ldots], \tag{1.17}\] called the bosonization of type \(a\), which is uniquely determined by the following properties: \[\sigma_{1}(|m\rangle)=q^{m},\ \sigma_{1}\alpha_{0}\sigma_{1}^{-1}=q\frac{\partial}{\partial q},\ \sigma_{1}\alpha_{n}\sigma_{1}^{-1}=\frac{\partial}{\partial t_{n}}\text{ and }\sigma_{1}\alpha_{-n}\sigma_{1}^{-1}=nt_{n},\text{ for }n>0. \tag{1.18}\] Furthermore, since \([\alpha_{k},\psi^{\pm}(z)]=\pm z^{k}\psi^{\pm}(z)\), we can identify the charged free fermions, under the isomorphism \(\sigma_{1}\), with vertex operators: \[\sigma_{1}\psi^{\pm}(z)\sigma_{1}^{-1}=q^{\pm 1}z^{\pm q\frac{\partial}{\partial q}}e^{\pm z\cdot t}e^{\mp z^{-1}\cdot\tilde{\partial}_{t}}. \tag{1.19}\] Recall that the isomorphism (1.17) is equivalent to the Jacobi triple product identity. Indeed, define energy \(|0\rangle=0\) and energy \(\psi_{j}^{\pm}=-j\); then \[F_{a}=\bigoplus_{i\in\frac{1}{2}\mathbb{Z}_{\geq 0}}F_{i},\text{ where }F_{i}=\{w\in F|\,\text{energy}(w)=i\}.\] Let \[F_{i}^{(m)}:=F_{i}\cap F^{(m)}\text{ and }\dim_{q,t}F_{a}:=\sum_{i,m}\dim(F_{i}^{(m)})q^{i}t^{m}.\] Then, looking at the spin module \(F_{a}\), we have \[\dim_{q,t}F_{a}=\prod_{j\in\frac{1}{2}+\mathbb{Z}_{\geq 0}}(1+tq^{j})(1+t^{-1}q^{j}). \tag{1.20}\] Looking at \(B_{1}\), one finds \[\dim_{q,t}F_{a}=\sum_{m\in\mathbb{Z}}t^{m}q^{\frac{m^{2}}{2}}\prod_{j\in\mathbb{Z}_{>0}}\frac{1}{1-q^{j}}. \tag{1.21}\] Thus the equality of (1.20) and (1.21) is the famous Jacobi triple product identity. Applying \(\sigma_{1}\) to the KP hierarchy in the fermionic picture (1.5) gives the famous KP hierarchy of bilinear equations on the tau-function [6]: \[\text{Res}_{z=0}e^{z\cdot(t-\overline{t})}e^{z^{-1}\cdot(\tilde{\partial}_{\overline{t}}-\tilde{\partial}_{t})}\tau(t)\tau(\overline{t})dz=0,\quad\tau(t)\in B_{1}. \tag{1.22}\] Here and further \(\bar{t}\) denotes another copy of \(t=(t_{1},t_{2},\ldots)\). The KP hierarchy of equations on the tau-function, like (1.22), can be rewritten in terms of Hirota bilinear equations, following [27] and many other papers. However, the most important approach, introduced by Sato in [27], is by the Lax type equations via the wave functions and dressing operators. To describe this, consider the algebra of pseudo-differential operators \(\mathbb{C}[[t_{1},t_{2},\ldots]][[\partial^{-1}]]\), where \(\partial=\frac{\partial}{\partial t_{1}}\). Recall that for the KP hierarchy the _dressing operator_ \(P(t,\partial)\) for the tau-function \(\tau(t)\) is the monic pseudo-differential operator whose symbol is \[P(t,z)=\frac{e^{-z^{-1}\cdot\tilde{\partial}_{t}}\tau(t)}{\tau(t)}. \tag{1.23}\] The associated _Lax operator_ \(L(\partial)=L(t,\partial)\) is defined as the "dressed" by \(P(t,\partial)\) operator \(\partial\): \[L(\partial)=P(\partial)\partial P(\partial)^{-1}. \tag{1.24}\] The equations (1.22) on the tau-function are equivalent to the Sato-Wilson equations \[L(\partial)P(\partial)=P(\partial)\partial,\quad\frac{\partial P(\partial)}{\partial t_{k}}=-(L(\partial)^{k})_{-}P(\partial),\quad k=1,2,\ldots, \tag{1.25}\] where the subscript \({}_{-}\), as usual, denotes the integral part of \(L(\partial)^{k}\).
This is equivalent to the following _Lax-Sato equations_ on \(L(\partial)\): \[\frac{\partial L(\partial)}{\partial t_{k}}=[(L(\partial)^{k})_{+},L(\partial)],\quad k=1,2,\ldots, \tag{1.26}\] where the subscript \({}_{+}\), as usual, denotes the differential part of \(L(\partial)^{k}\), which are equivalent to (1.22). It turns out that the bosonization (1.17)-(1.19) in the \(x=a\) case can be generalized to an \(s\)-component bosonization \(\sigma_{s}\) for an arbitrary positive integer \(s\) (see [15]). For that one breaks the fields \(\psi^{\pm}(z)=\sum_{k\in\frac{1}{2}+Z}\psi^{\pm}_{k}z^{-k-\frac{1}{2}}\) in a sum of \(s\) fields \(\psi^{\pm a}(z)=\sum_{k\in\frac{1}{2}+Z}\psi^{\pm a}_{k}z^{-k-\frac{1}{2}}\), where \(a=1,\ldots,s\), by letting \[\psi^{\pm a}_{j}=\psi^{\pm}_{sj\pm\frac{1}{2}(s-2a+1)},\quad j\in\frac{1}{2}+ \mathbb{Z}, \tag{1.27}\] namely the modes \(\psi^{\pm a}_{j}\) have indices \(\equiv\pm\frac{1}{2}(1-2a)\mod s\). Then one constructs \(s\) bosonic fields \(\alpha^{a}(z)=:\psi^{+a}(z)\psi^{-a}(z):=\sum_{j\in\mathbb{Z}}\alpha^{a}_{j}z^ {-j-1}\), \(a=1,\ldots,s\), whose modes form a Heisenberg Lie algebra of "rank" \(s\): \[[\alpha^{a}_{j},\alpha^{b}_{k}]=j\delta_{ab}\delta_{j,-k},\quad\alpha^{a}_{j} |0\rangle=0\quad\text{for $j\geq 0$}. \tag{1.28}\] Like in the 1-component case, this Heisenberg Lie algebra does not act irreducibly on \(F=F_{a}\), so in order to get irreducibility one needs to introduce additional operators \(Q_{a}\), \(a=1,\ldots s\), on \(F\), analogous to the operator of multiplication by \(q\). They are uniquely determined by the following properties: \[Q_{a}|0\rangle=\psi^{+a}_{-\frac{1}{2}}|0\rangle,\quad Q_{a}\psi^{\pm b}_{k}= (-1)^{1-\delta_{ab}}\psi^{\pm b}_{k\mp\delta_{ab}}Q_{a}. \tag{1.29}\] Then the \(s\)-component bosonization is a vector space isomorphism \[\sigma_{s}:F\xrightarrow{\sim}B_{s}=\mathbb{C}[q_{a},q_{a}^{-1},t_{1}^{(a)}, t_{2}^{(a)},\ldots|\,1\leq a\leq s], \tag{1.30}\] uniquely determined by \(\sigma_{s}(|0\rangle)=1\) and the following properties (\(a=1,\ldots,s\)): \[\begin{split}&\sigma_{s}Q_{a}\sigma_{s}^{-1}=(-1)^{\sum_{j=1}^{a-1} q_{j}\frac{\partial}{\partial q_{j}}}q_{a},\ \sigma_{s}\alpha^{a}_{0}\sigma_{s}^{-1}=q_{a}\frac{\partial}{\partial q_{a}}, \\ &\sigma_{s}\alpha^{a}_{j}\sigma_{s}^{-1}=\frac{\partial}{\partial t _{j}^{(a)}}\text{ and }\sigma_{s}\alpha^{a}_{-j}\sigma_{s}^{-1}=jt_{j}^{(a)},\text{ for $j>0$}.\end{split} \tag{1.31}\] As in the 1-component case, under the isomorphism \(\sigma_{s}\), one has the following identification \[\sigma_{s}\psi^{\pm a}(z)\sigma_{s}^{-1}=(-1)^{\sum_{j=1}^{a-1}q_{j}\frac{ \partial}{\partial q_{j}}}q_{a}^{\pm 1}z^{\pm q_{a}\frac{\partial}{\partial q_{a}}}e^{ \pm z\cdot t^{(a)}}e^{\mp z^{-1}\cdot\vec{\partial}_{t^{(a)}}} \tag{1.32}\] The isomorphism (1.30) is equivalent to the product of \(s\) copies of (1.20), which is equal to the product of \(s\) copies of (1.21), by the Jacobi triple product identity. Note that the KP hierarchy in the fermionic picture (1.5) becomes \[{\rm Res}_{z=0}\sum_{a=1}^{s}\psi^{+a}(z)\tau\otimes\psi^{-a}(z)\tau dz=0. 
\tag{1.33}\] Furthermore, under the isomorphism \(\sigma_{s}\), we have \[\sigma_{s}:F^{(0)}\xrightarrow{\sim}B^{(0)}_{s}={\mathbb{C}}[q^{\underline{k}},t^{(a)}_{1},t^{(a)}_{2},\ldots|\,1\leq a\leq s,\ |\underline{k}|=0],\] where \(\underline{k}=(k_{1},\ldots,k_{s})\in{\mathbb{Z}}^{s}\), \(q^{\underline{k}}=q^{k_{1}}_{1}q^{k_{2}}_{2}\cdots q^{k_{s}}_{s}\) and \(|\underline{k}|_{j}\) stands for \(\sum_{i=1}^{j}k_{i}\), and \(|\underline{k}|=|\underline{k}|_{s}\), so that for \(\tau\in F^{(0)}\) we can write \(\sigma_{s}(\tau)=\sum_{|\underline{k}|=0}\tau_{\underline{k}}(t)q^{\underline{k}}\in B^{(0)}_{s}\), where \(t=(t^{(a)}_{1},t^{(a)}_{2},\ldots|\,1\leq a\leq s)\). Applying the identification \(\sigma_{s}\otimes\sigma_{s}:F^{(0)}\otimes F^{(0)}\xrightarrow{\sim}B^{(0)}_{s}\otimes B^{(0)}_{s}\) to (1.33) and using (1.32), we obtain the \(s\)-component _KP hierarchy_ in the _bosonic picture_ on the tau-functions \(\tau_{\underline{k}}(t)\in B^{(0)}_{s}\): \[{\rm Res}_{z=0}\sum_{j=1}^{s}(-1)^{|\underline{k}+\underline{\ell}|_{j-1}}z^{k_{j}-\ell_{j}-2+2\delta_{js}}e^{z\cdot(t^{(j)}-\overline{t}^{(j)})}e^{z^{-1}\cdot(\tilde{\partial}_{\overline{t}^{(j)}}-\tilde{\partial}_{t^{(j)}})}\tau_{\underline{k}+\underline{e}_{s}-\underline{e}_{j}}(t)\tau_{\underline{\ell}+\underline{e}_{j}-\underline{e}_{s}}(\overline{t})=0, \tag{1.34}\] for each pair \(\underline{k},\underline{\ell}\in{\mathbb{Z}}^{s}\), such that \(|\underline{k}|=|\underline{\ell}|=0\). Here \(\underline{e}_{j}\in{\mathbb{Z}}^{s}\) are the standard basis vectors of \({\mathbb{Z}}^{s}\). A special case for \(s=1\) is the classical KP hierarchy (1.22) on the tau-function \(\tau(t)\in{\mathbb{C}}[t_{1},t_{2},\ldots]\). In the case of the \(s\)-component KP hierarchy, the dressing operator \(P(\underline{k};t,\partial)\), associated to the tau-functions \(\{\tau_{\underline{k}}\,|\,|\underline{k}|=0\}\), is defined in a similar way, except that \(\partial=\sum_{j=1}^{s}\frac{\partial}{\partial t^{(j)}_{1}}\) and its symbol is now an \(s\times s\) matrix \(P(\underline{k};t,z)\), with entries \[P_{ij}(\underline{k};t,z)=\frac{(-1)^{\eta_{ij}}z^{\delta_{ij}-1}e^{-z^{-1}\cdot\tilde{\partial}_{t^{(j)}}}\tau_{\underline{k}+\underline{e}_{s}-\underline{e}_{j}}(t)}{\tau_{\underline{k}}(t)},\quad i,j=1,\ldots,s, \tag{1.35}\] where \[\eta_{ij}=0\ \ {\rm if}\ i\geq j\ \ {\rm and}\ =1\ {\rm otherwise}. \tag{1.36}\] This is a monic \(s\times s\) matrix pseudo-differential operator. The (commuting) Lax operators are the dressed by \(P(\underline{k};t,\partial)\) operators \(I\partial\) and \(E_{jj}\), \(j=1,\ldots,s\): \[L(\underline{k};t,\partial)=P(\underline{k};t,\partial)\partial P(\underline{k};t,\partial)^{-1},\quad C^{(j)}(\underline{k};t,\partial)=P(\underline{k};t,\partial)E_{jj}P(\underline{k};t,\partial)^{-1}.\] The equations (1.34) on the tau-functions imply the following \(s\times s\) matrix Lax-Sato equations on \(L=L(\underline{k};t,\partial)\) and \(C^{(j)}=C^{(j)}(\underline{k};t,\partial)\) for each \(\underline{k}\): \[\frac{\partial L}{\partial t^{(j)}_{i}}=[(L^{i}C^{(j)})_{+},L],\quad\frac{\partial C^{(k)}}{\partial t^{(j)}_{i}}=[(L^{i}C^{(j)})_{+},C^{(k)}],\quad j,k=1,\ldots,s,\quad i=1,2,\ldots. \tag{1.37}\] See Subsection 7.1 for more details. Note that for \(s>1\) one needs the auxiliary operators \(C^{(j)}\) in order to have Lax equations (1.37), which are equivalent to (1.34), see [15]. Note that the Lax operator \(L\) (resp.
the auxiliary operators \(C^{(k)}\)) is an arbitrary \(s\times s\) matrix pseudo-differential operator of the form \(I\partial+\sum_{i\geq 1}A_{i}(t)\partial^{-i}\) (resp. \(E_{kk}+\sum_{i\geq 1}C_{i}^{(k)}(t)\partial^{-i}\)), where \(A_{i}(t)\) and \(C_{i}^{(k)}(t)\) are \(s\times s\) matrices with entries in \(\mathbb{C}[[t_{i}^{(a)}|a=1,\ldots,s,\ i=1,2,\ldots]]\). For the 1-component \(x=b\) case, \(\alpha_{b}(z)=\sum_{n\in\mathbb{Z}_{odd}}\alpha_{n}z^{-n-1}\), where \[[\alpha_{m},\alpha_{n}]=\frac{m}{2}\delta_{m,-n},\quad\alpha_{j}|0\rangle=0\quad\text{for }j>0. \tag{1.38}\] Hence we have a Heisenberg Lie algebra, with respect to which \(F_{b}\) is irreducible. This allows one to construct an isomorphism \[\sigma_{1}:F_{b}\xrightarrow{\sim}B_{1}=\mathbb{C}[t_{1},t_{3},\ldots], \tag{1.39}\] called the bosonization of type \(b\), which is uniquely determined by the following properties: \[\sigma_{1}(|0\rangle)=1,\ \sigma_{1}\alpha_{n}\sigma_{1}^{-1}=\frac{\partial}{\partial t_{n}}\text{ and }\sigma_{1}\alpha_{-n}\sigma_{1}^{-1}=\frac{n}{2}t_{n},\text{ for }n>0. \tag{1.40}\] This isomorphism is equivalent to the following simple identity: \[\dim_{q}(F_{b}):=\sum_{j\in\mathbb{Z}_{\geq 0}}\dim(F_{j})q^{j}=\prod_{j\in\mathbb{Z}_{>0}}(1+q^{j})=\prod_{j\in\mathbb{Z}_{>0,odd}}\frac{1}{1-q^{j}}, \tag{1.41}\] where the energy decomposition \(F=\oplus_{j}F_{j}\) is defined by energy \(|0\rangle=0\), energy \(\tilde{\phi}_{j}=-j\). Furthermore, since \([\alpha_{k},\tilde{\phi}(z)]=z^{k}\tilde{\phi}(z)\), we can identify the neutral twisted free fermion, under the isomorphism \(\sigma_{1}\), with the vertex operator: \[\sigma_{1}\tilde{\phi}(z)\sigma_{1}^{-1}=\frac{1}{\sqrt{2}}e^{z\circ t}e^{-2(z^{-1}\circ\tilde{\partial}_{t})}. \tag{1.42}\] Applying \(\sigma_{1}\) turns equation (1.7) into the (small) BKP hierarchy [7], [16] (there also exists the so-called large BKP hierarchy, which was introduced in the Introduction of [16]): \[\text{Res}_{z=0}e^{z\circ(t-\bar{t})}e^{2(z^{-1}\circ(\tilde{\partial}_{\bar{t}}-\tilde{\partial}_{t}))}\tau(t)\tau(\overline{t})dz=\tau(t)\tau(\overline{t}). \tag{1.43}\] In the \(x=b\) case the dressing operator is, see [7] or [28], \[P(t,z)=\frac{e^{-2(z^{-1}\circ\tilde{\partial}_{t})}\tau(t)}{\tau(t)}.\] Then the Lax operator \(L\), defined again by (1.24), satisfies the same Lax-Sato equations (1.26), but only for odd \(k\). In this case \(L\) is a pseudo-differential operator of the form \(L=\partial+\sum_{j>0}u_{j}(t)\partial^{-j}\), where \(u_{j}(t)\in\mathbb{C}[[t_{1},t_{3},\ldots]]\), satisfying the extra condition [28]: \[L(\partial)^{*}=-\partial^{-1}L(\partial)\partial. \tag{1.44}\] For the 1-component \(x=c\) case, \(\alpha_{c}(z)=\sum_{n\in\mathbb{Z}_{odd}}\alpha_{n}z^{-n-1}\), where \[[\alpha_{m},\alpha_{n}]=-\frac{m}{2}\delta_{m,-n},\quad\alpha_{j}|0\rangle=0\quad\text{for }j>0. \tag{1.45}\] Since \(F_{c}\) is not irreducible when restricted to this Heisenberg Lie algebra, we need to construct more operators. Since \([\alpha_{k},b(z)]=-z^{k}b(z)\), one finds that \[b(z)=V_{-}(z)^{-1}\Theta(z)V_{+}(z), \tag{1.46}\] where (cf.
[26] or Subsection 3.6.2 of this paper for details) \[V_{-}(z)=\exp\left(-2\sum_{k<0,\,\text{odd}}\frac{\alpha_{k}}{k}z^{-k}\right),\qquad V_{+}(z)=\exp\left(-2\sum_{k>0,\,\text{odd}}\frac{\alpha_{k}}{k}z^{-k}\right), \tag{1.47}\] and \[\Theta(z)=\sum_{k\in\frac{1}{2}+\mathbb{Z}}\theta_{k}z^{-k-\frac{1}{2}}:=V_{-}(z)^{-1}b(z)V_{+}(z)^{-1}.\] One has \[\Theta(z)|0\rangle=V_{-}(z)^{-1}\sum_{i>0}b_{-i}z^{i-\frac{1}{2}}|0\rangle\in F_{c}[[z]],\] and \[\Theta(y)\Theta(-z)+\Theta(-z)\Theta(y)=2z^{\frac{1}{2}}\partial_{z}(z^{\frac{1}{2}}\delta(y-z)).\] We construct an isomorphism \[\sigma_{1}:F_{c}\xrightarrow{\sim}B_{1}=\mathbb{C}[t_{1},t_{3},\dots,\xi_{\frac{1}{2}},\xi_{\frac{3}{2}},\dots], \tag{1.48}\] where the \(\xi_{j}\) are anti-commuting variables, called the bosonization of type \(c\), which is uniquely determined by the following properties: \[\begin{gathered}\sigma_{1}(|0\rangle)=1,\ \sigma_{1}\alpha_{-j}\sigma_{1}^{-1}=\frac{1}{2}jt_{j},\ \sigma_{1}\alpha_{j}\sigma_{1}^{-1}=-\frac{\partial}{\partial t_{j}},\\ \sigma_{1}\theta_{-i}\sigma_{1}^{-1}=(-1)^{i+\frac{1}{2}}2i\xi_{i},\ \sigma_{1}\theta_{i}\sigma_{1}^{-1}=\frac{\partial}{\partial\xi_{i}},\end{gathered} \tag{1.49}\] and we find \[\sigma_{1}b(z)\sigma_{1}^{-1}=e^{z\circ t}\sum_{i\in\frac{1}{2}+\mathbb{Z}_{\geq 0}}\left(\frac{\partial}{\partial\xi_{i}}z^{-i-\frac{1}{2}}-2i\xi_{i}(-z)^{i-\frac{1}{2}}\right)e^{2(z^{-1}\circ\tilde{\partial}_{t})}. \tag{1.50}\] The isomorphism (1.48) is equivalent to the following obvious identity for the Weyl module \(F_{c}\), when comparing the generators of the algebra \(C\ell_{c}\) and the operators that appear in the vertex operator (1.50): \[\dim_{q}F_{c}=\prod_{j\in\frac{1}{2}+\mathbb{Z}_{\geq 0}}\frac{1}{1-q^{j}}=\prod_{j\in\mathbb{Z}_{\geq 1,odd}}\frac{1}{1-q^{j}}\prod_{j\in\frac{1}{2}+\mathbb{Z}_{\geq 0}}(1+q^{j}), \tag{1.51}\] where the energy decomposition is defined by energy \(|0\rangle=0\), energy \(b_{j}=-j\), hence energy \(\alpha_{k}=-k\), and energy \(\theta_{j}=-j\). Equation (1.12) turns under \(\sigma_{1}\) into [26]: \[\text{Res}_{z=0}\sum_{i,j\in\frac{1}{2}+\mathbb{Z}_{\geq 0}}e^{z\circ(t-\bar{t})}e^{2(z^{-1}\circ(\tilde{\partial}_{t}-\tilde{\partial}_{\bar{t}}))}\left(\frac{\partial}{\partial\xi_{i}}z^{-i-\frac{1}{2}}-2i\xi_{i}(-z)^{i-\frac{1}{2}}\right)\times\] \[\left(\frac{\partial}{\partial\overline{\xi}_{j}}(-z)^{-j-\frac{1}{2}}-2j\overline{\xi}_{j}(z)^{j-\frac{1}{2}}\right)\tau(t,\xi)\tau(\overline{t},\overline{\xi})dz=0.\] To construct the dressing operator \(P\) in the 1-component \(x=c\) case, we write, see [26], \[\tau(t,\xi)=\tau_{0}(t)+\sum_{m\in\frac{1}{2}+\mathbb{Z}_{>0}}\tau_{m}(t)\xi_{\frac{1}{2}}\xi_{m}+\text{terms of degree}>2\text{ in the $\xi_{j}$'s},\] and let \[P(t,z)=\frac{1}{\tau_{0}(t)}e^{2(z^{-1}\circ\tilde{\partial}_{t})}\left(\tau_{0}(t)-\sum_{m\in\frac{1}{2}+\mathbb{Z}_{>0}}\!\!\!\tau_{m}(t)z^{-m-\frac{1}{2}}\right).\] Then \(L\), defined by (1.24), satisfies the Lax-Sato equations (1.26), but, as in the \(x=b\) case, only for odd \(k\). Furthermore, \(L\) satisfies the extra condition (see [8] or [26]) \(L^{*}=-L\). For the 1-component \(x=d\) case, \(\alpha_{d}(z)=\sum_{n\in\mathbb{Z}_{even}}\alpha_{n}z^{-n-1}\), where \[[\alpha_{m},\alpha_{n}]=\frac{m}{2}\delta_{m,-n},\quad\alpha_{j}|0\rangle=0\quad\text{for $j\geq 0$}. \tag{1.52}\] Hence we have a Heisenberg Lie algebra, with respect to which \(F_{d}\) splits into the direct sum \(F_{d}=\bigoplus_{i\in\mathbb{Z}}F^{(i)}\), where \(\alpha_{0}f=if\) for \(f\in F^{(i)}\).
The Heisenberg Lie algebra module \(F^{(i)}\) is irreducible and has the highest weight vector \[|i\rangle=\begin{cases}\phi_{2i+\frac{3}{2}}\phi_{2i-\frac{1}{2}}\cdots\phi_{ -\frac{5}{2}}\phi_{-\frac{1}{2}}|0\rangle&i<0,\\ \phi_{-2i+\frac{1}{2}}\phi_{-2i+\frac{5}{2}}\cdots\phi_{-\frac{7}{2}}\phi_{- \frac{3}{2}}|0\rangle&i\geq 0,\end{cases}\] and \(\alpha_{0}|i\rangle=i|i\rangle\). This allows one to construct an isomorphism \[\sigma_{1}:F_{d}\xrightarrow{\sim}B_{1}=\mathbb{C}[q,q^{-1},t_{1},t_{2},\ldots], \tag{1.53}\] called the bosonization of type \(d\), which is uniquely determined by the following properties: \[\sigma_{1}(|i\rangle)=q^{i},\ \sigma_{1}\alpha_{0}\sigma_{1}^{-1}=q\frac{ \partial}{\partial q},\ \sigma_{1}\alpha_{2n}\sigma_{1}^{-1}=\frac{\partial}{\partial t_{n}}\text{ and }\sigma_{1}\alpha_{-2n}\sigma_{1}^{-1}=nt_{n},\text{ for }n\in \mathbb{Z}_{>0}. \tag{1.54}\] Since \([\alpha_{2k},\phi(z)]=-z^{2k}\phi(-z)\), we cannot find a vertex operator formulation of the field \(\phi(z)\). Instead, we define the Heisenberg eigenfunctions \[\phi^{\pm}(z):=\frac{\phi(z)\mp\phi(-z)}{2},\quad\text{so that }[\alpha_{2k},\phi^{ \pm}(z)]=\pm z^{2k}\phi^{\pm}(z). \tag{1.55}\] These fermionic fields can be identified with the following vertex operators \[\sigma_{1}\phi^{\pm}(z)\sigma_{1}^{-1}=q^{\pm 1}z^{-\frac{1}{2}\pm\frac{1}{2} \pm 2q\frac{\partial}{\partial q}}e^{\pm z^{2}\cdot t}e^{\mp z^{-2}\cdot\tilde{ \partial}_{t}} \tag{1.56}\] With respect to \(so_{\infty,even}\) the module \(F\) splits into two irreducible modules, namely \[F_{d}=F^{\overline{0}}\oplus F^{\overline{1}}=\bigoplus_{i\in 2\mathbb{Z}}F^{(i )}\oplus\bigoplus_{i\in 1+2\mathbb{Z}}F^{(i)},\] with a highest weight vector \(|\overline{0}\rangle=|0\rangle\), respectively \(|\overline{1}\rangle=\phi_{-\frac{1}{2}}|0\rangle=|-1\rangle\), such that \(\alpha_{j}|\overline{\epsilon}\rangle=-\epsilon\delta_{j,0}|\overline{\epsilon}\rangle\) for \(j\geq 0\), where \(\epsilon=0\) or \(1\). Thus the DKP hierarchy (1.9) turns into \[\operatorname{Res}_{z=0}\big{(}\phi^{+}(z)\tau\otimes\phi^{-}(z)\tau+\phi^{-} (z)\tau\otimes\phi^{+}(z)\tau\big{)}\,dz=0, \tag{1.57}\] with \(\tau\in F^{\overline{0}}\) (the same equation holds for \(\tau\in F^{\overline{1}}\)). Applying \(\sigma_{1}\) to this equation gives the charged version of the DKP hierarchy [12], [16]: \[\begin{split}\operatorname{Res}_{z=0}\Bigl{(}& z^{n-m-2}e^{z\cdot(t-\overline{t})}e^{z^{-1}\cdot(\tilde{\partial}_{ \overline{t}}-\tilde{\partial}_{t})}\tau_{n-1}(t)\tau_{m+1}(\overline{t})+\\ &+z^{m-n-2}e^{-z\cdot(t-\overline{t})}e^{-z^{-1}\cdot(\tilde{ \partial}_{\overline{t}}-\tilde{\partial}_{t})}\tau_{n+1}(t)\tau_{m-1}( \overline{t})\Bigr{)}dz=0,\end{split} \tag{1.58}\] where \(n\) and \(m\) are odd and \(\tau(t)=\sum_{k\in2\mathbb{Z}}\tau_{k}(t)q^{k}\). Note that this holds for \(F^{\overline{0}}\). In the case of \(F^{\overline{1}}\), \(n\) and \(m\) are even and \(\tau(t)=\sum_{k\in 1+2\mathbb{Z}}\tau_{k}(t)q^{k}\). The isomorphism (1.53) is equivalent to the following version of the Jacobi triple product identity: \[\dim_{q,t}F_{d}=\prod_{j\in\frac{1}{2}+2\mathbb{Z}_{\geq 0}}(1+tq^{j+1})(1+t^{ -1}q^{j})=\sum_{m\in\mathbb{Z}}t^{m}q^{\frac{2m^{2}+m}{2}}\prod_{j\in\mathbb{Z} _{>0}}\frac{1}{1-q^{2j}},\] where again energy \(|0\rangle=0\) and energy \(\phi_{j}=-j\). 
The two irreducible \(so_{\infty,even}\) representations correspond to \[\sum_{m\in 2\mathbb{Z}}t^{m}q^{\frac{2m^{2}+m}{2}}\prod_{j\in\mathbb{Z}_{>0}}\frac{1}{1-q^{2j}}\quad\text{and }\sum_{m\in 1+2\mathbb{Z}}t^{m}q^{\frac{2m^{2}+m}{2}}\prod_{j\in\mathbb{Z}_{>0}}\frac{1}{1-q^{2j}}.\] Since we have the appearance of two terms in the bilinear equation (1.58), this hierarchy can be rewritten only in terms of \(2\times 2\) matrix Lax-Sato equations (see the Introduction of [16] for more details). Namely, for \(\tau(t)=\sum_{j\in 2\mathbb{Z}}\tau_{j}q^{j}\), define, for each \(k\in\mathbb{Z}\), a monic \(2\times 2\) matrix pseudo-differential dressing operator \(P(k;t,\partial)\) with the symbol \[P(k;t,z)=\frac{1}{\tau_{k}(t)}\begin{pmatrix}e^{-z^{-1}\cdot\tilde{\partial}_{t}}\tau_{k}(t)&z^{-2}e^{(-z)^{-1}\cdot\tilde{\partial}_{t}}\tau_{k+2}(t)\\ z^{-2}e^{-z^{-1}\cdot\tilde{\partial}_{t}}\tau_{k-2}(t)&e^{(-z)^{-1}\cdot\tilde{\partial}_{t}}\tau_{k}(t)\end{pmatrix},\] and the Lax operator by \(L=L(k;t,\partial)=P(k;t,\partial)(E_{11}\partial-E_{22}\partial)P(k;t,\partial)^{-1}\). One also needs an auxiliary operator \(D(\partial)=D(k;t,\partial)\), defined by \(D(\partial)=P(\partial)(E_{11}-E_{22})P(\partial)^{-1}\). Then these operators, for each \(k\), clearly commute and satisfy the following Lax-Sato equations, see Subsection 4.4: \[\frac{\partial L}{\partial t_{i}}=[(L^{i}D)_{+},L],\quad\frac{\partial D}{\partial t_{i}}=[(L^{i}D)_{+},D].\] Furthermore, \(L\) and \(D\) satisfy the following additional conditions \[L(\partial)^{*}=(E_{11}-E_{22})L(\partial)(E_{11}-E_{22}),\quad D(\partial)^{*}=-(E_{11}-E_{22})D(\partial)(E_{11}-E_{22}).\] In order to obtain a \(1\times 1\) matrix dressing operator, one has to modify the bosonization, see [30] for this construction. However, that construction does not allow a reduction to an affine Lie algebra, therefore we will not describe this in the introduction. It appears as a special case of the construction in Subsection 4.4, viz. for \(r=0\) and \(s=1\). The multicomponent bosonizations in the \(x=b\), \(x=d\) and \(x=c\) cases are more involved. They can be constructed for an arbitrary pair of non-negative integers \(s\) and \(r\), such that \(r+s>0\). We refer for their constructions to Sections 4.1 and 4.2, and for the construction of the corresponding \(s,r\)-component BKP, DKP and CKP hierarchies to Section 4.3. The identity equivalent to the bosonizations in the \(s,r\)-component \(b\) (and \(d\)) cases is the product of \(s\) copies of the Jacobi triple product identity (1.20)-(1.21) and \(r\) copies of (1.41), while in the \(c\) case the corresponding identity is the product of \(r\) copies of (1.51) together with \(s\) copies of the following identity, which, as far as we know, is new (a finite-order check is sketched below), see Subsection 3.6.2, \[\prod_{i\in\mathbb{Z}_{>0}}\frac{1-q^{i}}{(1-tq^{i-\frac{1}{2}})(1-t^{-1}q^{i-\frac{1}{2}})}=\sum_{j,k=0}^{\infty}\frac{q^{\frac{k}{2}}t^{k}}{(1-q)\cdots(1-q^{k})}q^{jk}\frac{q^{\frac{j}{2}}t^{-j}}{(1-q)\cdots(1-q^{j})}. \tag{1.59}\] In the papers [5], [12], Date, Jimbo, Kashiwara and Miwa initiated the study of reductions of KP type hierarchies to affine Lie algebras. We generalize this to the multicomponent case, see Subsection 4.2. In the 1-component case, one fixes a positive integer \(N\) and lets \(\omega=e^{\frac{2\pi\sqrt{-1}}{N}}\) if \(x=a,b,c\), and \(\omega=e^{\frac{\pi\sqrt{-1}}{N}}\) if \(x=d\).
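As a consistency check of the identity (1.59), both sides can be compared to any finite order in \(q^{1/2}\) and \(t\). The following sympy sketch does this; the code variable x stands for \(q^{1/2}\), and the truncation order M is an arbitrary illustrative choice (expected output: 0).

```python
import sympy as sp

x, t = sp.symbols('x t')          # x stands for q^(1/2)
M = 6                             # compare the two sides through x^M

def qpoch(k):
    # (1-q)(1-q^2)...(1-q^k) with q = x^2; empty product (=1) for k = 0
    return sp.Mul(*[1 - x**(2*i) for i in range(1, k + 1)])

# left-hand side of (1.59); the product is truncated at i <= M+1, exact through x^M
lhs = sp.Mul(*[(1 - x**(2*i)) / ((1 - t*x**(2*i - 1)) * (1 - x**(2*i - 1)/t))
               for i in range(1, M + 2)])
# right-hand side of (1.59); the sum is truncated at j, k <= M+1, also exact through x^M
rhs = sum(x**k * t**k / qpoch(k) * x**(2*j*k) * x**j * t**(-j) / qpoch(j)
          for j in range(M + 2) for k in range(M + 2))

diff = sp.series(lhs - rhs, x, 0, M + 1).removeO()
print(sp.simplify(diff))          # expected output: 0
```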
Define the fields \[\begin{split} A_{k}(z)=:\psi^{+}(z)\psi^{-}(z\omega^{k}):,\\ B_{k}(z)=\frac{1}{2}:\tilde{\phi}(z)\tilde{\phi}(-z\omega^{k}):,\\ C_{k}(z)=\frac{1}{2}:b(z)b(-z\omega^{k}):,\\ D_{k}(z)=\frac{1}{2}:\phi(z)\phi(-z\omega^{k}):.\end{split} \tag{1.60}\] Then clearly, see (1.14), \(X_{0}(z)=\alpha_{x}(z)\), for \(X=A,B,C\) and \(D\), and the modes of the fields \(X_{k}(z)\) span a Lie algebra that contains the corresponding Heisenberg Lie algebra. In the \(x=a\) case, the modes of the fields \(A_{k}(z)\), \(k=0,\ldots,N-1\), span the affine Lie algebra \(\hat{gl}_{N}\), which is the central extension of the loop algebra \(gl_{N}(\mathbb{C}[t,t^{-1}])\). The diagonal action of an element \(g\in\hat{gl}_{N}\) on \(F_{a}\otimes F_{a}\) commutes not only with the Casimir operator \(S\), but also with the operators \[\mathrm{Res}_{z=0}z^{\ell N}\psi^{+}(z)\otimes\psi^{-}(z)dz,\quad\ell=1,2,\ldots,\] which means that an \(N\)-reduced tau-function \(\tau\), in the fermionic picture, also satisfies the equations \[\mathrm{Res}_{z=0}z^{\ell N}\psi^{+}(z)\tau\otimes\psi^{-}(z)\tau dz=0,\quad\ell=1,2,\ldots.\] This gives, in the bosonic picture, that \(\tau(t)\) satisfies not only (1.22), but all the following equations \[\mathrm{Res}_{z=0}z^{\ell N}e^{z\cdot(t-\overline{t})}e^{z^{-1}\cdot(\tilde{\partial}_{\overline{t}}-\tilde{\partial}_{t})}\tau(t)\tau(\overline{t})dz=0,\quad\ell=0,1,\ldots. \tag{1.61}\] This is the \(N\)-KdV hierarchy, the case \(N=2\) being the classical KdV hierarchy, see [6]. Note that the equation (1.61) for \(\ell=0\) is the KP hierarchy (1.22). It is well-known, see e.g. [5], that this is equivalent to the fact that the tau-function satisfies \[\frac{\partial\tau(t)}{\partial t_{\ell N}}=c_{\ell}\tau(t),\quad\ell=1,2,\ldots,\quad\text{with }c_{\ell}\in\mathbb{C}. \tag{1.62}\] If a KP tau-function \(\tau(t)\) satisfies (1.62), then it is straightforward to check that, when differentiating (1.61) for \(\ell=0\) by \(t_{kN}\), we get (1.61) for \(\ell=k\). The converse is also true, but not so straightforward, and we refer to Subsection 7.1 for a proof. Since the tau-function \(\tau(t)\) satisfies (1.62), the dressing operator \(P(t,\partial)\), whose symbol is given by (1.23), satisfies \(\frac{\partial P(t,\partial)}{\partial t_{\ell N}}=0\). Thus the second equation of (1.25) implies that \((L^{\ell N})_{-}=0\), which means that \(\mathcal{L}:=L^{N}\) is a monic \(N\)-th order differential operator. One has the following Lax-Sato equation for \(\mathcal{L}\): \[\frac{\partial\mathcal{L}}{\partial t_{k}}=[(\mathcal{L}^{\frac{k}{N}})_{+},\mathcal{L}],\quad k=1,2,\ldots. \tag{1.63}\] In the \(x=b\) case, the modes of the generating fields \(B_{k}(z)\) (see (1.60)) span the twisted affine Lie algebras \(\hat{gl}_{N}^{(2)}\) if \(N\) is odd, and \(\hat{so}_{N}^{(2)}\) if \(N\) is even. An \(N\)-reduced BKP tau-function satisfies \[\mathrm{Res}_{z=0}z^{\ell N}\tilde{\phi}(z)\tau\otimes\tilde{\phi}(-z)\tau\frac{dz}{z}=\frac{\delta_{\ell,0}}{2}\tau\otimes\tau,\quad\ell=0,1,\ldots. \tag{1.64}\] As in the \(x=a\) case, this implies that \((L^{kN})_{-}=0\). Thus, the differential operator \(\mathcal{L}=L^{N}\) satisfies (1.63). It satisfies the extra condition \(\mathcal{L}^{*}=(-1)^{N}\partial^{-1}\mathcal{L}\partial\). Recall that a BKP tau-function depends only on \(t_{k}\) for \(k\) odd. Hence, if \(N\) is odd, it is again straightforward to check that Equation (1.62) for \(\ell\) odd implies (1.64) for this same \(\ell\).
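As an illustration of (1.63) in the simplest case \(x=a\), \(N=2\): writing \(\mathcal{L}=L^{2}=\partial^{2}+u\), one checks directly that \((\mathcal{L}^{3/2})_{+}=\partial^{3}+\frac{3}{2}u\partial+\frac{3}{4}\frac{\partial u}{\partial t_{1}}\), so that (1.63) for \(k=3\) becomes, in this normalization, the classical KdV equation \[\frac{\partial u}{\partial t_{3}}=\frac{1}{4}\frac{\partial^{3}u}{\partial t_{1}^{3}}+\frac{3}{2}u\frac{\partial u}{\partial t_{1}}.\]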
The reduction in the \(x=c\) case gives the affine Lie algebra \(\hat{sp}_{N}\), if \(N\) is even, and \(\hat{gl}_{N}^{(2)}\) if \(N\) is odd (cf. [5]). An \(N\)-reduced CKP tau-function satisfies the equations (see Subsection 7.5): \[{\rm Res}_{z=0}z^{\ell N}b(z)\tau\otimes b(-z)\tau dz=0,\quad\ell=0,1,\ldots. \tag{1.65}\] As in the \(x=a\) and \(b\) cases, we find that \({\cal L}:=L^{N}\) is again a differential operator that satisfies (1.63), and the condition \({\cal L}^{*}=(-1)^{N}{\cal L}\). When \(x=d\), the reduction gives \(\hat{so}_{2N}\). The tau-function satisfies the equations \[{\rm Res}_{z=0}z^{2\ell N}\left(\phi^{+}(z)\tau\otimes\phi^{-}(z)\tau+\phi^{-}(z)\tau\otimes\phi^{+}(z)\tau\right)dz=0,\quad\ell=0,1,\ldots. \tag{1.66}\] As before, \({\cal L}=L^{N}\) is an \(N\)-th order (\(2\times 2\)-matrix) differential operator. One has the following Lax-Sato equations for \({\cal L}\) and \(D\): \[\frac{\partial{\cal L}}{\partial t_{k}}=[({\cal L}^{\frac{k}{N}}D)_{+},{\cal L}],\quad\frac{\partial D}{\partial t_{k}}=[({\cal L}^{\frac{k}{N}}D)_{+},D],\quad k=1,2,\ldots, \tag{1.67}\] where, as before, \(D(\partial)=P(\partial)(E_{11}-E_{22})P(\partial)^{-1}\). Note that when \(N\) is even, \(LD\) is also an \(N\)-th root of \({\cal L}\). The reductions in the multicomponent case for \(x=a\) (see e.g. [22] and [15]) are parametrized by the conjugacy classes of the Weyl group of \(gl_{N}\) or \(sl_{N}\), which is the permutation group \(S_{N}\). Conjugacy classes of \(S_{N}\) are in one to one correspondence with partitions \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{s})\) of \(N\), where the \(\lambda_{i}\) are the lengths of the cycles in the cycle decomposition of a permutation from the conjugacy class. Each partition \(\lambda\) of \(N\) into \(s\) non-zero parts gives a reduction of the \(s\)-component KP hierarchy, which we call the \(\lambda\)-KdV hierarchy. For instance, when \(N=2\), there are two partitions \((1,1)\) and \((2)\). The former leads to the AKNS hierarchy, while the latter leads to the KdV hierarchy. We can express the generating series of the affine Lie algebra \(\hat{gl}_{N}\) in terms of the generating series \[:\psi^{+a}(\omega_{a}^{k}z^{\lambda_{b}})\psi^{-b}(\omega_{b}^{\ell}z^{\lambda_{a}}):,\quad 1\leq a,b\leq s,\quad 0\leq k<\lambda_{a},\quad 0\leq\ell<\lambda_{b},\] where \(\omega_{a}=e^{\frac{2\pi\sqrt{-1}}{\lambda_{a}}}\) for \(a=1,\ldots,s\). Each part \(\lambda_{a}\) of the partition \(\lambda\) corresponds to a \(\hat{gl}_{\lambda_{a}}\), i.e. the modes of the fields \(:\psi^{+a}(\omega_{a}^{k}z^{\lambda_{a}})\psi^{-a}(\omega_{a}^{\ell}z^{\lambda_{a}}):\), with \(0\leq k,\ell<\lambda_{a}\), span \(\hat{gl}_{\lambda_{a}}\). A \((\lambda_{1},\lambda_{2},\ldots,\lambda_{s})\)-KdV tau-function satisfies the equations \[{\rm Res}_{z=0}\sum_{a=1}^{s}z^{p\lambda_{a}}\psi^{+a}(z)\tau\otimes\psi^{-a}(z)\tau dz=0,\quad p=0,1,2,\ldots, \tag{1.68}\] which for \(p=0\) is equation (1.33). Hence, after the \(s\)-component bosonization, we have for \(p=0,1,2,\ldots\), \[{\rm Res}_{z=0}\sum_{j=1}^{s}(-1)^{|\underline{k}+\underline{\ell}|_{j-1}}z^{p\lambda_{j}+k_{j}-\ell_{j}-2+2\delta_{js}}e^{z\cdot(t^{(j)}-\overline{t}^{(j)})}e^{z^{-1}\cdot(\tilde{\partial}_{\overline{t}^{(j)}}-\tilde{\partial}_{t^{(j)}})}\tau_{\underline{k}+\underline{e}_{s}-\underline{e}_{j}}(t)\tau_{\underline{\ell}+\underline{e}_{j}-\underline{e}_{s}}(\overline{t})=0, \tag{1.69}\] of which the \(p=0\) equation is (1.34).
As in the 1-component case, this reduction corresponds to taking a differential operator \({\cal L}\) instead of a pseudo-differential operator \(L\). However, in general the differential operator \({\cal L}\) is not a power of the pseudo-differential operator \(L\). For a general partition \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{s})\), the equation (1.69) for \(p=1\) implies that \[{\cal L}=\sum_{j=1}^{s}L^{\lambda_{j}}C^{(j)}\] is a differential operator, see Subsection 7.1 for more details. This operator satisfies the Lax-Sato equations \[\frac{\partial{\cal L}}{\partial t_{i}^{(j)}}=[({\cal L}^{\frac{i}{\lambda_{j}}}C^{(j)})_{+},{\cal L}],\quad i=1,2,\ldots,\] and the auxiliary operators \(C^{(k)}\) with \(1\leq k\leq s\) satisfy \[\frac{\partial C^{(k)}}{\partial t_{i}^{(j)}}=[({\cal L}^{\frac{i}{\lambda_{j}}}C^{(j)})_{+},C^{(k)}],\quad i=1,2,\ldots.\] As in the 1-component \(N\)-KdV case, the tau-function satisfies an additional constraint, viz. \[\sum_{a=1}^{s}\frac{\partial\tau_{\underline{k}}(t)}{\partial t_{\ell\lambda_{a}}^{(a)}}=c_{\ell}\tau_{\underline{k}}(t),\quad\mbox{for $\ell=1,2,\ldots,$ with $c_{\ell}\in\mathbb{C}$},\] and \(c_{\ell}\) is the same for all \(\tau_{\underline{k}}(t)\). See Subsection 7.1 for more details. For other classical Lie algebras, \(so_{N}\) and \(sp_{N}\), the reductions of the corresponding multicomponent hierarchies correspond to conjugacy classes of their Weyl groups \(W\) as well. Recall that, according to one of the definitions of \(W\) for \(so_{2n+1}\) and \(sp_{2n}\) (resp. \(so_{2n}\)), this group consists of all permutations of non-zero integers from the interval \([-n,n]\), such that \(\pi(-j)=-\pi(j)\) (in addition, the number of \(j\) such that \(j\pi(j)<0\) is a multiple of 4). Each such permutation is a product of disjoint positive and negative cycles; a cycle is called positive (resp. negative) if for \(j\) appearing in it, \(-j\) does not appear (resp. does appear). Reversing the signs of all entries of a positive cycle, we again obtain a positive cycle, called its twin. If we do this for a negative cycle, it does not change. Note also that the length of a negative cycle is always even. One associates to an element \(w\in W\) a pair of partitions \((\lambda,\mu)\) as follows. The partition \(\lambda=(\lambda_{1},\ldots,\lambda_{s})\) consists of the lengths of positive cycles, where only one of the twin cycles is counted. The partition \(\mu=(\mu_{1},\ldots,\mu_{r})\) consists of the halves of the lengths of negative cycles. The construction of the reduced hierarchies for the affine Lie algebra \(\hat{so}_{N}\) is based on \(s\) pairs of charged fermionic fields \(\psi^{\pm a}(z)\), \(a=1,\ldots,s\), and \(r\) twisted neutral fermionic fields \(\tilde{\phi}^{s+b}(z)\), \(b=1,\ldots,r\). For \(\hat{so}_{2n+1}\), there is an additional field \(\sigma(z)\), which we do not bosonize using vertex operators. The reason for this is that the elements of the Heisenberg Lie algebra, corresponding to this field, do not lie in \(\hat{so}_{2n+1}\). The simpler case is \(\hat{so}_{2n}\), i.e. the \(x=d\) case (see Subsection 7.2 for more details).
This affine Lie algebra is a subalgebra of \(d_{\infty}\) and the bilinear identities are, see (7.15), \[\begin{split}\operatorname{Res}_{z=0}&\left(\sum_{b=1}^{s}z^{p\lambda_{b}}\left(\psi^{+b}(z)\otimes\psi^{-b}(z)+\psi^{-b}(z)\otimes\psi^{+b}(z)\right)+\right.\\ &\qquad\qquad+\sum_{c=s+1}^{r+s}z^{2p\mu_{c-s}-1}\tilde{\phi}^{c}(z)\otimes\tilde{\phi}^{c}(-z)\right)(\tau\otimes\tau)dz=0,\quad\text{for $p=0,1,2,\ldots$.}\end{split} \tag{1.70}\] The equation for \(p=0\) is the \(s,r\)-component DKP hierarchy, while the equations for \(p=1,2,\ldots\) give the reduction to \(\hat{so}_{2n}\) associated to the pair of partitions \((\lambda,\mu)\), so that \(\lambda_{1}+\cdots+\lambda_{s}+\mu_{1}+\cdots+\mu_{r}=n\). Note that in this case \(r\) is always even. The \(s,r\)-bosonization is the isomorphism \[\sigma_{s,r}:F_{d}\xrightarrow{\sim}B_{s,r}=\mathbb{C}[q^{\underline{k}},\theta_{1},\ldots,\theta_{\frac{r}{2}},t_{1}^{(a)},t_{2}^{(a)},\ldots,t_{1}^{(b)},t_{3}^{(b)},\ldots\mid 1\leq a\leq s<b\leq r+s],\] where \(\underline{k}=(k_{1},\ldots,k_{s})\in\mathbb{Z}^{s}\), and the \(\theta_{j}\) are anticommuting variables. Applying \(\sigma_{s,r}\) to this expression and using the vertex operators (1.32) for the charged free fermionic fields, and (cf. (1.42)) \[\sigma_{s,r}\tilde{\phi}^{s+b}(z)\sigma_{s,r}^{-1}=R_{b}e^{z\circ t^{(s+b)}}e^{-2(z^{-1}\circ\tilde{\partial}_{t^{(s+b)}})},\quad R_{b}=\operatorname{const}\left(\frac{\partial}{\partial\theta_{\left[\frac{b}{2}\right]}}-(-1)^{b}\theta_{\left[\frac{b}{2}\right]}\right) \tag{1.71}\] for the twisted neutral free fermionic fields, we obtain, in a similar way as in the \(x=a\) case, the rather complex bilinear identities (7.16) on the (collection of) tau-function(s). In the \(x=b\) case we can embed the affine Lie algebra \(\hat{so}_{2n+1}\) in both \(b_{\infty}\) as well as \(d_{\infty}\), which gives different level \(1\) representations of this affine Lie algebra. The bilinear identity for the \(d_{\infty}\) embedding is similar to the expression (1.70), but one has to add the following term on the left-hand side: \(\sum_{j}\sigma_{j}\tau\otimes\sigma_{p-j}\tau\). Then applying \(\sigma_{s,r}\) gives the bilinear identities (7.29). Here the additional field \(\sigma(z)\) is bosonized as a generating series of anti-commuting variables \(\sigma_{-j}=\xi_{j}\) and \(\sigma_{j}=\frac{\partial}{\partial\xi_{j}}\), \(j>0\), with \(j\in\mathbb{Z}\) if \(r\) is odd, and \(j\in\frac{1}{2}+\mathbb{Z}\) if \(r\) is even. \(\sigma_{0}\) is special: it is the operator \(R_{r+1}\). Hence, in this case the tau-function (or rather the collection of tau-functions) depends not only on \(t_{i}^{(c)}\), for \(c=1,\ldots,r+s\), but also on these \(\xi_{j}\). The embedding in \(b_{\infty}\) leads to a similar bilinear identity as (7.29). The construction of the reduced hierarchy related to \(\hat{so}_{N}\) in terms of these fields suggests a \((2s+r)\times(2s+r)\) matrix-valued dressing operator. However, if \(r>1\), this dressing operator is not invertible. We solve this problem by considering a certain principal minor, and putting both the anticommuting variables \(\xi_{j}\) and the times \(t_{k}^{(s+b)}\), for \(b=1,\ldots,s\) or for \(b=1,\ldots,s-1\), equal to zero in the bilinear identity. This leads to two different approaches, one with a \(2s\times 2s\) matrix dressing operator, and the second one with a \((2s+1)\times(2s+1)\) matrix dressing operator.
Since the second one is rather involved, we shall only sketch the first one in this introduction; for the second one, and for more details on the first one, we refer to Section 4.4 and Subsections 7.2-7.4. In the first approach, in the \(x=b\) and \(d\) cases, we construct a \(2s\times 2s\) matrix dressing operator \(P(t,\partial)\) in a similar way as in the \(x=a\) case from the bilinear identity (1.70) for \(p=0\) (and the one in the \(\hat{so}_{2n+1}\) case, where one has the extra field \(\sigma(z)\)), by only using the terms related to the \(2s\) charged fields \(\psi^{\pm a}(z)\), \(a=1,\ldots,s\). The corresponding Lax operator \(L\) and the auxiliary operators \(D^{j}\) and \(E^{j}\) are the dressed by \(P\) operators \(\sum_{j=1}^{s}(E_{jj}-E_{s+j,s+j})\partial\), \(E_{jj}-E_{s+j,s+j}\) and \(E_{jj}+E_{s+j,s+j}\), respectively. The reduced Lax operator \({\cal L}\) is determined by the partition \(\lambda\), viz. \[{\cal L}=\sum_{j=1}^{s}L^{\lambda_{j}}E^{j},\] which, together with the auxiliary operators, satisfies the Lax-Sato equations \[\frac{\partial{\cal L}}{\partial t_{j}^{(a)}}=[({\cal L}^{\frac{j}{\lambda_{a}}}D^{a})_{+},{\cal L}],\quad\frac{\partial D^{k}}{\partial t_{j}^{(a)}}=[({\cal L}^{\frac{j}{\lambda_{a}}}D^{a})_{+},D^{k}],\quad\frac{\partial E^{k}}{\partial t_{j}^{(a)}}=[({\cal L}^{\frac{j}{\lambda_{a}}}D^{a})_{+},E^{k}].\] If \(\mu\neq\emptyset\), \({\cal L}\) is no longer a differential operator, but an operator of constrained KP type, i.e. its integral part has a certain simple form (given below). The partition \(\mu\) determines the integral part of this operator. For \(\hat{so}_{2n}\) we find \[{\cal L}={\cal L}_{+}+\sum_{i=1}^{r}\sum_{n=0}^{2\mu_{i}}W^{n}E_{ii}\partial^{-1}\tilde{W}^{2\mu_{i}-n},\] where \(W^{j}\) (resp. \(\tilde{W}^{j}\)) are certain \(2s\times r\) (resp. \(r\times 2s\)) matrix functions, depending on \(t_{i}^{(a)}\), for \(a=1,\ldots,s\), \(i=1,2,\ldots\). Note that if \(\mu=\emptyset\), then there is no integral part of \({\cal L}\) and \({\cal L}\) is a differential operator. For \(\hat{so}_{2n+1}\), one has the extra field \(\sigma(z)\). This field leads to some extra terms in the integral part of the operator, see Subsections 7.3-7.4 for details. Since the Weyl group of the Lie algebra \(sp_{2n}\) is the same as the Weyl group of \(so_{2n+1}\), the reduction to the affine Lie algebra \(\hat{sp}_{2n}\) depends, as in the \(\hat{so}_{2n+1}\) case, on the decomposition of \(n\) into two partitions \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{s})\) for the positive cycles and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{r})\) for the negative cycles, such that \(n=\lambda_{1}+\cdots+\lambda_{s}+\mu_{1}+\cdots+\mu_{r}\). Each part of \(\mu\) is related to a neutral symplectic bosonic field \(b^{c}(z)\), \(c=s+1,\ldots,s+r\), see (1.10) and (1.11). Each part of \(\lambda\) is related to a pair of _charged free symplectic bosons_ \(b^{\pm a}(z)\), \(a=1,\ldots,s\).
These fields satisfy \[[b^{\pm a}(z),b^{\pm c}(w)]_{-}=0,\quad[b^{+a}(z),b^{-c}(w)]_{-}=\delta_{ac} \delta(z-w)\quad a,c=1,\ldots,s.\] The bilinear identities in this case are, see (7.58), \[\begin{split}\operatorname{Res}_{z=0}&\big{\{} \sum_{a=1}^{s}& z^{p\lambda_{a}}\left(b^{-a}(z)\tau\otimes b^{+a}(z) \tau-b^{+a}(z)\tau\otimes b^{-a}(z)\tau\right)+\\ &+\sum_{c=s+1}^{r+s}z^{2p\mu_{c-s}}b^{c}(z)\tau\otimes b^{c}(-z )\tau\big{\}}dz=0.\end{split} \tag{1.72}\] Note that this is similar to equation (1.70), but now with the fermionic fields \(\psi^{\pm a}(z)\), \(\phi^{c}(z)\) replaced by the symplectic boson fields \(b^{\pm a}(z)\), \(b^{c}(z)\), respectively. Special cases, viz. \(s=1\) and \(r=0\) (resp. \(r=0\) and \(s=1\)) for only \(p=0\) were studied in [1] (resp. [26]). For the bosonization of these fields we refer to Subsection 3.7.2. These fields depend on commuting times \(t_{k}^{(a)}\) for \(1\leq a\leq s\) and \(k=1,2,\ldots\), and anticommuting variables \(\xi_{\ell}^{a}\), \(\ell\in\mathbb{Z}\). Again there is a whole collection of tau-functions, which are connected via a bilinear identity. As in the 1-component \(x=c\) case, we focus on \(\tau_{0}\) and construct a \((2s+r)\times(2s+r)\) matrix monic dressing operator \(P(t,\partial)\). This time \(P\) is invertible, and we construct the Lax operator (see Subsection 4.4) \(L\), and the auxillary operators \(C^{b}\) and \(D^{a}\) by dressing the operators \[\left(\sum_{a=1}^{s}\left(E_{aa}-E_{s+a,s+a}\right)\partial+\sum_{c=s+1}^{s+r }E_{s+c,s+c}\right)\partial,\text{ and }\ E_{bb},\ E_{aa}-E_{s+a,s+a},\] by \(P\). The reduced Lax operator \(\mathcal{L}\) is the operator that one gets by dressing the operator \[\sum_{a=1}^{s}\left(E_{aa}+(-1)^{\lambda_{a}}E_{s+a,s+a}\right)\partial^{ \lambda_{a}}+\sum_{c=s+1}^{s+r}E_{s+c,s+c}\partial^{2\mu_{c-s}}.\] It satisfies the following Lax-Sato equations \[\frac{\partial\mathcal{L}}{\partial t_{k}^{a}}=[(\mathcal{L}^{\frac{k}{ \lambda_{a}}}D^{a})_{+},\mathcal{L}],\quad\frac{\partial\mathcal{L}}{ \partial t_{k}^{c}}=[(\mathcal{L}^{\frac{k}{2\mu_{c-s}}}C^{s+c})_{+}, \mathcal{L}],\quad a=1,\ldots s,\ c=s+1,\ldots,s+r,\] and the Lax equations for the auxillary operators are \[\frac{\partial D^{b}}{\partial t_{k}^{a}}=[(\mathcal{L}^{\frac{k}{ \lambda_{a}}}D^{a})_{+},D^{b}],\quad\frac{\partial D^{b}}{\partial t_{k}^{c}}= [(\mathcal{L}^{\frac{k}{2\mu_{c-s}}}C^{s+c})_{+},D^{b}],\] \[\frac{\partial C_{s+d}}{\partial t_{k}^{a}}=[(\mathcal{L}^{\frac {k}{\lambda_{a}}}D^{a})_{+},C_{s+d}],\quad\frac{\partial C_{s+d}}{\partial t_ {k}^{c}}=[(\mathcal{L}^{\frac{k}{2\mu_{c-s}}}C^{s+c})_{+},C_{s+d}].\] The Lax operator \(\mathcal{L}\) obeys the constraint \[\mathcal{L}^{*}=M\mathcal{L}M,\quad\text{where }M=\sum_{a=1}^{s}(E_{a,s+a}+E_{s +a,a})+\sum_{b=1}^{r}E_{2s+b,2s+b}.\] Note that \(\mathcal{L}=\mathcal{L}^{*}\), if \(\lambda=\emptyset\). Below we describe the contents of the paper. In Section 2 we recall the description of the classical finite-dimensional Lie algebras, the corresponding affine Lie algebras, and the central extensions of the infinite matrix Lie algebras denoted by \(a_{\infty},b_{\infty},c_{\infty}\), and \(d_{\infty}\). By considering a vector space isomorphism between \(\mathbb{C}^{\infty}\) and \(\mathbb{C}^{n}\otimes\mathbb{C}[t,t^{-1}]\), we construct each classical affine Lie algebra as a subalgebra of \(a_{\infty},b_{\infty},c_{\infty}\) or \(d_{\infty}\). In Subsection 2.5 we establish a similar result for the twisted affine Lie algebras. This Section is based on [13], Chapter 7. 
In Section 3 we define the Clifford (resp. Weyl) algebra and their corresponding spin (resp. Weyl) module \(F_{x}\) for the Lie algebras \(x_{\infty}\) for \(x=a,b,d\) (resp \(c\)) of level \(1\) (resp \(-\frac{1}{2}\)). The generating series of the generators of these Clifford (resp. Weyl) algebras, called the (free) fermionic (resp. bosonic) fields, admit a vertex operator construction. Since the generating series of the natural bases of \(x_{\infty}\) can be expressed as the normal ordered product of two such fields, we obtain a vertex operator realization for these Lie algebras, called a bosonization of \(F_{x}\). In Subsection 3.5, and 3.7 we construct multicomponent bosonizations of \(F_{x}\). This idea was introduced by Date, Jimbo, Kashiwara and Miwa, in a series of papers [5]-[8] in the 1980's and further developed in [15] and [16]. The construction for \(c_{\infty}\), as far as we know, is new. Note that, while for \(x=a\) there exists an \(s\)-component bosonization for each positive integer \(s\), for \(x=b,c\) and \(d\) there exists an \((s,r)\)-component bosonization for each pair of non-negative \(s\) and \(r\), \(s+r\neq 0\). In Section 4 we use these bosonizations to define the hierarchies of equations of KP type on the tau-functions \(\tau\in F_{x}\), called the KP, BKP, DKP (resp. CKP) hierarchy for \(x=a,b,d\) (resp. \(c\)), in terms of the free fields of the corresponding Clifford (resp. Weyl) algebra. Using the multicomponent bosonizations, we obtain Hirota bilinear equations for the tau-function. Using ideas of [27], [5]-[8], [15] and [16], we turn these equations into some Lax-Sato equations for certain pseudo-differential operators. Again, the construction for \(x=c\) is new. There are various CKP hierarchies in the literature. Our hierarchy coincides with the one given in [1] (resp. [26]) for \((s,r)=(1,0)\) (resp. \(=(0,1)\)). The CKP hierarchies described in [21], [2] and [3] (see also [17]) are different. These authors study a certain restriction of the KP hierarchy, wich is related to the fixed poins under some involution. This gives special solutions of the KP hierarchy wich are fixed under this involution. The CKP tau-function is the square root of this special KP tau-function when all even flows are set to zero. In particular the level of their \(c_{\infty}\) representation is \(1\), while our (metaplectic) representation of \(c_{\infty}\) in \(F_{c}\) has level \(-\frac{1}{2}\). In Section 5 we begin the discussion of reductions of the hierarchies of KP type, which correspond to restrictions of the representations in \(F_{x}\) of the Lie algebras \(x_{\infty}\) (\(x=a,b,c,d\)) to their classical affine subalgebras \(\hat{\mathfrak{g}}\), where \(\mathfrak{g}\) is a classical Lie algebra of type \(X\). The simplest such reductions have been studied before in [5]-[8]. Our main observation is that for any conjugacy class in the Weyl group of \(\mathfrak{g}\) one can naturally associate such a reduction. For this we use the following construction introduced in [19] for any simple Lie algebra \(\mathfrak{g}\). Let \(\mathfrak{h}\) be its Cartan subalgebra, \(W\) its Weyl group. and \((\cdot|\cdot)\) a non-zero invariant bilinear form on \(\mathfrak{g}\). 
Pick \(w\) in a conjugacy class of \(W\), and let \(\tilde{w}\) be a lift of \(w\) such that \(\tilde{w}\) is conjugate in \(G\) to an element \(\sigma\) of order \(N\) of the form \(\sigma=e^{2\pi\sqrt{-1}h_{w}}\), where \(h_{w}\in\mathfrak{h}\) is orthogonal to the fixed point set of \(w\) in \(\mathfrak{h}\). Let \(\mathfrak{g}=\bigoplus_{\mathfrak{F}\in\mathbb{Z}/N\mathbb{Z}}\mathfrak{g}_{ \overline{j}}\) be the \(\mathbb{Z}/N\mathbb{Z}\) gradation of \(\mathfrak{g}\), defined by the eigenspaces of \(\sigma\), and let \(L(\mathfrak{g},\sigma)=\bigoplus_{j\in\mathbb{Z}}t^{j}\mathfrak{g}_{j\mod N} \subset\mathfrak{g}[t,t^{-1}]\) be the corresponding equivariant loop algebra. It is isomorphic to the loop algebra \(\mathfrak{g}[t,t^{-1}]\) via the map \(t^{j}g\mapsto g(j):=t^{-\mathrm{ad}\,h_{w}}(t^{\frac{j}{N}}g)\), for \(g\in\mathfrak{g}_{j\mod N}\). Then the subalgebra \(H_{w}=\mathrm{span}\{h(j)|\,h\in\mathfrak{h},\ j\in\mathbb{Z}\}\) is a maximal abelian subalgebra of \(\mathfrak{g}[t,t^{-1}]\). Let \(Z_{w}\) be the centralizer of \(H_{w}\) in the loop group \(G(\mathbb{C}(t,t^{-1})\). The main result of [19] is that a level \(1\) representation of \(\hat{\mathfrak{g}}\) is irreducible with respect to the central extensions and \(\hat{Z}_{w}\), provided that \(\mathfrak{g}\) is simply laced. Since \(\hat{H}_{w}\) is a Heisenberg Lie algebra, this leads to a "coordination" of the representation and to a vertex operator construction of \(\hat{\mathfrak{g}}\), associated to the conjugacy class of \(w\). In this section we recall the explicit form of this construction, which was given for \(\mathfrak{g}\) of types \(A\) and \(D\) in [22] and [23], and also for the twisted affine Lie algebras of this type in [24], and for \(\mathfrak{g}\) of type \(B\) in [25]. In this section we also give a vertex operator construction of the metaplectic representation (of level \(-\frac{1}{2}\)) of \(\hat{\mathfrak{g}}\) for \(\mathfrak{g}\) of type \(C\). This construction seems to be new. As a side application of the results of Section 5, we construct in Section 6 a map from the conjugacy classes of \(W\) to the nilpotent orbits of \(\mathfrak{g}\) for all classical Lie algebras \(\mathfrak{g}\). It coincides with Lusztig's map for typr \(C\), but is different for types \(B\) an \(D\). In Section 7 we present a systematic description of reductions of the KP, DKP, BKP and CKP hierarchies to the affine Lie algebras \(\hat{\mathfrak{g}}\) of type \(A\), \(D\), \(B\) and \(C\), associated to each conjugacy class of the Weyl group of \(\mathfrak{g}\). We obtain the corresponding equations on the tau-functions, construct the associated dressing operators, Lax operators and auxilary operators. The Lax operators are matrix differential operators for the Lie algebras of type \(A\) and \(C\); for the Lie algebras of type \(B\) and \(D\) they are matrix pseudo-differential operator of constrained KP type. We show that these Lax operators and auxilary operators satisfy certain Lax-Sato equations, which in the \(x=a\) case are equivalent to the bilinear equations on the tau-function. Although some of the above constructions, except for the case of type \(c\), have been known for many years, a systematic description of the hierarchies related to these classical affine Lie algebras, has not been given before. (The case of affine Lie algebras of type \(a\) was briefly mentioned in [15].) Finally, in Section 8, we establish this construction for the twisted affine Lie algebras of type \(A\) and \(D\). 
In both Sections 7 and 8 some cases related to specific conjugacy classes were known, but as far as we know, no systematic description of all cases has been given before. In conclusion of the introduction we would like to propose conjectures about solutions of bi-Hamiltonian hierarchies of evolution PDE, associated to Poisson vertex algebras \(W(\mathfrak{g},f)\) (see e.g. [9]), and tau-functions of the \(w\)-reduced XKP hierarchies, where \(\mathfrak{g}\) is a classical Lie algebra of type X=A, B, C, D, and \(w\) is a representative of a conjugacy class of the Weyl group \(W\) of \(\mathfrak{g}\). **Conjecture 1.1**: _Let \(F\) be a map from the set of conjugacy classes of \(W\) to nilpotent orbits of \(\mathfrak{g}\), constructed in Chapter 6, and let \(W(\mathfrak{g},F(w))\) be the classical affine \(W\)-algebra, associated to \(\mathfrak{g}\) and the orbit of \(F(w)\). Then tau-functions of the \(w\)-reduced XKP hierarchy, constructed in Chapter 7, give solutions of the bi-Hamiltonian hierarchy, associated with the Poisson vertex algebra \(W(\mathfrak{g},F(w))\)._ The validity of this conjecture for the reduced AKP=KP hierarchies was demonstrated in [4]. A related conjecture for an arbitrary simply laced simple Lie algebra \(\mathfrak{g}\) and its nilpotent element \(f\) (rather its adjoint orbit) is as follows: **Conjecture 1.2**: _Solutions of \(W(\mathfrak{g},f)\)-hierarchy can be obtained from the elements of the orbit of the highest weight vector of the \(w\)-realization of the basic representation of \(\hat{\mathfrak{g}}\), constructed in [19], where \(f\) is the image of \(w\) under the Kazhdan-Lusztig map (see [31])._ Throughout the paper the base field is \(\mathbb{C}\). ## 2 Lie algebras This section more or less follows [13], Chapter 7. ### Classical Lie algebras We denote by \(gl_{n}\) the Lie algebra over \(\mathbb{C}\), consisting of complex \(n\times n\)-matrices with the bracket \([a,b]=ab-ba\). It has as basis the matrices \(e_{ij}\), that have a \(1\) on the \(i,j\)-th entry and zeros elsewhere. Let \(v_{j}\) with \(1\leq j\leq n\) be the standard basis vector of \(\mathbb{C}^{n}\). Consider the following symmetric bilinear form on \(\mathbb{C}^{n}\): \[(v_{i},v_{j})_{so_{n}}=\delta_{i+j,n+1}. \tag{2.1}\] The Lie algebra \(so_{n}\) is the subalgebra of \(gl_{n}\), consisting of elements \(a\in gl_{n}\) that satisfy \[(av,w)_{so_{n}}=-(v,aw)_{so_{n}}. \tag{2.2}\] The elements \[e_{ij}-e_{n+1-j,n+1-i},\qquad 1\leq i,j\leq n, \tag{2.3}\] span \(so_{n}\). Note, that the trace of such an element is zero, hence \(so_{n}\) is a subalgebra of \(sl_{n}\). For later use, we introduce for \(1\leq i,j\leq\frac{n}{2}\) \[\begin{split}& e_{ij}^{+-}=e_{ij}-e_{n+1-j,n+1-i},\\ & e_{ij}^{++}=-e_{ji}^{++}=e_{i,n+1-j}-e_{j,n+1-i},\\ & e_{ij}^{--}=-e_{ji}^{--}=e_{n+1-i,j}-e_{n+1-j,i}.\end{split} \tag{2.4}\] and when \(n\) is odd, also \[e_{i}^{+}=e_{i,\frac{n+1}{2}}-e_{\frac{n+1}{2},n+1-i},\quad e_{i}^{-}=e_{n+1-i,\frac{n+1}{2}}-e_{\frac{n+1}{2},i}. \tag{2.5}\] Consider the following skewsymmetric bilinear form on \(\mathbb{C}^{2n}\): \[(v_{i},v_{j})_{sp_{2n}}=(-1)^{i}\delta_{i+j,2n+1}, \tag{2.6}\] then the elements \(a\in gl_{2n}\) that satisfy condition (2.2), where we replace the symmetric bilinear form \((\cdot,\cdot)_{so_{n}}\) by the skewsymmetric bilinear form \((\cdot,\cdot)_{sp_{2n}}\), form the subalgebra \(sp_{2n}\). The elements \[(-1)^{j}e_{ij}-(-1)^{i}e_{2n+1-j,2n+1-i},\qquad 1\leq i,j\leq 2n, \tag{2.7}\] span \(sp_{2n}\). Again, \(sp_{2n}\) is a subalgebra of \(sl_{2n}\). 
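To make (2.6) and (2.7) concrete, the following sympy sketch (with the illustrative choice \(n=2\)) verifies that every element of the form (2.7) is skew-adjoint with respect to the form (2.6), i.e. lies in \(sp_{4}\); the matrix G below is the Gram matrix of (2.6).

```python
import sympy as sp

n = 2                       # illustrative choice; any n works the same way
N = 2 * n

def E(i, j):                # elementary matrix e_{ij}, indexed 1..N as in the text
    m = sp.zeros(N, N)
    m[i - 1, j - 1] = 1
    return m

# Gram matrix of the skewsymmetric form (2.6): (v_i, v_j) = (-1)^i delta_{i+j, 2n+1}
G = sp.Matrix(N, N, lambda i, j: (-1) ** (i + 1) if (i + 1) + (j + 1) == N + 1 else 0)

for i in range(1, N + 1):
    for j in range(1, N + 1):
        # element (2.7): (-1)^j e_{ij} - (-1)^i e_{2n+1-j, 2n+1-i}
        a = (-1) ** j * E(i, j) - (-1) ** i * E(N + 1 - j, N + 1 - i)
        # (a v, w) = -(v, a w) for all v, w is equivalent to a^T G + G a = 0
        assert a.T * G + G * a == sp.zeros(N, N)

print("all elements of the form (2.7) lie in sp_4")
```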
We also introduce for \(1\leq i,j\leq n\) \[\begin{split} e_{ij}^{+-}&=e_{ij}-(-1)^{i+j}e_{2n+1-j,2n+1-i},\\ e_{ij}^{++}&=(-1)^{i+j}e_{ji}^{++}=e_{i,2n+1-j}-(-1)^{i+j+1}e_{j,2n+1-i},\\ e_{ij}^{--}&=(-1)^{i+j}e_{ji}^{--}=e_{2n+1-i,j}-(-1)^{i+j+1}e_{2n+1-j,i}.\end{split} \tag{2.8}\] For a classical Lie algebra \(\mathfrak{g}\), we choose the non-degenerate symmetric invariant bilinear form as follows: \[(g|h)=\gamma_{x}\mathrm{Tr}\,gh,\quad g,h\in\mathfrak{g}, \tag{2.9}\] where \(x=a\), \(b\), \(c\) or \(d\), corresponding to the cases \(\mathfrak{g}=gl_{n}\), \(so_{2n+1}\), \(sp_{2n}\) or \(so_{2n}\), respectively, and \[\gamma_{x}=1\quad\text{if $x=a$ or $c$, and $\gamma_{x}=\frac{1}{2}$ if $x=b$ or $d$.}\] This is the standard choice, for which the square of the length of a long root is 2. The standard _Cartan subalgebra_ \(\mathfrak{h}\) of a classical Lie algebra \(\mathfrak{g}\) consists of all the diagonal matrices in \(\mathfrak{g}\). It has as orthonormal basis, with respect to the bilinear form (2.9), the following elements (\(1\leq i\leq n\)): \[\begin{split}\mathfrak{g}=gl_{n}:&\quad\epsilon_{i}=e_{ii},\\ \mathfrak{g}=so_{2n+1}:&\quad\epsilon_{i}=e_{ii}-e_{2n+2-i,2n+2-i},\\ \mathfrak{g}=sp_{2n}:&\quad\epsilon_{i}=\frac{1}{\sqrt{2}}(e_{ii}-e_{2n+1-i,2n+1-i}),\\ \mathfrak{g}=so_{2n}:&\quad\epsilon_{i}=e_{ii}-e_{2n+1-i,2n+1-i}.\end{split} \tag{2.10}\] One has the following _root space decomposition_ of \(\mathfrak{g}\): \[\mathfrak{g}=\mathfrak{h}\oplus\bigoplus_{\alpha\in\Delta}\mathfrak{g}_{\alpha},\quad\text{where }\mathfrak{g}_{\alpha}=\{g\in\mathfrak{g}\,|\,[h,g]=(\alpha,h)g\text{ for }h\in\mathfrak{h}\}.\] Every root space \(\mathfrak{g}_{\alpha}\) is 1-dimensional and one has the following _set of roots_ \(\Delta\): \[\begin{split} gl_{n}:&\quad\{\pm(\epsilon_{i}-\epsilon_{j})\,|\,1\leq i<j\leq n\},\\ so_{2n+1}:&\quad\{\pm\epsilon_{k},\ \pm(\epsilon_{i}-\epsilon_{j}),\ \pm(\epsilon_{i}+\epsilon_{j})\,|\,1\leq k\leq n,\ 1\leq i<j\leq n\},\\ sp_{2n}:&\quad\{\pm 2\epsilon_{k},\ \pm(\epsilon_{i}-\epsilon_{j}),\ \pm(\epsilon_{i}+\epsilon_{j})\,|\,1\leq k\leq n,\ 1\leq i<j\leq n\},\\ so_{2n}:&\quad\{\pm(\epsilon_{i}-\epsilon_{j}),\ \pm(\epsilon_{i}+\epsilon_{j})\,|\,1\leq i<j\leq n\},\end{split} \tag{2.11}\] and the following root spaces: \[\begin{split} gl_{n}:&\quad\mathfrak{g}_{\epsilon_{i}-\epsilon_{j}}=\mathbb{C}e_{ij};\\ so_{2n+1}:&\quad\mathfrak{g}_{\epsilon_{i}}=\mathbb{C}e_{i}^{+},\ \mathfrak{g}_{-\epsilon_{i}}=\mathbb{C}e_{i}^{-},\ \mathfrak{g}_{\epsilon_{i}-\epsilon_{j}}=\mathbb{C}e_{ij}^{+-},\ \mathfrak{g}_{\epsilon_{i}+\epsilon_{j}}=\mathbb{C}e_{ij}^{++},\ \mathfrak{g}_{-\epsilon_{i}-\epsilon_{j}}=\mathbb{C}e_{ij}^{--};\\ sp_{2n}:&\quad\mathfrak{g}_{\epsilon_{i}-\epsilon_{j}}=\mathbb{C}e_{ij}^{+-},\ \mathfrak{g}_{\epsilon_{i}+\epsilon_{j}}=\mathbb{C}e_{ij}^{++},\ \mathfrak{g}_{-\epsilon_{i}-\epsilon_{j}}=\mathbb{C}e_{ij}^{--};\\ so_{2n}:&\quad\mathfrak{g}_{\epsilon_{i}-\epsilon_{j}}=\mathbb{C}e_{ij}^{+-},\ \mathfrak{g}_{\epsilon_{i}+\epsilon_{j}}=\mathbb{C}e_{ij}^{++},\ \mathfrak{g}_{-\epsilon_{i}-\epsilon_{j}}=\mathbb{C}e_{ij}^{--}.\end{split} \tag{2.12}\] Recall that the Weyl group \(W\) of \(\mathfrak{g}\) is the subgroup of \(GL(\mathfrak{h})\) generated by all reflections \(r_{\alpha}\), \(\alpha\in\Delta\). The elements of \(W\) act on the orthonormal basis, given by (2.10), of the Cartan subalgebra, by permuting these basis vectors for \(\mathfrak{g}=gl_{n}\).
For \(\mathfrak{g}=so_{2n+1}\) or \(sp_{2n}\), elements of the Weyl group permute these basis vectors and also change the signs of an arbitrary number of them. For \(\mathfrak{g}=so_{2n}\), only an even number of sign changes is allowed. Thus conjugacy classes of the Weyl group are parametrized by the cycle type for \(\mathfrak{g}=gl_{n}\), and by the positive and negative cycle type for the other \(\mathfrak{g}\), the number of negative cycles being even for \(\mathfrak{g}=so_{2n}\).

### Loop algebras and affine Lie algebras

Let \(\mathfrak{g}\) be a finite-dimensional Lie algebra. The associated loop algebra is defined as the Lie algebra \(\tilde{\mathfrak{g}}=\mathfrak{g}\otimes\mathbb{C}[t,t^{-1}]\) with the bracket \[[g\otimes t^{k},h\otimes t^{\ell}]=[g,h]\otimes t^{k+\ell},\quad g,h\in \mathfrak{g},\ k,\ell\in\mathbb{Z}.\] We will usually write \(t^{k}g\) in place of \(g\otimes t^{k}\). Given a non-degenerate symmetric invariant bilinear form \((\cdot|\cdot)\) on \(\mathfrak{g}\), one constructs a Lie algebra central extension \(\hat{\mathfrak{g}}=\tilde{\mathfrak{g}}\oplus\mathbb{C}K\) of \(\tilde{\mathfrak{g}}\) by the 1-dimensional center \(\mathbb{C}K\), with the bracket \[[t^{k}g+\lambda K,t^{\ell}h+\mu K]=t^{k+\ell}[g,h]+k\delta_{k,-\ell}(g|h)K.\] In the paper we shall consider \(\mathfrak{g}\) to be one of the classical Lie algebras \(gl_{n}\) (or \(sl_{n}\)), \(sp_{2n}\), \(so_{2n+1}\) and \(so_{2n}\), with the bilinear form on \(\mathfrak{g}\) chosen as in (2.9). The Lie algebra \(\hat{\mathfrak{g}}\) with this choice of bilinear form \((\cdot|\cdot)\) is called an _affine Lie algebra_. We can twist this construction by an automorphism \(\sigma\). Let \(\sigma\) be an automorphism of \(\mathfrak{g}\) of finite order \(N\), i.e. \(\sigma^{N}=\mathrm{id}\); it defines a \(\mathbb{Z}_{N}=\mathbb{Z}/N\mathbb{Z}\)-gradation \(\mathfrak{g}=\bigoplus_{j\in\mathbb{Z}_{N}}\mathfrak{g}_{\overline{j}}\), where \(\sigma(g_{\overline{j}})=e^{\frac{2\pi\sqrt{-1}\,j}{N}}g_{\overline{j}}\) for \(g_{\overline{j}}\in\mathfrak{g}_{\overline{j}}\). Define the twisted affine Lie algebra \(\hat{L}(\mathfrak{g},\sigma)\) as the following subalgebra of \(\hat{\mathfrak{g}}\): \[\hat{L}(\mathfrak{g},\sigma)=\mathbb{C}K\oplus\bigoplus_{j\in\mathbb{Z}}t^{j} \mathfrak{g}_{j\,\mathrm{mod}\,N}. \tag{2.13}\] The Lie algebras \(\hat{\mathfrak{g}}\) and \(\hat{L}(\mathfrak{g},\sigma)\) are isomorphic for any inner automorphism \(\sigma\) of \(\mathfrak{g}\), but not for an outer automorphism (see [13], Chapter 8). Let \(J_{n}=\sum_{j=1}^{n}e_{j,n+1-j}\in GL_{n}\) and let \(O\) be the matrix of the transposition of \(v_{n}\) and \(v_{n+1}\). Define \(\sigma\in\mathrm{Aut}\,\mathfrak{g}\) by \[\begin{split}\sigma(g)&=-\mathrm{Ad}\,J_{n}(g^{t}) =-J_{n}g^{t}J_{n},\quad\text{ for }g\in\mathfrak{g}=gl_{n}\quad\text{and}\\ \sigma(g)&=\mathrm{Ad}\,O(g)=OgO,\quad\text{for }g\in \mathfrak{g}=so_{2n}.\end{split} \tag{2.14}\] In both cases \(\sigma\) is an outer automorphism of order 2, hence we get the corresponding twisted affine Lie algebra \(\hat{\mathfrak{g}}^{(2)}:=\hat{L}(\mathfrak{g},\sigma)\), which is not isomorphic to \(\hat{\mathfrak{g}}\).
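For instance, for \(\mathfrak{g}=gl_{2}\) we have \(J_{2}=e_{12}+e_{21}\) and \(\sigma(e_{ij})=-e_{3-j,3-i}\), so that
\[(gl_{2})_{\overline{0}}=\mathbb{C}(e_{11}-e_{22}),\qquad(gl_{2})_{\overline{1}}=\mathbb{C}(e_{11}+e_{22})\oplus\mathbb{C}e_{12}\oplus\mathbb{C}e_{21},\]
and \(\hat{gl}_{2}^{(2)}=\mathbb{C}K\oplus\bigoplus_{j\in\mathbb{Z}}t^{j}(gl_{2})_{j\,\mathrm{mod}\,2}\).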
Note that if \(\mathfrak{g}=gl_{n}\), \(\sigma(e_{ij})=-e_{n+1-j,n+1-i}\), thus the fixed point set \((gl_{n})_{\overline{0}}\) is equal to exactly \(so_{n}\) as in (2.3), i.e., \[\begin{split} e_{ij}-e_{n+1-j,n+1-i}\in&(gl_{n})_{ \overline{0}},\quad\text{and}\\ e_{ij}+e_{n+1-j,n+1-i}\in&(gl_{n})_{\overline{1}}, \quad 1\leq i,j\leq n.\end{split} \tag{2.15}\] For \(\mathfrak{g}=so_{2n}\), \(e_{ij}-e_{2n+1-j,2n+1-i}\in(so_{2n})_{\overline{0}}\) for \(1\leq i,j\leq 2n\) with both \(i,j\neq n\) or \(n+1\). And \[\begin{split}& e_{in}-e_{n+1,2n+1-i}+e_{i,n+1}-e_{n,2n+1-i}\in(so_{ 2n})_{\overline{0}},\\ & e_{in}-e_{n+1,2n+1-i}-e_{i,n+1}+e_{n,2n+1-i}\in(so_{2n})_{ \overline{1}},\qquad 1\leq i\leq 2n,\end{split} \tag{2.16}\] in particular \(e_{n,n}-e_{n+1,n+1}\in(so_{2n})_{\overline{1}}\). Thus \((so_{2n})_{\overline{0}}=so_{2n-1}\). ### The classical infinite matrix algebras Let \(\overline{a}_{\infty}\) be the Lie algebra consisting of infinite matrices \((a_{ij})_{i,j\in\mathbb{Z}}\) such that \(a_{ij}\in\mathbb{C}\), \(a_{ij}=0\) when \(|i-j|>>0\). This Lie algebra has a non-trivial central extension \(a_{\infty}=\overline{a}_{\infty}\oplus\mathbb{C}K\) with the bracket \[[a+\lambda K,b+\mu K]=ab-ba+\omega(a,b)K,\quad a,b\in\overline{a}_{\infty}, \tag{2.17}\] where \(\omega\) is a two-cocycle, described below. Let \(E_{ij}\) be the matrix with a \(1\) on the \((i,j)\)-th entry and zeros elsewhere, then \(\omega\) is given by \[\omega(E_{ij},E_{ji})=-\omega(E_{ji},E_{ij})=1\quad\text{if $i\leq 0,j>0$}, \quad\text{and $\omega(E_{ij},E_{k\ell})=0$ otherwise}, \tag{2.18}\] which extends by bilinearity to the whole \(\overline{a}_{\infty}\). The Lie algebra \(a_{\infty}\) acts naturally on the space of column vectors \(\mathbb{C}^{\infty}=\bigoplus_{j\in\mathbb{Z}}\mathbb{C}e_{j}\), by \[E_{ij}e_{k}=\delta_{jk}e_{i},\quad K=0,\] where \(e_{j}\) is the column vector with a \(1\) on the \(j\)-th entry and zeros elsewhere. We define the other classical infinite matrix algebras \(\overline{b}_{\infty}\), \(\overline{c}_{\infty}\) and \(\overline{d}_{\infty}\), by introducing a bilinear form on \(\mathbb{C}^{\infty}\) as follows (cf. (2.1) and (2.2)) \[(e_{i},e_{j})_{b}=\delta_{i,-j},\quad(e_{i},e_{j})_{c}=(-1)^{i}\delta_{i,1-j},\quad\text{and $(e_{i},e_{j})_{d}=\delta_{i,1-j}$},\quad i,j\in\mathbb{Z}. \tag{2.19}\] Then for \(x=b,c,d\) \[\overline{x}_{\infty}=\{g\in\overline{a}_{\infty}\,|\,(gv,w)_{x}=-(v,gw)_{x} \text{ for all $v,w\in\mathbb{C}^{\infty}$}\},\] and its central extension is a subalgebra of \(a_{\infty}\): \[x_{\infty}=\overline{x}_{\infty}\oplus\mathbb{C}K,\] where the bracket is given by (2.17). In the \(b\), \(c\), \(d\) cases the elements \((i,j\in\mathbb{Z})\) \[\begin{split} B_{ij}&:=E_{-i,j}-E_{-j,i},\\ C_{i+\frac{1}{2},j-\frac{1}{2}}&:=(-1)^{j}E_{-i,j} -(-1)^{i}E_{1-j,1+i},\\ D_{i+\frac{1}{2},j-\frac{1}{2}}&:=E_{-i,j}-E_{1-j, 1+i},\end{split} \tag{2.20}\] respectively, span (if we allow infinite sums) the corresponding Lie algebra \(\overline{x}_{\infty}\). The Lie algebra \(\overline{a}_{\infty}\) has the standard _triangular decomposition_ \[\overline{a}_{\infty}=\overline{a}_{\infty}^{-}\oplus\overline{a}_{\infty}^{0 }\oplus\overline{a}_{\infty}^{+},\] in a sum of subalgebras, consisting of strictly lower, diagonal and strictly upper triangular matrices. It induces one on \(a_{\infty}\) by adding \(\mathbb{C}K\) to \(\overline{a}_{\infty}^{0}\). This triangular decomposition induces one on all \(x_{\infty}\). 
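As a simple illustration of the bracket (2.17) and the cocycle (2.18), take \(a=E_{01}\) and \(b=E_{10}\); then \(\omega(E_{01},E_{10})=1\), so that in \(a_{\infty}\)
\[[E_{01},E_{10}]=E_{00}-E_{11}+K,\]
while the same commutator computed in \(\overline{a}_{\infty}\) equals \(E_{00}-E_{11}\).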
### Affine Lie algebras as subalgebras of classical infinite matrix Lie algebras Let \(\mathfrak{g}=gl_{n}\), then the loop algebra \(\tilde{gl}_{n}\) naturally acts on the loop space \(\mathbb{C}[t,t^{-1}]^{n}=\mathbb{C}^{n}\otimes C[t,t^{-1}]\). Using \(v_{1},v_{2},\ldots v_{n}\), the standard basis of \(\mathbb{C}^{n}\), we can identify this loop space with \(\mathbb{C}^{\infty}\) as follows: \[e_{kn+j}=t^{-k}v_{j},\quad k\in\mathbb{Z},\ 1\leq j\leq n, \tag{2.21}\] so that an element \(t^{k}e_{ij}\in\tilde{gl}_{n}\) can be identified with an element of \(\overline{a}_{\infty}\) as \[t^{k}e_{ij}=\sum_{\ell\in\mathbb{Z}}E_{(\ell-k)n+i,\ell n+j}, \tag{2.22}\] producing an embedding \(\tilde{gl}_{n}\subset\overline{a}_{\infty}\). This induces an embedding \(\tilde{gl}_{n}\subset a_{\infty}\). (It is easy to check that the cocycles match.) Let \(\mathfrak{g}=so_{n}\). Then the loop algebra \(\tilde{so}_{n}\) naturally acts on the loop space \(\mathbb{C}[t,t^{-1}]^{n}\). We can use the identification (2.22) to obtain \[\begin{split} t^{k}e_{ij}-t^{k}e_{n+1-j,n+1-i}&= \sum_{\ell\in\mathbb{Z}}E_{(\ell-k)n+i,\ell n+j}-\sum_{\ell\in\mathbb{Z}}E_{( \ell-k+1)n+1-j,(\ell+1)n+1-i}\\ &=\sum_{\ell\in\mathbb{Z}}(E_{(\ell-k)n+i,\ell n+j}-E_{-\ell n+1 -j,-(\ell-k)n+1-i})\\ &=\sum_{\ell\in\mathbb{Z}}D_{-(\ell-k)n-i+\frac{1}{2},\ell n+j- \frac{1}{2}}.\end{split} \tag{2.23}\] Due to (2.20), this makes \(\tilde{so}_{n}\) a subalgebra of \(\overline{d}_{\infty}\). This embedding induces an embedding of the affine Lie algebra \(\tilde{so}_{n}\) in \(d_{\infty}\), with the cocycle \(\gamma_{d}\omega\), with \(\gamma_{d}=\frac{1}{2}\), where \(\omega\) is the restriction of the cocycle (2.18). The triangular decompositions of \(x_{\infty}\) induce the standard triangular decompositions [13] on the corresponding affine subalgebras. We can also choose a a different identification from (2.21) if \(n=2m+1\) is odd, namely, we identify \(\mathbb{C}[t,t^{-1}]^{2m+1}\) with \(\mathbb{C}^{\infty}\) via \[e_{k(2m+1)+j-m-1}=t^{-k}v_{j},\quad k\in\mathbb{Z},\ 1\leq j\leq 2m+1. \tag{2.24}\] Then we obtain the identification \[\begin{split} t^{k}& e_{ij}-t^{k}e_{2m+2-j,2m+2-i}= \\ &=\sum_{\ell\in\mathbb{Z}}(E_{(\ell-k)(2m+1)+i-m-1,\ell(2m+1)+j-m -1}-E_{(\ell-k)(2m+1)+m+1-j,\ell(2m+1)+m+1-i})\\ &=\sum_{\ell\in\mathbb{Z}}(E_{(\ell-k)(2m+1)+i-m-1,\ell(2m+1)+j-m -1}-E_{-\ell(2m+1)+m+1-j,(k-\ell)(2m+1)+m+1-i})\\ &=\sum_{\ell\in\mathbb{Z}}B_{-(\ell-k)(2m+1)-i+m+1,\ell(2m+1)+j-m -1}.\end{split} \tag{2.25}\] Due to (2.20), this makes \(\tilde{so}_{2m+1}\) a subalgebra of \(\overline{b}_{\infty}\). Recall, that \(\gamma_{b}=\frac{1}{2}\). This embedding induces an embedding of \(\hat{so}_{2m+1}\) in \(b_{\infty}\) with "non-standard" cocycle \[\frac{1}{2}\omega(t^{k}a,t^{\ell}b)=\frac{k}{2}\delta_{k,-\ell}\left(\operatorname {Tr}(ab)+\sum_{i=1}^{m}\operatorname{Tr}((e_{ii}-e_{2m+2-i,2m+2-i})[a,b])\right). \tag{2.26}\] Since the second term is a trivial cocycle, this non-standard cocycle is equivalent to \(\frac{1}{2}\omega\). Finally, let \(\mathfrak{g}=sp_{2n}\). 
Then the identification (2.22), with \(n\) replaced by \(2n\), gives \[\begin{split}(-1)^{j}t^{k}e_{ij}-(-1)^{i}t^{k}e_{2n+1-j,2n+1-i}&=\sum_{\ell\in\mathbb{Z}}(-1)^{j}E_{2(\ell-k)n+i,2\ell n+j}-\sum_{\ell\in\mathbb{Z}}(-1)^{i}E_{2(\ell-k+1)n+1-j,2(\ell+1)n+1-i}\\ &=\sum_{\ell\in\mathbb{Z}}((-1)^{j}E_{2(\ell-k)n+i,2\ell n+j}-(-1)^{i}E_{-2\ell n+1-j,-2(\ell-k)n+1-i})\\ &=\sum_{\ell\in\mathbb{Z}}C_{-2(\ell-k)n-i+\frac{1}{2},2\ell n+j-\frac{1}{2}}.\end{split} \tag{2.27}\] Due to (2.20), this makes \(\tilde{sp}_{2n}\) a subalgebra of \(\overline{c}_{\infty}\). This embedding induces an embedding of \(\hat{sp}_{2n}\) in \(c_{\infty}\) with the cocycle \(\omega\), so that the coefficient of \(\omega\) is \(\gamma_{c}=1\). The triangular decompositions of \(x_{\infty}\) induce triangular decompositions [13] of the classical affine Lie algebras \(\hat{\mathfrak{g}}\). Using (2.19) and the identifications (2.21) and (2.24) we obtain the following **Lemma 2.1**: _We have the following bilinear forms on the loop space \(\mathbb{C}[t,t^{-1}]^{N}\), restricted from \(\mathbb{C}^{\infty}\):_ \[\begin{split}(t^{k}v,t^{\ell}w)_{b}=&\delta_{k+\ell,1}(v,w)_{so_{2n+1}},\qquad\text{embedding (2.21), $N=2n+1$},\\ (t^{k}v,t^{\ell}w)_{c}=&\delta_{k+\ell,1}(v,w)_{sp_{2n}},\qquad\text{embedding (2.21), $N=2n$},\\ (t^{k}v,t^{\ell}w)_{d}=&\delta_{k+\ell,1}(v,w)_{so_{2n}},\qquad\text{embedding (2.21), $N=2n$},\\ (t^{k}v,t^{\ell}w)_{b}=&\delta_{k+\ell,0}(v,w)_{so_{2n+1}},\qquad\text{embedding (2.24), $N=2n+1$},\end{split}\] _such that for \(x=b,\ c,\) or \(d\) we have_ \[\hat{\mathfrak{g}}=\{g\in\hat{gl}_{N}\,|\,(gv,w)_{x}=-(v,gw)_{x}\text{ for all }v,w\in\mathbb{C}[t,t^{-1}]^{N}\}.\]

### Twisted affine Lie algebras as subalgebras of classical infinite matrix Lie algebras

In this subsection, we embed the twisted classical affine Lie algebras \(\hat{\mathfrak{g}}^{(2)}\), for \(\mathfrak{g}=gl_{n}\) and \(so_{2n}\), into either \(d_{\infty}\) (the former case with \(n\) even) or \(b_{\infty}\) (the other cases). We proceed as in the non-twisted case, and relate the basis \(e_{j}\in\mathbb{C}^{\infty}\), \(j\in\mathbb{Z}\), to a basis of a subspace of the loop space \((\mathbb{C}[t,t^{-1}])^{n}\) (resp. \((\mathbb{C}[t,t^{-1}])^{2n}\)) for \(\mathfrak{g}=gl_{n}\) (resp. \(\mathfrak{g}=so_{2n}\)), which is invariant under \(\hat{\mathfrak{g}}^{(2)}\). We choose a basis of \(\mathbb{C}^{n}\) (resp. \(\mathbb{C}^{2n}\)) consisting of eigenvectors of \(J_{n}\) (resp.
\(O\)), this gives the following identification of \(\mathbb{C}^{\infty}\) and the subspace of the loop space: \[gl_{2m}: e_{k(2m)+j-m}=c_{k}\frac{t^{-k}}{\sqrt{2}}(v_{j}+(-1)^{k}v_{2m+1-j}),\] \[gl_{2m+1}: e_{k(2m+1)+j-m-1}=c_{k}\frac{t^{-k}}{\sqrt{2}}(v_{j}+(-1)^{k}v_{2m +2-j}).\] \[so_{2n}: e_{2nk+i-n}=t^{-2k}v_{i},\quad e_{2nk+n-i}=t^{-2k}v_{2n+1-i}, \qquad i=1,\dots,n-1\] \[e_{2nk}=\frac{t^{-2k}}{\sqrt{2}}(v_{n}+v_{n+1}),\] \[e_{2nk+n}=(-1)^{k}\frac{t^{-2k-1}}{\sqrt{2}}(v_{n}-v_{n+1}),\] where \[c_{k}=\begin{cases}1&\text{if $k$ even},\\ \sqrt{-1}&\text{if $k$ odd}.\end{cases} \tag{2.28}\] Then we obtain the following identification: \[\hat{gl}_{2m}^{(2)}\subset d_{\infty}\text{:}\] \[t^{k}(e_{ij}-(-1)^{k}e_{2m+1-j,2m+1-i})=\sum_{\ell\in\mathbb{Z}}(c_{\ell-k})^{ -1}c_{\ell}D_{-2m(\ell-k)-i+m+\frac{1}{2},2m\ell+j-m-\frac{1}{2}}\text{;} \tag{2.29}\] \[\hat{gl}_{2m+1}^{(2)}\subset b_{\infty}\text{:}\] \[t^{k}(e_{ij}-(-1)^{k}e_{2m+2-j,2m+2-i})=\sum_{\ell\in\mathbb{Z}}(c_{\ell-k})^{ -1}c_{\ell}B_{-(2m+1)(\ell-k)-i+m+1,(2m+1)\ell+j-m-1}\text{;} \tag{2.30}\] \(\hat{so}^{(2)}_{2n}\subset b_{\infty}\): (\(1\leq i,j<n\)) \[t^{2k}(e_{ij}-e_{2n+1-j,2n+1-i}) =\sum_{\ell\in\mathbb{Z}}B_{-2n(\ell-k)+n-i,2n\ell+j-n},\] \[t^{2k}(e_{i,2n+1-j}-e_{j,2n+1-i}) =\sum_{\ell\in\mathbb{Z}}B_{-2n(\ell-k)+n-i,2n\ell+n-j},\] \[t^{2k}(e_{2n+1-i,j}-e_{2n+1-j,i}) =\sum_{\ell\in\mathbb{Z}}B_{-2n(\ell-k)+i-n,2n\ell+j-n},\] \[\frac{t^{2k}}{\sqrt{2}}(e_{in}+e_{i,n+1}-e_{n,2n+1-i}-e_{n+1,2n+1 -i}) =\sum_{\ell\in\mathbb{Z}}B_{-2n(\ell-k)+n-i,2n\ell},\] \[\frac{t^{2k}}{\sqrt{2}}(e_{2n+1-i,n}+e_{2n+1-i,n+1}-e_{n,i}-e_{n +1,i}) =\sum_{\ell\in\mathbb{Z}}B_{-2n(\ell-k)+i-n,2n\ell},\] \[\frac{t^{2k+1}}{\sqrt{2}}(e_{in}-e_{i,n+1}+e_{n,2n+1-i}-e_{n+1,2n +1-i}) =(-1)^{k}\sum_{\ell\in\mathbb{Z}}B_{-2n(\ell-k)+n-i,2n\ell+n},\] \[\frac{t^{2k+1}}{\sqrt{2}}(e_{2n+1-i,n}-e_{2n+1-i,n+1}+e_{n,i}-e_{ n+1,i}) =(-1)^{k}\sum_{\ell\in\mathbb{Z}}B_{-2n(\ell-k)+i-n,2n\ell+n},\] \[t^{2k+1}(e_{nn}-e_{n+1,n+1}) =(-1)^{k}\sum_{\ell\in\mathbb{Z}}B_{-2n(\ell-k),2n\ell+n}. \tag{2.31}\] Note that in all cases the matrices are periodic of period \(4m,\ 4m+2,\ 2n\) for \(\hat{gl}^{(2)}_{2m}\), \(\hat{gl}^{(2)}_{2m+1}\), \(\hat{so}^{(2)}_{2n}\), respectively. **Remark 2.2**: _The automorphism \(\sigma\) induces an automorphism on \(d_{\infty}\) (resp. \(b_{\infty}\)), which is given by conjugation by \(J\) for \(\hat{gl}^{(2)}_{n}\), and \(O\) for \(\hat{so}^{(2)}_{2n}\), where_ \[J= \sum_{k\in\mathbb{Z}}(-1)^{k}\sum_{j=1}^{2m}E_{k(2m)+j-m,k(2m)+j-m },\quad n=2m,\] \[\big{(}\text{resp. }J= \sum_{k\in\mathbb{Z}}(-1)^{k}\sum_{j=1}^{2m+1}E_{k(2m+1)+j-m+1,k(2 m+1)+j-m+1},\quad n=2m+1\big{)}, \tag{2.32}\] \[O= \sum_{k\in\mathbb{Z}}\sum_{j=1}^{2n}(-1)^{\delta_{jn}}E_{2kn+j,2 kn+j}.\] _Note that the matrices \((a_{ij})_{i,j\in\mathbb{Z}}\) that correspond to an element of \(\hat{gl}^{(2)}_{n}\) satisfy_ \[a_{i+pn,j+pn}=J^{p}a_{i,j}J^{p}=c_{i}^{2p}c_{j}^{2p}a_{ij},\quad p\in\mathbb{Z}. \tag{2.33}\] As before, we can restrict the cocycle of \(x_{\infty}\) to \(\hat{\mathfrak{g}}^{(2)}\). 
One can check that this gives the cocycle (2.26) for \(\hat{g}l^{(2)}_{2m+1}\), and the following cocycle in the remaining cases: \[\hat{g}l^{(2)}_{2m}: \frac{1}{2}\omega(t^{k}a,t^{\ell}b)=\frac{k}{2}\delta_{k,-\ell} \left(\mathrm{Tr}(ab)+\sum_{i=1}^{m}\mathrm{Tr}((e_{ii}-e_{2m+1-i,2m+1-i})[a,b] )\right),\] \[\hat{so}^{(2)}_{2n}: \frac{1}{2}\omega(t^{k}a,t^{\ell}b)=\frac{k}{2}\delta_{k,-\ell} \left(\mathrm{Tr}(ab)+\sum_{i=1}^{n-1}\mathrm{Tr}((e_{ii}-e_{2n+1-i,2n+1-i})[a, b])\right).\] **Lemma 2.3**: _We have the following bilinear forms on the subspace of the loop space \(\mathbb{C}[t,t^{-1}]^{N}\), restricted from \(\mathbb{C}^{\infty}\):_ \[(t^{k}v,t^{\ell}w)_{\hat{g}l^{(2)}_{2n}}= (-1)^{k}\delta_{k,\ell}(v,w)_{so_{2n}}, N=2n,\] \[(t^{k}v,t^{\ell}w)_{\hat{g}l^{(2)}_{2m+1}}= (-1)^{k}\delta_{k,\ell}(v,w)_{so_{2n+1}}, N=2n+1,\] \[(t^{k}v,t^{\ell}w)_{\hat{so}^{(2)}_{2n}}= (-1)^{k}\delta_{k,\ell}(Ov,w)_{so_{2n}}, N=2n.\] ## 3 Representations of classical infinite matrix Lie algebras In this section we will recall the spin and Weyl modules for the classical infinite matrix algebras. These modules will be fundamental for the description of representations of the classical affine Lie algebras, used in this paper. ### The spin module for \(a_{\infty}\) Consider the Clifford algebra \(C\ell\) as a unital associative algebra with generators \(\psi^{\pm}_{i}\), \(i\in\frac{1}{2}+\mathbb{Z}\), and relations \[\psi^{\lambda}_{i}\psi^{\mu}_{j}+\psi^{\mu}_{j}\psi^{\lambda}_{i}=\delta_{ \lambda,-\mu}\delta_{i,-j},\quad i,j\in\frac{1}{2}+\mathbb{Z},\ \lambda,\mu=+\ \mathrm{or}\ -. \tag{3.1}\] Its corresponding spin module \(F=F_{a}\) is defined as the irreducible \(C\ell\)-module, which admits the vacuum vector \(|0\rangle\), subject to the relation \[\psi^{\pm}_{j}|0\rangle=0\ \mathrm{for}\ j>0.\] The well-known representation \(r\) of \(a_{\infty}\) on \(F\) is defined by \[r(K)=I,\quad r(E_{ij})=:\psi^{+}_{-i+\frac{1}{2}}\psi^{-}_{j-\frac{1}{2}}:, \quad i,j\in\mathbb{Z}, \tag{3.2}\] where as usual the normally ordered product, \(:\psi_{i}\psi_{j}:\) is defined as equal to \(\psi_{i}\psi_{j}\) if \(i\leq j\) and \(-\psi_{j}\psi_{i}\) otherwise. Define the charge on \(F\) by setting \[\mathrm{charge}(|0\rangle)=0\ \mathrm{and}\ \mathrm{charge}(\psi^{\pm}_{j})=\pm 1.\] Then \(F\) decomposes into charge sectors \[F=\bigoplus_{m\in\mathbb{Z}}F^{(m)},\quad\text{ where }F^{(m)}=\{v\in F\,|\, \text{charge}(v)=m\}. \tag{3.3}\] Each subspace \(F^{(m)}\) is an irreducible highest weight \(a_{\infty}\)-module [18], [13], for which \[|\pm m\rangle=\psi_{\frac{1}{2}-m}^{\pm}\psi_{\frac{3}{2}-m}^{\pm}\cdots\psi_{ -\frac{1}{2}}^{\pm}|0\rangle,\qquad m\in\mathbb{Z}_{\geq 0},\] is its highest weight vector, i.e. \[r(E_{ij})|\pm m\rangle=0\text{ for }i<j.\] The highest weight is given by \(r(E_{ii})|m\rangle=|m\rangle\) for \(1\leq i\leq m\), \(r(E_{ii})|-m\rangle=-|-m\rangle\) for \(1-m\leq i\leq 0\), if \(m\geq 1\), and \(r(E_{ii}|m\rangle=0\) in all other cases. The elements \[\psi_{i_{1}}^{+}\psi_{i_{2}}^{+}\cdots\psi_{i_{p}}^{+}\psi_{j_{1}}^{-}\psi_{j _{2}}^{-}\cdots\psi_{j_{p}}^{-}|m\rangle,\] with \(i_{1}<i_{2}<\cdots<i_{p}<-m\) and \(j_{1}<j_{2}<\cdots<j_{p}<m\) form a basis of \(F^{(m)}\). Finally, using the embedding (2.22), the spin module \(F\) restricted to \(a_{\infty}\) is \(\hat{gl}_{n}\)-module, such that each \(F^{(m)}\) is an irreducible level \(r(K)=1\) highest weight \(\hat{gl}_{n}\)-module [18]. 
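For example, for \(m=1\), using (3.1) and (3.2) one computes
\[r(E_{11})|1\rangle=\psi^{+}_{-\frac{1}{2}}\psi^{-}_{\frac{1}{2}}\,\psi^{+}_{-\frac{1}{2}}|0\rangle=\psi^{+}_{-\frac{1}{2}}\big(1-\psi^{+}_{-\frac{1}{2}}\psi^{-}_{\frac{1}{2}}\big)|0\rangle=|1\rangle,\]
in agreement with the description of the highest weight of \(F^{(1)}\).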
### The spin module for \(b_{\infty}\) and \(d_{\infty}\) Recall the bilinear forms (2.19) and the description of \(x_{\infty}\) for \(x=b\) and \(d\) in Subsection 2.3. Then the elements \(B_{jk}=E_{-j,k}-E_{-k,j}=-B_{kj}\) with \(j>k\), \(C_{j+\frac{1}{2},k-\frac{1}{2}}=(-1)^{k}E_{-j,k}-(-)^{j}E_{-k+1,j+1}=C_{k-\frac {1}{2},j+\frac{1}{2}}\), \(D_{j+\frac{1}{2},k-\frac{1}{2}}=E_{-j,k}-E_{-k+1,j+1}=-D_{k-\frac{1}{2},j+ \frac{1}{2}}\) with \(j\geq k\), given by (2.20) together with \(K\), form a basis of \(b_{\infty}\), \(c_{\infty}\), \(d_{\infty}\), respectively. Note that the representation \(r\) restricted to \(x_{\infty}\), is given by \[r(B_{jk}) =:\psi_{j+\frac{1}{2}}^{+}\psi_{k-\frac{1}{2}}^{-}-\psi_{k+\frac {1}{2}}^{+}\psi_{j-\frac{1}{2}}^{-}:, r(K) =I;\] \[r(C_{j+\frac{1}{2},k-\frac{1}{2}}) =(-1)^{k}:\psi_{j+\frac{1}{2}}^{+}\psi_{k-\frac{1}{2}}^{-}-(-1)^{ j}\psi_{k-\frac{1}{2}}^{+}\psi_{j+\frac{1}{2}}^{-}:, r(K) =I;\] \[r(D_{j+\frac{1}{2},k-\frac{1}{2}}) =:\psi_{j+\frac{1}{2}}^{+}\psi_{k-\frac{1}{2}}^{-}-\psi_{k-\frac {1}{2}}^{+}\psi_{j+\frac{1}{2}}^{-}:, r(K) =I.\] This suggests to define involutions \(\iota_{b}\) and \(\iota_{d}\) and an automorphism \(\iota_{c}\) of order \(4\) on the Clifford algebra \(C\ell\) (which respects the relations (3.1)): \[\iota_{b}(\psi_{j+\frac{1}{2}}^{+}) =\psi_{j-\frac{1}{2}}^{-}, \iota_{b}(\psi_{k-\frac{1}{2}}^{-}) =\psi_{k+\frac{1}{2}}^{+},\] \[\iota_{c}(\psi_{j+\frac{1}{2}}^{+}) =(-1)^{j}\psi_{j+\frac{1}{2}}^{-}, \iota_{c}(\psi_{k-\frac{1}{2}}^{-}) =(-1)^{k}\psi_{k-\frac{1}{2}}^{+}, \tag{3.4}\] \[\iota_{d}(\psi_{j+\frac{1}{2}}^{+}) =\psi_{j+\frac{1}{2}}^{-}, \iota_{d}(\psi_{k-\frac{1}{2}}^{-}) =\psi_{k-\frac{1}{2}}^{+}.\] This induces via \(r\) the following involution on \(\overline{a}_{\infty}\) \[\iota_{b}(E_{jk})=-E_{-k,-j},\quad\iota_{c}(E_{jk})=-(-1)^{j+k}E_{-k+1,-j+1}, \quad\text{and }\iota_{d}(E_{jk})=-E_{-k+1,-j+1},\] thus \[x_{\infty}=\{g\in a_{\infty}|_{\iota_{x}}(g)=g\}\quad\text{for $x=b,\ c$ or $d$}.\] Introduce the following elements of \(C\ell\): \[\begin{split}\tilde{\phi}_{i}&=\frac{\psi_{i+\frac{ 1}{2}}^{+}+\psi_{i-\frac{1}{2}}^{-}}{\sqrt{2}}\quad\text{for $i\in\mathbb{Z}$},\\ \phi_{i}&=\frac{\psi_{i}^{+}+\psi_{i}^{-}}{\sqrt{2}} \quad\text{for $i\in\frac{1}{2}+\mathbb{Z}$}.\end{split} \tag{3.5}\] They generate the Clifford algebra \(C\ell_{x}\), where \(x=b\) or \(d\), subject to relations \[\begin{split}\tilde{\phi}_{i}\tilde{\phi}_{j}+\tilde{\phi}_{j} \tilde{\phi}_{i}=\delta_{i,-j},\ i,j\in\mathbb{Z},\quad\text{if $x=b$},\\ \phi_{i}\phi_{j}+\phi_{j}\phi_{i}=\delta_{i,-j},\ i,j\in\frac{1}{ 2}+\mathbb{Z},\quad\text{if $x=d$}.\end{split} \tag{3.6}\] We define the spin module \(F_{x}\) over \(C\ell_{x}\) for \(x=b\) and \(d\), and its dual respectively, generated by the vacuum vector \(|0\rangle\) (resp. \(\langle 0|\)), subject to the conditions: \[\begin{split}\tilde{\phi}_{j}|0\rangle&=\phi_{j} |0\rangle=0\ \text{for $j>0$ (resp. $\langle 0|\tilde{\phi}_{j}=\langle 0|\phi_{j}=0$ for $j<0$)},\\ \tilde{\phi}_{0}|0\rangle&=\frac{1}{\sqrt{2}}|0 \rangle\ \text{(resp. $\langle 0|\tilde{\phi}_{0}=\frac{1}{\sqrt{2}}\langle 0|$)}.\end{split} \tag{3.7}\] The elements \(\tilde{\phi}_{j_{1}}\tilde{\phi}_{j_{2}}\cdots\tilde{\phi}_{j_{p}}|0\rangle\) (resp. \(\phi_{j_{1}}\phi_{j_{2}}\cdots\ \phi_{j_{p}}|0\rangle\)) with \(j_{1}<j_{2}<\cdots<j_{p}<0\) form a basis of \(F_{b}\) (resp. \(F_{d}\)). 
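As a consistency check of (3.6), note that by (3.1) and (3.5)
\[\tilde{\phi}_{0}^{2}=\tfrac{1}{2}\big(\psi^{+}_{\frac{1}{2}}+\psi^{-}_{-\frac{1}{2}}\big)^{2}=\tfrac{1}{2}\big(\psi^{+}_{\frac{1}{2}}\psi^{-}_{-\frac{1}{2}}+\psi^{-}_{-\frac{1}{2}}\psi^{+}_{\frac{1}{2}}\big)=\tfrac{1}{2},\]
which is (3.6) for \(i=j=0\) and is compatible with the normalization \(\tilde{\phi}_{0}|0\rangle=\frac{1}{\sqrt{2}}|0\rangle\) in (3.7).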
We define the spin representation \(r_{b}\) of \(b_{\infty}\) in \(F_{b}\), and \(r_{d}\) of \(d_{\infty}\) in \(F_{d}\), by \[\begin{split} r_{b}(B_{jk})&=:\tilde{\phi}_{j} \tilde{\phi}_{k}:\ \text{for $i,j\in\mathbb{Z}$},\quad r_{b}(K)=\frac{1}{2}I\ (=\gamma_{b}I),\\ r_{d}(D_{jk})&=:\phi_{j}\phi_{k}:\ \text{for $i,j\in \frac{1}{2}+\mathbb{Z}$},\quad r_{d}(K)=\frac{1}{2}I\ (=\gamma_{d}I).\end{split} \tag{3.8}\] Then \(F_{b}\) is an irreducible highest weight \(b_{\infty}\)-module with highest weight vector \(|0\rangle\), but \(F_{d}\) splits into two irreducible highest weight \(d_{\infty}\)-modules \(F_{d}^{\overline{0}}\oplus F_{d}^{\overline{1}}\), defined by the \(\mathbb{Z}_{2}\)-gradation given by \(\deg(|0\rangle)=\overline{0}\) and \(\deg(\tilde{\phi}_{i})=\overline{1}\). The highest weight vector of \(F_{d}^{\overline{0}}\) is \(|\overline{0}\rangle=|0\rangle\) and of \(F_{d}^{\overline{1}}\) is \(|\overline{1}\rangle=\sqrt{2}\tilde{\phi}_{-\frac{1}{2}}|0\rangle\). The embeddings (2.23) of \(\hat{so}_{n}\) in \(d_{\infty}\) (resp. (2.25) of \(\hat{so}_{2m+1}\) in \(b_{\infty}\)) give, via restriction of the \(x_{\infty}\) spin modules, irreducible highest weight representations of \(\hat{so}_{n}\) with highest weights \(\Lambda_{0}\) and \(\Lambda_{1}\) of \(F_{d}^{\overline{0}}\), respectively \(F_{d}^{\overline{1}}\) (resp. an irreducible highest weight representation of \(\hat{so}_{2m+1}\) with highest weight \(\Lambda_{m}\)) [23], [25]. Hereafter the \(\Lambda_{j}\) denote the fundamental weights of \(\hat{\mathfrak{g}}\). **Remark 3.1**: _If we choose another embedding for \(\tilde{so}_{2m}\), via the identification_ \[e_{2km+jmn}=t^{-k}v_{j},\quad k\in\mathbb{Z},\ 1\leq j\leq 2m,\] _we obtain an irreducible representations of \(\hat{so}_{2m}\) with other highest weights, viz. \(\Lambda_{m}\) and \(\Lambda_{m-1}\) of \(F_{d}^{\overline{0}}\), respectively \(F_{d}^{\overline{1}}\). Since, the description in this case is similar, one can use an outer automorphism of the Lie algebra to interchange these representations with the ones with highest weights \(\Lambda_{0}\) and \(\Lambda_{1}\). Hence, we will not describe this case here. In the twisted cases, the embeddings (2.30) and (2.31) in \(b_{\infty}\), give an irreducible representation of \(\hat{gl}^{(2)}_{2m+1}\), resp. \(\hat{so}^{(2)}_{2n}\), with highest weight \(\Lambda_{0}\), resp. \(\Lambda_{n-1}\). The embedding (2.29) of \(\hat{gl}^{(2)}_{2m}\) into \(d_{\infty}\) gives two irreducible highest weight representations of \(\hat{gl}^{(2)}_{2m}\) with highest weights \(\Lambda_{0}\) and \(\Lambda_{1}\) in \(F^{\overline{0}}_{d}\), respectively \(F^{\overline{1}}_{d}\). Again, as for \(\hat{so}_{2m}\), we can define also for \(\hat{so}^{(2)}_{2n}\) a different embedding, which produces the irreducible representation with highest weight \(\Lambda_{0}\). However, also here we can apply an outer automorphism wich interchanges this representation with the representation with highest weight \(\Lambda_{n-1}\)._ **Remark 3.2**: _The automorphisms \(\sigma\), which define the twisted affine lie algebras, induce an automorphism on \(x_{\infty}\) for \(x=b\) or \(d\). Viewing \(x_{\infty}\) as subalgebra of \(a_{\infty}\), we find that \(\sigma(E_{ij})=JE_{ij}J\) for \(\hat{\mathfrak{g}}^{(2)}=\hat{gl}^{(2)}_{n}\), where \(J\) is given by (2.32) and \(\sigma(E_{ij})=OE_{ij}O\) for \(\hat{\mathfrak{g}}^{(2)}=\hat{so}^{(2)}_{2n}\), where \(O\) is given by (2.32). 
We can extend this automorphism to an automorphism on the Clifford algebra \(C\ell_{d}\) (resp \(C\ell_{b}\)), which we denote by \(\sigma\), viz. \(\sigma(\phi_{j})=J\phi_{j}=\sigma_{j}\phi_{j}\), for \(j\in\frac{1}{2}+\mathbb{Z}\) (resp. \(\sigma(\tilde{\phi}_{i})=J\phi_{i}=\sigma_{i}\tilde{\phi}_{i}\) for \(i\in\mathbb{Z}\)). Note, that \(J\) acts as matrix multiplication from the left on a column vector \(\phi\). More explicitly,_ \[\hat{gl}^{(2)}_{2m}: \sigma(\phi_{k(2m)+j-m-\frac{1}{2}})=c_{k}^{2}\phi_{k(2m)+j-m- \frac{1}{2}},\quad k\in\mathbb{Z},\ j=1,\ldots,2n,\] \[\hat{gl}^{(2)}_{2m+1}: \sigma(\tilde{\phi}_{k(2m+1)+j-m+1})=c_{k}^{2}\tilde{\phi}_{k(2m +1)+j-m+1},\quad k\in\mathbb{Z},\ j=1,\ldots,2n+1,\] \[\hat{so}^{(2)}_{2n}: \sigma(\tilde{\phi}_{k(2n)+j})=\begin{cases}\tilde{\phi}_{k(2n) +j}&j=1,\ldots,n-1,n+1,\ldots 2n,\\ (-1)^{k}\tilde{\phi}_{k(2n)+n}&j=n,\end{cases}\ k\in\mathbb{Z}.\] _where the \(c_{k}\) are given by (2.28). Observe, for later calculations that \(\sigma_{i}=\sigma_{-i}\) This automorphism can be extended to the spin module by \(\sigma(|0\rangle)=|0\rangle\), since it leaves the annihilation space of \(|0\rangle\) (which is the subspace of \(\mathbb{C}^{\infty}\)consisting of all the elements \(\phi\) such that \(\phi|0\rangle=0\)) invariant._ ### The Weyl module for \(c_{\infty}\) (cf. [8]) Introduce free symplectic bosons \(b_{i}\), \(i\in\frac{1}{2}+\mathbb{Z}\), that form the Weyl algebra \(C\ell_{c}\) with commutation relations \[b_{i}b_{j}-b_{j}b_{i}=(-1)^{i-\frac{1}{2}}\delta_{i,-j},\quad i,j\in\frac{1}{2 }+\mathbb{Z}.\] Define the space \(F_{c}\) as an irreducible \(C\ell_{c}\)-module, generated by the vacuum vector \(|0\rangle\) (resp. its dual by \(\langle 0|\)) subject to conditions \[b_{j}|0\rangle=0\qquad(\mbox{resp. }\langle 0|b_{-j}=0)\quad\mbox{for }j>0.\] The elements \[b_{j_{1}}b_{j_{2}}\cdots b_{j_{p}}|0\rangle,\quad\mbox{with }j_{1}\leq j_{2}\leq \cdots j_{p}<0,\] form a basis of \(F_{c}\). Define a representation of \(c_{\infty}\) in \(F_{c}\), by \[r_{c}(C_{ij})=:b_{i}b_{j}:\text{ for }i,j\in\frac{1}{2}+\mathbb{Z},\quad r_{c}(K)=- \frac{1}{2}I. \tag{3.9}\] As in the case of \(F_{d}\), the \(c_{\infty}\)-module \(F_{c}\) splits in two irreducible modules with highest weights \(-\frac{1}{2}\Lambda_{0}\) and \(-\frac{3}{2}\Lambda_{0}+\Lambda_{1}\)in \(F_{c}^{\overline{0}}\), respectively \(F_{c}^{\overline{1}}\). The embedding (2.27) gives a highest weight representations of \(\hat{\wp}_{2n}\) with the same highest weight vectors as for \(c_{\infty}\). ### Generating fermionic and bosonic fields In view of Sections 3.1-3.3, introduce the following generating fields of fermions and symplectic bosons \[\begin{split}\psi^{\pm}(z)&=\sum_{j\in\frac{1}{2} +\mathbb{Z}}\psi_{j}^{\pm}z^{-j-\frac{1}{2}}\hskip 56.905512pt\text{(charged free fermions)},\\ \tilde{\phi}(z)&=\sum_{j\in\mathbb{Z}}\tilde{\phi}_{ j}z^{-j}\hskip 56.905512pt\text{(twisted neutral free fermions)},\\ b(z)&=\sum_{j\in\frac{1}{2}+\mathbb{Z}}b_{j}z^{-j -\frac{1}{2}}\hskip 14.226378pt\text{(neutral free symplectic bosons)},\\ \phi(z)&=\sum_{j\in\frac{1}{2}+\mathbb{Z}}\phi_{j}z^{-j -\frac{1}{2}}\hskip 56.905512pt\text{(neutral free fermions)}.\end{split} \tag{3.10}\] In terms of these fields we can define generating series for the representation of \(x_{\infty}\) in \(F_{x}\) for \(x=a,b,c,d\), viz. 
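For instance, since \(b_{\frac{1}{2}}b_{-\frac{1}{2}}-b_{-\frac{1}{2}}b_{\frac{1}{2}}=(-1)^{0}\delta_{\frac{1}{2},\frac{1}{2}}=1\) and \(b_{\frac{1}{2}}|0\rangle=0\), one has
\[b_{\frac{1}{2}}\,b_{-\frac{1}{2}}|0\rangle=b_{-\frac{1}{2}}b_{\frac{1}{2}}|0\rangle+|0\rangle=|0\rangle,\]
and, in contrast with the fermionic case, the vectors \(b_{-\frac{1}{2}}^{k}|0\rangle\), \(k\in\mathbb{Z}_{\geq 0}\), are linearly independent.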
\[\begin{split} a_{\infty}:\qquad A(y,z)&=\sum_{i,j \in\mathbb{Z}}r_{a}(E_{ij})y^{i-1}z^{-j}=:\psi^{+}(y)\psi^{-}(z):,\\ b_{\infty}:\qquad B(y,z)&=\sum_{i,j\in\mathbb{Z}}r _{b}(B_{ij})y^{-i}z^{-j}=:\tilde{\phi}(y)\tilde{\phi}(z):,\\ c_{\infty}:\qquad C(y,z)&=\sum_{i,j\in\frac{1}{2} +\mathbb{Z}}r_{c}(C_{ij})y^{-i-\frac{1}{2}}z^{-j-\frac{1}{2}}=:b(y)b(z):,\\ d_{\infty}:\qquad D(y,z)&=\sum_{i,j\in\frac{1}{2} +\mathbb{Z}}r_{d}(D_{ij})y^{-i-\frac{1}{2}}z^{-j-\frac{1}{2}}=:\phi(y)\phi(z):.\end{split} \tag{3.11}\] Unfortunately, the fermionic fields that define \(b_{\infty}\) and \(d_{\infty}\) are not so convenient if one wants to express them in terms of vertex operators. For that reason we define new fields. In the case of \(b_{\infty}\) and \(d_{\infty}\) let \[\tilde{\phi}^{1}(z)=\sum_{j\in\mathbb{Z}}\tilde{\phi}^{1}_{j}z^{-j},\] where \[\text{for }b_{\infty}: \tilde{\phi}^{1}_{2j}=\tilde{\phi}_{2j},\quad\tilde{\phi}^{1}_{2j+1 }=\sqrt{-1}\tilde{\phi}_{2j+1},\quad j\in\mathbb{Z}; \tag{3.12}\] \[\text{for }d_{\infty}: \tilde{\phi}^{1}_{0}=\frac{1}{\sqrt{2}}(\phi_{\frac{1}{2}}+\phi_ {-\frac{1}{2}}),\quad\tilde{\phi}^{1}_{-2j}=\phi_{-2j-\frac{1}{2}},\quad\tilde {\phi}^{1}_{-2j+1}=\phi_{-2j+\frac{1}{2}},\quad j\in\mathbb{Z}_{>0}.\] \[\tilde{\phi}^{1}_{2j}=\phi_{2j+\frac{1}{2}},\quad\tilde{\phi}^{1} _{2j-1}=\sqrt{-1}\phi_{2j-\frac{1}{2}},\quad(j\in\mathbb{Z}_{>0}),\quad\sigma_ {0}=\frac{\sqrt{-1}}{\sqrt{2}}(\phi_{\frac{1}{2}}-\phi_{-\frac{1}{2}}).\] Then these elements satisfy the anti-commutation relations (1.6) of the twisted neutral fermionic field of the Introduction: \[\tilde{\phi}^{1}_{j}\tilde{\phi}^{1}_{k}+\tilde{\phi}^{1}_{k}\tilde{\phi}^{1 }_{j}=(-1)^{j}\delta_{j,-k},\quad\sigma^{2}_{0}=\frac{1}{2}\quad\text{and }\sigma_{0}\tilde{\phi}^{1}_{j}+\tilde{\phi}^{1}_{j}\sigma_{0}=0\] and \[(\tilde{\phi}^{1}_{0}-\sqrt{-1}\sigma_{0})|0\rangle=0,\qquad\tilde{\phi}^{1}_{ j}|0\rangle=0,\quad j>0. \tag{3.13}\] Letting \(\tilde{\phi}^{1}(z)=\sum_{j\in z}\tilde{\phi}_{j}z^{-j}\), we find the following generating series for the basis of \[b_{\infty}: E^{11}(y,z)=:\tilde{\phi}^{1}(y)\tilde{\phi}^{1}(z):,\] \[d_{\infty}: E^{11}(y,z)=:\tilde{\phi}^{1}(y)\tilde{\phi}^{1}(z):,\quad \text{and }E^{1}(z)=:\tilde{\phi}^{1}(z)\sigma_{0}:.\] Note that in the latter case the modes of \(E^{11}(y,z)\) form a \(b_{\infty}\) subalgebra inside \(d_{\infty}\). ### Multicomponent realizations of \(F_{x}\) and \(x_{\infty}\) It is possible to relabel the generators of the Clifford and Weyl algebra to obtain multicomponent realizations of the module \(F_{x}\) for \(x=a,b,c\) and \(d\). In the case of \(a_{\infty}\), we can relabel in such a way that one obtains instead of one pair of fields \(\psi^{\pm}(z)\), \(s\) pairs of such fields, where \(s\) is a positive integer. The simplest such a relabeling is (1.27) (cf. [15], p. 3253): \[\psi^{\pm a}_{j}=\psi^{\pm}_{sj\pm\frac{1}{2}(s-2a+1)},\quad a=1,\ldots,s, \tag{3.14}\] for which the subscripts on the right-hand side are \(\equiv\pm\frac{1}{2}(1-2a)\). Then we have \[\psi^{+a}_{i}\psi^{+b}_{j}+\psi^{+b}_{j}\psi^{+a}_{i}=\psi^{-a}_{i}\psi^{-b}_{ j}+\psi^{-b}_{j}\psi^{-a}_{i}=0,\quad\psi^{+a}_{i}\psi^{-b}_{j}+\psi^{-b}_{j} \psi^{-a}_{i}=\delta_{ab}\delta_{i,-j}, \tag{3.15}\] and \[\psi^{\pm a}_{j}|0\rangle=0\qquad\text{for }j>0,\quad a=1,\ldots,s. \tag{3.16}\] We define the corresponding fields by \[\psi^{\pm a}(z)=\sum_{j\in\frac{1}{2}+\mathbb{Z}}\psi^{\pm a}_{j}z^{-j-\frac{1 }{2}},\qquad a=1,\ldots,s. 
\tag{3.17}\] As a result, we obtain \(s^{2}\) generating fields for \(a_{\infty}\), namely \[A^{ab}(y,z)=:\psi^{+a}(y)\psi^{-b}(z):,\quad\ 1\leq a,b\leq s.\] In the cases \(x=b\) and \(d\) the multicomponent realization of \(F_{x}\) and \(x_{\infty}\) can be given in terms of \(s\geq 0\) pairs of charged fermionic fields and \(r\geq 0\) neutral twisted fermionic fields, where \(s+r>0\). In addition one needs one fermion \(\sigma_{0}\) in case \(x=b\) (resp. \(d\)) if \(r\) is even (resp. odd). We relabel as follows. In principle if we relabel the \(\phi_{i}\) with \(i>0\) and the \(\tilde{\phi}_{i}\) with \(\mathrm{i}\geq 0\), the ones with a negative index are then fixed. Let's describe the case that \(x=b\). We do the relabeling in blocks first we relabel \(2s\) fermions to obtain the charged ones as follows \(\phi_{(2s+r)+j+1+q}=\psi_{j+\frac{1}{2}}^{-1},\phi_{(2s+r)+j+2+q}=\psi_{j+ \frac{1}{2}}^{+1},\phi_{(2s+r)+j+3+q}=\psi_{j+\frac{1}{2}}^{-2},\phi_{(2s+r)+j+ 4+q}=\psi_{j+\frac{1}{2}}^{+2},\dots,\phi_{(2s+r)+j+2s+q}=\psi_{j+\frac{1}{2}}^ {+s}\). Here \(q\) is some positive integer which depends on \(r\). Then we relabel \(r\) fermions to obtain \(\tilde{\phi}_{j}^{s+c}\) with \(j>0\). We do this as follows \(\tilde{\phi}_{j}^{s+1}=\mathrm{constant}\tilde{\phi}_{(r+2s)j+2s+1+q},\tilde{ \phi}_{j}^{s+2}=\mathrm{constant}\tilde{\phi}_{(r+2s)j+2s+2+q},\dots,\tilde{ \phi}_{j}^{s+r}=\mathrm{constant}\tilde{\phi}_{(r+2s)j+2s+r+q}\). We need the constant to get the right commutation relations. Finally, we want to obtain the \(\tilde{\phi}_{0}^{r+c}\)'s. Note that we want the following commutation relations for them \((\tilde{\phi}_{0}^{r+c})^{2}=\frac{1}{2}\). There is however only one fermion which has this relation, viz. \(\tilde{\phi}_{0}\). So we have to create \(r-1\) of such fermions. This can be done as follows, we can combine a creation and annihilation fermion \(\tilde{\phi}_{-j}\) and \(\tilde{\phi}_{j}\). Note that we haven't relabeled the \(\tilde{\phi}_{j}\) with \(1\leq j\leq q\) yet. Let \[\tilde{\phi}_{0}^{s+1}=\frac{1}{\sqrt{2}}(\tilde{\phi}_{1}+\tilde{\phi}_{-1}),\quad\tilde{\phi}_{0}^{s+2}=\frac{\sqrt{-1}}{\sqrt{2}}(\tilde{\phi}_{1}- \tilde{\phi}_{-1}),\] Then these elements satisfy \((\tilde{\phi}_{0}^{s+1})^{2}=(\tilde{\phi}_{0}^{s+2})^{2}=\frac{1}{2}\). We can do a similar thing for the other \(r-3\) ones. If \(r\) is odd, we obtain all the elements \(\tilde{\phi}_{0}^{s+a}\) in this way, where in particularly \(\tilde{\phi}_{0}^{s+r}=\tilde{\phi}_{0}\). So in this case \(q=\frac{r-1}{2}\). If \(r\) is even, we haven't relabeled \(\tilde{\phi}_{0}\), we thus set \(\sigma_{0}=\tilde{\phi}_{0}\). Note that \(q=\frac{r}{2}\) in this case. In the \(x=d\), we do a similar thing, the role of \(r\) even and odd is interchanged in this case. Explicitly, we use the following relabeling of the generators \(\tilde{\phi}_{j}\), \(j\in\mathbb{Z}\) (resp. \(\phi_{j}\), \(j\in\frac{1}{2}+\mathbb{Z}\)) of \(C\ell_{b}\) (resp. \(C\ell_{d}\)). For \(x=b\) we let \(r=2q+1\) if \(r\geq 1\) is odd, and \(r=2q\) if \(r\geq 0\) is even. 
and introduce the following generators of \(C\ell_{b}\), where \(j\in\mathbb{Z}_{\geq 0}\), \(1\leq a\leq s\), \(1\leq c\leq r\): \[\begin{array}{l}\tilde{\phi}_{\pm(2j+1)}^{s+b}=\sqrt{-1}\tilde{\phi}_{\pm(2j (2s+r)+q+2s+b)},\quad\tilde{\phi}_{\pm 2j+2}^{s+b}=\tilde{\phi}_{\pm((2j+1)(2s+r)+q+2s+b)}, \\ \psi_{\pm(j+\frac{1}{2})}^{\mp a}=\tilde{\phi}_{\pm(j(2s+r)+q+2a-1)},\quad\psi _{\pm(j+\frac{1}{2})}^{\pm a}=\tilde{\phi}_{\pm(j(2s+r)+q+2a)},\\ \tilde{\phi}_{0}^{s+2c-1}=\frac{1}{\sqrt{2}}(\tilde{\phi}_{c}+\tilde{\phi}_{-c }),\quad\tilde{\phi}_{0}^{s+2c}=\frac{\sqrt{-1}}{\sqrt{2}}(\tilde{\phi}_{c}- \tilde{\phi}_{-c}),\\ \tilde{\phi}_{0}^{s+r}=\tilde{\phi}_{0},\quad\text{if }r=2q+1,\sigma_{0}= \tilde{\phi}_{0},\quad\text{if }r=2q.\end{array} \tag{3.18}\] For \(x=d\), we let \(r=2q-1\) if \(r\geq 1\) is odd, and \(r=2q\) if \(r\geq 0\) is even, and introduce the following generators of \(C\ell_{d}\), where \(j\in\mathbb{Z}_{\geq 0}\), \(1\leq a\leq s\), \(1\leq b\leq r\) \(1\leq c\leq q\): \[\begin{array}{l}\tilde{\phi}_{\pm(2j+1)}^{s+b}=\sqrt{-1}\phi_{\pm(2j(2s+r)+q+2s +b-\frac{1}{2})},\quad\tilde{\phi}_{\pm 2j+2}^{s+b}=\phi_{\pm((2j+1)(2s+r)+q+2s+b- \frac{1}{2})},\\ \psi_{\pm(j+\frac{1}{2})}^{\mp a}=\phi_{\pm(j(2s+r)+q+2a-\frac{3}{2})},\quad\psi _{\pm(j+\frac{1}{2})}^{\pm a}=\phi_{\pm(j(2s+r)+q+2a-\frac{1}{2})},\\ \tilde{\phi}_{0}^{s+2c-1}=\frac{1}{\sqrt{2}}(\phi_{c-\frac{1}{2}}+\phi_{\frac{1 }{2}-c}),\quad\tilde{\phi}_{0}^{s+2c}=\frac{\sqrt{-1}}{\sqrt{2}}(\phi_{c- \frac{1}{2}}-\phi_{\frac{1}{2}-c}),\\ \tilde{\phi}_{0}^{s+r}=\frac{1}{\sqrt{2}}(\phi_{q-\frac{1}{2}}+\phi_{\frac{1 }{2}-q}),\quad\sigma_{0}=\frac{\sqrt{-1}}{\sqrt{2}}(\phi_{q-\frac{1}{2}}-\phi _{\frac{1}{2}-q}),\quad\mbox{if $r=2q-1$}.\end{array} \tag{3.19}\] As a result we obtain in both cases \(s\) pairs of charged fermionic fields \[\psi^{\pm a}(z)=\sum_{j\in\frac{1}{2}+\mathbb{Z}}\psi_{j}^{\pm a}z^{-j-\frac{1 }{2}},\quad 1\leq a\leq s,\] and \(r\) twisted neutral fields \[\tilde{\phi}^{\pm b}(z)=\sum_{j\in\mathbb{Z}}\tilde{\phi}_{j}^{b}z^{-j},\quad s <b\leq r+s.\] Besides we have an extra fermion \(\sigma_{0}\) if \(x=b\) and \(r\) is even, and if \(x=d\) and \(r\) is odd. The commutation relations between the charged fermions, as well as their action on the vacuum are (3.15) and (3.16). The remaining commutation relations are \[\begin{array}{l}\sigma_{0}\psi_{i}^{\pm a}+\psi_{i}^{\pm a}\sigma_{0}=\sigma _{0}\tilde{\phi}_{i}^{b}+\tilde{\phi}_{i}^{b}\sigma_{0}=0,\ \sigma_{0}^{2}=\frac{1}{2},\ \psi_{i}^{\pm a}\tilde{\phi}_{j}^{b}+\tilde{\phi}_{j}^{b}\psi_{i}^{\pm a}=0, \\ \tilde{\phi}_{j}^{b}\tilde{\phi}_{k}^{c}+\tilde{\phi}_{k}^{c}\tilde{\phi}_{=} ^{b}(-1)^{j}\delta_{bc}\delta_{j,-k},\quad i\in\frac{1}{2}+\mathbb{Z},\ j,k \in\mathbb{Z},\ 1\leq a\leq s<b,c\leq r+s,\end{array} \tag{3.20}\] and we have the following action on the vacuum \[(\tilde{\phi}_{0}^{s+2k-1}-\sqrt{-1}\tilde{\phi}_{0}^{s+2k})|0\rangle=\tilde{ \phi}_{j}^{b}|0\rangle=0,\qquad\mbox{for $j>0$, $1\leq k\leq\frac{r}{2}$}, \tag{3.21}\] together with \[\sigma_{0}|0\rangle=\frac{1}{\sqrt{2}}|0\rangle,\quad\mbox{if $r$ even},\quad\tilde{\phi}_{0}^{s+r}|0\rangle=\frac{1}{\sqrt{2}}|0\rangle, \quad\mbox{if $r$ odd}, \tag{3.22}\] if \(x=b\) and (cf. (3.13)): \[(\tilde{\phi}_{0}^{s+r}-\sqrt{-1}\sigma_{0})|0\rangle=0,\quad\mbox{if $r$ \ is odd}, \tag{3.23}\] if \(x=d\). 
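As a sanity check of (3.18), note that in the simplest case \(s=0\), \(r=1\), \(x=b\) (so that \(q=0\)) it reduces to the relabeling (3.12) for \(b_{\infty}\):
\[\tilde{\phi}^{1}_{j}=\sqrt{-1}\,\tilde{\phi}_{j}\ \text{ for }j\text{ odd},\qquad\tilde{\phi}^{1}_{j}=\tilde{\phi}_{j}\ \text{ for }j\text{ even},\]
giving a single twisted neutral field and no charged fields.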
The generating fields for the representation \(r_{x}\) for \(x=b\) or \(d\) then are respectively (\(\delta,\epsilon=+\) or \(-\), \(1\leq a,b\leq s<c,d\leq s+r\)): \[\begin{array}{l}E^{\delta a,\epsilon b}(y,z)=:\psi^{\delta a}(y)\psi^{cb}(z) :,\quad\mbox{if $r>0$},\\ E^{\delta a,c}(y,z)=:\psi^{\delta a}(y)\tilde{\phi}^{c}(z):,\quad\mbox{if $r,s>0$},\\ E^{c,d}(y,z)=:\tilde{\phi}^{c}(y)\tilde{\phi}^{d}(z):,\quad\mbox{if $s>0$},\\ E^{\delta a}(z)=:\psi^{\delta a}(z)\sigma_{0}:,\quad\mbox{if $s>0$},\ (x=b\mbox{ and $r$ even})\ \mbox{or $(x=d$ and $r$ odd}),\\ E^{c}(z)=:\tilde{\phi}^{c}(z)\sigma_{0}:,\quad\mbox{if $r>0$},\ (x=b\mbox{ and $r$ even})\ \mbox{or $(x=d$ and $r$ odd}).\end{array} \tag{3.24}\] For \(c_{\infty}\), we can choose a different basis of the Weyl algebra such that we get \(r\geq 0\) neutral symplectic bosonic fields as in the previous section. In analogy with the \(x=b,d\) case, one can also obtain \(s\geq 0\) so-called charged symplectic bosonic fields \(b^{+a}(z)\) and \(b^{-a}(z)\), for \(1\leq a\leq s\). One can obtain this for instance by relabeling as follows. If \(r\) is even (\(j>0\), \(1\leq a\leq s\),\(1\leq c\leq\frac{r}{2}\)), \[\begin{array}{l}b^{s+2c-1}_{\pm(2j+\frac{1}{2})}=b_{\pm(2j(r+2s)+2c-\frac{3}{ 2})},\quad b^{s+2c}_{\pm(2j+\frac{1}{2})}=\sqrt{-1}b_{\pm(2j(r+2s)+2c-\frac{1}{ 2})},\\ b^{s+2c-1}_{\pm(2j+\frac{3}{2})}=\sqrt{-1}b_{\pm((2j+1)(r+2s)+2c-\frac{3}{2})},\quad b^{s+2c}_{\pm(2j+\frac{3}{2})}=b_{\pm((2j+1)(r+2s)+2c-\frac{1}{2})},\\ b^{\pm a}_{\pm(j+\frac{1}{2})}=b_{\pm(j(r+2s)+r+2a-\frac{3}{2})},\quad b^{\pm a }_{\mp(j+\frac{1}{2})}=b_{\mp(j(r+2s)+r+2a-\frac{1}{2})}.\end{array} \tag{3.25}\] If \(r\) is odd (\(j>0\), \(1\leq a\leq s\),\(1\leq c<\frac{r}{2}\)), \[\begin{array}{l}b^{s+r}_{\pm(j+\frac{1}{2})}=b_{\pm(j(r+2s)+r-\frac{1}{2})}, \\ b^{s+2c-1}_{\pm(j+\frac{1}{2})}=b_{\pm(j(r+2s)+2c-\frac{3}{2})},\quad b^{s+2c} _{\pm(j+\frac{1}{2})}=\sqrt{-1}b_{\pm(j(r+2s)+2c-\frac{1}{2})},\\ b^{\pm a}_{\pm(2j+\frac{1}{2})}=\sqrt{-1}b_{\pm(2j(r+2s)+r+2a-\frac{3}{2})}, \quad b^{\pm a}_{\mp(2j+\frac{3}{2})}=\sqrt{-1}b_{\mp((2j+1)(r+2s)+r+2a-\frac{ 3}{2})},\\ b^{\mp a}_{\pm(2j+\frac{1}{2})}=b_{\pm(2j(r+2s)+r+2a+\frac{1}{2})},\quad b^{ \mp a}_{\mp(2j+\frac{3}{2})}=\sqrt{-1}b_{\mp((2j+1)(r+2s)+r+2a+\frac{1}{2})}. \end{array} \tag{3.26}\] For \(1\leq a\leq s\) we have \[b^{\pm a}(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}b^{\pm a}_{i}z^{-i-\frac{1}{2}},\quad b^{c}(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}b^{c}_{i}z^{-i-\frac{1}{2}}, \quad 1\leq a\leq s<c\leq r+s,\] which satisfy \[\begin{array}{l}b^{+a}_{i}b^{+c}_{j}-b^{+c}_{j}b^{+a}_{i}=b^{-a}_{i}b^{-c}_{ j}-b^{-c}_{j}b^{-a}_{i}=0,\ b^{+a}_{i}b^{-c}_{j}-b^{-c}_{j}b^{-a}_{i}=\delta_{ab} \delta_{i,-j},\\ b^{e}_{i}b^{d}_{j}-b^{d}_{j}b^{e}_{i}=(-1)^{i-\frac{1}{2}}\delta_{de}\delta_ {i,-j},\ b^{\pm a}_{i}b^{d}_{j}-b^{d}_{j}b^{\pm a}_{i}=0,\end{array} \tag{3.27}\] for \(1\leq a,c\leq s<d\), \(e\leq s+r\), \(i,j\in\frac{1}{2}+\mathbb{Z}\). The action on the vacuum vector is as follows \[b^{\pm a}_{j}|0\rangle=b^{c}_{j}|0\rangle=0\quad\mbox{for $j>0$}. \tag{3.28}\] The generating fields for the representation \(r_{c}\) then are (\(\delta,\epsilon=+\) or \(-\), \(1\leq a,c\leq s<d,e\leq s+r\)) \[C^{\delta a,\epsilon c}(y,z)=:b^{\delta a}(y)b^{\epsilon c}(y):,\ C^{\delta a, d}(y,z)=:b^{\delta a}(y)b^{d}(z):,\ C^{de}(y,z)=:b^{d}(y)b^{e}(z):. \tag{3.29}\] **Remark 3.3**: _Note that the relabeling that we have used here is different from the relabeling in Section 4. 
There the relabeling is induced by the embedding of \(\hat{\mathfrak{g}}\) and \(\hat{\mathfrak{g}}^{(2)}\) into \(x_{\infty}\) and the element in the Weyl group of \(\mathfrak{g}\) on which the construction is based. If we would use the relabeling as above, then the construction of Section 4 would not give matrices in \(\overline{x}_{\infty}\)._ ### Vertex operator realizations of \(F_{x}\) #### 3.6.1 Bosonization of type \(a\), \(b\) and \(d\) In Section 3.5 we have obtained fermionic fields of two types, viz. the charged fermionic fields \(\psi^{\pm a}(z)\), \(1\leq a\leq s\), and the twisted neutral fermionic fields \(\tilde{\phi}^{b}(z)\), \(s<b\leq r+s\), and in some cases an additional operator \(\sigma_{0}\). To the charged and twisted neutral fermionic fields, we can associate free bosonic fields \[\begin{split}\alpha^{a}(z)&=\sum_{j\in\mathbb{Z}} \alpha^{b}_{j}z^{-j-1}=:\psi^{+b}(z)\psi^{-b}(z)\ ;\ 1\leq b\leq s,\\ \alpha^{s+b}(z)&=\sum_{j\in 1+2\mathbb{Z}}\alpha^{s+b}_ {j}z^{-j}=\frac{1}{2}:\tilde{\phi}^{s+b}(z)\tilde{\phi}^{s+b}(-z):,\ 1\leq b\leq r.\end{split} \tag{3.30}\] Their modes satisfy the relations of a Heisenberg Lie algebra: \[\begin{split}&=j\delta_{ab}\delta_{jk}\ 1\leq a\leq s,\ 1\leq b \leq s+r,\ j,k\in\mathbb{Z},\ k\ \text{odd if}\ b>s,\\ [\alpha^{a}_{j},\alpha^{b}_{k}]&=\frac{j}{2} \delta_{ab}\delta_{jk},\ s+1\leq a\leq r+s,\ 1\leq b\leq s+r,\ j,k\in\mathbb{Z},\ j\ \text{odd},\ k\ \text{odd if}\ b>s.\end{split} \tag{3.31}\] The action on the vacuum vector is \[\alpha^{a}_{j}|0\rangle=0,\quad\text{for}\ j\geq 0.\] Note that \[[\alpha^{b}_{i},\psi^{\pm a}(z)]=\pm\delta_{ab}z^{i}\psi^{\pm a}(z)\quad\text{ for}\ i\in\mathbb{Z},\ i\ \text{odd if}\ b>s. \tag{3.32}\] This makes it possible (see [22] or [15]) to express these fermionic fields almost completely in terms of the Heisenberg algebra, viz. for \(1\leq a\leq s\) we have \[\psi^{\pm a}(z)=Q_{a}^{\pm 1}z^{\pm\alpha^{a}_{0}}\exp\left(\mp\sum_{k<0} \frac{\alpha^{a}_{k}}{k}z^{-k}\right)\exp\left(\mp\sum_{k>0}\frac{\alpha^{a}_{ k}}{k}z^{-k}\right), \tag{3.33}\] where the operators \(Q_{a}\) are uniquely defined by \[Q_{a}|0\rangle=\psi^{+a}_{-\frac{1}{2}}|0\rangle,\quad Q_{a}\psi^{\pm b}_{i}= (-1)^{\delta_{ab}-1}\psi^{\pm b}_{i\mp\delta_{ab}}Q_{a}. \tag{3.34}\] It is straightforward to check that for \(a<b\), \(1\leq a,b,c\leq s\) we have \[Q_{a}Q_{b}=-Q_{b}Q_{a}\quad[\alpha^{a}_{i},Q_{c}]=\delta_{ac}\delta_{i0}Q_{c}. \tag{3.35}\] If \(x=a\), we have \(s\) pairs of charged free fermions. When \(x=b\) or \(d\) we have in addition also \(r\) twisted neutral free fermionic fields. For the neutral twisted fields, the commutation relations of the Heisenberg elements \(\alpha^{b}_{k}\), \(k\) odd, with \(s<b\leq s+r\): \[[\alpha^{b}_{k},\tilde{\phi}^{c}(z)]=\delta_{b,c}z^{k}\tilde{\phi}^{c}(z)\quad \text{for}\ k\in\mathbb{Z}_{odd}. \tag{3.36}\] Thus for \(s<b\leq r+s\) we have \[\tilde{\phi}^{b}(z)=R_{b-s}\exp\left(-2\sum_{k<0,\,\text{odd}}\frac{\alpha_{k}^{b }}{k}z^{-k}\right)\exp\left(-2\sum_{k>0,\,\text{odd}}\frac{\alpha_{k}^{b}}{k}z^ {-k}\right), \tag{3.37}\] \[R_{b}|0\rangle=\tilde{\phi}_{0}^{s+b}|0\rangle,\quad\text{and}\ R_{b}R_{c}+R_{c} R_{b}=\delta_{bc},\ R_{b}Q_{a}+Q_{a}R_{b}=0. \tag{3.38}\] If \(x=b\) and \(r\) is even, or \(x=d\) and \(r\) is odd we have in addition one extra operator \(\sigma_{0}\) such that \(\sigma_{0}^{2}=\frac{1}{2}\). 
This operator commutes with all the operators \(\alpha_{k}^{a}\), for \(1\leq a\leq r+s\) of the Heisenberg algebra and it anti-commutes with the operators \(Q_{a}\) (\(1\leq b\leq s\)) and \(R_{b}\) (\(s<b\leq r+s\)). If \(x=b\) and \(r\) is odd, then \(\sigma_{0}|0\rangle=\frac{1}{\sqrt{2}}|0\rangle\). If \(x=d\) and \(r\) is even, then \(\sigma_{0}|0\rangle=-\sqrt{-1}\tilde{\phi}_{0}^{r+s}|0\rangle\). The \(q\) dimension formula for the \(a_{\infty}\) module \(F^{(m)}\) is as follows, \[\dim_{q}F^{(m)}=\sum_{j_{1},\ldots,j_{s}\in\mathbb{Z}\atop j_{1}+\cdots+j_{s} =m}q^{\frac{j_{1}^{2}+j_{2}^{2}+\cdots+j_{s}^{2}}{2}}\prod_{i=1}^{\infty}\frac {1}{(1-q^{i})^{s}}. \tag{3.39}\] In the \(b\) case, the spin module \(F\) is irreducible for \(b_{\infty}\) and in that case, we find the following \(q\)-dimension formula. \[\dim_{q}F=2^{[\frac{r}{2}]}\sum_{j_{1},\ldots,j_{s}\in\mathbb{Z}}q^{\frac{j_{ 1}^{2}+j_{2}^{2}+\cdots+j_{s}^{2}}{2}}\prod_{i=1}^{\infty}\frac{1}{(1-q^{i})^{ s}(1-q^{2i-1})^{r}}.\] \(F\) splits in to two irreducible \(d_{\infty}\) modules \(F^{\overline{0}}\) and \(F^{\overline{1}}\), which both have the same \(q\)-dimension formula, except when \(r=0\). We have \[\dim_{q}F^{\overline{\epsilon}}=\begin{cases}2^{[\frac{r-1}{2}]}\sum_{j_{1}, \ldots,j_{s}\in\mathbb{Z}}q^{\frac{j_{1}^{2}+j_{2}^{2}+\cdots+j_{s}^{2}}{2}} \prod_{i=1}^{\infty}\frac{1}{(1-q^{i})^{s}(1-q^{2i-1})^{r}},&\text{for $r\neq 0$},\\ \\ \sum_{j_{1}+\cdots+j_{s}\in\mathbb{Z}}q^{\frac{j_{1}^{2}+j_{2}^{2}+\cdots+j_{s} ^{2}}{2}}\prod_{i=1}^{\infty}\frac{1}{(1-q^{i})^{s}},&\text{when $r=0$}.\end{cases} \tag{3.40}\] #### 3.6.2 Bosonization of type \(c\) Recall that in the multicomponent construction of \(C\ell_{c}\), we have two types of symplectic bosonic fields, viz. charged (resp. neutral) fields \(b^{\pm a}(z)\), \(1\leq a\leq s\) (resp. \(b^{c}(z)\), \(s<c\leq r+s\)), satisfying (3.27) and (3.28). \[\begin{split}\alpha^{a}(z)&=\sum_{j\in\mathbb{Z}} \alpha_{j}^{a}z^{-j-1}=:b^{+a}(z)b^{-a}(z):\quad 1\leq a\leq s,\\ \alpha^{c}(z)&=\sum_{j\in 1+2\mathbb{Z}} \alpha_{j}^{c}z^{-j}=\frac{1}{2}:b^{c}(z)b^{c}(-z):\quad s<c\leq s+r,\end{split} \tag{3.41}\] and their modes satisfy the commutation relations of a Heisenberg Lie algebra: \[=-j\delta_{ab}\delta_{jk}\ 1\leq a\leq s,\ 1\leq b\leq s+r,\ j,k\in \mathbb{Z},\ k\ \mbox{odd if}\ b>s,\] \[[\alpha^{a}_{j},\alpha^{b}_{k}] =-\frac{j}{2}\delta_{ab}\delta_{jk},\ s+1\leq a\leq r+s,\ 1\leq b\leq s+r,\ j,k\in\mathbb{Z},\ j\ \mbox{odd},\ k\ \mbox{odd if}\ b>s,\] \[\alpha^{a}_{j}|0\rangle =0,\quad\mbox{for}\ j\geq 0. \tag{3.42}\] We start with the charged fields. 
The commutation relations can be rewritten as \[b^{\pm a}(y)b^{\pm c}(z)-b^{\pm c}(z)b^{\pm a}(y)=0,\quad b^{+a}(y)b^{-c}(z)-b^ {-c}(z)b^{+a}(y)=\delta_{ac}\delta(y-z).\] Then \[[\alpha^{i}_{k},b^{\pm a}(z)]=\mp z^{k}\delta_{i,a}b^{\pm a}(z),\quad\mbox{ for}\ k\in\mathbb{Z}, \tag{3.43}\] hence, \[b^{\pm a}(z)=E^{a}_{-}(z)^{\pm 1}Q^{\pm a}(z)E^{a}_{+}(z)^{\pm 1} \tag{3.44}\] where \[E^{a}_{-}(z)=\exp\left(-\sum_{k<0}\frac{\alpha^{a}_{k}}{k}z^{-k}\right),\qquad E ^{a}_{+}(z)=\exp\left(-\sum_{k>0}\frac{\alpha^{a}_{k}}{k}z^{-k}\right), \tag{3.45}\] and \[[\alpha^{i}_{k},Q^{\pm a}(z)]=\mp\delta_{i,a}\delta_{k,0}Q^{\pm a}(z)\quad \mbox{for}\ k\in\mathbb{Z}.\] Following [1], we set \[\Theta^{\pm a}(z)=\sum_{k\in\mathbb{Z}}\Theta^{\pm a}_{k}z^{-k-1}:=E^{a}_{-}(z )^{\mp 1}b^{\pm a}(z)z^{\mp\alpha^{a}_{0}}E^{a}_{+}(z)^{\mp 1}, \tag{3.46}\] Then \[\Theta^{\pm a}(z)|0\rangle=E^{a}_{-}(z)^{\mp 1}\sum_{i>0}b^{\pm a}_{-i}|0 \rangle z^{i-\frac{1}{2}}\in F_{c}[[z]],\] hence, \[\Theta^{\pm a}_{k}|0\rangle=0\quad\mbox{for}\ k\in\mathbb{Z}_{>0}.\] **Lemma 3.4**: _Let \(i_{z,y}R(y,z)\) denote the power series expansion of a rational function \(R(y,z)\) in the domain \(|z|>|y|\) and let \(\lambda,\mu=\pm 1\). Then_ \[E^{a}_{-}(y)^{\lambda}E^{b}_{+}(z)^{\mu} =i_{z,y}(1-\frac{y}{z})^{\lambda\mu\delta_{ac}}E^{b}_{+}(z)^{\mu} E^{a}_{-}(y)^{\lambda},\] \[E^{a}_{+}(z)b^{\pm c}(y) =i_{z,y}(1-\frac{y}{z})^{\mp\delta_{ac}}b^{\pm c}(y)E^{a}_{+}(z),\] \[E^{a}_{-}(y)b^{\pm c}(z) =i_{z,y}(1-\frac{y}{z})^{\pm\delta_{ac}}b^{\pm c}(z)E^{a}_{-}(y),\] \[[\alpha^{i}(y),\theta^{\pm a}(z)] =\mp\frac{\delta_{i,a}}{y}\theta^{\pm a}(z).\] **Proof** Let \(|z|>|y|\), then \[E^{a}_{-}(y)^{\lambda}E^{b}_{+}(z)^{\mu} =\exp\left(-\sum_{k=1}^{\infty}\frac{\lambda\mu}{k^{2}}\left(\frac{ y}{z}\right)^{k}[\alpha^{a}_{-k},\alpha^{b}_{k}]\right)E^{b}_{+}(z)^{\mu}E^{a}_{-}(y)^{\lambda}\] \[=\exp\left(-\delta_{ab}\sum_{k=1}^{\infty}\frac{\lambda\mu}{k} \left(\frac{y}{z}\right)^{k}\right)E^{b}_{+}(z)^{\mu}E^{a}_{-}(y)^{\lambda}\] \[=i_{z,y}(1-\frac{y}{z})^{\lambda\mu\delta_{ab}}E^{b}_{+}(z)^{\mu} E^{a}_{-}(y)^{\lambda}.\] Next, since \[\alpha^{a}_{k}b^{\pm c}(z)=b^{\pm c}(z)(\alpha^{a}_{k}\mp\delta_{ac}z^{k}), \quad\text{for }k\in\mathbb{Z},\] also \[E^{a}_{+}(z)b^{\pm c}(y) =b^{\pm c}(y)\exp\left(-\sum_{k>0}\frac{1}{k}z^{-k}(\alpha^{a}_{k }\mp\delta_{ac}y^{k})\right)\] \[=\exp\left(\pm\delta_{ac}\sum_{k>0}\frac{1}{k}\left(\frac{y}{z} \right)^{k}\right)b^{\pm c}(y)E^{a}_{+}(z)\] \[=i_{z,y}(1-\frac{y}{z})^{\mp\delta_{ac}}b^{\pm c}(y)E^{a}_{+}(z).\] The last identity follows from (3.43). 
\(\square\) Using this lemma and the expression (3.46), we find that for \(\lambda=\pm\) we have \[\Theta^{+a}(y)\Theta^{\lambda b}(z) =i_{y,z}(y-z)^{\lambda\delta_{ab}}E^{a}_{-}(y)^{-1}E^{b}_{-}(z)^{ -\lambda 1}b^{+a}(y)b^{\lambda b}(z)y^{-\alpha^{a}_{0}}E^{a}_{+}(y)^{-1}E^{b}_{+}(z) ^{-\lambda 1}\] \[=i_{y,z}(y-z)^{\lambda\delta_{ab}}E^{a}_{-}(y)^{-1}E^{b}_{-}(z)^{ -\lambda 1}(\colon b^{+a}(y)b^{\lambda b}(z)\colon+\] \[\quad+\delta_{ab}\delta_{\lambda,-}i_{y,z}(y-z)^{\lambda 1})y^{-\alpha^{a}_{0}}E^{a}_{+}(y)^{-1}E^{b}_{+}(z) ^{-\lambda 1},\] \[\Theta^{\lambda b}(z)\Theta^{+a}(y) =i_{z,y}(z-y)^{\lambda\delta_{ab}}E^{a}_{-}(y)^{-1}E^{b}_{-}(z)^{ -\lambda 1}(\colon b^{\lambda b}(z)b^{+a}(y):\] \[\quad-\delta_{ab}\delta_{\lambda,-}i_{z,y}(z-y)^{\lambda 1})y^{-\alpha^{a}_{0}}z^{-\lambda\alpha^{b}_{0}}E^{a}_{+}(y)^{-1}E^{b}_{+}(z) ^{-\lambda 1}.\] From (3.43), we deduce that \[\Theta^{+a}(y)\Theta^{+b}(z)-(-1)^{\delta_{ab}}\theta^{+b}(z) \Theta^{+a}(y) =0,\] \[\Theta^{+a}(y)\Theta^{-b}(z)-\Theta^{-b}(z)\Theta^{+a}(y) =0,\quad\text{if }a\neq b,\] and \[\Theta^{+a}(y)\Theta^{-a}(z)+\Theta^{-a}(z)\Theta^{+a}(y)=\] \[E^{a}_{-}(y)^{-1}E^{a}_{-}(z)(\delta(y-z):b^{+a}(y)b^{-a}(z):+ \partial_{z}\delta(y-z))E^{a}_{+}(y)^{-1}E^{a}_{+}(z)y^{-\alpha^{a}_{0}}z^{ \alpha^{a}_{0}}=\] \[\delta(y-z)\alpha^{a}(z)+\partial_{z}\delta(y-z)-\delta(y-z) \alpha^{a}(z)\] \[=\partial_{z}\delta(y-z). \tag{3.47}\] Here we have used Taylor's formula. In a similar way one can prove that \[\Theta^{-a}(y)\Theta^{-b}(z)-(-1)^{\delta_{ab}}\Theta^{-b}(z)\Theta^{-a}(y)=0. \tag{3.48}\] Note that still \[[\alpha_{k}^{a},\Theta^{\pm b}(z)]=\mp\delta_{ab}\delta_{k0}\Theta^{\pm b}(z). \tag{3.49}\] For convenience of counting the energy degree, we introduce an operator \(Q_{a}\) by setting \(\Theta^{\pm a}=Q_{a}^{\mp 1}\theta^{\pm a}(z)\), where \([\alpha_{0},Q_{a}]=Q_{a}\), then \(\theta^{\pm a}(z)\) satisfies the same commutation relations as \(\Theta^{\pm a}(z)\), the only difference is that \([\alpha_{k}^{a},\theta^{\pm b}(z)]=0\). Thus, \[b^{\pm a}(z)=Q_{a}^{\mp 1}z^{\pm a_{0}^{a}}E_{-}^{a}(z)^{\pm 1}\theta^{\pm a}( z)E_{+}^{a}(z)^{\mp 1}. \tag{3.50}\] This last formula was already present in [10] in the situation when there is only one such pair of symplectic boson fields. Note that \(Q_{a}^{\pm 1}\) is an operator that always appears together with each \(\theta_{j}^{\pm a}\). In principle we do not need to introduce it, but for the moment this operator will be convenient when calculating the energy decomposition of \(F\). We will show this in the next paragraph. Note also that \(Q_{a}b^{\pm c}(z)=z^{\pm\delta_{ac}}b^{\pm c}(z)Q_{a}\). Assume for the moment that there is only one such pair \(b^{\pm}(z)\). Then \((b^{\pm}_{-\frac{1}{2}})^{n}|0\rangle\), which has energy \(\frac{n}{2}\), is equal to the constant coefficient of \(b^{\pm}(z_{n})\cdots b^{\pm}(z_{1})|0\rangle\). Using (3.50), this is equal to \[\theta_{-n}^{\pm}\theta_{1-n}^{\pm}\cdots\theta_{-2}^{\pm}\theta_{-1}^{\pm}Q ^{\mp n}|0\rangle. \tag{3.51}\] More generally the energy of \[\theta_{-j_{m}}^{-}\cdots\theta_{-j_{1}}^{-}\theta_{-k_{n}}^{+}\cdots\theta_{ -k_{1}}^{+}Q^{m-n}|0\rangle,\quad\text{where $j_{m}>\cdots j_{1}>0$ and $k_{n}>\cdots>k_{1}>0$}, \tag{3.52}\] is equal to \[j_{1}+\cdots+j_{m}+k_{1}+\cdots+k_{n}-\frac{(m-n)^{2}}{2}. \tag{3.53}\] This can be seen as follows. 
Consider, the element \[b^{-}_{m-n-j_{m}+\frac{1}{2}}\cdots b^{-}_{-n-j_{1}+\frac{1}{2}}b^{+}_{n-k_{n }-\frac{1}{2}}\cdots b^{+}_{-k_{1}+\frac{1}{2}}|0\rangle,\] which, clearly has energy (3.53). It is the coefficient of \[y_{m}^{j_{m}+n-m}\cdots y_{2}^{j_{2}+n-2}y_{1}^{j_{1}+n-1}z_{n}^{k_{n}-n} \cdots z_{2}^{k_{2}-2}z_{1}^{k_{1}-1}\] in \[b^{-}(y_{m})\cdots b^{-}(y_{1})b^{+}(z_{n})\cdots b^{+}(z_{1})|0\rangle=\] \[z_{2}^{-1}z_{3}^{-2}z_{n}^{-n-1}y_{1}^{n}y_{2}^{n-1}\cdots y_{m }^{n-m+1}\theta^{-}(y_{m})\cdots\theta^{-}(y_{1})\theta^{+}(z_{n})\theta^{+}( z_{1})Q^{m-n}|0\rangle+\cdots=\] \[y_{m}^{j_{m}+n-m}\cdots y_{1}^{j_{1}+n-1}z_{n}^{k_{n}-n}\cdots z _{1}^{k_{1}-1}\theta_{-j_{m}}^{+}\cdots\theta_{-j_{1}}^{+}\theta_{-k_{n}}^{+} \cdots\theta_{-k_{1}}^{+}Q^{m-n}|0\rangle+\cdots\] Hence, \[\text{energy}(\theta_{-j}^{\pm})=j,\quad\text{energy}(Q^{m}|0\rangle)=-\frac {m^{2}}{2},\qquad j,m\in\mathbb{Z},\ j>0.\] An element of the form (3.52) can be obtained by action of several elements of the form \(\theta_{i}^{\pm}\theta_{k}^{\mp}\) on an element (3.51). **Lemma 3.5**: _Let_ \[F_{ann}=\{w\in F|\,\alpha_{k}w=0,\ \ \mbox{for all $k>0$}\}.\] _Then the elements (3.52) form a basis for \(F_{ann}\)._ **Proof** It suffices to consider a linear combination of elements of the form (3.51), where the energy, say \(N\), is fixed and where the charge \(m-n\), the eigenvalue of \(\alpha_{0}\), is fixed. Since the energy is \(N\), there can only be a finite number of elements of the form (3.52) which have this energy. Now let \[\sum_{i=1}^{p}\lambda_{i}\theta^{-}_{-j^{i}_{m_{i}}}\cdots\theta^{-}_{-j^{i}_{ 1}}\theta^{+}_{-k^{i}_{n_{i}}}\cdots\theta^{+}_{-k^{i}_{1}}Q^{m-n}|0\rangle=0, \tag{3.54}\] where we take a linear combination of all elements wich have energy \(N\) and charge \(m-n\). If we compare two elements in this sum, then there always exists a pair \(\theta^{\pm}_{a}\) and \(\theta^{\pm}_{b}\), such that one appears in one of the two elements and is missing in the other, and vice versa. Hence, we can show that the coefficient \(\lambda_{i}=0\), by letting \[\theta^{-}_{-j^{i}_{m_{i}}}\theta^{+}_{j^{i}_{m_{i}}}\cdots\theta^{-}_{-j^{i}_ {1}}\theta^{+}_{j^{i}_{1}}\theta^{-}_{-k^{i}_{n_{i}}}\,\theta^{-}_{k^{i}_{n_{ i}}}\cdots\theta^{+}_{-k^{i}_{1}}\theta^{-}_{k^{i}_{1}}\] act on this sum, which kills of all terms except for \(\lambda_{i}\theta^{-}_{-j^{i}_{m_{i}}}\cdots\theta^{-}_{-j^{i}_{1}}\theta^{+} _{-k^{i}_{n_{i}}}\cdots\theta^{+}_{-k^{i}_{1}}\). Hence, elements of the form (3.52) form a basis of \(F_{ann}\). \(\square\) So using (3.53), we have the following: **Corollary 3.6**: _The \(q,t\)-dimension of \(F_{ann}\) is equal to_ \[\sum_{m,n\in\mathbb{Z}_{\geq 0}}t\ ^{m-n}q^{-\frac{(m-n)^{2}}{2}}s_{m}(q)s_{n}(q), \tag{3.55}\] _where \(s_{m}(q)\) is the power series in \(q\), which is defined as the coefficient of \(s^{m}\) in the expansion of \(\prod_{j=1}^{\infty}(1+sq^{j})=\sum_{m=0}^{\infty}s_{m}(q)s^{m}\)._ Note that the coefficients of \(q^{k}\) in \(s_{m}(q)\) are the possible ways that the number \(k\) can be written as the sum of \(m\) distinct positive integers. It is well known that \[s_{m}(q)=\frac{q^{\frac{m^{2}+m}{2}}}{(1-q)\cdots(1-q^{m})}.\] Thus we find that \[\dim_{q,t}F=\prod_{i\in\mathbb{Z}_{>0}}\frac{1}{1-q^{i}}\sum_{j,k=0}^{\infty} \frac{q^{\frac{k}{2}}t^{k}}{(1-q)\cdots(1-q^{k})}q^{jk}\frac{q^{\frac{i}{2}}t^ {-j}}{(1-q)\cdots(1-q^{j})}. 
\tag{3.56}\] On the other hand, by looking at the energy and charge of \(F\) in terms of the symplectic bosons \(b^{\pm}_{j}\), we find that \[\dim_{q,t}F=\prod_{i\in\mathbb{Z}_{>0}}\frac{1}{(1-tq^{i-\frac{1}{2}})(1-t^{- 1}q^{i-\frac{1}{2}})}. \tag{3.57}\] This means that the right-hand side of (3.57) is equal to the right-hand side of (3.56), which gives a remarkable identity, which we could not find in the literature. In [1], Section 3, and [11], Section 7.2 and the Appendix, there were some other formula's for the q-dimensions of symplectic bosons. Next, we want to describe the neutral fields, see [1] or [26] for more details. The commutation relations can be rewritten to \[b^{a}(y)b^{c}(z)-b^{c}(z)b^{a}(y)=\delta_{ac}\sum_{k\in\mathbb{Z}}\frac{1}{y} \left(-\frac{y}{z}\right)^{k}=\delta_{ac}\delta(y+z).\] Since for \(a>s\), \[[\alpha_{k}^{a},b^{c}(z)]=-\delta_{ac}z^{k}b^{c}(z),\quad[\alpha_{k}^{a}, \alpha_{\ell}^{c}]=-\frac{\delta_{ac}}{2}k,\quad\text{for }k\in 1+2\mathbb{Z},\ \ell\in \mathbb{Z},\ \text{odd if }c>s.\] We define \[\Theta^{a}(z)=\sum_{k\in\frac{1}{2}+\mathbb{Z}}\theta_{k}^{a}z^{-k-\frac{1}{2 }}:=V_{-}^{a}(z)^{-1}b^{a}(z)V_{+}^{a}(z)^{-1},\] where (cf. (1.47)) \[V_{-}^{a}(z)=\exp\left(-2\sum_{k<0,\,\text{odd}}\frac{\alpha_{k}^{a}}{k}z^{-k }\right),\qquad V_{+}^{a}(z)=\exp\left(-2\sum_{k>0,\,\text{odd}}\frac{\alpha_ {k}^{a}}{k}z^{-k}\right), \tag{3.58}\] Then \[\Theta^{a}(z)|0\rangle=V_{-}^{a}(z)^{-1}\sum_{i>0}b_{-i}^{a}z^{i-\frac{1}{2}}| 0\rangle\in F_{c}[[z]],\] hence \[\theta_{k}^{a}|0\rangle=0\quad\text{for }k\in\frac{1}{2}+\mathbb{Z}_{\geq 0}.\] In a similar way as before, see [26] for details (note \(\alpha_{k}=-J_{k}\)), we obtain that \[\Theta^{a}(y)\Theta^{b}(-z)-(-1)^{\delta_{ab}}\Theta^{b}(-z)\Theta^{a}(y)=2 \delta_{ab}z^{\frac{1}{2}}\partial_{z}(z^{\frac{1}{2}}\delta(y-z)).\] Note that we also have \[\Theta^{\pm a}(y)\Theta^{b}(z)-\Theta^{b}(z)\Theta^{\pm a}(y)=0\quad\text{for }a \leq s,\ b>s.\] We thus have **Proposition 3.7**: _For \(a\leq s\) and \(j>s\) we have_ \[b^{\pm a}(z)= Q_{a}^{\pm 1}z^{\pm\alpha_{0}^{a}}E_{-}^{a}(z)^{\pm 1}\theta^{ \pm a}(z)E_{+}^{a}(z)^{\mp 1}=E_{-}^{a}(z)^{\pm 1}\Theta^{\pm a}(z)E_{+}^{a}(z)^{ \mp 1}z^{\pm\alpha_{0}^{a}}, \tag{3.59}\] \[b^{j}(z)= V_{-}^{j}(z)\Theta^{j}(z)V_{+}^{j}(z),\] _with \(E_{\pm}^{a}(z)\) (\(V_{\pm}^{j}(z)\)) given by (3.45) (resp. (4.39)), such that the last equation of Lemma 3.4 holds. The modes of the fields \(\theta^{\pm a}(z)=\sum_{k\in\mathbb{Z}}\theta_{k}^{\pm a}z^{-k-1}\) and \(\Theta^{j}(z)=\sum_{k\in\frac{1}{2}+\mathbb{Z}}\theta_{k}^{j}z^{-k-\frac{1}{2}}\) satisfy the following (anti-)commutation relations_ \[\theta_{k}^{\pm a}\theta_{\ell}^{\pm b}-\theta_{\ell}^{\pm b} \theta_{k}^{\pm a}=\theta_{k}^{\pm a}\theta_{\ell}^{j}-\theta_{\ell}^{j} \theta_{k}^{\pm a}=0,\] \[\theta_{k}^{+a}\theta_{\ell}^{-b}-(-1)^{\delta_{ab}}\theta_{\ell }^{-b}\theta_{k}^{+a}=\delta_{ab}\delta_{k,-\ell}k,\] \[\theta_{k}^{i}\theta_{\ell}^{j}-(-1)^{\delta_{ij}}\theta_{\ell}^{ j}\theta_{k}^{i}=(-1)^{k-\frac{1}{2}}\delta_{ij}\delta_{k,-\ell}2k.\] **Remark 3.8**: _In [1] I. Anguelova bosonizes the fields \(\Theta^{\pm a}(z)\) further, following an idea of [10]. 
We will not do that here, but continue to work with the fields \(\Theta^{\pm a}(z)\) and find a realization for them, more or less as in [14]._ The Weyl module \(F\) splits in two irreducible \(c_{\infty}\) modules \(F^{\overline{\epsilon}}\), \(\overline{\epsilon}\in\mathbb{Z}_{2}\) The \(q\)-dimension formula can be most conveniently described by using the variable \(x\) such that \(x^{2}=1\). We have \(s\) (resp. \(r\)) charged (resp. neutral) symplectic bosonic fields. Therefore we combine \(s\) copies of (3.56) and \(r\) copies copies of (1.51), which gives \[\begin{split}\dim_{q}F^{\overline{0}}+xF^{\overline{1}}& =\prod_{j=1}^{\infty}\frac{1}{(1-xq^{j-\frac{1}{2}})^{r+2s}}\\ &=\left(\sum_{j,k=0}^{\infty}\frac{q^{\frac{k}{2}}x^{k}}{(1-q) \cdots(1-q^{k})}q^{jk}\frac{q^{\frac{i}{2}}x^{j}}{(1-q)\cdots(1-q^{j})}\right) ^{s}\times\\ &\qquad\prod_{j=1}^{\infty}\frac{(1+xq^{j-\frac{1}{2}})^{r}}{(1- q^{j})^{s}(1-q^{2j-1})^{r}}.\end{split} \tag{3.60}\] ### The bosonic realization of \(F_{x}\) #### 3.7.1 The bosonic realization of \(F_{x}\), the Clifford algebra case In Subsection 3.6.1 we have found bosonizations of the multicomponent fermionic fields that generate the Clifford algebra \(C\ell_{x}\) for \(x=a,b,d\) and described the spin module \(F_{x}\). The description was based on the Heisenberg algebra and some additional operators \(Q_{a}\) and \(R_{b}\). In this subsection we will give the corresponding bosonic description of \(F_{x}\), using the Fock space representation of the Heisenberg Lie algebra: \[\alpha_{k}^{j}=\frac{\partial}{\partial t_{k}^{(j)}},\quad\alpha_{-k}^{j}=kt_ {k}^{(j)},\quad k\in\mathbb{Z}_{>0},\ 1\leq j\leq s, \tag{3.61}\] for the charged fields and \[\alpha_{k}^{j}=\frac{\partial}{\partial t_{k}^{(j)}},\quad\alpha_{-k}^{j}= \frac{k}{2}t_{k}^{(j)},\quad\ k\in\mathbb{Z}_{odd>0},\quad s<j\leq r+s, \tag{3.62}\] for the neutral fields. For \(x=a\), we let \[B_{s}=\mathbb{C}[q_{j},q_{j}^{-1},t_{1}^{(j)},t_{2}^{(j)},\ldots|\,1\leq j \leq s].\] Recalling the operators \(Q_{j}\) on \(F_{a}\), defined by (3.34), we define an isomorphism \(\sigma:F_{a}\to B_{s}\) by the following properties \[\sigma(Q_{1}^{k_{1}}Q_{2}^{k_{2}}\cdots Q_{s}^{k_{s}}|0\rangle)=q_{1}^{k_{1}} q_{2}^{k_{2}}\cdots q_{s}^{k_{s}}, \tag{3.63}\] and \[\sigma r(\alpha_{0}^{j})\sigma^{-1}=q_{j}\frac{\partial}{\partial q_{j}}, \quad\sigma r(\alpha_{k}^{j})\sigma^{-1}=\frac{\partial}{\partial t_{k}^{(j) }},\quad\sigma r(\alpha_{-k}^{j})\sigma^{-1}=kt_{k}^{(j)},\quad k>0, \tag{3.64}\] see [22] or [15], for more details. One also has \[\sigma Q_{j}\sigma^{-1}=(-1)^{\sum_{k=1}^{j-1}q_{k}\frac{\partial}{\partial q_{k}}} q_{j}. \tag{3.65}\] Thus substituting (3.64) in (3.33), we obtain (1.32). 
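As a small sanity check of the assignment (3.61)-(3.62), the following sketch (our own illustration; the variable names `t1,...,t5` are placeholders for the \(t_{k}^{(j)}\) of a single component, and we ignore the overall sign convention for \(\alpha_{k}\)) verifies on polynomials that the operators \(\partial/\partial t_{k}^{(j)}\) and \(kt_{k}^{(j)}\) (resp. \(\tfrac{k}{2}t_{k}^{(j)}\) on the odd variables) satisfy the expected Heisenberg commutation relations.

```python
import sympy as sp

# Heisenberg realization, cf. (3.61)-(3.62):
#   alpha_k -> d/dt_k,  alpha_{-k} -> k * t_k        (charged component, k > 0),
#   alpha_k -> d/dt_k,  alpha_{-k} -> (k/2) * t_k    (neutral component, k > 0 odd).
t = sp.symbols('t1:6')

def alpha(k, charged=True):
    """alpha_k as an operator on polynomials in t1,...,t5 (single component only)."""
    if k > 0:
        return lambda f: sp.diff(f, t[k - 1])
    m = -k
    c = sp.Integer(m) if charged else sp.Rational(m, 2)
    return lambda f: c * t[m - 1] * f

def comm(A, B, f):
    return sp.expand(A(B(f)) - B(A(f)))

f = (1 + t[0]**3) * t[1] * t[2]**2        # an arbitrary test polynomial

for k in range(1, 5):                      # charged case: [alpha_k, alpha_{-l}] = k delta_{kl}
    for l in range(1, 5):
        assert sp.expand(comm(alpha(k), alpha(-l), f) - (k if k == l else 0) * f) == 0

for k in (1, 3):                           # neutral case (odd modes): [alpha_k, alpha_{-l}] = (k/2) delta_{kl}
    for l in (1, 3):
        expected = sp.Rational(k, 2) if k == l else 0
        assert sp.expand(comm(alpha(k, False), alpha(-l, False), f) - expected * f) == 0

print("Heisenberg commutation relations hold on the test polynomial.")
```

Distinct components act on disjoint sets of variables and therefore commute, so only a single component needs to be tested.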
It is straightforward to check (see [22] or [15]) that for \(1\leq a,b\leq s\), with \(a\neq b\): \[\begin{split}\sigma E^{+a,-b}(w,z)\sigma^{-1}=&\sigma :\psi^{+a}(w)\psi^{-b}(z):\sigma^{-1}\\ =& q_{a}(-1)^{\sum_{\ell=1}^{a-1}q_{\ell}\frac{ \partial}{\partial q_{\ell}}}q_{b}^{-1}(-1)^{\sum_{\ell=1}^{b-1}q_{\ell}\frac {\partial}{\partial q_{\ell}}}w^{q_{a}\frac{\partial}{\partial q_{a}}}z^{-q_{ b}\frac{\partial}{\partial q_{b}}}e^{z\cdot t^{(a)}-w\cdot t^{(b)}}e^{w^{-1}\cdot\tilde{ \partial}_{t^{(b)}}-z^{-1}\tilde{\partial}_{t^{(a)}}},\\ \sigma E^{+a,-a}(w,z)\sigma^{-1}=&\sigma:\psi^{+a }(w)\psi^{-a}(z):\sigma^{-1}\\ =& i_{w,z}\frac{1}{w-z}\big{[}\left(\frac{w}{z} \right)^{q_{a}\frac{\partial}{\partial q_{a}}}e^{z\cdot t^{(a)}-w\cdot t^{(a)} }e^{w^{-1}\cdot\tilde{\partial}_{t^{(a)}}-z^{-1}\cdot\tilde{\partial}_{t^{(a) }}}-1\big{]}.\end{split} \tag{3.66}\] For \(x=b\) or \(d\), for which there is in general a combination of \(r\) neutral and \(s\) charged fields, we have to extend the Fock space \(B_{s}\) by \(p\) variables \(\theta_{i}\), \(1\leq i\leq p\), for \(r=2p\) or \(r=2p+1\), in the \(b\)-case, and \(r=2p\) or \(r=2p-1\) in the \(d\)-case, which anticommute: \(\theta_{i}\theta_{j}=-\theta_{j}\theta_{i}\). So for \(x=b\) or \(d\) we let \[B_{s,r}=\mathbb{C}[\theta_{i},q_{j},q_{j}^{-1},t^{(\ell)}|\,1\leq i\leq p,\ 1 \leq j\leq s,\ 1\leq\ell\leq s+r], \tag{3.67}\] where \(t^{(\ell)}=(t_{1}^{(\ell)},t_{2}^{(\ell)},t_{3}^{(\ell)},\ldots)\) if \(1\leq\ell\leq s\) and \(t^{(\ell)}=(t_{1}^{(\ell)},t_{3}^{(\ell)},t_{5}^{(\ell)},\ldots)\) if \(\ell>s\). We define a vector space isomorphism \(\sigma:F_{x}\to B_{s,r}\) by \[\sigma(Q_{1}^{a_{1}}\cdots Q_{s}^{a_{s}}R_{1}^{b_{1}}\cdots R_{\tilde{r}}^{b_{ \tilde{r}}}|0)=q_{1}^{a_{1}}q_{2}^{a_{2}}\cdots q_{s}^{a_{s}}\left(R_{1}(\theta )\right)^{b_{1}}\left(R_{2}(\theta)\right)^{b_{2}}\cdots\left(R_{\tilde{r}}( \theta)\right)^{b_{\tilde{r}}}1,\] where \(\tilde{r}=r\) in the \(b\)-case and \(\tilde{r}=r\) if \(r\) is even and \(\tilde{r}=r+1\) if \(r\) is odd in the \(d\)-case, and \[R_{2j-1}(\theta):=\frac{\frac{\partial}{\partial\theta_{j}}+\theta_{j}}{\sqrt{ 2}},\quad R_{2j}(\theta):=\sqrt{-1}\frac{\frac{\partial}{\partial\theta_{j}}- \theta_{j}}{\sqrt{2}},\quad 1\leq j\leq p. \tag{3.68}\] Then \(\sigma r_{x}(\alpha_{k}^{j})\sigma^{-1}\), for \(j\leq s\), can be realized as in (3.61), and for \(j>s\) we set: \[\sigma r_{x}(\alpha_{k}^{j})\sigma^{-1}=\frac{\partial}{\partial t_{k}^{(j)}},\quad\sigma r(\alpha_{-k}^{j})\sigma^{-1}=\frac{k}{2}t_{k}^{(j)},\quad j>s,\ k>0. \tag{3.69}\] The action of \(Q_{j}\) is is given by (3.65). The action of \(R_{j}\) for \(1\leq j\leq 2p\) is given by \[\sigma R_{j}\sigma^{-1}=(-1)^{\sum_{k=1}^{s}q_{k}\frac{\partial}{\partial q_{k }}}R_{j}(\theta), \tag{3.70}\] where \(R_{j}(\theta)\) is given by (3.68). For \(b_{\infty}\) and \(r\) odd, we also have \[R_{r}(\theta)=(-1)^{\sum_{j=1}^{p}\theta_{j}\frac{\partial}{\partial\theta_{j}}}\] Thus, as before, \(\sigma\psi^{\pm j}(z)\sigma^{-1}\), for \(1\leq j\leq r\), is given by (1.32). For \(1\leq j\leq r\) (see e.g. [16] or [25]) we have that \(\sigma\tilde{\phi}^{s+j}(z)\sigma^{-1}\) is given by \[\sigma\tilde{\phi}^{s+b}(z)\sigma^{-1}_{s,r}=(-1)^{\sum_{j=1}^{s}q_{j}\frac{ \partial}{\partial q_{j}}}R_{b}(\theta)e^{z\circ t^{(s+b)}}e^{-2(z^{-1}\circ \tilde{\partial}_{t^{(s+b)}})}. \tag{3.71}\] Finally, if \(r=2p\) is even for \(b_{\infty}\) or \(r=2p-1\) is odd for \(d_{\infty}\), there is also an operator \(\sigma_{0}\). 
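With the usual sign convention for left derivatives on anticommuting variables, the operators \(R_{j}(\theta)\) of (3.68) pairwise anticommute and square to \(\tfrac{1}{2}\), i.e. \(R_{a}(\theta)R_{b}(\theta)+R_{b}(\theta)R_{a}(\theta)=\delta_{ab}\) on \(\mathbb{C}[\theta_{1},\ldots,\theta_{p}]\). The short script below (an illustration of ours, for \(p=2\); the implementation details are not part of the construction) checks these relations on the whole Grassmann algebra.

```python
import itertools

P = 2                                   # number of anticommuting variables theta_1,...,theta_P
BASIS = [c for k in range(P + 1) for c in itertools.combinations(range(1, P + 1), k)]

def add(u, v):
    out = dict(u)
    for m, c in v.items():
        out[m] = out.get(m, 0) + c
    return out

def mult(j, v):
    """Left multiplication by theta_j; monomials are sorted tuples of indices."""
    out = {}
    for mono, c in v.items():
        if j in mono:
            continue                    # theta_j^2 = 0
        sign = (-1) ** sum(i < j for i in mono)
        key = tuple(sorted(mono + (j,)))
        out[key] = out.get(key, 0) + sign * c
    return out

def der(j, v):
    """Left derivative d/dtheta_j (an odd derivation, hence the sign)."""
    out = {}
    for mono, c in v.items():
        if j not in mono:
            continue
        sign = (-1) ** sum(i < j for i in mono)
        key = tuple(i for i in mono if i != j)
        out[key] = out.get(key, 0) + sign * c
    return out

def R(a):
    """R_{2j-1} = (d/dtheta_j + theta_j)/sqrt(2), R_{2j} = i(d/dtheta_j - theta_j)/sqrt(2), cf. (3.68)."""
    j, odd = (a + 1) // 2, a % 2 == 1
    pref, s = (1 if odd else 1j) / 2 ** 0.5, (1 if odd else -1)
    return lambda v: {m: pref * c for m, c in
                      add(der(j, v), {m: s * c for m, c in mult(j, v).items()}).items()}

for a in range(1, 2 * P + 1):           # check {R_a, R_b} = delta_ab on every basis monomial
    for b in range(1, 2 * P + 1):
        for mono in BASIS:
            res = add(R(a)(R(b)({mono: 1.0})), R(b)(R(a)({mono: 1.0})))
            target = {mono: 1.0} if a == b else {}
            assert all(abs(res.get(m, 0) - target.get(m, 0)) < 1e-12 for m in set(res) | set(target))

print("R_a(theta) R_b(theta) + R_b(theta) R_a(theta) = delta_ab on C[theta_1, theta_2].")
```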
One has \[\sigma\sigma_{0}\sigma^{-1}=S(\theta)=\begin{cases}\frac{1}{\sqrt{2}}(-1)^{\sum_ {k=1}^{s}q_{k}\frac{\partial}{\partial q_{k}}+\sum_{j=1}^{p}\theta_{j}\frac{ \partial}{\partial\theta_{j}}},&x=b,\\ \sqrt{-1}(-1)^{\sum_{k=1}^{s}q_{k}\frac{\partial}{\partial q_{k}}\frac{ \partial}{\partial\theta_{p}}-\theta_{p}},&x=d.\end{cases} \tag{3.72}\] Using the results of Subsection 3.6.1 we obtain a representation of \(b_{\infty}\) or \(d_{\infty}\) on the Fock space \(B_{r,s}\). It is straightforward to check that for \(1\leq a,b\leq s\), the equations (3.66) still hold. Let \(a,d\leq s\) and \(1\leq b,c\leq r\), then \[\sigma E^{\pm a,\pm d}(w,z)\sigma^{-1}=\sigma:\psi^{\pm a}(w)\psi ^{\pm d}(z):\sigma^{-1}=\] \[\qquad=(w-z)^{\delta_{ad}}q_{a}^{\pm 1}(-1)^{\sum_{\ell=1}^{a-1} q_{\ell}\frac{\partial}{\partial q_{\ell}}}q_{d}^{\pm 1}(-1)^{\sum_{\ell=1}^{d-1} q_{\ell}\frac{\partial}{\partial q_{\ell}}}w^{\pm q_{a}\frac{\partial}{ \partial q_{a}}}z^{-\pm q_{d}\frac{\partial}{\partial q_{d}}}\times\] \[\qquad\qquad e^{\pm(z\cdot t^{(a)}+w\cdot t^{(b)})}e^{\mp(z^{-1} \cdot\tilde{Q}_{t^{(a)}}+w^{-1}\cdot\tilde{Q}_{t^{(b)}})},\] \[\sigma E^{s+b,s+c}(w,z)\sigma^{-1}=\sigma:\tilde{\phi}^{s+b}(w) \tilde{\phi}^{s+c}(z):\sigma^{-1}\] \[\qquad=i_{w,z}\left(\frac{w-z}{w+z}\right)^{\delta_{bc}}\big{[}R_ {b}(\theta)R_{c}(\theta)e^{(z\circ t^{(s+b)}+w\circ t^{(s+c)})}e^{-2(z^{-1} \circ\tilde{Q}_{t^{(s+b)}}+w^{-1}\circ\tilde{Q}_{t^{(s+c)}})}-\delta_{bc}\big{]},\] \[\sigma E^{\pm a,s+b}(w,z)\sigma^{-1}=\sigma:\psi^{\pm a}(w)\tilde {\phi}^{s+b}(z):\sigma^{-1}=\] \[\qquad=q_{a}(-1)^{\sum_{i=a}^{s}q_{i}\frac{\partial}{\partial q_{ i}}}w^{\pm q_{a}\frac{\partial}{\partial q_{a}}}R_{b}(\theta)e^{\pm z\cdot t^{(a)} +w\circ t^{(s+b)}}e^{\mp z^{-1}\cdot\tilde{Q}_{t^{(a)}}-2(w^{-1}\circ\tilde{Q} _{t^{(s+b)}})}. \tag{3.73}\] If \(\sigma_{0}\) is present, we also have \[\sigma E^{\pm a}(z)\sigma^{-1}=\sigma:\psi^{\pm a}(z)\sigma_{0}: \sigma^{-1}=\ q_{a}^{\pm 1}(-1)^{\sum_{\ell=1}^{a-1}q_{\ell}\frac{\partial}{ \partial q_{\ell}}}S(\theta)z^{\pm q_{a}\frac{\partial}{\partial q_{a}}}e^{ \pm z\cdot t^{(a)}}e^{\mp z^{-1}\cdot\tilde{Q}_{t^{(a)}}},\] \[\sigma E^{s+b}(z)\sigma^{-1}=\sigma:\tilde{\phi}^{s+b}(z)\sigma_ {0}:\sigma^{-1}=R_{b}(\theta)S(\theta)e^{z\circ t^{(s+b)}}e^{-2(z^{-1}\circ \tilde{Q}_{t^{(s+b)}})}, \tag{3.74}\] where \(S(\theta)\) is given by (3.72). #### 3.7.2 The bosonic realization of \(F_{c}\), the Weyl algebra case We use the results of Subsection 3.6.2 to obtain a bosonic realization of \(F_{c}\). Let \[C_{s,r}= \mathbb{C}[t_{i}^{(a)},t_{j}^{(c)},\xi_{k}^{a},\xi_{\ell}^{c}|\,1 \leq a\leq s<c\leq s+r;,i\in\mathbb{Z}_{>0},\,j\in\mathbb{Z}_{odd>0},\,k\in \mathbb{Z}_{\neq 0},\,\ell\in\frac{1}{2}+\mathbb{Z}_{\geq 0}], \tag{3.75}\] where \(\xi_{k}^{a}\) are anticommuting variables, and define the vector space isomorphism \(\sigma:F_{c}\to C_{s,r}\) as follows. 
Again we use the standard realization of the Heisenberg algebra (\(1\leq a\leq s<c\leq s+r\), \(i\in\mathbb{Z}_{>0}\) and \(j\in\mathbb{Z}_{odd>0}\)) \[\sigma\alpha_{-i}^{a}\sigma^{-1}=it_{i}^{(a)},\quad\sigma\alpha_{i}^{a} \sigma^{-1}=-\frac{\partial}{\partial t_{i}^{(a)}},\] \[\sigma\alpha_{-j}^{c}\sigma^{-1}=\frac{1}{2}jt_{j}^{(c)},\quad \sigma\alpha_{j}^{c}\sigma^{-1}=-\frac{\partial}{\partial t_{j}^{(c)}}.\] We realize the modes of the fields \(\Theta^{\pm a}(z)=\sum_{j\in\mathbb{Z}}\Theta^{\pm a}_{j}z^{-j-1}=Q^{\mp a}_{a} \theta^{\pm a}(z)\) for \(1\leq a\leq s\) and the modes of \(\Theta^{c}(z)\) for \(s<c\leq r\), as multiplication and differentiation of Grassmann variables, viz. for \(a=1,\ldots,s\), \[\sigma\Theta^{\pm a}(z)\sigma^{-1}=\Xi^{\pm a}(z)S_{a},\qquad\sigma\Theta^{c}(z) \sigma^{-1}=\Xi^{c}(z)S_{c},\] where (\(a\leq s<c\)) \[\Xi^{+a}(z) =\frac{\partial}{\partial\xi_{0}^{a}}z^{-1}+\sum_{k=1}^{\infty}-k \xi_{k}^{a}z^{k-1}+\sum_{k=1}^{\infty}\frac{\partial}{\partial\xi_{-k}^{a}}z^{- k-1},\] \[\Xi^{-a}(z) =-\frac{\partial}{\partial\xi_{0}^{a}}z^{-1}+\sum_{k=1}^{\infty} k\xi_{-k}^{a}z^{k-1}+\sum_{k=1}^{\infty}\frac{\partial}{\partial\xi_{k}^{a}}z^{- k-1}, \tag{3.76}\] \[\Xi^{c}(z) =\sum_{i\in\frac{1}{2}+\mathbb{Z}_{\geq 0}}\left((-1)^{i+ \frac{1}{2}}2i\xi_{i}^{c}z^{i-\frac{1}{2}}+\frac{\partial}{\partial\xi_{i}^{c }}z^{-i-\frac{1}{2}}\right).\] Here the operator \(S_{a}\) acts as \[S_{a}(1)=1,\qquad S_{a}\xi_{k}^{b}S_{a}^{-1}=\begin{cases}-\xi_{k}^{b}&\text{ if }a>b,\\ \xi_{k}^{b}&\text{ if }a\leq b,\end{cases},\qquad S_{a}\frac{\partial}{ \partial\xi_{k}^{b}}S_{a}^{-1}=\begin{cases}-\frac{\partial}{\partial\xi_{k}^ {b}}&\text{ if }a>b,\\ \frac{\partial}{\partial\xi_{k}^{b}}&\text{ if }a\leq b.\end{cases}\] Hence, for the modes we find that \[\sigma\Theta_{0}^{\pm a}\sigma^{-1} =\pm\frac{\partial}{\partial\xi_{\mp j}^{a}}S_{a},\quad\sigma \Theta_{-j}^{\pm a}\sigma^{-1}=\mp j\xi_{\pm j}^{a}S_{a},\quad\sigma\Theta_{j} ^{\pm a}\sigma^{-1}=-\frac{\partial}{\partial\xi_{\mp j}^{a}}S_{a},\quad j\in \mathbb{Z}_{>0},\] \[\sigma\theta_{-k}^{c}\sigma^{-1} =(-1)^{k+\frac{1}{2}}2k\xi_{k}^{c}S_{c},\quad\sigma\theta_{k}^{c }\sigma^{-1}=\frac{\partial}{\partial\xi_{k}^{c}}S_{c},\quad k\in\frac{1}{2}+ \mathbb{Z}_{>0}\] We still need to define \(\sigma\alpha_{0}^{a}\sigma^{-1}\) for \(1\leq a\leq s\). Formula (3.49) suggest the following: \[\sigma\alpha_{0}^{a}\sigma^{-1}=T_{a}:=-\sum_{k=1}^{\infty}\xi_{k}^{a}\frac{ \partial}{\partial\xi_{k}^{a}}+\sum_{k=0}^{\infty}\xi_{-k}^{a}\frac{\partial}{ \partial\xi_{-k}^{a}}. \tag{3.77}\] **Remark 3.9**: _Note that here we use the fields \(\Theta^{\pm a}(z)\) instead of \(\theta^{\pm a}(z)\), which means that we do not consider the operators \(Q_{a}\). When using \(\theta^{\pm a}(z)\) instead of \(\Theta^{\pm a}(z)\), the operator \(Q_{a}\) appears in the field \(\theta^{\pm a}(z)\) and we can assume that \(\sigma Q_{a}\sigma^{-1}=q_{a}\), but then \(\sigma\alpha_{0}^{a}\sigma^{-1}=q_{a}\frac{\partial}{\partial q_{a}}\)._ Using all this and formulas (3.59), we obtain for \(1\leq a\leq s<c\leq r+s\): \[\sigma b^{\pm a}(z)\sigma^{-1} =e^{\pm z\cdot t^{(a)}}\Xi^{+a}(z)z^{\pm T_{a}}e^{\pm z\cdot \tilde{\partial}_{t^{(a)}}}S_{a}, \tag{3.78}\] \[\sigma b^{c}(z)\sigma^{-1} =e^{z\circ t^{(c)}}\Xi^{c}(z)e^{2(z\circ\tilde{\partial}_{t^{(c) }})}S_{c}.\] **Remark 3.10**: _In this way we always want to present the Grassmann variables in a prefered order, namely the lexicographic ordering, i.e. 
\(\xi_{i}^{k}<_{\rm lex}\xi_{j}^{\ell}\), when \(k<\ell\), or when \(k=\ell\) and \(i<j\). Hence we always write a monomial in the following ordered way._ \[\xi_{i_{1}}^{k_{1}}\xi_{i_{2}}^{k_{2}}\cdots\xi_{i_{m}}^{k_{m}},\quad\text{ with }(i_{1},k_{1})<_{\rm lex}(i_{2},k_{2})<_{\rm lex}\cdots<_{\rm lex}(i_{m},k_{m}).\] We use the vertex operators (3.78) to derive for \(1\leq a\leq d\leq s\) \[\sigma C^{\pm a,\pm d}(z,w)\sigma^{-1}=\sigma:b^{\pm a}(z)b^{\pm d}(w): \sigma^{-1} \tag{3.79}\] \[=i_{z,w}\left(\frac{1}{z-w}\right)^{\delta_{ad}}e^{\pm(z\cdot t^{(a )}+w\cdot t^{(d)}}\Xi^{\pm a}(z)\Xi^{\pm d}(w)z^{\pm T_{a}}w^{\pm T_{d}}e^{\pm(z ^{-1}\cdot\tilde{\partial}_{t^{(a)}}+w^{-1}\cdot\tilde{\partial}_{t^{(d)}})}S_ {a}S_{d},\] (3.80) \[\sigma C^{+a,-d}(z,w)\sigma^{-1}=\sigma:b^{+a}(z)b^{-d}(w):\sigma^ {-1}\] (3.81) \[=(z-w)^{\delta_{ad}}\,e^{z\cdot t^{(a)}-w\cdot t^{(d)}}\Xi^{+a}(z )\Xi^{-d}(w)z^{T_{a}}w^{-T_{d}}e^{(z^{-1}\cdot\tilde{\partial}_{t^{(a)}}-w^{- 1}\cdot\tilde{\partial}_{t^{(d)}}}S_{a}S_{d}-i_{z,w}\left(\frac{\delta_{ad}}{ z-w}\right), \tag{3.82}\] and for \(s<a\leq c\leq r+s\), we obtain \[\sigma C^{a,c}(z,w):\sigma^{-1}=\sigma:b^{a}(z)b^{c}(w):\sigma^{-1} \tag{3.83}\] \[=i_{z,w}\left(\frac{z+w}{w-z}\right)^{\delta_{ac}}\Xi^{a}(z)\Xi^ {b}(w)e^{z\circ t^{(a)}+w\circ t^{(c)}}e^{2(z^{-1}\circ\tilde{\partial}_{t^{(a )}}+w^{-1}\circ\tilde{\partial}_{t^{(c)}})}S_{a}S_{c}-i_{z,w}\frac{\delta_{ac }}{z+w}. \tag{3.84}\] Finally, the mixed terms, when \(1\leq a\leq r<c\leq r+s\), are \[\sigma C^{\pm a,c}(z,w):\sigma^{-1} =\sigma:b^{\pm a}(z)b^{c}(w):\sigma^{-1} \tag{3.85}\] \[=e^{\pm z\cdot t^{(a)}+w\circ t^{(c)}}\Xi^{+a}(z)\Xi^{c}(w)z^{\pm T _{a}}e^{\pm z^{-1}\cdot\tilde{\partial}_{t^{(a)}}+w^{-1}\circ\tilde{\partial}_ {t^{(c)}})}S_{a}S_{c}.\] ## 4 Hierarchies of KP type ### The fermionic description of the KP, BKP and DKP hierarchies **The KP hierarchy.** Recall the charge decomposition (3.3) of the module \(F\). By definition, a non-zero element \(\tau\in F^{(0)}\), satisfies the KP hierarchy if (cf. (1.5)) \[S(\tau\otimes\tau)=\sum_{i\in\mathbb{Z}+\frac{1}{2}}\psi_{i}^{+}\tau\otimes \psi_{-i}^{-}\tau=0. \tag{4.1}\] Note that (4.1) can be rewritten in terms of \(s\) pairs of charged free fermionic fields \(\psi^{\pm j}(z)\), defined by (3.17), as follows.: \[S(\tau\otimes\tau)=\sum_{j=1}^{s}\sum_{i\in\mathbb{Z}+\frac{1}{2}}\psi_{i}^{+ j}\tau\otimes\psi_{-i}^{-j}\tau=\text{Res}_{z=0}\sum_{j=1}^{s}\psi^{+j}(z) \tau\otimes\psi^{-j}(z)\tau dz. \tag{4.2}\] Equation (4.1) or (4.2) is the KP hierarchy in the fermionic picture. It becomes a hierarchy of differential equations if we apply the isomorphism \(\sigma\), constructed in section 3.7.1. Taking \(\tau\in F^{(m)}\) produces the same hierarchy. **The BKP hierarchy.** Recall from Section 3.2 that the spin module \(F_{b}\) is an irreducible \(C\ell_{b}\)-module \(F_{b}\). The following equation on a non-zero \(\tau\in F_{b}\) is called the BKP hierarchy in the fermionic picture (cf. (1.7)): \[S_{B}(\tau\otimes\tau)=\sum_{i\in\mathbb{Z}}\tilde{\phi}_{i}\tau\otimes\tilde{ \phi}_{-i}\tau=\frac{1}{2}\tau\otimes\tau. \tag{4.3}\] This equation describes the \(B_{\infty}\)-orbit of the vacuum vector \(|0\rangle\), see [16] for more details. 
Equation (4.3) can be rewritten in terms of \(s\) pairs of charged free fermionic fields \(\psi^{\pm b}(z)\), \(b=1,\ldots,s\), and \(r\) neutral twisted free fermionic fields \(\tilde{\phi}^{c}(z)\), \(c=s+1,\ldots,s+r\), as follows If \(r=2p+1\) is odd, then (4.3) becomes \[\begin{split}\operatorname{Res}_{z=0}\,\left(\sum_{b=1}^{s}\left( \psi^{+b}(z)\otimes\psi^{-b}(z)+\psi^{-b}(z)\otimes\psi^{+b}(z)\right)+ \right.\\ \left.\qquad\qquad+\sum_{c=s+1}^{s+2p+1}z^{-1}\tilde{\phi}^{c}(z) \otimes\tilde{\phi}^{c}(-z)\right)(\tau\otimes\tau)dz=\frac{1}{2}\tau\otimes \tau.\end{split} \tag{4.4}\] However, if \(r=2p\) is even, we have the extra generator \(\sigma_{0}\) so that (4.3) turns into: \[\begin{split}\big{\{}\sigma_{0}\otimes\sigma_{0}+\operatorname{ Res}_{z=0}\,\left(\sum_{b=1}^{s}\left(\psi^{+b}(z)\otimes\psi^{-b}(z)+\psi^{-b}(z) \otimes\psi^{+b}(z)\right)+\right.\\ \left.\qquad\qquad\qquad+\sum_{c=s+1}^{s+2p}z^{-1}\tilde{\phi}^{ c}(z)\otimes\tilde{\phi}^{c}(-z)\right)dz\big{\}}(\tau\otimes\tau)=\frac{1}{2} \tau\otimes\tau.\end{split} \tag{4.5}\] **The DKP hierarchy** Recall from Section 3.2 that the spin module \(F_{d}\) splits into two irreducible \(d_{\infty}\)-modules \(F_{d}^{\overline{0}}\oplus F_{d}^{\overline{1}}\),. The highest weight vector of \(F_{d}^{\overline{0}}\) is \(|\overline{0}\rangle=|0\rangle\) and of \(F_{d}^{\overline{1}}\) is \(|\overline{1}\rangle=\sqrt{2}\phi_{-\frac{1}{2}}|0\rangle\). The following equation on the non-zero \(\tau_{a}\in F_{d}^{a}\) is called DKP hierarchy in the fermionic picture: \[S_{D}(\tau_{a}\otimes\tau_{a})=\sum_{i\in\frac{1}{2}+\mathbb{Z}}\phi_{i}\tau_ {a}\otimes\phi_{-i}\tau_{a}=0,\quad a=\overline{0}\text{ or }\overline{1}. \tag{4.6}\] This equation describes the \(D_{\infty}\)-orbit of the highest weight vector \(|\overline{a}\rangle\), see [16] for more details. This group consists of (even) invertible elements \(g\) such that \(g\phi_{j}g^{-1}=\sum_{i}a_{ij}\phi_{i}\) where \(\sum_{k}a_{kj}a_{-k,j}=\delta_{i,-j}\), i.e. that \(g\) leaves the bilinear form \((\cdot,\cdot)_{d}\) of (2.19) invariant. Equation (4.6) can be expressed in terms of \(s\) pairs of charged free fermionic fields \(\psi^{\pm b}(z)\) and \(r\) neutral twisted fields \(\tilde{\phi}^{c}(z)\), introduced in Section 3.5. If \(r=2p\) is even, then (4.6) becomes \[\begin{split}\operatorname{Res}_{z=0}\,\left(\sum_{b=1}^{s}\left( \psi^{+b}(z)\otimes\psi^{-b}(z)+\psi^{-b}(z)\otimes\psi^{+b}(z)\right)+ \right.\\ \left.\qquad\qquad+\left.\sum_{c=s+1}^{s+2p}z^{-1}\tilde{\phi}^{ c}(z)\otimes\tilde{\phi}^{c}(-z)\right)(\tau_{a}\otimes\tau_{a})dz=0.\end{split} \tag{4.7}\] However, if \(r=2p-1\) is odd, there is an extra generator \(\sigma_{0}\). Then (4.6) turns into: \[\begin{split}\big{\{}\sigma_{0}\otimes\sigma_{0}+\text{Res}_{z=0} \left(\sum_{b=1}^{s}\left(\psi^{+b}(z)\otimes\psi^{-b}(z)+\psi^{-b}(z)\otimes \psi^{+b}(z)\right)+\right.\\ +\left.\sum_{c=s+1}^{s+2p-1}z^{-1}\tilde{\phi}^{c}(z)\otimes\tilde{ \phi}^{c}(-z)\right)dz\big{\}}(\tau_{a}\otimes\tau_{a})=0.\end{split} \tag{4.8}\] ### The symplectic bosons description of the CKP hierarchy Recall from Section 3.3 that the Weyl module \(F_{c}\) splits into two irreducible \(c_{\infty}\)-modules \(F_{c}^{\overline{0}}\) and \(F_{c}^{\overline{1}}\) with highest weight vectors \(|\overline{0}\rangle=|0\rangle\), respectively \(|\overline{1}\rangle=b_{-\frac{1}{2}}|0\rangle\). 
The CKP hierarchy (see [26]) in the fermionic picture is the following equation on \(\tau\in F_{c}^{\overline{0}}\): \[S_{C}(\tau\otimes\tau)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}b_ {i}\tau\otimes b_{-i}\tau=0. \tag{4.9}\] Equation (4.9) can be rewritten in terms of \(s\) charged fields \(b^{\pm a}(z)\) and \(r\) neutral fields \(b^{c}(z)\).: \[\sum_{a=1}^{s}\sum_{i\in\frac{1}{2}+\mathbb{Z}}\left(b_{i}^{-a}\tau\otimes b_{ -i}^{+a}\tau-b_{i}^{+a}\tau\otimes b_{-i}^{-a}\tau\right)+\sum_{c=s+1}^{r+s} \sum_{i\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}b_{i}^{c}\tau\otimes b_{ -i}^{c}\tau=0. \tag{4.10}\] In terms of the symplectic bosonic fields, it becomes \[\text{Res}_{z=0}\big{\{}\sum_{a=1}^{s}\left(b^{-a}(z)\tau\otimes b^{+a}(z)\tau -b^{+a}(z)\tau\otimes b^{-a}(z)\tau\right)+\sum_{c=s+1}^{r+s}b^{c}(z)\tau \otimes b^{c}(-z)\tau\big{\}}dz=0. \tag{4.11}\] Unfortunately, taking \(\tau\in F_{c}^{\overline{1}}\), does not give the same hierarchy. ### Bosonic description of the multicomponent KP, BKP, CKP and DKP hierarchies **The KP hierarchy** Equation (4.1) or (4.2) is the KP hierarchy in the fermionic picture. It becomes a hierarchy of differential equations if we apply the isomorphism \(\sigma:F\xrightarrow{\sim}B_{s}\), constructed in (3.7.1) so that for \(\tau\in F^{(0)}\) \[\sigma(\tau)=\tau(q,t)=\sum_{|\underline{k}|=0}\tau_{\underline{k}}(t)q^{ \underline{k}}\] becomes a function in \(t=(t_{1}^{(i)},t_{2}^{(i)},\ldots)\), \(i=1,\ldots,s\), and in the \(sl_{s}\)-root lattice, with lattice points \(\underline{k}=(k_{1},k_{2},\ldots k_{s})\) with \(|\underline{k}|=k_{1}+k_{2}+\cdots+k_{s}=0\), where \(q^{\underline{k}}=q_{1}^{k_{1}}q_{2}^{k_{2}}\cdots q_{s}^{k_{s}}\). If \(s=1\) one obtains the classical KP-hierarchy. For \(s>1\) we obtain the \(s\)-component KP hierarchy. Equation (4.1) describes the \(GL_{\infty}\)-group orbit of \(|0\rangle\)[18]. Note that in the fermionic picture this equation is the same. It is only after we apply the isomorphism \(\sigma\) and \(\tau\in B_{s}^{(0)}\) that we obtain different descriptions for different choices of \(s\). Using the fermionic vertex operators (3.33) and (4.2) we rewrite (4.1) and obtain the \(s\)-component KP hierarchy. Let \(\underline{e}_{i}\) be the basis vector in \(\mathbb{Z}^{s}\), wich has a \(1\) on the \(i\)-th place and zeros elsewhere and let \(|\underline{k}|_{j}=k_{1}+k_{2}+\cdots+k_{j}\). Then, for each pair \(\underline{k},\underline{\ell}\) with \(|\underline{k}|=|\underline{\ell}|=0\) we have the \(s\)-component KP hierarchy of Hirota bilinear equations (1.34) on \(\tau_{\underline{k}}(t)\), \(|\underline{k}|=0\) (cf. (4.2)), describing the orbit \(GL_{\infty}|0\rangle\) in the bosonic picture. Taking \(\tau\in F^{(m)}\), we obtain the same equations on \(\tau_{\underline{k}}(t)\), \(|\underline{k}|=m\) for the \(GL_{\infty}\) orbit of \(|m\rangle\). **The BKP hierarchy** In this case \(F_{b}\) is irreducible and \(\sigma:F_{b}\xrightarrow{\sim}B_{s,r}\), where \(B_{s,r}\), \(r=2p\) or \(r=2p+1\), is given by (3.67) and \(\sigma\) constructed in (3.7.1). We write \[\sigma(\tau)=\tau(q,t,\theta)=\sum_{\underline{k}\in\mathbb{Z}^{s},\underline {\ell}\in\mathbb{Z}_{2}^{p}}\tau_{\underline{k},\underline{\ell}}(t)q^{ \underline{k}}\theta\underline{\ell},\] where \(q^{\underline{k}}=q_{1}^{k_{1}}q_{2}^{k_{2}}\cdots q_{s}^{k_{s}}\) and \(\theta\underline{\ell}=\theta_{1}^{\ell_{1}}\theta_{2}^{\ell_{2}}\cdots\theta _{p}^{\ell_{p}}\). 
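For orientation, the geometric content of the bilinear equation (4.1), namely that it cuts out the \(GL_{\infty}\)-orbit of \(|0\rangle\) through Plücker-type relations, can already be seen in a finite-dimensional toy model: in \(\Lambda^{2}\mathbb{C}^{4}\) the analogous contraction-insertion condition reduces to the single Plücker relation \(p_{12}p_{34}-p_{13}p_{24}+p_{14}p_{23}=0\), whose solutions are exactly the decomposable vectors \(a\wedge b\), i.e. the \(GL_{4}\)-orbit of \(e_{1}\wedge e_{2}\) (together with \(0\)). The following small script (a numerical illustration of ours, not part of the construction) checks this.

```python
import itertools, random

def wedge(a, b):
    """Pluecker coordinates p_{ij} = a_i b_j - a_j b_i of the decomposable vector a ^ b in Lambda^2(C^4)."""
    return {(i, j): a[i] * b[j] - a[j] * b[i] for i, j in itertools.combinations(range(4), 2)}

def plucker(p):
    """The single Pluecker relation for Lambda^2(C^4) (indices written 0,...,3)."""
    return p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]

random.seed(1)
a = [random.uniform(-1, 1) for _ in range(4)]
b = [random.uniform(-1, 1) for _ in range(4)]
print(abs(plucker(wedge(a, b))) < 1e-12)   # True: decomposable vectors satisfy the relation

q = {ij: random.uniform(-1, 1) for ij in itertools.combinations(range(4), 2)}
print(abs(plucker(q)) < 1e-12)             # False for a generic element of Lambda^2(C^4)
```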
Now using \(\sigma\) and substituting the vertex operators (1.32), (3.71) and if \(r=2p\), then use formula (3.72) for \(\sigma\sigma_{0}\sigma^{-1}\), we obtain: \[\begin{split}&\operatorname{Res}_{z=0}\bigl{(}\sum_{b=1}^{s} \big{(}(-1)^{|\underline{k}+\overline{k}|_{b-1}}z^{k_{b}-\overline{k}_{b}-2+ 2\delta_{bs}}e^{z\cdot(t^{(b)}-\overline{t}^{(b)})}e^{z^{-1}\cdot(\tilde{ \alpha}_{\underline{\ell}(b)}-\tilde{\beta}_{\ell(b)})}\tau_{\underline{k}+ \underline{e}_{a}-\underline{e}_{b},\underline{\ell}}(t)\tau_{\overline{k}+ \underline{e}_{b}-\underline{e}_{a},\underline{\ell}}(\overline{t})+\\ &+(-1)^{|\underline{k}+\overline{k}|_{b-1}}z^{-k_{b}+\overline{k} _{b}-2-2\delta_{bs}}e^{z\cdot(\overline{t}^{(b)}-t^{(b)})}e^{z^{-1}\cdot( \tilde{\theta}_{t}^{(b)}-\tilde{\alpha}_{\underline{\ell}(b)})}\tau_{ \underline{k}+\underline{e}_{a}+\underline{e}_{b},\underline{\ell}}(t)\tau_{ \overline{k}-\underline{e}_{b}-\underline{e}_{a},\underline{\ell}}(\overline{ t})\bigr{)}+\\ &+\frac{1}{2}\sum_{c=1}^{2p}(-1)^{|\underline{k}+\overline{k}|+ |\underline{\ell}+\overline{k}|}[\frac{c-1}{2}]^{+(c-1)(\ell[\frac{c+1}{2}]^{ +\overline{\ell}}[\frac{c+1}{2}]^{+1})}z^{-1}e^{zo(t^{(s+c)}-\overline{t}^{( s+c)})}\times\\ &\quad e^{2(z^{-1}\circ(\tilde{\alpha}_{\underline{\ell}(s+c)}- \tilde{\beta}_{\ell(s+c)}))}\tau_{\underline{k}+\underline{e}_{a},\underline{ \ell}+\underline{e}_{\underline{\ell}}[\frac{c+1}{2}]}(t)\tau_{\overline{k}- \underline{e}_{a},\underline{\ell}+\underline{e}_{\underline{\ell}}[\frac{c+1 }{2}]}(\overline{t})\\ &+\frac{\delta_{r,2p+1}}{2}(-1)^{|\underline{k}+\overline{k}|+| \underline{\ell}+\overline{k}|}z^{-1}e^{zo(t^{s+r})-\overline{t}^{(s+r)})}e^ {2(z^{-1}\circ(\tilde{\theta}_{\underline{\ell}(s+r)}-\tilde{\beta}_{\ell(s+ r)}))}\tau_{\underline{k}+\underline{e}_{a},\underline{\ell}}(t)\tau_{\overline{k}- \underline{e}_{a},\underline{\ell}}(\overline{t})\bigr{)}dz\\ =&\left(\frac{1}{2}-\frac{\delta_{r,2p}}{2}(-1)^{| \underline{k}+\overline{k}|+|\underline{\ell}+\overline{\ell}|}\right)\tau_{ \underline{k}+\underline{e}_{a},\underline{\ell}}(t)\tau_{\overline{k}- \underline{e}_{a},\underline{\ell}}(\overline{t}).\end{split} \tag{4.12}\] Here \([x]\) stands for the largest integer smaller or equal to \(x\). **The DKP hierachy** This case is similar to the BKP case, however in this case \(F_{d}\) splits into two irreducible \(d_{\infty}\) modules. 
Again we have two cases, viz \(r=2p\) and \(r=2p-1\), and we write \[\sigma(\tau_{a})=\tau^{a}(q,t,\theta)=\sum_{\underline{k}\in\mathbb{Z}^{s}, \underline{\ell}\in\mathbb{Z}_{2}^{p},\,|\underline{k}|+|\underline{\ell}|=a \operatorname{mod}2}\tau_{\underline{k},\underline{\ell}}(t)q^{\underline{k} }\theta\underline{\ell},\] Then the fermionic DKP hierarchy turns into the following set of equations for all \(\underline{k},\overline{\underline{k}}\in\mathbb{Z}^{s}\), \(\underline{\ell},\overline{\underline{\ell}}\in\mathbb{Z}_{2}^{p}\), such that \(|\underline{k}|+|\underline{\ell}|+1=a\mod 2\): \[\begin{split}\operatorname{Res}_{z=0}&\big{(}\sum_{ b=1}^{s}\big{(}(-1)^{|\underline{k}+\overline{\underline{k}}\rrbracket_{b-1}}z^{k_{b}- \overline{k}_{b}-2+2\delta_{bs}}e^{z\cdot(t^{(b)}-\overline{t}^{(b)})}e^{z-1 \cdot(\tilde{\Omega}_{t^{(b)}}-\tilde{\Omega}_{t^{(b)}})}\tau_{\underline{k}+ \underline{e}_{s}-\underline{e}_{s},\underline{\ell}}(t)\tau_{\overline{ \underline{k}}+\underline{e}_{b}-\underline{e}_{s},\underline{\ell}}(\overline {t})+\\ &+(-1)^{|\underline{k}+\overline{\underline{k}}\rrbracket_{b-1}}z^{- k_{b}+\overline{k}_{b}-2-2\delta_{bs}}e^{z\cdot(\overline{t}^{(b)}-t^{(b)})}e^{z-1 \cdot(\tilde{\Omega}_{t^{(b)}}-\tilde{\Omega}_{t^{(b)}})}\tau_{\underline{k}+ \underline{e}_{s}+\underline{e}_{b},\underline{\ell}}(t)\tau_{\overline{ \underline{k}}-\underline{e}_{s},\underline{\ell}}(\overline{t})\big{)}+\\ &+\frac{1}{2}\sum_{c=1}^{r}(-1)^{|\underline{k}+\overline{ \underline{k}}|+[\underline{\ell}+\overline{\underline{\ell}}]}\big{[}\frac{c- 1}{2}\big{]}^{+(c-1)(\ell\big{[}\frac{c+1}{2}\big{]}^{+\overline{\ell}}\big{[} \frac{c+1}{2}\big{]}^{+1})}z^{-1}e^{z\circ(t^{(s+c)}-\overline{t}^{(s+c)})} \times\\ &\qquad\qquad\qquad\qquad e^{2(z-1\circ(\tilde{\Omega}_{\overline {t}^{(s+c)}}-\tilde{\Omega}_{t^{(s+c)}}))}\tau_{\underline{k}+\underline{e}_{ s},\underline{\ell}+\underline{\underline{\ell}}}\big{[}\frac{c+1}{2}\big{]} ^{(t)}\tau_{\overline{\underline{k}}-\underline{e}_{s},\overline{\underline{ \ell}}+\underline{\underline{\ell}}_{c\left[\frac{c+1}{2}\right]}}(\overline {t})\big{)}dz\\ &=\frac{\delta_{r,2p-1}}{2}\tau_{\underline{k}+\underline{e}_{s}, \underline{\ell}+\underline{e}_{p}}(t)\tau_{\overline{\underline{k}}- \underline{e},\overline{\underline{\ell}}+\underline{e}_{p}}(\overline{t}). \end{split} \tag{4.13}\] In the above, we have used that \((-1)^{|\underline{k}+\overline{\underline{k}}|+|\underline{\ell}+\overline{ \underline{\ell}}|}=1\). **Remark 4.1**: _Note that the equations of DKP hierarchy for \(r=2p\) appear as a subset of the equations of the BKP hierarchy for the same \(r=2p\). Namely, one only takes those \(\underline{k}\), \(\underline{\ell}\), and \(\overline{\underline{k}}\), \(\overline{\underline{\ell}}\), for which \(|\underline{k}|+|\underline{\ell}|+1=|\overline{\underline{k}}|+|\overline{ \underline{\ell}}|+1=a\mod 2\). Since both \(a=\overline{0}\) and \(a=\overline{1}\), appear in the BKP hierarchy we also find, besides these two DKP hierarchies, the so-called modified DKP equations. 
See ([16], Section 2)._ **The CKP hierarchy** Now using the isomorphism \(\sigma\), see Subsection 3.7.2, we assume that \(\tau(t,\xi)\in\sigma(F_{0})\), thus
\[\begin{split}&\tau(t,\xi)\in\\ &\mathbb{C}[t_{i}^{(a)},t_{j}^{(c)},\xi_{k}^{a},\xi_{\ell}^{c}\,|\,1\leq a\leq s<c\leq s+r,\,i\in\mathbb{Z}_{>0},\,j\in\mathbb{Z}_{>0,odd},\,0\neq k\in\mathbb{Z},\,\ell\in\frac{1}{2}+\mathbb{Z}_{\geq 0}],\end{split} \tag{4.14}\]
with
\[\tau(t,\xi)=\sum_{w}\tau_{w}(t)w,\]
where
\[w=\xi_{k_{1}^{1}}^{1}\cdots\xi_{k_{i_{1}}^{1}}^{1}\,\xi_{\ell_{1}^{1}}^{1}\cdots\xi_{\ell_{j_{1}}^{1}}^{1}\,\xi_{k_{1}^{2}}^{2}\cdots\xi_{k_{i_{2}}^{2}}^{2}\,\xi_{\ell_{1}^{2}}^{2}\cdots\xi_{\ell_{j_{2}}^{2}}^{2}\cdots\xi_{k_{1}^{s}}^{s}\cdots\xi_{k_{i_{s}}^{s}}^{s}\,\xi_{\ell_{1}^{s}}^{s}\cdots\xi_{\ell_{j_{s}}^{s}}^{s}\,\xi_{m_{1}^{s+1}}^{s+1}\cdots\xi_{m_{j_{s+1}}^{s+1}}^{s+1}\cdots\xi_{m_{1}^{s+r}}^{s+r}\cdots\xi_{m_{j_{s+r}}^{s+r}}^{s+r}, \tag{4.15}\]
such that \(i_{a},j_{b}\in\mathbb{Z}_{\geq 0}\), \(k_{p}^{a}\in\mathbb{Z}_{<0}\), \(\ell_{p}^{a}\in\mathbb{Z}_{>0}\), and \(m_{q}^{c}\in\frac{1}{2}+\mathbb{Z}_{\geq 0}\), with \(k_{p}^{a}<k_{p+1}^{a}\), \(\ell_{p}^{a}<\ell_{p+1}^{a}\) and \(m_{q}^{c}<m_{q+1}^{c}\), and \(i_{1}+\cdots+i_{s}+j_{1}+\cdots+j_{s+r}\in 2\mathbb{Z}\). To describe the CKP hierarchy of differential equations, we will use the following notation. Let \(\xi_{j}^{b}\) (resp. \(\xi_{k}^{c}\)) be one of the Grassmann variables appearing (resp. not appearing) in the expression (4.15). We write \(w\backslash\xi_{j}^{b}\) (resp. \(w\cup\xi_{k}^{c}\)) for the monomial that we get by removing \(\xi_{j}^{b}\) from \(w\) (resp. by adding \(\xi_{k}^{c}\) to \(w\) and writing it in the preferred lexicographical ordering). We substitute (3.78) in (4.11) and thus obtain the multicomponent CKP hierarchy in the bosonic picture.
Let \(w\) be given by (4.15) and \(\overline{w}\) by the same expression but with all \(\xi^{b}_{j}\) replaced by \(\overline{\xi^{b}_{\overline{j}}}\), such that the number of Grassmann variables appearing in \(w\) and \(\overline{w}\) is odd, then we obtain \[\begin{split}\text{Res}_{z=0}&\bigg{\{}\sum_{a=1}^{s} \bigg{[}-z^{i_{a}-\overline{i}_{a}-j_{a}+\overline{j}_{a}}e^{z\cdot(t^{(a)}- \overline{t}^{(a)})}e^{z^{-1}\cdot(\bar{\partial}_{t^{(a)}}-\bar{\partial}_{t ^{(a)}})}\times\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad on the variable \(x\). Introduce the \(s\times s\)-matrices \[V^{\pm}(\underline{\alpha};x,t,z)=P^{\pm}(\underline{\alpha};x,t,\pm z)R^{\pm}( \underline{\alpha};\pm z)S^{\pm}(t,\pm z)e^{\pm xz},\] where \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{ij} =\frac{(-1)^{|\underline{\alpha}|_{j-1}}z^{\delta_{ij}-1}}{\tau_ {\underline{\alpha}}(x,t)}e^{\mp z^{-1}\cdot\tilde{\delta}_{t(j)}}\left(\tau_{ \underline{\alpha}\pm(\underline{\epsilon}_{i}-\underline{\epsilon}_{j})}(x,t)\right),\] \[R^{\pm}(\underline{\alpha};\pm z) =\sum_{j=1}^{s}(-1)^{|\underline{\alpha}|_{j-1}}z^{\pm\alpha_{j} }E_{jj}, \tag{4.17}\] \[S^{\pm}(t,\pm z) =\sum_{j=1}^{s}e^{\pm z\cdot t^{(j)}}E_{jj}.\] Then (1.34) turns into \[\operatorname{Res}_{z=0}\sum_{j=1}^{s}V^{+}(\underline{\alpha};x,t,z)E_{jj}V^ {-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz=0. 
\tag{4.18}\]
Let \(\partial=\partial_{x}\). Using the fundamental lemma (see [15], Lemma 4.1), we deduce the following equation for the matrix pseudo-differential operators
\[\sum_{j=1}^{s}(P^{+}(\underline{\alpha};x,t,\partial)R^{+}(\underline{\alpha};\partial)E_{jj}R^{-}(\underline{\beta};x,t,\partial)^{*}P^{-}(\underline{\beta};x,t,\partial)^{*})_{-}=0. \tag{4.19}\]
(Note that one cannot simply substitute \(\partial\) for \(\pm z\) in the formulas (4.17). One first has to write \(P^{\pm}(\pm z)\) as \(P^{\pm}(\pm z)=I+\sum_{k=1}^{\infty}P_{k}^{\pm}(\pm z)^{-k}\), then \(P^{\pm}(\partial)=I+\sum_{k=1}^{\infty}P_{k}^{\pm}\partial^{-k}\). This means that \(P^{\pm}(\pm z)\) is the symbol of the pseudo-differential operator \(P^{\pm}(\partial)\).) Taking \(\underline{\alpha}=\underline{\beta}\), we deduce the Sato-Wilson equations
\[\begin{split} P^{-}(\underline{\beta};x,t,\partial)^{*}&=P^{+}(\underline{\beta};x,t,\partial)^{-1},\\ \frac{\partial P^{+}(\underline{\beta};x,t,\partial)}{\partial t_{i}^{(j)}}&=-(P^{+}(\underline{\beta};x,t,\partial)\partial^{i}E_{jj}P^{+}(\underline{\beta};x,t,\partial)^{-1})_{-}P^{+}(\underline{\beta};x,t,\partial).\end{split} \tag{4.20}\]
Now introducing the Lax operators
\[\begin{split} L&=L(\underline{\beta};t,x,\partial)=P^{+}(\underline{\beta};x,t,\partial)\partial P^{+}(\underline{\beta};x,t,\partial)^{-1},\\ C^{j}&=C^{j}(\underline{\beta};x,t,\partial)=P^{+}(\underline{\beta};x,t,\partial)E_{jj}P^{+}(\underline{\beta};x,t,\partial)^{-1},\end{split} \tag{4.21}\]
we deduce from (4.20) that
\[\frac{\partial L}{\partial t_{i}^{(j)}}=[(L^{i}C^{j})_{+},L],\quad\frac{\partial C^{k}}{\partial t_{i}^{(j)}}=[(L^{i}C^{j})_{+},C^{k}]. \tag{4.22}\]
**The BKP hierarchy (first approach)** In general one can define a \((2s+r)\times(2s+r)\)-matrix valued wave function as in the previous case. However, in general the corresponding wave operators \(P^{\pm}(\partial)\) are not invertible. This is a problem, so we take a different approach. If we assume that \(s\neq 0\) and \(r=0\), or \(s\) arbitrary and \(r=1\), there is no problem: in those cases the wave operators \(P^{\pm}(\partial)\) are invertible. The case \(r=0\) suggests the following approach (the case \(r=1\) suggests the second approach, which we will present later in this section). So assume from now on that \(s\neq 0\). If \(r\geq 1\), we set in equation (4.12) all the variables \(t_{j}^{(c)}=\overline{t}_{j}^{(c)}=0\) for \(c>s\) and \(j\in\mathbb{Z}_{\geq 1}\). Next replace all \(t_{1}^{(a)}\) for \(1\leq a\leq s\) by \(t_{1}^{(a)}+x\) and similarly \(\overline{t}_{1}^{(a)}\) by \(\overline{t}_{1}^{(a)}+\overline{x}\).
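All manipulations in (4.19)-(4.22) take place in the ring of (matrix) pseudo-differential operators with its splitting into the differential part \((\;)_{+}\) and the integral part \((\;)_{-}\). As an illustration of this calculus (a sketch of our own, restricted to the scalar case \(s=1\) and using the standard composition rule \(\partial^{m}\circ f=\sum_{i\geq 0}\binom{m}{i}f^{(i)}\partial^{m-i}\), which also holds for negative \(m\)), the following sympy code builds a truncated Lax operator \(L=\partial+u_{1}\partial^{-1}+u_{2}\partial^{-2}+\cdots\) and checks that \([(L^{2})_{+},L]\) is of purely negative order, as it must be for the Lax equation \(\partial L/\partial t_{2}=[(L^{2})_{+},L]\) to be consistent; its \(\partial^{-1}\) and \(\partial^{-2}\) coefficients come out as \(u_{1}''+2u_{2}'\) and \(u_{2}''+2u_{3}'+2u_{1}u_{1}'\).

```python
import sympy as sp

x = sp.Symbol('x')
u1, u2, u3, u4 = [sp.Function(f'u{i}')(x) for i in range(1, 5)]

NMIN = -6   # drop terms of order < NMIN; the coefficients displayed below are unaffected

def mul(A, B):
    """Product of pseudo-differential operators stored as {order: coefficient(x)},
    using  d^m o f = sum_i binom(m, i) f^{(i)} d^{m-i}  (valid for negative m as well)."""
    C = {}
    for m, a in A.items():
        for n, b in B.items():
            i = 0
            while m + n - i >= NMIN:
                C[m + n - i] = C.get(m + n - i, 0) + sp.binomial(m, i) * a * sp.diff(b, x, i)
                i += 1
    return {k: sp.expand(v) for k, v in C.items() if sp.expand(v) != 0}

def comm(A, B):
    AB, BA = mul(A, B), mul(B, A)
    out = {k: sp.expand(AB.get(k, 0) - BA.get(k, 0)) for k in set(AB) | set(BA)}
    return {k: v for k, v in out.items() if v != 0}

L = {1: sp.Integer(1), -1: u1, -2: u2, -3: u3, -4: u4}   # truncated scalar Lax operator
B2 = {k: v for k, v in mul(L, L).items() if k >= 0}      # (L^2)_+ : the ( )_+ projection

C = comm(B2, L)
assert all(k < 0 for k in C), "[(L^2)_+, L] must be of purely negative order"
print("coefficient of d^-1:", sp.simplify(C[-1]))        # u1'' + 2*u2'
print("coefficient of d^-2:", sp.simplify(C[-2]))        # u2'' + 2*u3' + 2*u1*u1'
```

Since \((L^{2})_{+}\) has order two, the terms omitted from \(L\) (of order \(\leq-5\)) and those dropped by the cutoff can only influence orders \(\leq-3\), so the displayed coefficients are exact.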
Define the \(2s\times 2s\)-matrix valued wave function \(V^{\pm}(\alpha;x,t,z)\) by \[\tilde{V}^{\pm}(\underline{\alpha};x,t,z)=P^{\pm}(\underline{\alpha};x,t,\pm z )R^{\pm}(\underline{\alpha};\pm z)S^{\pm}(t,\pm z)e^{\pm xz}, \tag{4.23}\] where for \(1\leq i,j\leq s\) and all \(a=1,\ldots,r\) \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{ij} =\frac{(-1)^{\lfloor\underline{\varepsilon}_{i}\rfloor_{j-1}}2^{ \delta_{ij}-1}}{\tau_{\underline{\alpha}}(x,t)}e^{\mp z^{-1}.\tilde{\partial} _{t(j)}}\left(\tau_{\underline{\alpha}\pm(\underline{\varepsilon}_{i}- \underline{\varepsilon}_{j})}(x,t)\right)\big{|}_{t^{(s+a)}=0},\] \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{s+i,j} =\frac{(-1)^{\lfloor\underline{\varepsilon}_{i}\rfloor_{j-1}}2^{ -\delta_{ij}-1}}{\tau_{\underline{\alpha}}(x,t)}e^{\mp z^{-1}.\tilde{\partial }_{t(j)}}\left(\tau_{\underline{\alpha}\mp(\underline{\varepsilon}_{i}+ \underline{\varepsilon}_{j})}(x,t)\right)\big{|}_{t^{(s+a)}=0},\] \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{i,s+j} =\frac{(-1)^{\lfloor\underline{\varepsilon}_{i}\rfloor_{j-1}}(-z) ^{\delta_{ij}-1}}{\tau_{\underline{\alpha}}(x,t)}e^{\pm((-z^{-1}).\tilde{ \partial}_{t(j)})}\left(\tau_{\underline{\alpha}\mp(\underline{\varepsilon}_{ i}+\underline{\varepsilon}_{j})}(x,t)\right)\big{|}_{t^{(s+a)}=0}, \tag{4.24}\] \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{s+i,s+j} =\frac{(-1)^{\lfloor\underline{\varepsilon}_{i}\rfloor_{j-1}}(-z) ^{\delta_{ij}-1}}{\tau_{\underline{\alpha}}(x,t)}e^{\pm((-z^{-1}).\tilde{ \partial}_{t(j)})}\left(\tau_{\underline{\alpha}\mp(\underline{\varepsilon}_{ i}-\underline{\varepsilon}_{j})}(x,t)\right)\big{|}_{t^{(s+a)}=0}, \tag{4.25}\] and \[R^{\pm}(\underline{\alpha};\pm z) =\sum_{j=1}^{s}(-1)^{\lfloor\underline{\alpha}\rfloor_{j-1}} \left(z^{\pm\alpha_{j}}E_{jj}+(-z)^{\mp\alpha_{j}}E_{s+j,s+j}\right), \tag{4.26}\] \[S^{\pm}(t,\pm z) =\sum_{j=1}^{s}\big{(}e^{\pm z\cdot t(j)}E_{jj}+e^{\mp(-z)\cdot t (j)}E_{s+j,s+j}\big{)}.\] For \(r\geq 1\) define the \(2s\times r\)-matrix valued functions \[W^{\pm}(\underline{\alpha};x,t,z)=(W^{\pm}(\underline{\alpha};x,t)_{ij},z)_{1 \leq i\leq 2s,1\leq j\leq r},=\sum_{n=0}^{\infty}W^{\pm}(\underline{\alpha};x,t)^{n}z ^{-n},\] with \[W^{\pm}(\underline{\alpha};x,t,z)_{i,2c-1} =\sqrt{-1}(-1)^{\lfloor\underline{k}\rfloor+\lfloor\underline{ \ell}\rfloor_{c-1}}\frac{e^{\pm 2((z^{-1})^{\circ}\tilde{\partial}_{t(s+2c-1)})}\tau_{ \underline{k}\pm\underline{\varepsilon}_{i}\underline{\ell}+\underline{ \varepsilon}_{c}}(x,t)}{\tau_{\underline{k},\underline{\ell}}(x,t)}\big{|}_{t ^{(s+a)}=0},\] \[W^{\pm}(\underline{\alpha};x,t,z)_{s+i,2c-1} =\sqrt{-1}(-1)^{\lfloor\underline{k}\rfloor+\lfloor\underline{ \ell}\rfloor_{c}}\frac{e^{\pm 2((z^{-1})^{\circ}\tilde{\partial}_{t(s+2c-1)})}\tau_{ \underline{k}\mp\underline{\varepsilon}_{i}\underline{\ell}+\underline{ \varepsilon}_{c}}(x,t)}{\tau_{\underline{k},\underline{\ell}}(x,t)}\big{|}_{t ^{(s+a)}=0},\] \[W^{\pm}(\underline{\alpha};x,t,z)_{i,2c} =(-1)^{\lfloor\underline{k}\rfloor+\lfloor\underline{\ell}\rfloor_{c}} \frac{e^{\pm 2((z^{-1})^{\circ}\tilde{\partial}_{t(s+2c)})}\tau_{\underline{k}\mp \underline{\varepsilon}_{i}\underline{\ell}+\underline{\varepsilon}_{c}}(x,t)}{ \tau_{\underline{k},\underline{\ell}}(x,t)}\big{|}_{t^{(s+a)}=0},\] \[W^{\pm}(\underline{\alpha};x,t,z)_{s+i,2c} =(-1)^{\lfloor\underline{k}\rfloor+\lfloor\underline{\ell}\rfloor_{c}} \frac{e^{\pm 2((z^{-1})^{\circ}\tilde{\partial}_{t(s+2c)})}\tau_{\underline{k}\mp \underline{\varepsilon}_{i}\underline{\ell}+\underline{\varepsilon}_{c}}(x,t)}{ 
\tau_{\underline{k},\underline{\ell}}(x,t)}\big{|}_{t^{(s+a)}=0}, \tag{4.27}\] for \(1\leq c\leq p\). If \(r=2p\) that is all, but if \(r=2p+1\) we define \[\begin{split}& W^{\pm}(\underline{\alpha};x,t,z)_{i,r}=\sqrt{-1}(-1)^ {|\underline{k}|+|\underline{\ell}|}\frac{e^{\pm 2((z^{-1})\circ\partial_{t(s+r)})} \tau_{\underline{k}\underline{\ell},\underline{\ell}+\underline{\epsilon}_{p+1 }}(x,t)}{\tau_{\underline{k}\underline{\ell}}(x,t)}\big{|}_{t^{(s+a)}=0},\\ & W^{\pm}(\underline{\alpha};x,t,z)_{s+i,r}=\sqrt{-1}(-1)^{| \underline{k}|+|\underline{\ell}|}\frac{e^{\pm 2(z^{-1}\circ\partial_{t(s+r)})} \tau_{\underline{k}\underline{\tau}\underline{\epsilon}_{i},\underline{\ell}+ \underline{\epsilon}_{p+1}}(x,t)}{\tau_{\underline{k}\underline{\ell}}(x,t)} \big{|}_{t^{(s+a)}=0},\end{split} \tag{4.27}\] and the \(2s\times 1\) matrix \(T^{\pm}(\alpha;x,t)\) \[\begin{split}& T^{\pm}(\alpha;x,t)_{i1}=\frac{\tau_{\underline{k} \underline{\epsilon}_{i},\underline{\ell}}(x,t)}{\tau_{\underline{k} \underline{\ell}}(x,t)}\big{|}_{t^{s+a}=0},\\ & T^{\pm}(\alpha;x,t)_{s+i,1}=\frac{\tau_{\underline{k}\underline {\tau}\underline{\epsilon}_{i},\underline{\ell}}(x,t)}{\tau_{\underline{k} \underline{\ell}}(x,t)}\big{|}_{t^{s+a}=0}.\end{split} \tag{4.28}\] We thus obtain the following bilinear identity for the wave function: \[\begin{split}&\operatorname{Res}_{s=0}\sum_{j=1}^{s}V^{+}( \underline{\alpha};x,t,z)\left(E_{jj}-E_{s+j,s+j}\right)V^{-}(\underline{ \beta};\overline{x},\overline{t},z)^{t}dz=\\ &(\frac{1}{2}-\frac{\delta_{r0}}{2})W^{+}(\underline{\alpha};x,t )^{0}W^{-}(\underline{\beta};\overline{x},\overline{t})^{0t}+\left(\frac{1}{2 }-\frac{\delta_{r,2p}}{2}(-1)^{|\underline{\alpha}-\underline{\beta}|}\right) T^{+}(\alpha;x,t)T^{-}(\beta;\overline{x},\overline{t})^{t}.\end{split} \tag{4.29}\] Using the fundamental lemma, we obtain that \[\begin{split}&\sum_{j=1}^{s}\left(P^{+}(\underline{\alpha};x,t, \partial)R^{+}(\underline{\alpha}-\underline{\beta},\partial)\left(E_{jj}-E_ {s+j,s+j}\right)P^{-}(\underline{\beta};x,t,\partial)^{*}\right)_{-}=\\ &\quad(\frac{1}{2}-\frac{\delta_{r0}}{2})W^{+}(\underline{\alpha };x,t)^{0}\partial^{-1}W^{-}(\underline{\beta};x,t)^{0t}+\\ &\quad+\left(\frac{1}{2}-\frac{\delta_{r,2p}}{2}(-1)^{|\underline {\alpha}-\underline{\beta}|}\right)T^{+}(\alpha;x,t)\partial^{-1}T^{-}(\beta; x,t)^{t}.\end{split} \tag{4.30}\] Taking \(\underline{\alpha}=\underline{\beta}\), it is straightforward to check that the right-hand side of (4.30) is equal to \(0\). Let \(\partial=\partial_{x}\) and using the fundamental lemma (see [15], Lemma 4.1), we deduce the following expression for the matrix pseudo-differential operators \[\sum_{j=1}^{s}(P^{+}(\underline{\alpha};x,t,\partial)R^{+}(\underline{\alpha };\partial)\left(E_{jj}-E_{s+j,s+j}\right)R^{-}(\underline{\beta};x,t, \partial)^{*}P^{-}(\underline{\beta};x,t,\partial)^{*})_{-}=0. 
\tag{4.31}\] Letting \[J=\sum_{j=1}^{s}E_{jj}-E_{s+j,s+j}=J^{-1}, \tag{4.32}\] we thus get \[\begin{split}& JP^{-}(\underline{\beta};x,t,\partial)^{*}=P^{+}( \underline{\beta};x,t,\partial)^{-1}J,\\ &\frac{\partial P^{+}(\underline{\beta};x,t,\partial)}{\partial t _{i}^{j}}=-(P^{+}(\underline{\beta};x,t,\partial)\left(\partial^{i}E_{jj}-(- \partial)^{i}E_{s+j,s+j}\right)P^{+}(\underline{\beta};x,t,\partial)^{-1})_{- }P^{+}(\underline{\beta};x,t,\partial).\end{split} \tag{4.33}\] Now introducing the Lax operators \[L =L(\beta;t,x,\partial)=P^{+}(\underline{\beta};x,t,\partial)J \partial P^{+}(\underline{\beta};x,t,\partial)^{-1},\] \[D^{j} =D^{j}(\underline{\beta};x,t,\partial)=P^{+}(\underline{\beta};x, t,\partial)\left(E_{jj}-E_{s+j,s+j}\right)P^{+}(\underline{\beta};x,t,\partial)^{-1},\] \[E^{j} =(D^{j})^{2}=E^{j}(\underline{\beta};x,t,\partial)=P^{+}( \underline{\beta};x,t,\partial)\left(E_{jj}+E_{s+j,s+j}\right)P^{+}(\underline {\beta};x,t,\partial)^{-1}, \tag{4.34}\] we deduce from (4.33) that \[\frac{\partial L}{\partial t_{i}^{(j)}}=[(L^{i}D^{j})_{+},L],\quad\frac{ \partial D^{k}}{\partial t_{i}^{(j)}}=[(L^{i}D^{j})_{+},D^{k}],\quad\frac{ \partial E^{k}}{\partial t_{i}^{(j)}}=[(L^{i}D^{j})_{+},E^{k}]. \tag{4.35}\] Let \[N=\sum_{j=1}^{s}\left(E_{j,s+j}+E_{s+,j}\right), \tag{4.36}\] then it is straightforward to check, using (4.24), that \[P^{-}(\underline{\beta};x,t,\partial)=NP^{+}(\underline{\beta};x,t,\partial)N\] Now let \[K=JN=\sum_{j=1}^{s}\left(E_{j,s+j}-E_{s+j,j}\right), \tag{4.37}\] then \[L(\partial)^{*}=KL(\partial)K^{-1},\quad D^{j}(\partial)^{*}=-KD^{j}(\partial) K^{-1},\quad E^{j}(\partial)^{*}=KE^{j}(\partial)K^{-1}.\] **Example 4.2**: _Let's consider the special case \(s=1\). In this case_ \[L= \begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\partial+\begin{pmatrix}u_{11}^{(1)}&u_{12}^{(1)}\\ u_{21}^{(1)}&-u_{11}^{(1)}\end{pmatrix}\partial^{-1}+\begin{pmatrix}u_{11}^{(2) }&-\frac{1}{2}u_{12x}^{(1)}\\ -\frac{1}{2}u_{21x}^{(1)}&u_{11}^{(2)}+u_{11x}^{(1)}\end{pmatrix}\partial^{-2}+\] \[\qquad+\begin{pmatrix}u_{11}^{(3)}&u_{12}^{(3)}\\ u_{21}^{(3)}&-u_{11}^{(3)}-2u_{11x}^{(2)}-u_{11xx}^{(1)}\end{pmatrix} \partial^{-3}+\cdots,\] \[D=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}+\begin{pmatrix}0&u_{12}^{(1)}\\ u_{21}^{(1)}&0\end{pmatrix}\partial^{-2}+\begin{pmatrix}0&-u_{12x}^{(1)}\\ -u_{21x}^{(1)}&0\end{pmatrix}\partial^{-3}+\cdots\] _We obtain the following set of equations from the Lax equations for \(L\):_ \[\frac{\partial u^{(1)}_{11}}{\partial t_{2}} =\left(2u^{(2)}_{11}+u^{(1)}_{11x}\right)_{x}\,,\] \[\frac{\partial u^{(1)}_{12}}{\partial t_{2}} =2(u^{(1)}_{11}u^{(1)}_{12}+u^{(3)}_{12})\,,\] \[\frac{\partial u^{(1)}_{21}}{\partial t_{2}} =-2(u^{(1)}_{21}u^{(1)}_{11}+u^{(3)}_{21})\,,\] \[\frac{\partial u^{(1)}_{11}}{\partial t_{3}} =\frac{1}{2}\left(6(u^{(1)}_{11})^{2}+6u^{(3)}_{11}+3u^{(1)}_{12} u^{(1)}_{21}+6u^{(2)}_{11xx}+2u^{(1)}_{11xx}\right)_{x}\,,\] \[\frac{\partial u^{(1)}_{21}}{\partial t_{3}} =\frac{1}{2}\left(-12u^{(2)}_{11}u^{(1)}_{21}+6u^{(3)}_{21x}-u^{( 1)}_{21xxx}\right)\,,\] \[\frac{\partial u^{(1)}_{12}}{\partial t_{3}} =\frac{1}{2}\left(12(u^{(2)}_{11}+u^{(1)}_{11x})u^{(1)}_{12}+6u^ {(3)}_{12x}-u^{(1)}_{12xxx}\right)\,.\] **The BKP hierarchy (second approach)** In this case we assume that \(r\geq 1\), in this case \(s\) can be \(0\). We follow the approach of the previous case, setting \(t^{(s+c)}_{j}=0\), but only for \(1\leq c<r\), we do not put \(t^{(s+r)}_{j}=0\), but leave them in the equation. 
Next we replace \(t^{(a)}_{1}\) by \(t^{(a)}_{1}+x\), for \(1\leq a\leq s\) and \(a=s+r\) and define the \((2s+1)\times(2s+1)\)-matrix valued wave functions \[\tilde{V}^{\pm}(\underline{\alpha};x,t,z)=P^{\pm}(\underline{\alpha};x,t,\pm z )R^{\pm}(\underline{\alpha};\pm z)S^{\pm}(t,\pm z)e^{\pm xz}, \tag{4.38}\] where for \(1\leq i,j\leq 2s\), the coefficients \(P^{\pm}(\underline{\alpha};x,t,\pm z)_{ij}\) are given by (4.24). The other coefficients are as follows (\(a=1,\ldots,r-1\)) \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{2s+1,j} =\frac{z^{-1}}{\tau_{\underline{\alpha}}(x,t)}e^{\mp z^{-1}\cdot \tilde{\delta}_{t^{(j)}}}\left(\tau_{\underline{\alpha}\mp\underline{ \varepsilon}_{j}}(x,t)\right)\big{|}_{t^{(s+a)}=0},\] \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{2s+1,s+j} =\frac{(-z)^{-1}}{\tau_{\underline{\alpha}}(x,t)}e^{\pm((-z^{-1} )\cdot\tilde{\delta}_{t^{(j)}})}\left(\tau_{\underline{\alpha}\pm\underline{ \varepsilon}_{j}}(x,t)\right)\big{|}_{t^{(s+a)}=0},\] \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{i,2s+1} =\frac{1}{\tau_{\underline{\alpha}}(x,t)}e^{\pm 2(z^{-1}\circ\tilde{ \delta}_{t^{(s+r)}})}\left(\tau_{\underline{\alpha}\pm\underline{\varepsilon}_ {i}}(t)\right)\big{|}_{t^{(s+a)}=0}, \tag{4.39}\] \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{s+i,2s+1} =\frac{1}{\tau_{\underline{\alpha}}(x,t)}e^{\pm 2((z^{-1})\circ\tilde{ \delta}_{t^{(s+r)}})}\left(\tau_{\underline{\alpha}\mp\underline{ \varepsilon}_{i}}(t)\right)\big{|}_{t^{(s+a)}=0},\] \[P^{\pm}(\underline{\alpha};x,t,\pm z)_{2s+1,2s+1} =\frac{1}{\tau_{\underline{\alpha}}(x,t)}e^{\pm 2((z^{-1})\circ\tilde{ \delta}_{t^{(s+r)}})}\left(\tau_{\underline{\alpha}}(t)\right)\big{|}_{t^{(s+a )}=0},\] where \(\underline{\alpha}=(\underline{k},\underline{\ell})\) and \(|\underline{\alpha}|=|\underline{k}|+|\underline{\ell}|\). Here, \[R^{\pm}(\underline{\alpha};\pm z) =(-1)^{|\underline{\alpha}|}E_{2s+1,2s+1}+\sum_{j=1}^{s}(-1)^{| \underline{\alpha}|_{j-1}}\left(z^{\pm\alpha_{j}}E_{jj}+(-z)^{\mp\alpha_{j}}E_ {s+j,s+j}\right),\] \[S^{\pm}(t,\pm z) =e^{2(z\circ t^{(s+r)})}E_{2s+1,2s+1}+\sum_{j=1}^{s}\big{(}e^{\pm (z\cdot t^{(j)})}E_{jj}+e^{\mp((-z)\cdot t^{(j)})}E_{s+j,s+j}\big{)}.\] We define the \((2s+1)\times(r-1)\)-matrix \(W^{\pm}(\underline{\alpha};x,t,z)=\left(W^{\pm}(\underline{\alpha};x,t,z)\right)_ {i,j}\), where \(W^{\pm}(\underline{\alpha};x,t,z)_{i,j}\) for \(i=1,\ldots 2s\) is given as in (4.26) and define \[\begin{split} W^{\pm}(\underline{\alpha};x,t,z)_{2s+1,2c-1}=& \sqrt{-1}(-1)^{|\underline{k}|+|\underline{\ell}|_{c}-1}\frac{e^{\pm 2((z^{-1}) \circ\tilde{\partial}_{t(s+2c-1)})}\tau_{\underline{k},\underline{\ell}+ \underline{\epsilon}_{c}}(x,t)}{\tau_{\underline{k},\underline{\ell}}(x,t)}|_{t (s+a)=0},\\ W^{\pm}(\underline{\alpha};x,t,z)_{2s+1,2c}=&(-1)^{| \underline{k}|+|\underline{\ell}|_{c}}\frac{e^{\pm 2((z^{-1})\circ\tilde{ \partial}_{t(s+2c)})}\tau_{\underline{k},\underline{\ell}+\underline{ \epsilon}_{c}}(x,t)}{\tau_{\underline{k},\underline{\ell}}(x,t)}|_{t(s+a)=0}. \end{split} \tag{4.41}\] Define also the \((2s+1)\times 1\)-matrix \(T^{\pm}(\alpha;x,t)\), where \(T^{\pm}(\alpha;x,t)_{2s+1,1}=1\) and all the other coefficients are given by (4.28). 
Then we obtain \[\begin{split}\operatorname{Res}_{z=0}&\tilde{V}^{+}( \underline{\alpha};x,t,z)J(z)\tilde{V}^{-}(\underline{\beta};\overline{x}, \overline{t},z)^{t}dz=\\ &\frac{1}{2}W^{+}(\underline{\alpha};x,t)^{0}W^{-}(\underline{ \beta};\overline{x},\overline{t})^{0t}+\left(\frac{1}{2}-\frac{\delta_{r,2p}} {2}(-1)^{|\underline{\alpha}-\underline{\beta}|}\right)T^{+}(\alpha;x,t)T^{-}( \beta;\overline{x},\overline{t})^{t},\end{split} \tag{4.42}\] where \[J(z)=\frac{1}{2}E_{2s+1,2s+1}z^{-1}+\sum_{i=1}^{s}(E_{jj}-E_{s+j,s+j}). \tag{4.43}\] We thus obtain \[\begin{split}\left(P^{+}(\underline{\alpha};x,t,\partial)R^{+}( \underline{\alpha}-\underline{\beta},\partial)J(\partial)P^{-}(\underline{ \beta};x,t,\partial)^{*}\right)_{-}=\\ \frac{1}{2}W^{+}(\underline{\alpha};x,t)^{0}\partial^{-1}W^{-}( \underline{\beta};x,t)^{0t}+\left(\frac{1}{2}-\frac{\delta_{r,2p}}{2}(-1)^{| \underline{\alpha}-\underline{\beta}|}\right)T^{+}(\alpha;x,t)\partial^{-1}T^ {-}(\beta;x,t)^{t}.\end{split} \tag{4.44}\] Now taking \(\underline{\alpha}=\underline{\beta}\), it is straightforward to check that if \(r=2p+1\) the part with the \(W^{\pm}\) is equal to zero, and that the \(T^{\pm}\) are equal to the constant coefficients \(P^{\pm}-I+E_{2s+1,2s+1}\). If \(r=2p\), the \(T^{\pm}\) part drops out and many things cancel in the \(W^{\pm}\) expression, except for the last column, which is the \(2p-1\)-th one, of both \(W^{\pm}\). A careful inspection gives for both cases that \[\begin{split}\operatorname{Res}_{z=0}&\tilde{V}^{+} (\underline{\alpha};x,t,z)R^{+}(\underline{\alpha};z)J(z)R^{-}(\underline{ \beta};-z)\tilde{V}^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz= \\ &=\operatorname{Res}_{z=0}P^{+}(\underline{\alpha};x,t)^{0}J(z)P^ {-}(\underline{\beta};\overline{x},\overline{t})^{0t}dz,\end{split} \tag{4.45}\] where \(P^{\pm}(\underline{\alpha};x,t)^{0}\) is the constant coefficient of \(P^{\pm}(\underline{\alpha};x,t,z)\). Now choosing \(\underline{\alpha}=\underline{\beta}\) and using the fundamental lemma, we deduce that \[\left(P^{+}(\underline{\alpha};x,t,\partial)J(\partial)P^{-}(\underline{ \alpha};x,t,\partial)^{*}\right)_{-}=\frac{1}{2}P^{+}(\underline{\alpha};x,t) ^{0}E_{2s+1,2s+1}\partial^{-1}P^{-}(\underline{\alpha};x,t)^{0t},\] which gives that \[P^{+}(\underline{\alpha};x,t,\partial)J(\partial)P^{-}(\underline{\alpha};x,t, \partial)^{*}=P^{+}(\underline{\alpha};x,t)^{0}J(\partial)P^{-}(\underline{ \alpha};x,t)^{0t}. \tag{4.46}\] Note that \(P^{\pm}(\underline{\alpha};x,t)^{0}\) is invertible and hence it makes sense to redefine the wave function and \(P^{\pm}(\underline{\alpha};x,t,\partial)\) \[\begin{split}\hat{V}^{\pm}(\underline{\alpha};x,t,z)& =(P^{\pm}(\underline{\alpha};x,t)^{0})^{-1}\tilde{V}^{\pm}( \underline{\alpha};x,t,z),\\ \hat{P}^{\pm}(\underline{\alpha};x,t,\partial)&=(P^{ \pm}(\underline{\alpha};x,t)^{0})^{-1}P^{\pm}(\underline{\alpha};x,t,\partial). \end{split} \tag{4.47}\] Then (4.45) turns into \[\text{Res}_{z=0}\hat{V}^{+}(\underline{\alpha};x,t,z)R^{+}(\underline{\alpha};z) J(z)R^{-}(\underline{\alpha};-z)\hat{V}^{-}(\underline{\alpha};\overline{x}, \overline{t},z)^{t}dz=\text{Res}_{z=0}\,J(z)dz, \tag{4.48}\] which means that \[\hat{P}^{-}(\underline{\alpha};x,t,\partial)^{*}=J(\partial)^{-1}\hat{P}^{+}( \underline{\alpha};x,t,\partial)^{-1}J(\partial). 
\tag{4.49}\] Next differentiate (4.48) by \(t_{j}^{a}\), where \(j\) is odd if \(a=s+r\), then we obtain for \(\underline{\alpha}=\underline{\beta}\) \[\begin{split}\big{[}\left(\frac{\partial\hat{P}^{+}(\underline{ \alpha})}{\partial t_{j}^{a}}+\hat{P}^{+}(\underline{\alpha})\left((1- \delta_{a,s+r})(E_{aa}\partial^{j}-E_{s+a,s+a}(-\partial)^{j})+2\delta_{a,s+ r}E_{2s+1,2s+1}\partial^{j}\right)\right)\times\\ J(\partial)\hat{P}^{-}(\underline{\alpha})^{*}\big{]}_{-}=0.\end{split} \tag{4.50}\] Using (4.46), this turns into \[\begin{split}\frac{\partial\hat{P}^{+}(\underline{\alpha})}{ \partial t_{j}^{(a)}}\hat{P}^{+}(\underline{\alpha})^{-1}=&- \bigg{(}\hat{P}^{+}(\underline{\alpha})\bigg{(}(1-\delta_{a,s+r})(E_{aa} \partial^{j}-E_{s+a,s+a}(-\partial)^{j})+\\ &\qquad+2\delta_{a,s+r}E_{2s+1,2s+1}\partial^{j}\bigg{)}\hat{P}^ {+}(\underline{\alpha})^{-1}J(\partial)\bigg{)}_{-}J(\partial)^{-1}.\end{split} \tag{4.51}\] Finally, following [16], we define a new splitting of the algebra of matrix pseudo-differential operators, viz. \[P(\partial)_{\geq}=(P(\partial)J(\partial))_{+}J(\partial)^{-1},\quad P( \partial)_{<}=P(\partial)-P(\partial)_{\geq}, \tag{4.52}\] Then (4.51), turns into \[\begin{split}\frac{\partial\hat{P}^{+}(\underline{\alpha})}{ \partial t_{j}^{(a)}}&=-\bigg{(}\hat{P}^{+}(\underline{\alpha}) \bigg{(}(1-\delta_{a,s+r})(E_{aa}\partial^{j}-E_{s+a,s+a}(-\partial)^{j})+\\ &\qquad\qquad+2\delta_{a,s+r}E_{2s+1,2s+1}\partial^{j}\bigg{)} \hat{P}^{+}(\underline{\alpha})^{-1}\bigg{)}_{<}\hat{P}^{+}(\underline{ \alpha}).\end{split} \tag{4.53}\] Define the following Lax operators (\(1\leq a\leq s\)): \[\begin{split} L(\underline{\alpha})&=\hat{P}^{+}( \underline{\alpha})\left(E_{2s+1,2s+1}+\sum_{i=1}^{s}\left(E_{ii}-E_{s+i,s+i} \right)\right)\partial\hat{P}^{+}(\underline{\alpha})^{-1},\\ D^{s+r}(\underline{\alpha})&=2\hat{P}^{+}(\underline {\alpha})E_{2s+1,2s+1}\hat{P}^{+}(\underline{\alpha})^{-1},\\ D^{a}(\underline{\alpha})&=\hat{P}^{+}(\underline {\alpha})(E_{aa}-E_{s+a,s+a})\hat{P}^{+}(\underline{\alpha})^{-1}\\ E^{a}(\underline{\alpha})&=D^{a}(\underline{\alpha}) ^{2}=\hat{P}^{+}(\underline{\alpha})(E_{aa}+E_{s+a,s+a})\hat{P}^{+}( \underline{\alpha})^{-1}.\end{split} \tag{4.54}\] Using (4.53), it is straightforward to verify that \[\frac{\partial L(\underline{\alpha})}{\partial t_{j}^{(a)}}= [\left(L(\underline{\alpha})^{j}D^{a}(\underline{\alpha})\right)_{ \geq},L(\underline{\alpha})],\] \[\frac{\partial D^{b}(\underline{\alpha})}{\partial t_{j}^{(a)}}= [\left(L(\underline{\alpha})^{j}D^{a}(\underline{\alpha})\right)_{ \geq},D^{b}(\underline{\alpha})], \tag{4.55}\] \[\frac{\partial E^{b}(\underline{\alpha})}{\partial t_{j}^{(a)}}= [\left(L(\underline{\alpha})^{j}D^{a}(\underline{\alpha})\right)_{ \geq},E^{b}(\underline{\alpha})].\] **Remark 4.3**: _Note that one gets the wave-functions and the Lax equations of the first approach of the BKP here as the \(2s\times 2s\)-principal minor, which one gets by deleting the last row and column._ Since (cf. 
(4.36))
\[P^{-}(\underline{\alpha};x,t,\partial)=(N+E_{2s+1,2s+1})P^{+}(\underline{\alpha};x,t,\partial)(N+E_{2s+1,2s+1}),\]
this also holds for \(\hat{P}^{\pm}(\underline{\alpha};x,t,\partial)\), and thus
\[\hat{P}^{+}(\underline{\alpha};x,t,\partial)^{*-1}=K(\partial)\hat{P}^{+}(\underline{\alpha};x,t,\partial)K(\partial)^{-1},\]
where
\[K(\partial)=-J(\partial)^{*-1}(N+E_{2s+1,2s+1})=2\partial E_{s+1,s+1}+\sum_{a=1}^{s}E_{s+a,a}-E_{a,s+a}.\]
Then
\[D^{s+r*}=K(\partial)D^{s+r}K(\partial)^{-1},\quad D^{a*}=-K(\partial)D^{a}K(\partial)^{-1},\quad E^{a*}=K(\partial)E^{a}K(\partial)^{-1},\]
\(a=1,\ldots,s\), and
\[L^{*}=(L(D^{s+r}+\sum_{a=1}^{s}E^{a}))^{*}=K(\partial)(-D^{s+r}+\sum_{a=1}^{s}E^{a})LK(\partial)^{-1}.\]
In particular, if \(s=0\), then \(L^{*}=-\partial L\partial^{-1}\). Note that the classical 1-component BKP hierarchy is the case that \(s=0\) and \(r=1\).

**The DKP hierarchy (first approach)**

We assume that if \(r\neq 0\), then \(r=2p\) or \(r=2p-1\). Take equation (4.13) and set all the variables \(t_{2j+1}^{(c)}=\overline{t}_{2j+1}^{(c)}=0\) for \(c>s\) and \(j\in\mathbb{Z}_{\geq 0}\). Next replace all \(t_{1}^{(a)}\) for \(1\leq a\leq s\) by \(t_{1}^{(a)}+x\) and similarly \(\overline{t}_{1}^{(a)}\) by \(\overline{t}_{1}^{(a)}+\overline{x}\). Define the wave function \(V^{\pm}(\alpha;x,t,z)\) as in the first approach of the BKP case by (4.23), but now for \(\underline{\alpha}=(\underline{k},\underline{\ell})\), where we also substitute \(t_{2j+1}^{(s+c)}=0\). Define the \(2s\times r\)-matrix, which we assume to be zero if \(r=0\),
\[W^{\pm}(\underline{\alpha};x,t,z)=\left(W^{\pm}(\underline{\alpha};x,t,z)_{ij}\right)_{1\leq i\leq 2s,1\leq j\leq r}=\sum_{n=0}^{\infty}W^{\pm}(\underline{\alpha};x,t)^{n}z^{-n},\]
with coefficients as in (4.26), and the matrix \(T^{\pm}\) by
\[\begin{split} T^{\pm}(\alpha;x,t)_{i1}&=\frac{\tau_{\underline{k}\pm\underline{\epsilon},\underline{\ell}+\underline{\epsilon}_{p}}(x,t)}{\tau_{\underline{k},\underline{\ell}}(x,t)}\big{|}_{t^{(s+a)}=0},\\ T^{\pm}(\alpha;x,t)_{s+i,1}&=\frac{\tau_{\underline{k}\mp\underline{\epsilon},\underline{\ell}+\underline{\epsilon}_{p}}(x,t)}{\tau_{\underline{k},\underline{\ell}}(x,t)}\big{|}_{t^{(s+a)}=0}.\end{split} \tag{4.56}\]
We thus obtain the following bilinear identity for the wave function:
\[\begin{split}&\operatorname{Res}_{z=0}\sum_{j=1}^{s}V^{+}(\underline{\alpha};x,t,z)\left(E_{jj}-E_{s+j,s+j}\right)V^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz=\\ &\frac{1}{2}W^{+}(\underline{\alpha};x,t)^{0}W^{-}(\underline{\beta};\overline{x},\overline{t})^{0t}+\frac{\delta_{r,2p-1}}{2}T^{+}(\alpha;x,t)T^{-}(\beta;\overline{x},\overline{t})^{t}.\end{split} \tag{4.57}\]
Now using the fundamental lemma, we obtain that
\[\begin{split}&\sum_{j=1}^{s}\left(P^{+}(\underline{\alpha};x,t,\partial)R^{+}(\underline{\alpha}-\underline{\beta},\partial)\left(E_{jj}-E_{s+j,s+j}\right)P^{-}(\underline{\beta};x,t,\partial)^{*}\right)_{-}=\\ &\frac{1}{2}W^{+}(\underline{\alpha};x,t)^{0}\partial^{-1}W^{-}(\underline{\beta};x,t)^{0t}+\frac{\delta_{r,2p-1}}{2}T^{+}(\alpha;x,t)\partial^{-1}T^{-}(\beta;x,t)^{t}.\end{split} \tag{4.58}\]
Choosing \(\underline{\alpha}=\underline{\beta}\) gives that the right-hand side of (4.58) is equal to \(0\), hence we obtain the same equations as in the first approach of the BKP case.

**The DKP hierarchy (second approach)**

Now we assume that \(r\geq 1\); in this case \(s\) can be \(0\), and \(r\) is either \(r=2p\) or \(r=2p-1\).
We follow the approach of the previous cases, now setting \(t_{j}^{(s+c)}=0\), but only for \(1\leq c<r\); we do not put \(t_{j}^{(s+r)}=0\). Next we replace \(t_{1}^{(a)}\) by \(t_{1}^{(a)}+x\), for \(1\leq a\leq s\) and \(a=s+r\), and define the \((2s+1)\times(2s+1)\)-matrix valued wave functions
\[\tilde{V}^{\pm}(\underline{\alpha};x,t,z)=P^{\pm}(\underline{\alpha};x,t,\pm z)R^{\pm}(\underline{\alpha};\pm z)S^{\pm}(t,\pm z)e^{\pm xz}, \tag{4.59}\]
where the coefficients \(P^{\pm}(\underline{\alpha};x,t,\pm z)\) are given by (4.24) and (4.39) and the other matrices by (4.25). The \((2s+1)\times(r-1)\)-matrix \(W^{\pm}(\underline{\alpha};x,t,z)\) is again defined as in (4.27) and (4.41), and \(T^{\pm}(\underline{\alpha};x,t)\) as in the BKP second approach case. Then we obtain the following bilinear identity:
\[\begin{split}\operatorname{Res}_{z=0}&\tilde{V}^{+}(\underline{\alpha};x,t,z)J(z)\tilde{V}^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz=\\ &\frac{1}{2}W^{+}(\underline{\alpha};x,t)^{0}W^{-}(\underline{\beta};\overline{x},\overline{t})^{0t}+\frac{\delta_{r,2p-1}}{2}T^{+}(\alpha;x,t)T^{-}(\beta;\overline{x},\overline{t})^{t},\end{split} \tag{4.60}\]
where \(J(z)\) is as in (4.43). One can now continue as in the second approach of the BKP case and nothing changes. This is not surprising in view of the first approach in the DKP case and Remark 4.1.

**Remark 4.4**: _Note that the first approach gives the same wave-functions and Lax operators in both the BKP and DKP hierarchies as in the case \(r=0\). The second approach with \(r>1\) does not differ from the case \(r=1\). Hence both approaches seem to be superfluous. This, however, will not be the case if we consider reductions to affine Lie algebras. The case \(r=0\) will produce a differential operator like in the Gelfand-Dickey case, but the first approach for \(r>0\) will produce a Lax operator which is of constrained KP-type._

**The CKP hierarchy**

As in the previous cases, we replace \(t_{1}^{b}\) by \(t_{1}^{b}+x\), for \(1\leq b\leq r+s\), and introduce \((2s+r)\times(2s+r)\)-matrix valued wave functions
\[V^{\pm}(x,t,z)=P^{\pm}(x,t,\pm z)S^{\pm}(t,\pm z)e^{\pm xz}, \tag{4.61}\]
where
\[S^{\pm}(t,\pm z)= \sum_{a=1}^{s}\left(e^{\pm(z\cdot t^{(a)})}E_{aa}+e^{\mp((-z)\cdot t^{(a)})}E_{s+a,s+a}\right)+\sum_{c=s+1}^{r+s}e^{\pm(z\circ t^{(c)})}E_{s+c,s+c},\]
\[P^{\pm}(x,t,\pm z)= \left(P^{\pm}(x,t,\pm z)_{i,j}\right)_{1\leq i,j\leq r+2s},\]
and (\(1\leq a\neq b\leq s\leq c,d\leq r+s\))
\[\tau_{0}(x,t)P^{\pm}(x,t,\pm z)_{a,a} =e^{\pm z^{-1}\cdot\tilde{\theta}_{t^{(a)}}}\left(\tau_{0}(x,t)-\sum_{k=1}^{\infty}\tau_{\xi_{\mp k}^{a}\cup\xi_{\pm 1}^{a}}(x,t)z^{-k-1}\right),\]
\[\tau_{0}(x,t)P^{\pm}(x,t,\pm z)_{a,b} =e^{\pm z^{-1}\cdot\tilde{\theta}_{t^{(b)}}}\left(\mp\sum_{k=1}^{\infty}\tau_{\xi_{\mp k}^{b}\cup\xi_{\pm 1}^{a}}(x,t)z^{-k}\right),\]
\[\tau_{0}(x,t)P^{\pm}(x,t,\pm z)_{a,s+a} =e^{\mp((-z^{-1})\cdot\tilde{\theta}_{t^{(a)}})}\left(-\sum_{k=2}^{\infty}\tau_{\xi_{\pm k}^{a}\cup\xi_{\pm 1}^{a}}(x,t)(-z)^{1-k}\right),\]
\[\tau_{0}(x,t)P^{\pm}(x,t,\pm z)_{a,s+b} =e^{\mp((-z^{-1})\cdot\tilde{\theta}_{t^{(b)}})}\left(\pm\sum_{k=1}^{\infty}\tau_{\xi_{\pm k}^{b}\cup\xi_{\pm 1}^{a}}(x,t)(-z)^{-k}\right),\]
\[\tau_{0}(x,t)P^{\pm}(x,t,\pm z)_{a,s+c} =e^{\pm 2(z^{-1}\circ\tilde{\theta}_{t^{(s+c)}})}\left(-\sum_{m\in\frac{1}{2}+\mathbb{Z}_{\geq 0}}\tau_{\xi_{\pm 1}^{a}\cup\xi_{m}^{c}}(t)(\pm z)^{-m-\frac{1}{2}}\right),\]
\[\tau_{0}(x,t)P^{\pm}(x,t,\pm z)_{s+a,a} =e^{\pm z^{-1}\cdot\tilde{\theta}_{t^{(a)}}}\left(-\sum_{k=2}^{\infty}\tau_{\xi_{\mp k}^{a}\cup\xi_{\mp 1}^{a}}(x,t)z^{1-k}\right),\]
\[\tau_{0}(x,t)P^{\pm}(x,t,\pm z)_{s+c,a} =e^{\pm z^{-1}\cdot\tilde{\theta}_{t^{(a)}}}\left(\mp\sum_{k=1}^{\infty}\tau_{\xi_{\mp k}^{a}\cup\xi_{\frac{k}{2}}^{a}}(x,t)z^{-k}\right),\]
\[\tau_{0}(x,t)P^{\pm}(x,t,\pm z)_{s+c,s+c} =e^{\pm 2(z^{-1}\circ\tilde{\theta}_{t^{(s+c)}})}\left(\tau_{0}(x,t)-\sum_{m\in\frac{1}{2}+\mathbb{Z}_{\geq 0}}\tau_{\xi_{\frac{k}{2}}^{c}\cup\xi_{m}^{c}}(t)(\pm z)^{-m-\frac{1}{2}}\right),\]
\[\tau_{0}(x,t)P^{\pm}(x,t,\pm z)_{s+c,s+d} =e^{\pm 2(z^{-1}\circ\tilde{\theta}_{t^{(s+r)}})}\left(-\sum_{m\in\frac{1}{2}+\mathbb{Z}_{\geq 0}}\tau_{\xi_{\frac{k}{2}}^{c}\cup\xi_{m}^{d}}(t)(\pm z)^{-m-\frac{1}{2}}\right).\]
The other coefficients are given by:
\[P^{\pm}(x,t,\partial)_{s+a,s+a} =P^{\mp}(x,t,\partial)_{a,a}, \tag{4.63}\]
\[P^{\pm}(x,t,\partial)_{s+a,s+b} =P^{\mp}(x,t,\partial)_{a,b},\]
\[P^{\pm}(x,t,\partial)_{s+a,b} =P^{\mp}(x,t,\partial)_{a,s+b},\]
\[P^{\pm}(x,t,\partial)_{s+a,s+c} =P^{\mp}(x,t,\partial)_{a,s+c},\]
\[P^{\pm}(x,t,\partial)_{s+c,s+a} =P^{\mp}(x,t,\partial)_{s+c,a}.\]
Then (4.16) gives
\[\operatorname{Res}_{z=0}V^{+}(x,t,z)V^{-}(\overline{x},\overline{t},z)^{T}dz=0. \tag{4.64}\]
Note that \(P^{\pm}(x,t,\partial)=I_{r+2s}+\sum_{k=1}^{\infty}P_{k}^{\pm}(x,t)\partial^{-k}\) and that
\[P^{\pm}(x,t,\partial)=(N+I)P^{\mp}(x,t,\partial)(N+I), \tag{4.65}\]
where (cf. (4.36))
\[N=\sum_{a=1}^{s}\left(E_{a,s+a}+E_{s+a,a}\right)\quad\text{and }I=\sum_{c=s+1}^{s+r}E_{s+c,s+c}.\]
Hence, applying the fundamental lemma gives that
\[P^{\mp}(x,t,\partial)^{-1} =P^{\pm}(x,t,\partial)^{*} \tag{4.66}\]
\[=\left((N+I)P^{\mp}(x,t,\partial)(N+I)\right)^{*}\]
\[=(N+I)P^{\mp}(x,t,\partial)^{*}(N+I).\]
Next differentiate (4.61) by \(t_{k}^{a}\), \(t_{\ell}^{c}\) for \(1\leq a\leq s\), \(s+1\leq c\leq r+s\), \(k\in\mathbb{Z}_{>0}\) and \(\ell\in\mathbb{Z}_{odd>0}\), and apply the fundamental lemma; this gives the following Sato-Wilson equations:
\[\frac{\partial P^{+}(x,t,\partial)}{\partial t_{k}^{(a)}} =-\left(P^{+}(x,t,\partial)\left(\partial^{k}E_{aa}-(-\partial)^{k}E_{s+a,s+a}\right)P^{+}(x,t,\partial)^{-1}\right)_{-}P^{+}(x,t,\partial), \tag{4.67}\]
\[\frac{\partial P^{+}(x,t,\partial)}{\partial t_{\ell}^{(c)}} =-\left(P^{+}(x,t,\partial)\partial^{\ell}E_{s+c,s+c}P^{+}(x,t,\partial)^{-1}\right)_{-}P^{+}(x,t,\partial).\]
Define the following Lax operators
\[L =L(x,t,\partial)=P^{+}(x,t,\partial)\left(\sum_{a=1}^{s}\left(\partial E_{aa}-\partial E_{s+a,s+a}\right)+\sum_{c=s+1}^{r+s}\partial E_{s+c,s+c}\right)P^{+}(x,t,\partial)^{-1}, \tag{4.68}\]
\[C^{b} =C^{b}(x,t,\partial)=P^{+}(x,t,\partial)E_{bb}P^{+}(x,t,\partial)^{-1},\qquad 1\leq b\leq r+2s,\]
\[D^{a} =D^{a}(x,t,\partial)=P^{+}(x,t,\partial)(E_{aa}-E_{s+a,s+a})P^{+}(x,t,\partial)^{-1},\]
\[E^{a} =(D^{a})^{2}=E^{a}(x,t,\partial)=P^{+}(x,t,\partial)(E_{aa}+E_{s+a,s+a})P^{+}(x,t,\partial)^{-1}.\]
Now let \(a=1,\ldots,s\), \(c=s+1,\ldots,r+s\), \(b=1,\ldots,r+2s\); then we have the following Lax equations
\[\begin{split}\frac{\partial L}{\partial t_{j}^{(a)}}&=[(L^{j}D^{a})_{+},L],\quad\frac{\partial C^{b}}{\partial t_{j}^{(a)}}=[(L^{j}D^{a})_{+},C^{b}],\\ \frac{\partial L}{\partial t_{j}^{(c)}}&=[(L^{j}C^{s+c})_{+},L],\qquad\frac{\partial C^{b}}{\partial t_{j}^{(c)}}=[(L^{j}C^{s+c})_{+},C^{b}].\end{split} \tag{4.69}\]
From (4.66) we deduce that
\[C^{s+b*}=(N+I)C^{s+b}(N+I),\quad E^{a*}=(N+I)E^{a}(N+I),\quad D^{a*}=-(N+I)D^{a}(N+I),\]
\(1\leq a\leq s<b\leq r+s\), and
\[L^{*}
=((\sum_{a=1}^{s}E^{a}+\sum_{b=s+1}^{r+s}C^{s+b})L)^{*}=(\sum_{a=1}^{s}E^{a}+\sum_{b=s+1}^{r+s}C^{s+b})L\]
\[=(N+I)((\sum_{a=1}^{s}E^{a}-\sum_{b=s+1}^{r+s}C^{s+b})L)(N+I).\]
There are two special cases, viz. if \(s=0\) or \(r=0\), which give that
\[\begin{split} s=0:& L(x,t,\partial)^{*}=-L(x,t,\partial),\qquad C^{c}(x,t,\partial)^{*}=C^{c}(x,t,\partial),\\ r=0:& L(x,t,\partial)^{*}=NL(x,t,\partial)N,\qquad D^{a}(x,t,\partial)^{*}=-ND^{a}(x,t,\partial)N.\end{split}\]
Note that the classical 1-component CKP case (cf. [26], [21]) corresponds to \(s=0\) and \(r=1\), hence the case that \(L^{*}=-L\).

## Conjugacy classes of the Weyl group for classical Lie algebras and generating fields for \(\hat{\mathfrak{g}}\) and \(\hat{\mathfrak{g}}^{(2)}\)

### Weyl group elements and the twisted realizations of \(\hat{\mathfrak{g}}\)

Let \(\mathfrak{g}\) be one of the classical Lie algebras over \(\mathbb{C}\) and let \(G\) be the corresponding classical complex Lie group. Choose a Cartan subalgebra \(\mathfrak{h}\) in \(\mathfrak{g}\) and let \(W\subset GL(\mathfrak{h})\) be the Weyl group. Fix \(w\in W\) and let \(\mathfrak{h}_{0}\) be the fixed point set of \(w\) in \(\mathfrak{h}\). We can lift \(w\) (non-uniquely) to an element \(\tilde{w}\in G\) of order \(N\), such that
\[\tilde{w}=Se^{2\pi\sqrt{-1}h_{w}}S^{-1}, \tag{5.1}\]
where \(S\in G\) and \(h_{w}\in\mathfrak{h}\) is such that
\[(h_{w}|\mathfrak{h}_{0})=0, \tag{5.2}\]
in all cases except for \(so_{2n+1}\). For the level one \(so_{2n+1}\)-module with the highest weight \(\Lambda_{n}=\frac{1}{2}\sum_{i=1}^{n}\epsilon_{i}\), condition (5.2) is replaced by
\[(h_{w}-\Lambda_{n}|\mathfrak{h}_{0})=0. \tag{5.3}\]
Consider the \(\mathbb{Z}_{N}=\mathbb{Z}/N\mathbb{Z}\)-gradation of \(\mathfrak{g}\) defined by \(\sigma=e^{2\pi\sqrt{-1}\mathrm{ad}\,h_{w}}\):
\[\mathfrak{g}=\bigoplus_{\overline{j}\in\mathbb{Z}_{N}}\mathfrak{g}_{\overline{j}},\quad\text{where }\mathfrak{g}_{\overline{j}}=\{g\in\mathfrak{g}\,|\,[h_{w},g]=\frac{j}{N}g\ \text{ for some }j\in\mathbb{Z},\ \overline{j}\equiv j\,\mathrm{mod}\,N\}. \tag{5.4}\]
Define the twisted affine Lie algebra \(\hat{L}(\mathfrak{g},\sigma)\) as the following subalgebra of \(\hat{\mathfrak{g}}\):
\[\hat{L}(\mathfrak{g},\sigma)=\mathbb{C}K\oplus\bigoplus_{j\in\mathbb{Z}}t^{j}\mathfrak{g}_{j\,\mathrm{mod}\,N}. \tag{5.5}\]
The Lie algebras \(\hat{\mathfrak{g}}\) and \(\hat{L}(\mathfrak{g},\sigma)\) are isomorphic for any inner automorphism \(\sigma\) of \(\mathfrak{g}\) ([13], Chapter 8). Explicitly, the isomorphism \(\tilde{\phi}_{h_{w}}:\hat{L}(\mathfrak{g},\sigma)\to\hat{\mathfrak{g}}\) is given by [19], [22]
\[\tilde{\phi}_{h_{w}}(t^{j}g)=\hat{g}(j):=g(j)-\delta_{j,0}(h_{w}|g)K\quad\text{for }g\in\mathfrak{g}_{j\,\mathrm{mod}\,N}, \tag{5.6}\]
where
\[g(j)=t^{-\mathrm{ad}\,h_{w}}(t^{\frac{j}{N}}g)\in\hat{\mathfrak{g}},\quad g\in\mathfrak{g}_{j\,\mathrm{mod}\,N}. \tag{5.7}\]
Each element \(g\in\mathfrak{g}\) decomposes with respect to the \(\mathbb{Z}_{N}\)-gradation (5.4) as \(g=\sum_{\overline{j}\in\mathbb{Z}_{N}}g_{\overline{j}}\), and we consider the corresponding generating field
\[g(z)=\sum_{j\in\mathbb{Z}}\hat{g}_{j\,\mathrm{mod}\,N}(j)z^{-j-1}. \tag{5.8}\]
Let \(d=\dim\mathfrak{g}\) and let \(g^{1},g^{2},\ldots,g^{d}\) be the standard basis of \(\mathfrak{g}\), i.e. the root vectors together with a basis of \(\mathfrak{h}\). Then we obtain a new basis \(a^{i}=S^{-1}g^{i}S\), which has the same commutation relations as the \(g^{i}\).
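As a quick sanity check of (5.6)-(5.8), consider the trivial conjugacy class \(w=1\): there one may take \(S=1\) and \(h_{w}=0\), so that \(N=1\), \(\mathfrak{g}_{\overline{0}}=\mathfrak{g}\) and \(\hat{g}(j)=t^{j}g\), and the generating field (5.8) reduces to the familiar untwisted current
\[g(z)=\sum_{j\in\mathbb{Z}}(t^{j}g)z^{-j-1},\]
while \(\hat{L}(\mathfrak{g},\mathrm{id})\) coincides with \(\hat{\mathfrak{g}}\) carrying its homogeneous gradation. For nontrivial \(w\) the only new ingredients are the fractional powers \(t^{\frac{j}{N}}\) in (5.7) and the conjugated basis \(a^{i}=S^{-1}g^{i}S\).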
One of the main constructions of this paper will be the construction of the vertex operators for \(a^{i}(z)\). Note that the standard Cartan subalgebra \(\mathfrak{h}\) gets conjugated to a non-standard one \(\mathfrak{h}_{\sigma}\) by \(S^{-1}\). Hence if \(h^{1},h^{2},\ldots h^{n}\) is a basis of \(\mathfrak{h}\), the modes of the vertex operators for \((S^{-1}h^{i}S)(z)\), for \(1\leq i\leq n\) (see formula (5.8)), together with \(K\) span a non-standard Heisenberg subalgebra of \(\hat{\mathfrak{g}}\). Of course for \(w=1\) we get the standard homogeneous Heisenberg subalgebra \((\mathfrak{h}\otimes\mathbb{C}[t,t^{-1}])\oplus\mathbb{C}K\). **Example 5.1** (The Lie algebra \(\mathfrak{g}=gl_{2}\)): _There are two elements in the Weyl group, namely the identity \(id\) and the element \(w\) that interchanges \(\epsilon_{1}\) and \(\epsilon_{2}\). The first case is related to the homogeneous realization and the second to the principal realization of the subalgebra \(\hat{sl}_{2}\). It is easy to find a lift in both cases satisfying (5.2):_ \[I=\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\quad\tilde{w}=\begin{pmatrix}0&-i\\ -i&0\end{pmatrix},\text{ respectively.}\] _Then \(S=I\) in the first case and \(S=\frac{1}{\sqrt{2}}\begin{pmatrix}-1&1\\ 1&1\end{pmatrix}\) for the second one, which leads respectively to the following choices of \(h_{w}\):_ \[I=\exp(2\pi\sqrt{-1}h_{1}),\text{ where }h_{1}=0\text{, and }\tilde{w}=\exp(2\pi\sqrt{-1}h_{w}), \text{ where }h_{w}=\begin{pmatrix}\frac{1}{4}&0\\ 0&-\frac{1}{4}\end{pmatrix}.\] _In the first case we have 4 fields, for \(\hat{gl}_{2}\), that generate the algebra, namely \(e_{ij}(z)=\sum_{k\in\mathbb{Z}}t^{k}e_{ij}z^{-k-1}\), \(1\leq i,j\leq 2\). For \(\hat{sl}_{2}\) we have 3 fields, namely \(e_{12}(z)\), \(e_{21}(z)\) and \((e_{11}-e_{22})(z)\). In the second case we have only two fields, for \(\hat{gl}_{2}\), namely_ \[\left(S^{-1}e_{22}S\right)(z)=-\left(S^{-1}e_{11}S\right)(-z)=\sum _{k\in\mathbb{Z}}\frac{t^{k}}{2}\begin{pmatrix}1&z^{-1}\\ tz^{-1}&1\end{pmatrix}z^{-2k-1},\] \[\left(S^{-1}e_{12}S\right)(z)=-\left(S^{-1}e_{21}S\right)(-z)= \sum_{k\in\mathbb{Z}}\frac{t^{k}}{2}\begin{pmatrix}-1&-z^{-1}\\ tz^{-1}&1\end{pmatrix}z^{-2k-1}.\] _For \(\hat{sl}_{2}\) we have two fields: \((S^{-1}e_{12}S)(z)\) and_ \[\left(S^{-1}(e_{22}-e_{11})S\right)(z)=\sum_{k\in\mathbb{Z}}t^{k}\begin{pmatrix} 0&1\\ t&0\end{pmatrix}z^{-2k-2}\] _The modes of the second field, together with \(K\), form a basis of the so called principal Heisenberg subalgebra of \(\hat{sl}_{2}\)._ ### \(gl_{n}\) Recall that the conjugacy classes of the Weyl group \(W=S_{n}\) for \(gl_{n}\) are in one-to-one correspondence with the partitions \[\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{s}>0) \tag{5.9}\] of \(n\), i.e. \[|\lambda|=\lambda_{1}+\lambda_{2}+\cdots+\lambda_{s}=n. \tag{5.10}\] We can decompose each element \(w\in W\) in a product of disjoint cycles of length \(\lambda_{i}\). Then two elements with the same cycle type are conjugate. Let \(|\lambda|_{0}=0\) and denote for \(j=1,2\ldots s\), \[|\lambda|_{j}=\lambda_{1}+\lambda_{2}+\cdots+\lambda_{j}, \tag{5.11}\] thus \(|\lambda|=|\lambda|_{s}\). We can associate to a partition \(\lambda\) of \(n\) the standard element \(w_{\lambda}\in W\), namely in the basis \(\{\epsilon_{i}\}\) of \(\mathfrak{h}\): \[w_{\lambda}:\epsilon_{|\lambda|_{j}}\mapsto\epsilon_{|\lambda|_{j-1}+1},\text { and }\epsilon_{|\lambda|_{j-1}+i}\mapsto\epsilon_{|\lambda|_{j-1}+i+1},\text{ }i=1,2,\ldots,\lambda_{j}-1,\quad 1\leq j\leq s. 
\tag{5.12}\] We obtain this \(w_{\lambda}\) if we restrict \(\operatorname{Ad}\tilde{w}_{\lambda}\) to the Cartan subalgebra \(\mathfrak{h}\), where \[\tilde{w}_{\lambda}=\sum_{a=1}^{s}\left(e_{1,\lambda_{a}}^{aa}+\sum_{i=1}^{ \lambda_{a}-1}e_{i+1,i}^{aa}\right)\in GL_{n}, \tag{5.13}\] where we redefine the basis vectors as follows: \[e_{ij}^{ab}=e_{|\lambda|_{a-1}+i,|\lambda|_{b-1}+j},\quad 1\leq i\leq\lambda_{ a},\text{ }1\leq j\leq\lambda_{b},\text{ }1\leq a,b\leq s. \tag{5.14}\] So for the diagonal block related to the part of \(\lambda_{j}\) this is the Coxeter element in the Weyl group of \(gl_{\lambda_{j}}\). Hence, it will be useful to study this case first. **Example 5.2** (One cycle, the principal Heisenberg algebra): _Assume that \(s=1\), i.e. \(\lambda=(n)\) and let_ \[\tilde{w}^{\prime}_{(n)}=e_{1n}+\sum_{i=1}^{n-1}e_{i+1,i}.\] _Let \(\omega=e^{\frac{2\pi\sqrt{-1}}{n}}\). The vectors \((\omega^{j},\omega^{2j},\cdots,\omega^{nj})^{T}\) are eigenvectors of \(\tilde{w}\) with the eigenvalue \(\omega^{-j}\). Thus let_ \[S=\frac{1}{\sqrt{n}}\sum_{i,j=1}^{n}\omega^{ij}e_{ij},\quad\text{ then }S^{-1}=\frac{1}{\sqrt{n}}\sum_{i,j=1}^{n}\omega^{-ij}e_{ij}\] _and_ \[S^{-1}\tilde{w}^{\prime}_{(n)}S=\sum_{j=1}^{n}\omega^{-j}e_{jj}=\exp\left(2\pi \sqrt{-1}\sum_{j=1}^{n}-\frac{j}{n}e_{jj}\right).\] _Hence we could choose \(h^{\prime}_{w}=\sum_{j=1}^{n}-\frac{j}{n}e_{jj}\), then \(\operatorname{Ad}\tilde{w}_{(n)}=Se^{2\pi\sqrt{-1}\operatorname{ad}h_{w}}S^{ -1}\). However, \((h^{\prime}_{w}|I_{n})\neq 0\), but we can always add a multiple of \(I_{n}\) to \(h^{\prime}_{w}\), hence we choose_ \[h_{w}=\sum_{j=1}^{n}\frac{n-2j+1}{2n}e_{jj},\quad\text{so that }(h_{w}|\mathfrak{h }_{0})=0.\] _Note that this means that we let_ \[\tilde{w}_{(n)}=\omega^{\frac{n+1}{2}}\tilde{w}^{\prime}_{(n)}.\] _Now, let_ \[a_{k\ell}=S^{-1}e_{k\ell}S=\frac{1}{n}\sum_{i,j=1}^{n}\omega^{j\ell-ik}e_{ij} \in gl_{n},\] _then_ \[a_{k\ell}(z)=\frac{1}{n}\sum_{m\in\mathbb{Z}}\sum_{i,j=1}^{n}\omega^{j\ell-ik }(t^{m}e_{ij})z^{-mn+i-j-1}.\] _Using (2.22) and (3.2), we find that_ \[r(a_{k\ell})(z)=\frac{\omega^{-k}}{n}\sum_{m,\ell\in\mathbb{Z}}\sum_{i,j=1}^{ n}\omega^{j\ell-ik}:\psi^{+}_{-\ell n-i+\frac{1}{2}}\psi^{-}_{(\ell+m)n+j- \frac{1}{2}}:z^{-mn+i-j-1}.\] _In terms of the charged fermionic fields_ \[\psi^{\pm}(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}\psi^{\pm}_{i}z^{-i-\frac{1}{2 }}, \tag{5.15}\] _this becomes_ \[r(a_{k\ell})(z)=\frac{1}{n}:\psi^{+}(\omega^{-k}z)\psi^{-}(\omega^{-\ell}z):,\] _and in particular_ \[\alpha(z)=\sum_{j\in\mathbb{Z}}\alpha_{j}z^{-j-1}=nr(a_{nn})(z)=\sum_{j\in\mathbb{Z }}r((te_{n1}+\sum_{i=1}^{n-1}e_{i,i+1})^{j})z^{-j-1}=:\psi^{+}(z)\psi^{-}(z):\] _will be the generating field for the principal Heisenberg algebra, i.e._ \[[\alpha_{j},\alpha_{k}]=j\delta_{jk},\quad\alpha_{j}|0\rangle=0,\quad\text{ for }j\geq 0.\] The general construction follows from this example. Consider the partition(5.9) of \(n\). Introduce the associated relabeling, \(j\equiv|\lambda|_{a-1}+1,\ldots,|\lambda|_{a}\mod n\), induced by (5.14), (2.22) and (3.2): \[\psi^{+a}_{j\lambda_{a}-i+\frac{1}{2}}=\psi^{+}_{jn-|\lambda|_{a-1}-i+\frac{1} {2}},\quad\psi^{-a}_{j\lambda_{a}+i-\frac{1}{2}}=\psi^{-}_{jn+|\lambda|_{a-1} +i-\frac{1}{2}}, \tag{5.16}\] for \(1\leq i\leq\lambda_{a}\), \(j\in\mathbb{Z}\), \(a=1,\ldots,s\). Let \(\omega_{a}=e^{\frac{2\pi\sqrt{-1}}{\lambda_{a}}}\). It is straightforward to deduce the following proposition from the above example. 
**Proposition 5.3**: _(a) The element_ \[\tilde{w}_{\lambda}=\left(\sum_{a=1}^{s}\omega_{a}^{\frac{\lambda_{a}+1}{2}} \sum_{j=1}^{\lambda_{a}}e^{aa}_{jj}\right)\tilde{w}^{\prime}_{\lambda}\in GL_ {n}\] _for \(\tilde{w}^{\prime}_{\lambda}\) as in (5.13), is equal to \(S^{-1}\exp(2\pi\sqrt{-1}h_{w})S\), where_ \[h_{w}=\sum_{a=1}^{s}\sum_{i=1}^{\lambda_{a}}\frac{\lambda_{a}-2j+1}{2\lambda_{ a}}e^{aa}_{jj}\in\mathfrak{h}\] _satisfies (5.2), and_ \[S=\sum_{a=1}^{s}\sum_{i=1}^{\lambda_{a}}\frac{1}{\sqrt{\lambda_{a}}}\sum_{i,j =1}^{\lambda_{a}}\omega_{a}^{ij}e^{aa}_{ij},\quad S^{-1}=\sum_{a=1}^{s}\sum_{i =1}^{\lambda_{a}}\frac{1}{\sqrt{\lambda_{a}}}\sum_{i,j=1}^{\lambda_{a}}\omega _{a}^{-ij}e^{aa}_{ij}.\] _(b) The order \(N\) of \(\exp(2\pi\sqrt{-1}\mathrm{ad}\,h_{w})\) is equal to the least common multiple \(N^{\prime}\) of \(\lambda_{1},\lambda_{2},\ldots,\lambda_{s}\) if \(N^{\prime}(\frac{1}{\lambda_{a}}+\frac{1}{\lambda_{b}})\in 2\mathbb{Z}\) for all \(1\leq a,b\leq s\) and \(2N^{\prime}\) otherwise._ _(c) The elements_ \[a^{ab}_{k,\ell}:=S^{-1}e^{ab}_{k\ell}S=\frac{1}{\sqrt{\lambda_{a}\lambda_{b}}} \sum_{i=1}^{\lambda_{a}}\sum_{j=1}^{\lambda_{b}}\omega_{a}^{-ik}\omega_{b}^{j \ell}e^{ab}_{ij}\] _form a new basis of \(gl_{n}\) and the representation \(r\) in \(F\) of their generating fields_ \[a^{ab}_{k,\ell}(z)=\frac{1}{\sqrt{\lambda_{a}\lambda_{b}}}\sum_{m\in\mathbb{Z }}\sum_{i=1}^{\lambda_{a}}\sum_{j=1}^{\lambda_{b}}\left(\omega_{a}^{-ik}\omega _{b}^{j\ell}t^{m}e^{ab}_{ij}\right)z^{-mN+i\frac{N}{\lambda_{a}}-j\frac{N}{ \lambda_{b}}-1},\] _are equal to_ \[r(a^{ab}_{k,\ell})(z)=\frac{\omega_{a}^{-k}}{\sqrt{\lambda_{a}\lambda_{b}}}:\psi^{ +a}(\omega_{a}^{-k}z^{\frac{N}{\lambda_{a}}})\psi^{-b}(\omega_{b}^{-\ell}z^{ \frac{N}{\lambda_{b}}}):z^{\frac{N}{\lambda_{a}}-1}, \tag{5.17}\] _where_ \[\psi^{\pm a}(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}\psi_{i}^{\pm a}z^{-i-\frac{1 }{2}}\] _are the generating fields for the relabeld generators \(\psi_{j}^{\pm a}\)of the Clifford algebra \(C\ell\), \(j\in\frac{1}{2}+\mathbb{Z}\) and \(1\leq a\leq s\)._ _(d) The representation of_ \(\hat{gl}_{n}\) _is given in terms of vertex operators by formula (_3.66_), by substitution of roots of unity_ \(\omega_{a}\) _and powers of_ \(z\) _as given in formula (_5.17_). In particular, the free bosonic fields are_ \[\alpha^{b}(z)=\sum_{j\in\mathbb{Z}}\alpha_{j}^{b}z^{-j-1}=:\psi^{+b}(z)\psi^{ -b}(z):\quad 1\leq b\leq s,\] _so that_ \(\alpha^{b}(z)=\lambda_{b}z^{\frac{\lambda_{a}}{N}-1}r(a^{bb}_{\lambda_{b} \lambda_{b}})(z^{\frac{\lambda_{a}}{N}})\) _and their modes satisfy the commutation relations of a Heisenberg Lie algebra of "rank"_ \(s\)_:_ \[[\alpha_{j}^{a},\alpha_{k}^{b}]=j\delta_{ab}\delta_{jk},\quad j,k\in\mathbb{Z},\;a,b=1,\ldots,s,\quad\alpha_{j}^{a}|0\rangle=0,\quad\text{for }j\geq 0.\qed\] Let \(\hat{\mathfrak{a}}_{\lambda}\) be the span of \(K\) and the elements appearing as the modes of the fields \(a^{bb}_{\lambda_{b}\lambda_{b}}\) for \(1\leq b\leq s\). It is a Heisenberg subalgebra of \(\hat{gl}_{n}\). Then the \(\hat{gl}_{n}\)-module \(F^{(m)}\), when restricted to the subalgebra \(\hat{\mathfrak{a}}_{\lambda}\) remains irreducible when \(\lambda=(n)\) and is not irreducible, otherwise. It was shown [19], that the representation remains irreducible under \(\hat{\mathfrak{a}}_{\lambda}\) and the centralizer of \(\hat{\mathfrak{a}}_{\lambda}\) in \(SL_{n}(\mathbb{C}[t,t^{-1}])\). 
This means in this case that we have to add the operators that describe the group action of the elements
\[\sum_{b=1}^{s}\frac{1}{\lambda_{b}}a^{bb}_{\lambda_{b}\lambda_{b}}(k_{b}),\quad k_{b}\in\mathbb{Z},\quad k_{1}+\cdots+k_{s}=0.\]
These are the operators \(Q_{1}^{k_{1}}\cdots Q_{s}^{k_{s}}\), which are products of \(Q_{i}Q_{i+1}^{-1}\) for \(i=1,\ldots,s-1\) and their inverses. This gives the (twisted) group algebra of the root lattice of \(sl_{s}\). The \(q\)-dimension formula for the \(\hat{gl}_{n}\) module \(F\) now differs from the formula of the \(a_{\infty}\)-module (3.39), since the gradation is different, see e.g. [22]. Let \(N\) be the least common multiple of \(\lambda_{1},\ldots,\lambda_{s}\), then [22]
\[\dim_{q}F^{(m)}=\sum_{j_{1},\ldots,j_{s}\in\mathbb{Z}\atop j_{1}+\cdots+j_{s}=m}q^{\frac{N}{2}(\frac{j_{1}^{2}}{\lambda_{1}^{2}}+\frac{j_{2}^{2}}{\lambda_{2}^{2}}+\cdots+\frac{j_{s}^{2}}{\lambda_{s}^{2}})}\prod_{a=1}^{s}\prod_{i=1}^{\infty}\frac{1}{1-q^{\frac{iN}{\lambda_{a}}}}.\]
The module \(F^{(m)}\), when restricted to \(\hat{sl}_{n}\), is no longer irreducible. However, it is straightforward to calculate the \(q\)-dimension formula of the irreducible component \(F^{(m)}_{sl_{n}}\) that contains \(|m\rangle\). For that one only has to remove all
\[\alpha_{j\lambda_{1}}^{1}+\alpha_{j\lambda_{2}}^{2}+\cdots+\alpha_{j\lambda_{s}}^{s}\quad\text{for all }j\in\mathbb{Z}.\]
Since such an element has energy \(jN\), this gives
\[\dim_{q}F_{sl_{n}}^{(m)}=\sum_{j_{1},\ldots,j_{s}\in\mathbb{Z}\atop j_{1}+\cdots+j_{s}=m}q^{\frac{N}{2}(\frac{j_{1}^{2}}{\lambda_{1}^{2}}+\frac{j_{2}^{2}}{\lambda_{2}^{2}}+\cdots+\frac{j_{s}^{2}}{\lambda_{s}^{2}})}\prod_{j=1}^{\infty}(1-q^{jN})\prod_{a=1}^{s}\prod_{i=1}^{\infty}\frac{1}{1-q^{\frac{iN}{\lambda_{a}}}}.\]

### \(sp_{2n}\)

Recall that the conjugacy classes of the Weyl group \(W\) of \(sp_{2n}\) are in one-to-one correspondence with pairs of partitions
\[\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{s}>0)\quad\text{and }\mu=(\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{r}>0), \tag{5.18}\]
of \(n\), i.e.
\[|\lambda|+|\mu|=\lambda_{1}+\cdots+\lambda_{s}+\mu_{1}+\cdots+\mu_{r}=n. \tag{5.19}\]
Namely, each element \(w\in W\) decomposes in a product of \(s\) disjoint positive cycles of length \(\lambda_{i}\), and \(r\) disjoint negative cycles of length \(\mu_{j}\), such that (5.19) holds. Two elements with the same cycle type are conjugate. We can associate to a pair of partitions \((\lambda,\mu)\) of \(n\) a standard element \(w_{(\lambda,\mu)}\in W\), namely:
\[w_{(\lambda,\mu)}: \epsilon_{|\lambda|_{i}}\mapsto\epsilon_{|\lambda|_{i-1}+1},\ \epsilon_{|\lambda|+|\mu|_{j}}\mapsto-\epsilon_{|\lambda|+|\mu|_{j-1}+1}, \tag{5.20}\]
\[\epsilon_{|\lambda|_{i-1}+k}\mapsto\epsilon_{|\lambda|_{i-1}+k+1},\quad k=1,2,\ldots,\lambda_{i}-1,\quad 1\leq i\leq s,\]
\[\epsilon_{|\lambda|+|\mu|_{j-1}+k}\mapsto\epsilon_{|\lambda|+|\mu|_{j-1}+k+1},\quad k=1,2,\ldots,\mu_{j}-1,\quad 1\leq j\leq r.\]
This suggests redefining the elements that span \(sp_{2n}\) as follows. Let \(m_{a}=\lambda_{a}\), \(M_{a}=|\lambda|_{a-1}\) for \(1\leq a\leq s\), \(m_{a}=\mu_{a-s}\), \(M_{a}=|\lambda|+|\mu|_{a-s-1}\) for \(s<a\leq r+s\), and \(M_{r+s+1}=|\lambda|+|\mu|\).
Let \((1\leq a,b\leq s+r,\)\(1\leq i\leq m_{a},\)\(1\leq j\leq m_{b})\) \[e_{ij}^{+a,-b}= e_{M_{a}+i,M_{b}+j}-(-1)^{M_{a}+M_{b}+i+j}e_{2n+1-M_{b}-j,2n+1-M_ {a}-i} \tag{5.21}\] \[e_{ij}^{+a,+b}= (-1)^{M_{a}+M_{b}+i+j}e_{ji}^{+b,+a}=e_{M_{a}+i,2n+1-M_{b}-j}-(-1) ^{M_{a}+M_{b}+i+j+1}e_{M_{b}+j,2n+1-M_{a}-i}\] \[e_{ij}^{-a,-b}=(-1)^{M_{a}+M_{b}+i+j}e_{ji}^{-b,-a}=e_{2n+1-M_{a }-i,M_{b}+j}-(-1)^{M_{a}+M_{b}+i+j+1}e_{2n+1-M_{b}-j,M_{a}+i}.\] Before finding the lift of the element (5.20), we first investigate two special cases, which we present as examples. **Example 5.4**: **(One positive cycle)** _In this case \(s=1\) and \(r=0\), hence_ \[w_{((n),\emptyset)}:\epsilon_{n}\mapsto\epsilon_{1},\quad\epsilon_{k}\mapsto \epsilon_{k+1},\quad k=1,2,\ldots,n. \tag{5.22}\] _We let_ \[\tilde{w}=\tilde{w}_{((n),\emptyset)}=e_{1,n}-(-1)^{n}e_{2n,n+1}+\sum_{i=1}^{ n-1}(e_{i+1,i}-e_{2n-i,2n+1-i}).\] _Then it is straightforward to check that \((\tilde{w}_{((n),\emptyset)}u,\tilde{w}_{((n),\emptyset)}v)_{sp_{2n}}=(u,v)_{ sp_{2n}}\) for \(u,v\in\mathbb{C}^{2n}\), i.e. \(\tilde{w}\in Sp_{2n}\), and that \(\operatorname{Ad}\tilde{w}_{((n),\emptyset)}|_{\emptyset}=w_{((n),\emptyset)}\)._ _The matrix \(\tilde{w}\) has eigenvectors \((\omega^{j},\omega^{2j},\ldots\omega^{nj},0,\cdots,0)^{T}\) and \((0,\ldots,0,(-\omega^{j})^{n},(-\omega^{j})^{n-1},\ldots,-\omega^{j})^{T}\) with eigenvalue \(\omega^{-j}\), where as before \(\omega=e^{\frac{2\pi\sqrt{-1}}{n}}\). Set_ \[S=\frac{1}{\sqrt{n}}\sum_{i,j=1}^{n}(\omega^{ij}e_{ij}+(-1)^{i+j}\omega^{-ij}e_ {2n+1-i,2n+1-j}),\] _then_ \[S^{-1}=\frac{1}{\sqrt{n}}\sum_{i,j=1}^{n}(\omega^{-ij}e_{ij}+(-1)^{i+j}\omega^{ ij}e_{2n+1-i,2n+1-j}),\] _and_ \[(Sv_{k},Sv_{2n+1-\ell})_{sp_{2n}} =\frac{1}{n}\sum_{i,j=1}^{n}\omega^{ki}(-1)^{i+\ell}\omega^{-j\ell }(v_{i},v_{2n+1-j})_{sp_{2n}}\] \[=\frac{1}{n}\sum_{i=1}^{n}(-1)^{\ell}\omega^{(k-\ell)i}\] \[=(-1)^{\ell}\delta_{k\ell}=(v_{k},v_{2n+1-\ell})_{sp_{2n}}.\] _Thus also \(S\in Sp_{2n}\) and_ \[\tilde{w}_{((n),\emptyset)}=S\sum_{j=1}^{n}\omega^{-j}(e_{jj}-e_{2n+1-j,2n+1-j} )S^{-1}.\] _So if we define, analogously to the principal case for \(gl_{n}\),_ \[h_{w}=\sum_{j=1}^{n}\frac{n-2j+1}{2n}(e_{jj}-e_{2n+1-j,2n+1-j}),\] _then \(S^{-1}\mathrm{Ad}\,\tilde{w}S=\exp(2\pi\sqrt{-1}\mathrm{ad}\,h_{w})\). 
Note that condition (5.2) is satisfied since \(\mathfrak{h}_{0}=\mathbb{C}\sum_{i=1}^{n}(e_{ii}-e_{2n+1-i,2n+1-i})\)._ _Let_ \[a_{k\ell}^{+-}=S^{-1}e_{k\ell}^{+-}S=\frac{1}{n}\sum_{i,j=1}^{n}(\omega^{-k})^ {i}(\omega^{\ell})^{j}e_{ij}^{+-},\] \[a_{k\ell}^{++}=S^{-1}e_{k\ell}^{++}S=(-1)^{\ell}\frac{1}{n}\sum_{i,j=1}^{n}( \omega^{-k})^{i}(-\omega^{-\ell})^{j}e_{ij}^{++},\] \[a_{k\ell}^{--}=S^{-1}e_{k\ell}^{--}S=(-1)^{k}\frac{1}{n}\sum_{i,j=1}^{n}(- \omega^{k})^{i}(\omega^{\ell})^{j}e_{ij}^{--}.\] _Then_ \[a_{k\ell}^{+-}(z) =\frac{1}{n}\sum_{m\in\mathbb{Z}}\sum_{i,j=1}^{n}\left((\omega^{- k})^{i}(\omega^{-\ell})^{-j}t^{m}e_{ij}^{+-}\right)z^{-mn+i-j-1},\] \[a_{k\ell}^{++}(z) =(-1)^{\ell}\frac{1}{n}\sum_{m\in\mathbb{Z}}\sum_{i,j=1}^{n}\left( (\omega^{-k})^{i}(-\omega^{-\ell})^{j}t^{m}e_{ij}^{++}\right)z^{-(m+1)n+i+j-2},\] \[a_{k\ell}^{--}(z) =(-1)^{k}\frac{1}{n}\sum_{m\in\mathbb{Z}}\sum_{i,j=1}^{n}\left((- \omega^{-k})^{-i}(\omega^{-\ell})^{-j}t^{m}e_{ij}^{--}\right)z^{-(m-1)n-i-j}.\] _Using \((2.27)\) and \((3.9)\), we can calculate_ \[r(a_{k\ell}^{+-})(z) =\frac{1}{n}\sum_{\ell,m\in\mathbb{Z}}\sum_{i,j=1}^{n}(\omega^{-k})^ {i}(-\omega^{-\ell})^{-j}:b_{-2\ell n-i+\frac{1}{2}}b_{2(\ell+m)n+j-\frac{1}{2} }:z^{-mn+i-j-1},\] \[r(a_{k\ell}^{++})(z) =(-1)^{\ell+1}\frac{1}{n}\sum_{\ell,m\in\mathbb{Z}}\sum_{i,j=1}^{ n}(\omega^{-k})^{i}(\omega^{-\ell})^{j}:b_{-2\ell n-i+\frac{1}{2}}b_{2(\ell+m+1)n-j+ \frac{1}{2}}:z^{-(m+1)n+i+j-2},\] \[r(a_{k\ell}^{--})(z) =(-1)^{k}\frac{1}{n}\sum_{m\in\mathbb{Z}}\sum_{i,j=1}^{n}(-\omega ^{-k})^{-i}(-\omega^{\ell})^{-j}:b_{-2(\ell-1)n+i-\frac{1}{2}}b_{2(\ell+m)n+j- \frac{1}{2}}:z^{-(m-1)n-i-j}.\] _Thus, relabeling the \(b_{j}\) as follows:_ \[b_{\ell n-i+\frac{1}{2}}^{+}=b_{2\ell n-i+\frac{1}{2}},\quad b_{\ell n+i-\frac {1}{2}}^{-}=(-1)^{i}b_{2\ell n+i-\frac{1}{2}}, \tag{5.23}\] _and their generating series as in (5.15), we find that_ \[r(a_{k\ell}^{+-})(z) =\frac{\omega^{-k}}{n}:b^{+}(\omega^{-k}z)b^{-}(\omega^{-\ell}z):,\] \[r(a_{k\ell}^{++})(z) =(-1)^{\ell+1}\frac{\omega^{-k-\ell}}{n}:b^{+}(\omega^{-k}z)b^{+} (\omega^{-\ell}z):,,\] \[r(a_{k\ell}^{--})(z) =(-1)^{k}\frac{1}{n}:b^{-}(\omega^{-k}z)b^{-}(\omega^{-\ell}z):.\] _Note that the \(b_{j}^{\pm}\) defined by (5.23) satisfy_ \[b_{i}^{\pm}b_{j}^{\pm}-b_{j}^{\pm}b_{i}^{\pm}=0,\quad b_{i}^{+}b_{j}^{-}-b_{j} ^{-}b^{+}=\delta_{i,-j}\quad i,j\in\frac{1}{2}+\mathbb{Z},\] _and_ \[b_{i}^{\pm}|0\rangle=0\quad\text{for }i>0.\] _Again letting \(\alpha(z)=\sum_{j\in\mathbb{Z}}\alpha_{j}z^{-j-1}=nr(a_{nn}^{+-})(z)\), this defines the Heisenberg subalgebra of \(r(\hat{sp}_{2n})\), with commutation relations_ \[[\alpha_{j},\alpha_{k}]=-j\delta_{jk},\quad\alpha_{j}|0\rangle=0,\quad\text{ for }j\geq 0.\] _Finally,_ \[[\alpha_{j},b^{\pm}(z)]=\mp z^{j}b^{\pm}(z).\] **Example 5.5** (One negative cycle, the principal Heisenberg subalgebra): _In this case \(r=1\) and \(s=0\), hence_ \[w_{(\emptyset,(n))}:\epsilon_{n}\mapsto-\epsilon_{1},\quad\epsilon_{k}\mapsto \epsilon_{k+1},\quad k=1,2,\ldots,n-1.\] _We choose_ \[\tilde{w}=\tilde{w}_{(\emptyset,(n))}=e_{2n,n}+(-1)^{n}e_{1,n+1}+\sum_{i=1}^{n -1}(e_{i+1,i}-e_{2n-i,2n+1-i}).\] _Then it is straightforward to check that \(\operatorname{Ad}\tilde{w}|_{\mathfrak{h}}=w_{(\emptyset,(n))}\) and that this element satisfies_ \[(\tilde{w}u,\tilde{w}v)_{sp_{2n}}=(u,v)_{sp_{2n}}\quad u,v\in\mathbb{C}^{2n}.\] _Thus \(\tilde{w}\in Sp_{2n}\). Let \(\omega=e^{\frac{2\pi\sqrt{-1}}{4n}}\). 
The matrix \(\tilde{w}\) has eigenvectors_ \[(\omega^{j},\omega^{2j},\ldots,\omega^{nj},(-1)^{n},(-1)^{n}(-\omega^{-j})^{1},(-1)^{n}(-\omega^{-j})^{2},\ldots,(-1)^{n}(-\omega^{-j})^{n-2},(-1)^{n}(- \omega^{-j})^{n-1})^{T},\] _with eigenvalues \(\omega^{-j}\), for \(j=1,3,5,\ldots 4n-1\). Hence, we can diagonalize \(\tilde{w}\) by \(S=(S_{ij})_{1\leq i,j\leq 2n}\), where_ \[S_{i,j}=\frac{\omega^{\frac{n}{2}}(-\omega^{2j-1})^{i}}{\sqrt{2n}},\quad S_{2 n+1-i,j}=\frac{\omega^{\frac{n}{2}}(\omega^{1-2j})^{n-i}}{\sqrt{2n}},\quad \text{for $1\leq i\leq n,\ 1\leq j\leq 2n$},\] _so that \(\operatorname{Ad}\tilde{w}=S(\exp(2\pi\sqrt{-1}\mathrm{ad}\,h_{w})S^{-1}\), for_ \[h_{w}=\sum_{i=1}^{n}\frac{2n-2i+1}{4n}(e_{ii}-e_{2n+1-i,2n+1-i})\] _Note that \(S^{-1}=(T_{ij})_{1\leq i,j\leq 2n}\), where_ \[T_{i,j}=\frac{\omega^{-\frac{n}{2}}(-\omega^{2i-1})^{-j}}{\sqrt{2n}},\quad T_ {i,2n+1-j}=\frac{\omega^{-\frac{n}{2}}(\omega^{1-2i})^{j-n}}{\sqrt{2n}},\quad \text{for $1\leq i\leq 2n,\ 1\leq j\leq n$}.\] _Moreover, it is straightforward to check that \((Su,Sv)_{sp_{2n}}=(u,v)_{sp_{2n}}\) for \(u,v\in\mathbb{C}^{2n}\), hence \(S\in Sp_{2n}\). Thus_ \[a_{k\ell}^{+-}=S^{-1}e_{k\ell}^{+-}S=\frac{(-\omega)^{k-\ell}}{2 n}\sum_{i,j=1}^{2n}(\omega^{-2k})^{i}(\omega^{2\ell})^{j}(e_{ij}-(-1)^{i+j}e_{2n+1 -j,2n+1-i}),\] \[a_{k\ell}^{++}=S^{-1}e_{k\ell}^{++}S=\frac{(-\omega)^{k}\omega^ {n-\ell}}{2n}\sum_{i,j=1}^{2n}(\omega^{-2k})^{i}(-\omega^{2\ell})^{j}(e_{ij}- (-1)^{i+j}e_{2n+1-j,2n+1-i})\] \[a_{k\ell}^{--}=S^{-1}e_{k\ell}^{--}S=\frac{(-\omega)^{-\ell} \omega^{k-n}}{2n}\sum_{i,j=1}^{2n}(-\omega^{-2k})^{i}(\omega^{2\ell})^{j}(e_{ ij}-(-1)^{i+j}e_{2n+1-j,2n+1-i}),\] _and the corresponding generating fields are_ \[a_{k\ell}^{+-}(z)=\frac{(-\omega)^{k-\ell}}{2n}\sum_{m\in\mathbb{Z}}\sum_{i,j =1}^{2n}\left((\omega^{-2k})^{i}(\omega^{2\ell})^{j}t^{m}(e_{ij}-(-1)^{i+j}e_{ 2n+1-j,2n+1-i})\right)z^{-2mn+i-j-1},\] \[a_{k\ell}^{++}(z)=\frac{(-\omega)^{k}\omega^{n-\ell}}{2n}\sum_{m\in\mathbb{Z} }\sum_{i,j=1}^{2n}\left((\omega^{-2k})^{i}(-\omega^{2\ell})^{j}t^{m}(e_{ij}-(- 1)^{i+j}e_{2n+1-j,2n+1-i})\right)z^{-2mn+i-j-1},\] \[a_{k\ell}^{--}(z)=\frac{(-\omega)^{-\ell}\omega^{k-n}}{2n}\sum_{m\in\mathbb{Z} }\left((-\omega^{-2k})^{i}(\omega^{2\ell})^{j}t^{m}(e_{ij}-(-1)^{i+j}e_{2n+1-j,2n+1-i})\right)z^{-2mn+i-j-1}.\] _Using (2.27) and (3.9), we can express them in terms of the \(b_{j}\):_ \[r(a_{k\ell}^{+-})(z) =\frac{(-\omega)^{k-\ell}}{2n}\sum_{\ell,m\in\mathbb{Z}}\sum_{i,j=1} ^{2n}(\omega^{-2k})^{i}(-\omega^{2\ell})^{j}:b_{-2\ell n-i+\frac{1}{2}}b_{2(\ell +m)n+j-\frac{1}{2}}:z^{-2mn+i-j-1},\] \[r(a_{k\ell}^{++})(z) =\frac{(-\omega)^{k}\omega^{n-\ell}}{2n}\sum_{\ell,m\in\mathbb{Z} }\sum_{i,j=1}^{2n}(\omega^{-2k})^{i}(\omega^{2\ell})^{j}:b_{-2\ell n-i+\frac{1} {2}}b_{2(\ell+m)n+j-\frac{1}{2}}:z^{-2mn+i-j-1},\] \[r(a_{k\ell}^{--})(z) =\frac{(-\omega)^{-\ell}\omega^{k-n}}{2n}\sum_{\ell,m\in\mathbb{ Z}}(-\omega^{-2k})^{i}(-\omega^{2\ell})^{j}:b_{-2\ell n-i+\frac{1}{2}}b_{2( \ell+m)n+j-\frac{1}{2}}:z^{-2mn+i-j-1}.\] _Defining the generating field of the \(\tilde{\phi}_{j}\) as \(b(z)=\sum_{j\in\frac{1}{2}+\mathbb{Z}}b_{j}z^{-j-\frac{1}{2}}\), we find that for \(\delta,\epsilon=+,-\):_ \[r(a_{k,\ell}^{\delta,\epsilon})(z)=\frac{c_{k,\ell}^{\delta,\epsilon}}{2n}:b( \delta\omega^{-2k}z)b(\epsilon\omega^{-2\ell}z):,\] _where_ \[c_{k,\ell}^{+-}=(-\omega)^{-k-\ell},\quad c_{k,\ell}^{++}=\sqrt{-1}(-\omega)^{ -k}\omega^{-\ell},\quad c_{k,\ell}^{--}=\sqrt{-1}(-\omega)^{-\ell}\omega^{-k}.\] _Furthermore,_ \[\alpha(z):=nr(a_{nn}^{+-})(z)=\sum_{j\in 
1+2\mathbb{Z}}\alpha_{j}z^{-j}=\frac{1 }{2}:b(z)b(-z):\] _is the generating field for the Heisenberg subalgebra. The modes of this field satisfy_ \[[\alpha_{j},\alpha_{k}]=-\frac{j}{2}\delta_{jk},\quad\alpha_{j}|0\rangle=0 \quad\text{for }j>0.\] _Finally,_ \[[\alpha_{j},b(z)]=z^{j}b(z).\] The general construction follows from these two examples. Namely, it is straightforward to deduce the following proposition. **Proposition 5.6**: _let \(w_{(\lambda,\mu)}\) be given by (5.20). Set \(\lambda_{s+a}=:\mu_{a}\) and \(\omega_{a}=e^{\frac{2\pi\sqrt{-1}}{\lambda_{a}}}\) if \(1\leq a\leq s\) and \(\omega_{a}=e^{\frac{2\pi\sqrt{-1}}{2\lambda_{a}}}\) if \(s+1\leq a\leq r+s\). Then (a) The element_ \[\tilde{w}_{(\lambda,\mu)} =\sum_{a=1}^{s}\omega_{a}^{\frac{\lambda_{a}+1}{2}}\big{(}e_{M_{a }+1,M_{a+1}}-(-1)^{\lambda_{a}}e_{2n-M_{a},2n-M_{a+1}+1}+\] \[\qquad\qquad+\sum_{i=1}^{\lambda_{a}-1}(e_{M_{a}+i+1,M_{a}+i}-E_{ 2n-M_{a}-i,2n+1-Ma-i})\big{)}+\] \[\quad+\sum_{a=s+1}^{r+s}\big{(}e_{2n-M_{a},M_{a+1}}+(-1)^{\lambda _{a}}e_{M_{a}+1,N-M_{a+1}+1}+\] \[\qquad\qquad+\sum_{i=1}^{\lambda_{a}-1}(e_{M_{a}+i+1,M_{a}+i}-e_{ 2n-M_{a}-i,2n+1-Ma-i})\big{)}\in Sp_{2n}\] _is a lift of \(w_{(\lambda,\mu)}\) and \(\tilde{w}_{(\lambda,\mu)}=S^{-1}\exp(2\pi\sqrt{-1}h_{w})S\), where_ \[h_{w}=\sum_{a=1}^{s}\sum_{j=1}^{\lambda_{a}}\frac{\lambda_{a}-2j+1}{2\lambda_{a} }\epsilon_{M_{a}+j}+\sum_{b=s+1}^{r+s}\sum_{j=1}^{\lambda_{b}}\frac{\lambda_{b }-j+\frac{1}{2}}{2\lambda_{b}}\epsilon_{M_{b}+j}\in\mathfrak{h}\] _satisfies (5.2), and_ \[\begin{split} S=&\sum_{a=1}^{s}\sum_{i=1}^{\lambda_ {a}}\frac{1}{\sqrt{\lambda_{a}}}\sum_{i,j=1}^{\lambda_{a}}(\omega_{a}^{ij}e_{M _{a}+i,M_{a}+j}+(-1)^{i+j}\omega^{-ij}e_{2n+1-M_{a}-i,2n+1-M_{a}-j})+\\ &+\sum_{a=s+1}^{r+s}\frac{1}{\sqrt{2\lambda_{a}}}\sum_{i,j=1}^{ \lambda_{a}}\big{(}(-1)^{i}\omega_{a}^{\frac{\lambda_{a}}{4}}(\omega_{a}^{-i}) ^{\frac{1}{2}-j}e_{M_{a}+i,M_{a}+j}+(-1)^{i}\omega_{a}^{\frac{\lambda_{a}}{4}} (\omega_{a}^{-i})^{\frac{1}{2}-\lambda_{a}-j}e_{M_{a}+i,2n-M_{a+1}+j}+\\ &+\omega_{a}^{\frac{\lambda_{a}}{4}}(\omega^{\lambda_{a}-i})^{ \frac{1}{2}-j}e_{2n+1-M_{a}-i,M_{a}+j}+\omega_{a}^{\frac{\lambda_{a}}{4}}( \omega^{\lambda_{a}-i})^{\frac{1}{2}-\lambda_{a}-j}e_{2n+1-M_{a}-i,2n-M_{a+1}+ j}\big{)}.\end{split}\] _(b) The order \(N\) of \(\exp(2\pi\sqrt{-1}\mathrm{ad}\,h_{w})\) is equal to the least common multiple \(N^{\prime}\) of \(\lambda_{1},\lambda_{2},\ldots,\lambda_{s},2\lambda_{s+1},2\lambda_{s+2},\ldots 2 \lambda_{r+s}\) if \(N^{\prime}(\frac{1}{\lambda_{a}}+\frac{1}{\lambda_{b}})\in 2\mathbb{Z}\) for all \(1\leq a,b\leq s\), \(N^{\prime}(\frac{1}{2\lambda_{a}}+\frac{1}{2\lambda_{b}})\in 2\mathbb{Z}\) for all \(s+1\leq a,b\leq r+s\), \(N^{\prime}(\frac{1}{\lambda_{a}}+\frac{1}{2\lambda_{b}})\in 2\mathbb{Z}\) for all \(1\leq a\leq s\), \(s+1\leq b\leq r+s\) and \(2N^{\prime}\) otherwise. 
(c) The elements \(a_{k,\ell}^{\delta a,eb}:=S^{-1}e_{k\ell}^{\delta a,eb}S=\) form a new basis of \(sp_{2n}\) and the representation \(r\) in \(F_{c}\) of their generating fields \(a_{k\ell}^{\delta a,eb}(z)\) are equal to_ \[r(a_{k,\ell}^{\delta_{a,eb}})(z)=L_{k}^{\delta a}R_{\ell}^{\epsilon b}:B_{k}^{ \delta a}(z)B_{\ell}^{\epsilon b}(z):,\] _where_ \[\begin{split} L_{k}^{+a}=&\begin{cases}\frac{(-1)^{k+ M_{a}}}{\sqrt{\lambda_{a}}}&1\leq a\leq s,\\ \frac{\gamma_{a}^{-1}}{\sqrt{2\lambda_{a}}}\omega_{a}^{-\frac{k}{2}+\frac{ \lambda_{a}}{4}}&s<a\leq r+s,\end{cases}&L_{k}^{-a}=\begin{cases}\frac{ \omega_{a}^{-k}}{\sqrt{\lambda_{a}}}&1\leq a\leq s,\\ \frac{(-1)^{k}\gamma_{a}^{-1}}{\sqrt{2\lambda_{a}}}\omega_{a}^{-\frac{k}{2}- \frac{\lambda_{a}}{4}}&s<a\leq r+s\end{cases}\\ R_{k}^{+a}=&\begin{cases}\frac{(-1)^{k+M_{a}+1}}{\sqrt{\lambda_{a}}}\omega_{a}^ {-k}&1\leq a\leq s,\\ \frac{\gamma_{a}}{\sqrt{2\lambda_{a}}}\omega_{a}^{-\frac{k}{2}+\frac{\lambda_{ a}}{4}}&s<a\leq r+s,\end{cases}&R_{k}^{-a}=\begin{cases}\frac{1}{\sqrt{\lambda_{a}}}&1\leq a \leq s,\\ \frac{(-1)^{k}\gamma_{a}}{\sqrt{2\lambda_{a}}}\omega_{a}^{-\frac{k}{2}+\frac{ \lambda_{a}}{4}}&s<a\leq r+s,\end{cases}\end{split}\] _where \(\gamma_{a}=1\) if \(M_{a}\) is even and \(\gamma_{a}=i\) if \(M_{a}\) is odd,and_ \[B_{k}^{\pm a}(z)=\begin{cases}b^{\pm a}(\omega_{a}^{-k}z^{\frac{N}{\lambda_{a}}} )z^{\frac{N}{2\lambda_{a}}-\frac{1}{2}}&1\leq a\leq s,\\ b^{a}(\pm\omega_{a}^{-k}z^{\frac{N}{2\lambda_{a}}})z^{\frac{N}{4\lambda_{a}}- \frac{1}{2}}&s<a\leq s+r,\end{cases}\] _where the_ \[b^{\pm a}(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}b_{i}^{\pm a}z^{-i-\frac{1}{2}}, \quad b^{s+b}(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}b^{s+b}z^{-i-\frac{1}{2}}, \quad 1\leq a\leq s,\ 1\leq b\leq r,\] _are the generating fields for the relabelled generators of the Weyl algebra \(C\ell_{c}\) (\(1\leq a\leq s\), \(1\leq b\leq r\), \(1\leq i\leq\lambda_{a}\), \(1\leq j\leq\lambda_{s+b}\), \(\ell\in\mathbb{Z}\)):_ \[\begin{split} b_{\ell\lambda_{a}-i+\frac{1}{2}}^{+a}=b_{2\ell n-M_{a }-i+\frac{1}{2}},\quad b_{\ell\lambda_{a}+i-\frac{1}{2}}^{-a}=(-1)^{M_{a}+i}b_{2 \ell n+M_{a}+i-\frac{1}{2}},\\ b_{2\ell\lambda_{s+b}-j+\frac{1}{2}}^{s+b}=\gamma_{s+b}b_{2\ell n-M_{s+b}-j+ \frac{1}{2}},\quad b_{2\ell\lambda_{s+b}+j-\frac{1}{2}}^{s+b}=\gamma_{s+b}b_{2 \ell n+M_{s+b}+j-\frac{1}{2}}.\end{split} \tag{5.24}\] _They satisfy (\(1\leq a,c\leq s,\ 1\leq b,d\leq r,\quad i,j\in\frac{1}{2}+\mathbb{Z}\))_ \[b_{i}^{\pm a}b_{j}^{\pm c}-b_{j}^{\pm c}b_{i}^{\pm a} =0, \tag{5.25}\] \[b_{i}^{+a}b_{j}^{-}-b_{j}^{-c}b^{+a} =\delta_{ac}\delta_{i,-j}\] \[b_{i}^{s+b}b_{j}^{s+d}-b_{j}^{s+d}b_{i}^{s+b} =(-1)^{i-\frac{1}{2}}\delta_{bd}\delta_{i,-j},\] \[b_{i}^{\pm a}b_{j}^{s+b}-b_{j}^{s+b}b_{i}^{\pm a} =0,\] \[b_{i}^{\pm a}|0\rangle=b_{i}^{s+b}|0\rangle =0\quad i>0.\] _(d) The representation of \(\hat{sp}_{n}\) is given in terms of vertex operators by formulas (3.79), (3.80) and (3.81) by substitution of roots of unity \(\omega_{a}\) and powers of \(z\) as given in the above formulas. In particular, the Heisenberg algebra, which is described by the fields \(\alpha^{a}(z)\) are defined as in (3.41) and they satisfy (3.42)._ The \(q\) dimension formula differs from the one given in (3.60), since the gradation is different. 
Let \(x^{2}=1\) and \(N\) be the least common multiple of \(\lambda_{1},\ldots,\lambda_{s}\),\(2\mu_{1},\ldots,2\mu_{r}\), then \[\dim_{q}F^{\overline{0}}+xF^{\overline{1}}=\prod_{a=1}^{s}\prod_{ j=1}^{\infty}\frac{1}{(1-xq^{\frac{(j-\frac{1}{2})N}{\lambda_{a}}})^{2}} \prod_{b=1}^{r}\prod_{j=1}^{\infty}\frac{1}{1-xq^{\frac{(j-\frac{1}{2})N}{\mu_ {b}}}}\] \[=\prod_{a=1}^{s}\prod_{j=1}^{\infty}\frac{1}{1-q^{\frac{iN}{ \lambda_{a}}}}\sum_{j,k=0}^{\infty}\frac{q^{\frac{kN}{2\lambda_{a}}}x^{k}}{(1- q^{\frac{N}{\lambda_{a}}})\cdots(1-q^{\frac{kN}{\lambda_{a}}})}q^{jk\frac{N}{ \lambda_{a}}}\frac{q^{\frac{jN}{2\lambda_{a}}}x^{j}}{(1-q^{\frac{N}{\lambda_{ a}}})\cdots(1-q^{\frac{iN}{\lambda_{a}}})}\times\] \[\qquad\qquad\prod_{b=1}^{r}\prod_{j=1}^{\infty}\frac{1+xq^{\frac{ N(j-\frac{1}{2})}{\mu_{b}}}}{1-q^{\frac{(2j-1)N}{\mu_{b}}}}.\] ### \(so_{2n}\) Recall that the conjugacy classes of the Weyl group for \(so_{2n}\) are in one-to-one correspondence with the pair of partitions \[\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{s}>0),\mu=(\mu_{1} \geq\mu_{2},\geq\cdots\geq\mu_{r}>0), \tag{5.26}\] of \(n\), with \(r\in 2\mathbb{Z}\), i.e. \[|\lambda|+|\mu|=\lambda_{1}+\cdots+\lambda_{s}+\mu_{1}+\cdots+\mu_{r}=n. \tag{5.27}\] We can decompose each element \(w\in W\) in \(s\) disjoint positive cycles of length \(\lambda_{i}\), and \(r\) disjoint negative cycles of length \(\mu_{j}\) such that (5.27) holds. It is straightforward to show that two elements with the same cycle type are conjugate under \(O_{2n}\). We relabel the basis as follows. Let \(m_{a}=\lambda_{a}\), \(M_{a}=|\lambda|_{a-1}\) for \(1\leq a\leq s\) and \(m_{a}=\mu_{a-s}\), \(M_{a}=|\lambda|+|\mu|_{a-s-1}\) for \(s<a\leq r+s\) and \(M_{r+s+1}=|\lambda|+|\mu|\), then set (\(1\leq a,b\leq s+r\), \(1\leq i\leq m_{a}\), \(1\leq j\leq m_{b}\)) \[e_{ij}^{+a,-b}= e_{M_{a}+i,M_{b}+j}-e_{2n+1-M_{b}-j,2n+1-M_{a}-i}\] \[e_{ij}^{+a,+b}= -=e_{M_{a}+i,2n+1-M_{b}-j}-e_{M_{b}+j,2n+1-M_{a}-i} \tag{5.28}\] \[e_{ij}^{-a,-b}=-e_{ji}^{-b,-a}=e_{2n+1-M_{a}-i,M_{b}+j}-e_{2n+1- M_{b}-j,M_{a}+i}.\] We can associate, as for \(sp_{2n}\) to a pair of partition \((\lambda,\mu)\) of \(n\) a standard element \(w_{(\lambda,\mu)}\in W\), namely the one given by (5.20), but now we want to lift this to \(SO_{2n}\). As in the case of \(sp_{2n}\) we will first investigate two examples. **Example 5.7**: **(One positive cycle)** _In this case \(s=1\) and \(r=0\) and we have the same element as in (5.22). However now we have to lift this to \(SO_{2n}\). 
Let \(\omega=e^{\frac{2\pi\sqrt{-1}}{n}}\) choose_ \[\tilde{w}_{((n),\emptyset)}=\omega^{\frac{n+1}{2}}e_{1,n}+\omega^{-\frac{n+1}{ 2}}e_{2n,n+1}+\sum_{i=1}^{n-1}(\omega^{\frac{n+1}{2}}e_{i+1,i}+\omega^{-\frac{n +1}{2}}e_{2n-i,2n+1-i}).\] _Clearly, \((\tilde{w}_{((n),\emptyset)}v,\tilde{w}_{((n),\emptyset)}w)_{so_{2n}}=(v,w)_{ so_{2n}}\) and we can diagonalize \(\tilde{w}_{((n),\emptyset)}\) by choosing_ \[S=\sum_{i,j=1}^{n}(\omega^{ij}e_{ij}+\omega^{-ij}e_{2n+1-i,2n+1-j}).\] _Then_ \[S^{-1}=\sum_{i,j=1}^{n}(\omega^{-ij}e_{ij}+\omega^{ij}e_{2n+1-i,2n+1-j})\] _and \(\tilde{w}_{((n),\emptyset)}=Se^{2\pi\sqrt{-1}h_{w}}S^{-1}\) with_ \[h_{w}=\sum_{j=1}^{n}\frac{n-2j+1}{2n}(e_{jj}-e_{2n+1-j,2n+1-j}).\] _Let \(a_{k\ell}^{\delta\epsilon}=S^{-1}e_{k\ell}^{\delta\epsilon}S\), then_ \[a_{k\ell}^{+-}(z) =\frac{1}{n}\sum_{m\in\mathbb{Z}}\sum_{i,j=1}^{n}(\omega^{-k})^{ i}(\omega^{-\ell})^{-j}t^{m}e_{ij}^{+-}z^{-mn+i-j-1},\] \[a_{k\ell}^{++}(z) :=\frac{1}{n}\sum_{m\in\mathbb{Z}}\sum_{i,j=1}^{n}(\omega^{-k})^{ i}(\omega^{-\ell})^{j}t^{m}e_{ij}^{++}z^{-(m+1)n+i+j-2}, \tag{5.29}\] \[a_{k\ell}^{--}(z) :=\frac{1}{n}\sum_{m\in\mathbb{Z}}\sum_{i,j=1}^{n}(\omega^{-k})^{ -i}(\omega^{-\ell})^{-j}t^{m}e_{ij}^{--}z^{-(m-1)n-i-j}.\] _Using (2.27) and (2.23), we can calculate_ \[r(a_{k\ell}^{+-})(z) =\frac{1}{n}\sum_{\ell,m\in\mathbb{Z}}\sum_{i,j=1}^{n}(\omega^{- k})^{i}(\omega^{-\ell})^{-j}:\phi_{-2\ell n-i+\frac{1}{2}}\phi_{2(\ell+m)+j- \frac{1}{2}}:z^{-mn+i-j-1},\] \[r(a_{k\ell}^{++})(z) =\frac{1}{n}\sum_{\ell,m\in\mathbb{Z}}\sum_{i,j=1}^{n}(\omega^{- k})^{i}(\omega^{-\ell})^{j}:\phi_{-2\ell n-i+\frac{1}{2}}\phi_{2(\ell+m+1)-j+ \frac{1}{2}}:z^{-(m+1)n+i+j-2},\] \[r(a_{k\ell}^{--})(z) =\frac{1}{n}\sum_{m\in\mathbb{Z}}\sum_{i,j=1}^{n}(\omega^{-k})^{ -i}(\omega^{-\ell})^{-j}:\phi_{-2(\ell-1)n+i-\frac{1}{2}}\phi_{2(\ell+m)+j- \frac{1}{2}}:z^{-(m-1)n-i-j}.\] _Thus, relabeling the \(\tilde{\phi}_{j}\) as follows:_ \[\psi^{+}_{\ell n-i+\frac{1}{2}}=\phi_{2\ell n-i+\frac{1}{2}},\quad\psi^{-}_{\ell n +i-\frac{1}{2}}=\phi_{2\ell n+i-\frac{1}{2}}, \tag{5.30}\] _and their generating series as in (5.15), we find that_ \[\begin{split}& r(a^{+-}_{k\ell})(z)=\frac{\omega^{-k}}{n}:\psi^{+}( \omega^{-k}z)\psi^{-}(\omega^{-\ell}z):,\\ & r(a^{++}_{k\ell})(z)=\frac{\omega^{-k-\ell}}{n}:\psi^{+}(\omega ^{-k}z)\psi^{+}(\omega^{-\ell}z):,,\\ & r(a^{--}_{k\ell})(z)=\frac{1}{n}:\psi^{-}(\omega^{-k}z)\psi^{-} (\omega^{-\ell}z):.\end{split} \tag{5.31}\] _Note that the \(\psi^{\pm}_{j}\) defined by (5.30) satisfy_ \[\psi^{\pm}_{i}\psi^{\pm}_{j}+\psi^{\pm}_{j}\psi^{\pm}_{i}=0,\quad\psi^{+}_{i} \psi^{-}_{j}+\psi^{-}_{j}\psi^{+}=\delta_{i,-j}\quad i,j\in\frac{1}{2}+\mathbb{ Z},\] _and_ \[\psi^{\pm}_{i}|0)=0\quad\text{for }i>0.\] _Again defining \(\alpha(z)=\sum_{j\in\mathbb{Z}}\alpha_{j}z^{-j-1}=nr(a^{+-}_{nn})(z)\), this defines the Heisenberg algebra, with commutation relations_ \[[\alpha_{j},\alpha_{k}]=j\delta_{jk},\quad\alpha_{j}|0)=0,\quad\text{for }j \geq 0.\] _Finally,_ \[[\alpha_{j},\psi^{\pm}(z)]=\pm z^{j}\psi^{\pm}(z).\] **Example 5.8** (Two negative cycles): _In this case \(s=0\) and \(r=2\), hence \(\mu=(\mu_{1},\mu_{2})\) with \(\mu_{1}+\mu_{2}=n\). The special case that \(\mu=(n-1,1)\) corresponds to the principal realization. 
Take the following lift of \(w_{(\emptyset,(\mu_{1},\mu_{2}))}\) in \(O_{2n}\):_ \[\tilde{w}_{(\emptyset,(\mu_{1},\mu_{2}))}= e_{1,2n+1-\mu_{1}}+e_{2n,\mu_{1}}+e_{\mu_{1}+1,2n+1-\mu_{1}-\mu_{2}}+e_{2n -\mu_{1},\mu_{1}+\mu_{2}}+\] \[+\sum_{i=1}^{\mu_{1}-1}e_{i+1,i}+e_{2n-i,2n+1-i}+\sum_{i=1}^{\mu_ {2}-1}e_{\mu_{1}+i+1,\mu_{1}+i}+e_{2n-i-\mu_{1},2n+1-i-\mu_{1}}.\] _Let \(\omega_{j}=e^{\frac{\pi i}{\mu_{j}}}\), and \(\gamma_{i}=1\) if \(\mu_{i}\) is even and \(\gamma_{i}=\sqrt{-1}\) if \(\mu_{i}\) is odd. Let_ \[S= \frac{1}{\sqrt{4\mu_{1}}}\sum_{k=1}^{\mu_{1}}[(-1)^{k}(\gamma_{1}e _{k1}+\gamma_{1}^{-1}e_{2n+1-k,1}+\gamma_{1}^{-1}e_{2n+1-k,2n}+\gamma_{1}e_{k,2 n})+\] \[\qquad\quad+(e_{k,\mu_{1}+\mu_{2}}+e_{2n+1-k,\mu_{1}+\mu_{2}}+e_{2 n+1-k,2n+1-\mu_{1}-\mu_{2}}+e_{k,2n+1-\mu_{1}-\mu_{2}})]+\] \[+\frac{1}{\sqrt{4\mu_{2}}}\sum_{k=1}^{\mu_{2}}\sqrt{-1}[(-1)^{k} (\gamma_{2}e_{\mu_{1}+k,1}+\gamma_{2}^{-1}e_{2n+1-\mu_{1}-k,1}+-\gamma_{2}^{-1 }e_{2n+1-\mu_{1}-k,2n}-\gamma_{2}e_{\mu_{1}+k,2n})+\] \[\qquad\quad+(e_{\mu_{1}+k,\mu_{1}+\mu_{2}}+e_{2n+1-\mu_{1}-k,\mu_ {1}+\mu_{2}}-e_{2n+1-\mu_{1}-k,2n+1-\mu_{1}-\mu_{2}}-e_{\mu_{1}+k,2n+1-\mu_{1} -\mu_{2}})]+\] \[+\frac{1}{\sqrt{2\mu_{1}}}\sum_{i=1}^{\mu_{1}}\sum_{j=2}^{\mu_{1 }}(\omega_{1}^{(j-\mu_{1}-1)i}e_{ij}+\omega_{1}^{(j-\mu_{1}-1)(\mu_{1}+i)}e_{2 n+1-i,j}+\omega_{1}^{-(j-\mu_{1}-1)(i-\mu_{1})}e_{i,2n+1-j}+\] \[\qquad\quad+\omega_{1}^{-(j-\mu_{1}-1)i}e_{2n+1-i,2n+1-j})+\] \[+\frac{1}{\sqrt{2\mu_{2}}}\sum_{i=1}^{\mu_{2}}\sum_{j=1}^{\mu_{2 }-1}(\omega_{2}^{(j-\mu_{2})i}e_{\mu_{1}+i,\mu_{1}+j}+\omega_{2}^{(j-\mu_{2})( \mu_{2}+i)}e_{2n+1-\mu_{1}-i,\mu_{1}+j}+\] \[\qquad\quad+\omega_{2}^{-(j-\mu_{2})(i-\mu_{2})}e_{\mu_{1}+i,2n+1 -\mu_{1}-j}+\omega_{2}^{-(j-\mu_{2})i}e_{2n+1-\mu_{1}-i,2n+1-\mu_{1}-j}).\] _It is straightforward to check that \(S\) is indeed in \(O_{2n}\) and hence, the inverse of this matrix is equal to_ \[S^{-1}= \frac{1}{\sqrt{4\mu_{1}}}\sum_{k=1}^{\mu_{1}}[(-1)^{k}(\gamma_{1} e_{1,k}+\gamma_{1}e_{2n,k}+\gamma_{1}^{-1}e_{1,2n+1-k}+\gamma_{1}^{-1}e_{2n,2n+1-k})+\] \[\qquad\quad+e_{\mu_{1}+\mu_{2},k}+e_{2n+1-\mu_{1}-\mu_{2},k}+e_{ \mu_{1}+\mu_{2},2n+1-k}+e_{2n+1-\mu_{1}-\mu_{2},2n+1-k})]+\] \[-\frac{1}{\sqrt{4\mu_{2}}}\sum_{k=1}^{\mu_{2}}\sqrt{-1}[(-1)^{k} (-\gamma_{2}e_{1,\mu_{1}+k}+\gamma_{2}e_{2n,\mu_{1}+k}-\gamma_{2}^{-1}e_{1,2n+ 1-\mu_{1}-k}+\gamma_{2}^{-1}e_{2n,2n+1-\mu_{1}-k})+\] \[\qquad\quad+(-e_{\mu_{1}+\mu_{2},\mu_{1}+k}+e_{2n+1-\mu_{1}-\mu_{ 2},\mu_{1}+k}-e_{\mu_{1}+\mu_{2},2n+1-\mu_{1}-k}+e_{2n+1-\mu_{1}-\mu_{2},2n+1- \mu_{1}-k})]+\] \[+\frac{1}{\sqrt{2\mu_{1}}}\sum_{i=1}^{\mu_{1}}\sum_{j=2}^{\mu_{1 }}(\omega_{1}^{(j-\mu_{1}-1)i}e_{ji}+\omega_{1}^{-(j-\mu_{1}-1)i}e_{2n+1-j,2n+1 -i}+\omega_{1}^{-(j-\mu_{1}-1)(\mu_{1}+i)}e_{2n+1-j,i}+\] \[\qquad\quad+\omega_{1}^{(j-\mu_{1}-1)(i-\mu_{1})}e_{j,2n+1-i}+\] \[+\frac{1}{\sqrt{2\mu_{2}}}\sum_{i=1}^{\mu_{2}}\sum_{j=1}^{\mu_{2 }-1}(\omega_{2}^{(j-\mu_{2})i}e_{\mu_{1}+j,\mu_{1}+i}+\omega_{2}^{-(j-\mu_{2}) i}e_{2n+1-\mu_{1}-j,2n+1-\mu_{1}-i}\] \[\qquad\quad\quad+\omega_{2}^{-(j-\mu_{2})(\mu_{2}+i)}e_{2n+1-\mu_ {1}-j,\mu_{1}+i}+\omega_{2}^{(j-\mu_{2})(i-\mu_{2})}e_{\mu_{1}+j,2n+1-\mu_{1}- i}).\] _Then \(S^{-1}\tilde{w}_{(\emptyset,(\mu_{1},\mu_{2}))}S=\exp\big{(}2\pi\sqrt{-1}h_{w} \big{)}\), where_ \[h_{w}=\sum_{i=1}^{\mu_{1}}\frac{\mu_{1}-i+1}{2\mu_{1}}(e_{ii}-e_{2n+1-i,2n+1-i})+ \sum_{i=1}^{\mu_{2}}\frac{\mu_{2}-i}{2\mu_{2}}(e_{\mu_{1}+i,\mu_{1}+i}-e_{2n+1 -\mu_{1}-i,2n+1-\mu_{1}-i}).\] _\(S\) and \(S^{-1}\) suggest the following relabeling of the \(\tilde{\phi}_{j}\), namely (\(1\leq 
j\leq\mu_{1}-1\), \(1\leq k\leq\mu_{2}-1\)):_ \[\tilde{\phi}^{1}_{2\ell\mu_{1}-j} =\gamma_{1}^{-1}\phi_{2\ell n-j-\frac{1}{2}},\qquad\tilde{\phi}^{1} _{2\ell\mu_{1}+j}=\gamma_{1}(-1)^{j}\phi_{2\ell n+j+\frac{1}{2}}, \tag{5.32}\] \[\tilde{\phi}^{1}_{2\ell\mu_{1}+\mu_{1}} =\frac{\gamma_{1}^{-1}}{\sqrt{2}}(\phi_{2\ell n+\mu_{1}+\mu_{2}- \frac{1}{2}}+\phi_{2(\ell+1)-\mu_{1}-\mu_{2}+\frac{1}{2}}),\] \[\tilde{\phi}^{1}_{2\ell\mu_{1}} =\frac{1}{\sqrt{2}}(\phi_{2\ell n+\frac{1}{2}}+\phi_{2\ell n- \frac{1}{2}}),\] \[\tilde{\phi}^{2}_{2\ell\mu_{2}-k} =\gamma_{2}^{-1}\phi_{2\ell n-\mu_{1}-k+\frac{1}{2}},\qquad \tilde{\phi}^{2}_{2\ell\mu_{2}+k}=\gamma_{2}(-1)^{k}\phi_{2\ell n+\mu_{1}+k- \frac{1}{2}},\] \[\tilde{\phi}^{2}_{2\ell\mu_{2}} =\sqrt{\frac{-1}{2}}(\phi_{2\ell-\frac{1}{2}}-\phi_{2\ell n+ \frac{1}{2}}),\] \[\tilde{\phi}^{2}_{2\ell\mu_{2}+\mu_{2}} =\frac{\gamma_{2}^{-1}\sqrt{-1}}{\sqrt{2}}(\phi_{2(\ell+1)n-\mu_{ 1}-\mu_{2}+\frac{1}{2}}-\phi_{2\ell n+\mu_{1}+\mu_{2}-\frac{1}{2}}).\] _We find that the new \(\tilde{\phi}^{a}_{j}\) defined by (5.32) satisfy_ \[\tilde{\phi}^{a}_{j}\tilde{\phi}^{b}_{k}+\tilde{\phi}^{b}_{k}\tilde{\phi}^{a}_ {j}=(-1)^{j}\delta_{ab}\delta_{j,-k}\quad a,b=1,2,\ j,k\in\mathbb{Z},\] _and_ \[\tilde{\phi}^{a}_{j}|0\rangle=0,\qquad j>0.\] _Defining the fields_ \[\tilde{\phi}^{b}(z)=\sum_{j\in\mathbb{Z}}\tilde{\phi}^{b}_{j}z^{-j},\] _then for \(a^{\delta b,\epsilon c}_{k,\ell}:=S^{-1}e^{\delta b,\epsilon c}_{k,\ell}S\), with \(b,c=1,2\), \(\delta,\epsilon=+,-\), \(1\leq k\leq\mu_{b}\) and \(1\leq\ell\leq\mu_{c}\), we have_ \[r(a^{\delta b,\epsilon c}_{k,\ell})(z) =\sum_{j\in\mathbb{Z}}r(a^{\delta b,\epsilon c}_{k,\ell}(j)_{j\, \mathrm{mod}\,N})z^{-j-1}\] \[=\frac{\gamma_{b}^{\delta 1}\gamma_{c}^{\epsilon 1}(-1)^{k+\ell}z^{-1}}{2 \sqrt{\mu_{b}\mu_{c}}}:\tilde{\phi}^{b}(\delta\omega_{b}^{-k}z^{\frac{N}{2\mu_ {b}}})\tilde{\phi}^{b}(\epsilon\omega_{c}^{-\ell}z^{\frac{N}{2\mu_{c}}}): \qquad b,c=1,2,\ \delta,\epsilon=+,-.\] _where \(N=2\,\mathrm{lcm}(\mu_{1},\mu_{2})\), is the order of the automorphism \(\exp(2\pi\sqrt{-1}\mathrm{ad}\,h_{w})\) (see [25] for more details). Let_ \[\alpha^{b}(z)=\sum_{k\in 1+2\mathbb{Z}}\alpha^{b}_{k}z^{-k}=-\lambda_{b}z^{ \frac{2\lambda_{b}}{N}}r(a^{+b,-b}_{\lambda_{b},\lambda_{b}})(z^{\frac{2 \lambda_{b}}{N}})=:\frac{1}{2}:\tilde{\phi}^{b}(z)\tilde{\phi}^{b}(-z):.\] _be the generating field for the Heisenberg algebra. The modes of the field satisfy_ \[[\alpha_{j},\alpha_{k}]=\frac{j}{2}\delta_{jk},\quad\alpha_{j}|0\rangle=0, \quad\text{for }j\geq 0.\] _Finally,_ \[[\alpha_{j},\tilde{\phi}(z)]=z^{j}\tilde{\phi}(z).\] The general description follows from the above two examples. It is straightforward to deduce the following proposition. **Proposition 5.9**: _Let \(w_{(\lambda,\mu)}\) be given by (5.20), where now \(r\) is even. Set \(\lambda_{s+a}=:\mu_{a}\) and \(\omega_{a}=e^{\frac{2\pi\sqrt{-1}}{\lambda_{a}}}\) if \(1\leq a\leq s\) and \(\omega_{a}=e^{\frac{2\pi\sqrt{-1}}{2\lambda_{a}}}\) is \(s+1\leq a\leq r+s\). 
Then (a) The element_ \[\tilde{w}_{(\lambda,\mu)} =\sum_{a=1}^{s}\left(\omega_{a}^{\frac{\lambda_{a}+1}{2}}e_{M_{a} +1,M_{a+1}}+\omega_{a}^{-\frac{\lambda_{a}+1}{2}}e_{2n-M_{a},2n-M_{a+1}+1}+ \sum_{i=1}^{\lambda_{a}-1}(\omega_{a}^{\frac{\lambda_{a}+1}{2}}e_{M_{a}+i+1,M_ {a}+i}+\right.\] \[\qquad+\omega^{-\frac{\lambda_{a}+1}{2}}e_{2n-M_{a}-i,2n+1-M_{a} -i})\right)+\sum_{a=1}^{\frac{r}{2}}\left(e_{M_{s+2a-1}+1,2n+1-M_{s+2a}}+\] \[\qquad+e_{2n-M_{s+2a},M_{s+2a}}+e_{M_{s+2a}+1,2n+1-M_{s+2a+1}}+e_ {2n-M_{s+2a},M_{s+2a+1}}\] \[\qquad+\sum_{i=1}^{\lambda_{s+2a-1}-1}e_{M_{s+2a-1}+i+1,M_{s+2a-1 }+i}+e_{2n-M_{s+2a-1}-i,2n+1-M_{s+2a-1}-i}+\] \[\qquad+\sum_{i=1}^{\lambda_{s+2a}-1}e_{M_{s}+2a+i+1,M_{s+2a}+i}+e_ {2n-M_{s+2a}-i,2n+1-M_{s+2a}-i}.\big{)}\in O_{2n}\] _is a lift of \(w_{(\lambda,\mu)}\) and \(\tilde{w}_{(\lambda,\mu)}=S^{-1}\exp(2\pi\sqrt{-1}h_{w})S\), where_ \[h_{w}=\sum_{a=1}^{s}\sum_{i=1}^{\lambda_{a}}\frac{\lambda_{a}-2j+1}{2\lambda_{ a}}\epsilon_{M_{a}+j}+\sum_{a=1}^{\frac{r}{2}}\sum_{i=1}^{\mu_{2a-1}}\frac{ \mu_{2a-1}-i+1}{2\mu_{2a-1}}\epsilon_{M_{s+2a-1}+i}+\sum_{i=1}^{\mu_{2a}}\frac {\mu_{2a}-i}{2\mu_{2a}}\epsilon_{M_{s+2a}+i}\in\mathfrak{h}\] satisfies (5.2), and_ \[S= \sum_{a=1}^{s}\sum_{i=1}^{\lambda_{a}}\sum_{i,j=1}^{\lambda_{a}}( \omega_{a}^{ij}e_{M_{a}+i,M_{a}+j}+\omega_{a}^{-ij}e_{2n+1-M_{a}-i,2n+1-M_{a}-j})\] \[+\sum_{a=1}^{\frac{r}{2}}\frac{1}{\sqrt{4\mu_{2a-1}}}\sum_{k=1}^{ \mu_{2a-1}}[(-1)^{k}(\gamma_{s+2a-1}e_{M_{s+2a-1}+k,M_{s+2a-1}+1}+\gamma_{s+2a- 1}^{-1}e_{2n+1-M_{s+2a-1}-k,M_{s+2a-1}+1}+\] \[+\gamma_{s+2a-1}^{-1}e_{2n+1-M_{s+2a-1}-k,2n-M_{s+2a-1}}+\gamma_{s+2a-1}e_{M_{s +2a-1}+k,2n-M_{s+2a-1}})+\] \[+(e_{M_{s+2a-1}+k,M_{s+2a+1}}+e_{2n+1-M_{s-2a-1}-k,M_{s+2a+1}}+\] \[+e_{2n+1-M_{s+2a-1}k,2n+1-M_{s+2a+1}}+e_{M_{s+2a-1}+k,2n+1-M_{s+2a+1}})]+\] \[+\frac{1}{\sqrt{4\mu_{2a}}}\sum_{k=1}^{\mu_{2a}}\sqrt{-1}[(-1)^{k}(-\gamma_{s+2 a}e_{M_{s+2a}+k,M_{s+2a-1}+1}+\gamma_{s+2a}^{-1}e_{2n+1-M_{s+2a}-k,M_{s+2a-1}+1}\] \[-\gamma_{s+2a}^{-1}e_{2n+1-M_{s+2a}-k,2n-M_{s+2a-1}}+\gamma_{s+2a}e_{M_{s+2a}+ k,2n-M_{s-2a-1}})+\] \[+(e_{M_{s+2a}+k,M_{s+2a+1}}+e_{2n+1-M_{s+2a}-k,M_{s+2a+1}}\] \[-e_{2n+1-M_{s+2a}-k,2n+1-M_{s+2a+1}}-e_{M_{s+2a}+k,2n+1-M_{s+2a+1}})]+\] \[+\frac{1}{\sqrt{2\mu_{2a-1}}}\sum_{i=1}^{\mu_{2a-1}}\sum_{j=2}^{\mu_{2a-1}}( \omega_{s+2a-1}^{(j-\mu_{2a-1}-1)i}e_{M_{s+2a-1}+i,M_{s+2a-1}+j}+\] \[+\omega_{s+2a-1}^{(j-\mu_{2a-1}-1)(\mu_{2a-1}+i)}e_{2n+1-M_{s+2a-1}-i,M_{s+2a- 1}+j}+\] \[+\omega_{s+2a-1}^{(j-\mu_{2a-1}-1)(i-\mu_{2a-1})}e_{M_{s+2a-1}+i,2n+1-M_{s+2a- 1}-j}+\] \[+\omega_{s+2a-1}^{(j-\mu_{2a-1}-1)i}e_{2n+1-M_{s+2a-1}+i,2n+1-M_{s+2a-1}-j})+\] \[+\frac{1}{\sqrt{2\mu_{2a}}}\sum_{i=1}^{\mu_{2a}}\sum_{j=1}^{\mu_{2a}-1}(\omega_ {s+2a}^{(j-\mu_{2a})i}e_{M_{s+2a}+i,M_{s+2a}+j}+\omega_{s+2a}^{(j-\mu_{2a})( \mu_{2a}+i)}e_{2n+1-M_{s+2a}-i,M_{s+2a}+j}+\] \[+\omega_{s+2a}^{-(j-\mu_{2a})(i-\mu_{2a})}e_{M_{s+2a}+i,2n+1-M_{s+2a}-j}+ \omega_{s+2a}^{-(j-\mu_{2a})i}e_{2n+1-M_{s+2a}-i,2n+1-M_{s+2a}-j}),\] _where \(\gamma_{a}=1\) if \(M_{a}\) is even and \(\gamma_{a}=i\) if \(M_{a}\) is odd. 
(b) The order \(N\) of \(\exp(2\pi\sqrt{-1}\mathrm{ad}\,h_{w})\) is equal to the least common multiple \(N^{\prime}\) of \(\lambda_{1},\lambda_{2},\ldots,\lambda_{s},2\lambda_{s+1},2\lambda_{s+2}, \ldots\allowbreak\lambda_{r+s}\) if \(N^{\prime}(\frac{1}{\lambda_{a}}+\frac{1}{\lambda_{b}})\in 2\mathbb{Z}\) for all \(1\leq a,b\leq s\), \(N^{\prime}(\frac{1}{2\lambda_{a}}+\frac{1}{2\lambda_{b}})\in 2\mathbb{Z}\) for all \(s+1\leq a,b\leq r+s\), \(N^{\prime}(\frac{1}{\lambda_{a}}+\frac{1}{2\lambda_{b}})\in 2\mathbb{Z}\) for all \(1\leq a\leq s\), \(s+1\leq b\leq r+s\) and \(2N^{\prime}\) otherwise. (c) The elements \(a_{k,\ell}^{\delta a,\epsilon b}:=S^{-1}e_{k\ell}^{\delta a,\epsilon b}S=\) form a new basis of \(so_{2n}\) and the representation \(r\) in \(F_{d}\) of their generating fields \(a_{k\ell}^{\delta a,\epsilon b}(z)\) are equal to_ \[r(a_{k,\ell}^{\delta a,\epsilon b})(z)=C_{k}^{\delta a}C_{\ell}^{\epsilon b}: \Psi_{k}^{\delta a}(z)\Psi_{\ell}^{\epsilon b}(z):,\] _where_ \[C_{k}^{+a}= \begin{cases}\frac{\omega_{a}^{-k}}{\sqrt{\lambda_{a}}}&1\leq a\leq s,\\ \frac{\gamma_{a}(-1)^{k}}{\sqrt{2\lambda_{a}}}&s<a\leq r+s,\end{cases}\quad C_{k} ^{-a}=\begin{cases}\frac{1}{\sqrt{\lambda_{a}}}&1\leq a\leq s,\\ \frac{\gamma_{a}^{-1}(-1)^{k}}{\sqrt{2\lambda_{a}}}&s<a\leq r+s,\end{cases}\] _and_ \[\Psi_{k}^{\pm a}(z)=\begin{cases}\psi^{\pm a}(\omega_{a}^{-k}z^{\frac{N}{ \lambda_{a}}})z^{\frac{N}{2\lambda_{a}}-\frac{1}{2}}&1\leq a\leq s,\\ \tilde{\phi}^{a}(\pm\omega_{a}^{-k}z^{\frac{N}{\lambda_{a}}})z^{-\frac{1}{2}}&s<a \leq s+r,\end{cases}\] _where the_ \[\psi^{\pm a}(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}\psi^{\pm a}_{i}z^{-i-\frac{1}{2 }},\quad\tilde{\phi}^{s+b}(z)=\sum_{i\in\mathbb{Z}}\tilde{\phi}^{s+b}z^{-i},\quad 1 \leq a\leq s,\ 1\leq b\leq r,\] _are the generating fields for the relabeld generators of the Clifford algebra \(C\ell_{d}\) (\(1\leq a\leq s\), \(1\leq b\leq\frac{r}{2}\), \(1\leq i\leq\lambda_{a}\), \(1\leq j<\lambda_{s+2b-1}\) or \(\lambda_{s+2b}\)):_ \[\psi^{+a}_{\ell\lambda_{a}-i+\frac{1}{2}} =\phi_{2\ell n-M_{a}-i+\frac{1}{2}}, \psi^{-a}_{\ell\lambda_{a}+i-\frac{1}{2}}=\phi_{2\ell n+M_{a}+i-\frac{1}{2}},\] \[\tilde{\phi}^{s+2b-1}_{2\ell\mu_{2b-1}-j} =\gamma^{-1}_{s+2b-1}\phi_{2\ell n-M_{s+2b-1}-j-\frac{1}{2}}, \tilde{\phi}^{s+2b-1}_{2\ell\mu_{2b-1}+j}=\gamma_{s+2b-1}(-1)^{j}\phi_{2\ell n +M_{s+2b-1}+j+\frac{1}{2}},\] \[\tilde{\phi}^{s+2b-1}_{2\ell\mu_{2b-1}+\mu_{2b-1}} =\frac{\gamma^{-1}_{s+2b-1}}{\sqrt{2}}(\phi_{2\ell n+M_{s+2b+1}- \frac{1}{2}}+\phi_{2(\ell+1)n-M_{s+2b+1}+\frac{1}{2}}),\] \[\tilde{\phi}^{s+2b-1}_{2\ell\mu_{2b}} =\frac{1}{\sqrt{2}}(\phi_{2\ell n+M_{s+2b-1}+\frac{1}{2}}+\phi_{ 2\ell n-M_{s+2b-1}-\frac{1}{2}}),\] \[\tilde{\phi}^{s+2b}_{2\ell\mu_{2b}-j} =\gamma^{-1}_{s+2b}\phi_{2\ell n-M_{s+2b}-j+\frac{1}{2}},\quad \tilde{\phi}^{s+2b}_{2\ell\mu_{2b}+j}=\gamma_{s+2b}(-1)^{j}\phi_{2\ell n+M_{ s+2b}+j-\frac{1}{2}},\] \[\tilde{\phi}^{s+2b}_{2\ell\mu_{2b}} =\sqrt{\frac{-1}{2}}(\phi_{2\ell n-M_{s+2b-1}-\frac{1}{2}}-\phi_ {2\ell n+M_{s+2b-1}+\frac{1}{2}}),\] \[\tilde{\phi}^{s+2b}_{2\ell\mu_{s+2b}+\mu_{s+2b}} =\frac{\gamma^{-1}_{s+2b}\sqrt{-1}}{\sqrt{2}}(\phi_{2(\ell+1)n-M_ {s+2b+1}+\frac{1}{2}}-\phi_{2\ell n+M_{s+2b+1}-\frac{1}{2}}).\] _They satisfy (\(1\leq a,c\leq s,\ 1\leq b,d\leq r,\quad i,j\in\frac{1}{2}+\mathbb{Z}\))_ \[\psi^{\pm a}_{i}\psi^{\pm c}_{j}+\psi^{\pm c}_{j}\psi^{\pm a}_{i} =0,\] \[\psi^{+a}_{i}\psi^{-c}_{j}+\psi^{-c}_{j}\psi^{+a} =\delta_{ac}\delta_{i,-j}\] \[\tilde{\phi}^{s+b}_{i}\tilde{\phi}^{s+d}_{j}+\tilde{\phi}^{s+d}_{ j}\tilde{\phi}^{s+b}_{i} 
=(-1)^{i}\delta_{bd}\delta_{i,-j}, \tag{5.34}\] \[\psi^{\pm a}_{i}\tilde{\phi}^{s+b}_{j}+\tilde{\phi}^{s+b}_{j}\psi ^{\pm a}_{i} =0,\] \[\psi^{\pm a}_{i}|0\rangle=\tilde{\phi}^{s+b}_{i}|0\rangle =0\quad i>0.\] _(d) The representation of \(\hat{so}_{2n}\) is given in terms of vertex operators by formulas (3.66) and (3.73) by substitution of roots of unity \(\omega_{a}\) and powers of \(z\) as given in the above formulas. In particular, we have the Heisenberg algebra (3.30), which are equal to \(\lambda_{a}z^{\frac{\lambda_{a}}{N}-1}r(a^{+a,-a}_{\lambda_{a}\lambda_{a}})(z^{ \frac{\lambda_{a}}{N}})\), respectively \(\lambda_{s+b}z^{\frac{2\lambda_{s+b}}{N}}r(a^{+(s+b),-(s+b)}_{\lambda_{s+b} \lambda_{s+b}})(z^{\frac{2\lambda_{s+b}}{N}})\) and their modes satisfy (3.31)._ \(F\) splits in to two irreducible \(\hat{so}_{2n}\) modules \(F^{\overline{0}}\) and \(F^{\overline{1}}\), which differ from the formulas (3.40), since the gradation is different in this case. Let \(N\) be the least common multiple of \(\lambda_{1},\dots,\lambda_{s}\),\(2\mu_{1},\dots,2\mu_{r}\), then the \(q\)-dimension formulas in this case combines \(s\) copies of the Jacobi triple product identity, which is the equality between (1.20), and (1.21), where we replace in each copy \(q\) by \(q^{\frac{N}{\lambda_{a}}}\), and \(r\) copies of (1.41), where we replace in each copy \(q\) by \(q^{\frac{N}{\mu_{b}}}\), and this multiplied by some factor (see [23]): \[\dim_{q}F^{\overline{x}}=\begin{cases}2^{[\frac{r-1}{2}]}\sum\nolimits_{j_{1}, \dots,j_{s}\in\mathbb{Z}\atop j_{1}+\dots+j_{s}\in\epsilon+2}q^{\frac{N}{2}( \frac{j_{1}^{2}}{\lambda_{1}^{2}}+\frac{j_{2}^{2}}{\lambda_{2}^{2}}+\dots+\frac {j_{s}^{2}}{\lambda_{s}^{2}})}\prod_{a=1}^{s}\prod_{i=1}^{\infty}\frac{1}{1-q^ {\frac{2N}{\lambda_{a}}}}\prod_{b=1}^{r}\prod_{j=1}^{\infty}\frac{1}{1-q^{ \frac{(2j-1)N}{\mu_{b}}}}\\ \hskip 113.811024pt\mbox{for $r\neq 0$},\\ \sum\nolimits_{j_{1},\dots,j_{s}\in\mathbb{Z}\atop j_{1}+\dots+j_{s}\in \epsilon+2}q^{\frac{N}{2}(\frac{j_{1}^{2}}{\lambda_{1}^{2}}+\frac{j_{2}^{2}}{ \lambda_{2}^{2}}+\dots+\frac{j_{s}^{2}}{\lambda_{s}^{2}})}\prod_{a=1}^{s}\prod _{i=1}^{\infty}\frac{1}{1-q^{\frac{2N}{\lambda_{a}}}},\qquad\mbox{when $r=0$}.\end{cases} \tag{5.35}\] ### \(so_{2n+1}\) Recall that the conjugacy classes of the Weyl group for \(so_{2n+1}\) are in one-to-one correspondence with the pair of partitions \[\lambda=(\lambda_{1}\geq\lambda_{2}\geq\dots\geq\lambda_{s}>0),\mu=(\mu_{1} \geq\mu_{2}\geq\dots\geq\mu_{r}>0), \tag{5.36}\] of \(n\), i. e. (5.27) holds. We can decompose each element \(w\in W\) in \(s\) disjoint positive cycles of length \(\lambda_{i}\), and \(r\) disjoint negative cycles of length \(\mu_{j}\) such that (5.19) holds. It is straightforward to show that two elements with the same cycle type are conjugate under \(O_{2n}\). We relabel the basis as follows. 
Let \(m_{a}=\lambda_{a}\), \(M_{a}=|\lambda|_{a-1}\) for \(1\leq a\leq s\), \(m_{a}=\mu_{a-s}\), \(M_{a}=|\lambda|+|\mu|_{a-s-1}\) for \(s<a\leq r+s\), and \(M_{r+s+1}=|\lambda|+|\mu|\),then set (\(1\leq a,b\leq s+r\), \(1\leq i\leq m_{a}\), \(1\leq j\leq m_{b}\)) \[e_{ij}^{+a,-b}= e_{M_{a}+i,M_{b}+j}-e_{2n+2-M_{b}-j,2n+2-M_{a}-i}, \tag{5.37}\] \[e_{ij}^{+a,+b}= -e_{M_{a}+i,2n+2-M_{b}-j}-e_{M_{b}+j,2n+2-M_{a}-i},\] \[e_{ij}^{-a,-b}= -e_{ji}^{-b,-a}=e_{2n+2-M_{a}-i,M_{b}+j}-e_{2n+2-M_{b}-j,M_{a}+i},\] \[e_{i}^{+a}= e_{M_{a}+i,n+1}-e_{n+1,2n+2-Ma-i},\] \[e_{i}^{-a}= e_{2n+2-M_{a}-i,n+1}-e_{n+1,M_{a}+i}.\] We will embed this affine algebra into \(d_{\infty}\) and into \(b_{\infty}\). In the first case this will give the representation of the affine Lie algebra of \(so_{2n+1}\) with highest weights \(\Lambda_{0}\) and \(\Lambda_{1}\), here we assume that \(h_{w}\) satisfies (5.2). For the embedding in \(b_{\infty}\), we obtain the highest weight representation with highest weight \(\Lambda_{n}\). In this case we assume that \(h_{w}\) satisfies (5.3). To distinguish these two cases we set \(x\) either to \(d\) or to \(b\). Again we will first investigate two examples. **Example 5.10**: **(One positive cycle)** _In this case \(s=1\) and \(r=0\) and we have the same element as in (5.22). However now we have to lift this to \(O_{2n+1}\). Let \(\omega=e^{\frac{2\pi\sqrt{-1}}{n}}\) choose_ \[\tilde{w}_{((n),\emptyset)}=e_{n+1,n+1}+(-1)^{\delta_{xb}}(\omega^{\frac{n+1} {2}}e_{1,n}+\omega^{-\frac{n+1}{2}}e_{2n+1,n+2}+\sum_{i=1}^{n-1}(\omega^{\frac{n +1}{2}}e_{i+1,i}+\omega^{-\frac{n+1}{2}}e_{2n+1-i,2n+2-i})).\] _Clearly, we are in the same situation as in the case of one positive cycle of \(so_{2n}\). And the same construction holds. In this case,_ \[S=e_{n+1,n+1}+\sum_{i,j=1}^{n}(\omega^{ij}e_{ij}+\omega^{-ij}e_{2n+2-i,2n+2-j}),\] _and \(\tilde{w}_{((n),\emptyset)}=Se^{2\pi\sqrt{-1}h_{w}}S^{-1}\) with_ \[h_{w}=\sum_{j=1}^{n}\frac{(1+\delta_{xb})n-2j+1+\delta_{xb}n}{2n}(e_{jj}-e_{2n +2-j,2n+2-j}).\] **Embedding in \(d_{\infty}\):** _This situation is similar to the case of one positive cycle in the \(so_{2n}\)-case. We now relabel the elements \(\tilde{\phi}_{j}\) as follows_ \[\psi^{+}_{\ell n-i+\frac{1}{2}}=\phi_{\ell(2n+1)-i+\frac{1}{2}},\quad\psi^{-} _{\ell n+i-\frac{1}{2}}=\phi_{\ell(2n+1)+i-\frac{1}{2}},\quad\sigma_{\ell+ \frac{1}{2}}=\phi_{\ell(2n+1)+n+\frac{1}{2}},\] _and define their fields as in (5.15) and as_ \[\sigma(z)=\sum_{j\in\frac{1}{2}+\mathbb{Z}}\sigma_{j}z^{-j-\frac{1}{2}}.\] _Here (\(i,j\in\frac{1}{2}+\mathbb{Z}\))_ \[\sigma_{i}\sigma_{j}+\sigma_{j}\sigma_{i}=\delta_{i,-j}. \tag{5.38}\] _Let \(a^{\delta\epsilon}_{k\ell}=S^{-1}e^{\delta\epsilon}_{k\ell}S\), and \(a^{\delta}_{k}=S^{-1}e^{\delta}_{k}S\). 
Then we have (5.31), and_ \[a^{\pm}_{k}(z)=\frac{1}{\sqrt{n}}\sum_{m\in\mathbb{Z}}\sum_{i=1}^{n}\omega^{ \mp ki}r(t^{m}e^{\pm}_{i})z^{-mn\mp\frac{1}{2}(n-2i+1)-1} \tag{5.39}\] _and_ \[r(a^{+}_{k})(z)=\frac{\omega^{k}z^{-\frac{1}{2}}}{\sqrt{n}}:\psi^{+}(\omega^{ -k}z)\sigma(z^{n}):,\quad r(a^{-}_{k})(z)=\frac{z^{-\frac{1}{2}}}{\sqrt{n}}: \psi^{-}(\omega^{-k}z)\sigma(z^{n}):.\] _Note that we get a half integer power of \(z\), this is because the order of the automorphism is \(2n\) instead of \(n\)._ **Embedding in \(b_{\infty}\):** _Again we have (5.29) and (5.39), but now the embedding (2.25), which suggest the relabeling_ \[\psi^{\pm}_{\ell n\pm(n+1-i-\frac{1}{2})}=\tilde{\phi}_{\ell(2n+1)\pm(n+1-i)},\quad\sigma_{\ell}=\tilde{\phi}_{\ell(2n+1)}.\] _Define their generating fields as in (5.15) and_ \[\sigma(z)=\sum_{i\in\mathbb{Z}}\sigma_{i}z^{-i}.\] _Since \(h_{w}\) is slightly different from the usual case, we have_ \[\begin{split} a^{+-}_{k\ell}(z)&=\frac{1}{n}\sum_{m\in \mathbb{Z}}\sum_{i,j=1}^{n}(\omega^{-k})^{i}(\omega^{-\ell})^{-j}t^{m}e^{+-}_{ ij}z^{-mn+i-j-1},\\ a^{++}_{k\ell}(z)&:=\frac{1}{n}\sum_{m\in\mathbb{ Z}}\sum_{i,j=1}^{n}(\omega^{-k})^{i}(\omega^{-\ell})^{j}t^{m}e^{++}_{ij}z^{-(m+2)n +i+j-2},\\ a^{--}_{k\ell}(z)&:=\frac{1}{n}\sum_{m\in\mathbb{ Z}}\sum_{i,j=1}^{n}(\omega^{-k})^{-i}(\omega^{-\ell})^{-j}t^{m}e^{--}_{ij}z^{-(m-2)n -i-j},\\ a^{\pm}_{k}(z)&=\frac{1}{\sqrt{n}}\sum_{m\in \mathbb{Z}}\sum_{i=1}^{n}\omega^{\mp ki}r(t^{m}e^{\pm}_{i})z^{-mn\mp\frac{1}{2 }(2n-2i+1)-1}\end{split} \tag{5.40}\] _It is then straightforward to check that_ \[\begin{split} r(a^{+-}_{k\ell})(z)&=\frac{(-\omega )^{k}(-1)^{\ell}}{n}:\psi^{+}(\omega^{-k}z)\psi^{-}(\omega^{-\ell}z):,\\ r(a^{++}_{k\ell})(z)&=\frac{(-\omega)^{k+\ell}}{n} :\psi^{+}(\omega^{-k}z)\psi^{+}(\omega^{-\ell}z):,\\ r(a^{--}_{k\ell})(z)&=\frac{(-1)^{k+\ell}}{n}: \psi^{-}(\omega^{-k}z)\psi^{-}(\omega^{-\ell}z):,\\ r(a^{+}_{k})(z)&=\frac{(-\omega)^{k}z^{-\frac{1}{2 }}}{\sqrt{n}}:\psi^{+}(\omega^{-k}z)\sigma(z^{n}):,\\ r(a^{-}_{k})(z)&=\frac{(-1)^{k}z^{-\frac{1}{2}}}{ \sqrt{n}}:\psi^{-}(\omega^{-k}z)\sigma(z^{n}):.\end{split} \tag{5.41}\] **Example 5.11**: **(One negative cycle, the principal Heisenberg algebra)** _In this case \(r=1\) and \(s=0\), hence_ \[w_{(\emptyset,(n))}:\epsilon_{n}\mapsto-\epsilon_{1},\quad\epsilon_{k}\mapsto \epsilon_{k+1},\quad k=1,2,\ldots,n.\] _This case is analogous to the principal case of \(sp_{2n}\). We choose_ \[\tilde{w}=\tilde{w}_{(\emptyset,(n))}=-e_{n+1,n+1}+e_{2n+1,n}+e_{1,n+2}+\sum_ {i=1}^{n-1}(e_{i+1,i}+e_{2n+1-i,2n+2-i})\in SO_{2n+1}.\] _Let \(\omega=e^{\frac{2\pi\sqrt{-1}}{2n}}\) and \(\gamma=1\) if \(n\) is even and \(\gamma=\sqrt{-1}\) if \(n\) is odd. 
Let_ \[\begin{split} S&=\sum_{i=1}^{n}\big{(}\frac{(-1)^{i }}{\sqrt{4n}}(e_{i1}+(-1)^{n}e_{i,2n+1}+(-1)^{n}e_{2n+2-i,1}+e_{2n+2-i,2n+1})+ \\ &\qquad+\frac{1}{\sqrt{2n}}(e_{i,n+1}+e_{2n+1-i,n+1})+\frac{1}{ \sqrt{2n}}\sum_{j=2}^{n}(\omega^{(j-n-1)i}e_{ij}+\omega^{(n-j+1)(n+i)}e_{i,2n+ 2-j}+\\ &\qquad+\omega^{(j-n-1)(n+i)}e_{2n+2-i,j}+\omega^{(n-j+1)i}e_{2n+ 2-i,2n+2-j})\big{)}+\sqrt{\frac{-1}{2}}(\gamma^{-1}e_{n+1,1}-\gamma e_{n+1,2n +1}),\end{split}\] _such that \(\tilde{w}_{(\emptyset,(n))}=S\exp(2\pi\sqrt{-1}h_{w})S^{-1}\), for_ \[h_{w}=\sum_{i=1}^{n}\frac{n-i+1}{2n}(e_{ii}-e_{2n+2-i,2n+2-i}).\] _It is straightforward to check that \((Sv,Sw)_{so_{2n+1}}=(v,w)_{so_{2n+1}}\), thus \(S\in O_{2n}\) and_ \[S^{-1} =\sum_{i=1}^{n}\big{(}\frac{(-1)^{i}}{\sqrt{4n}}(e_{1i}+(-1)^{n}e _{2n+1,i}+(-1)^{n}e_{1,2n+2-i}+e_{2n+1,2n+2-i})+\] \[\qquad+\frac{1}{\sqrt{2n}}(e_{n+1,i}+e_{n+1,2n+1-i})+\frac{1}{ \sqrt{2n}}\sum_{j=2}^{n}(\omega^{(n-j+1)i}e_{ji}+\omega^{(j-n-1)(i+n)}e_{2n+2- j,i}+\] \[\qquad+\omega^{(n-j+1)(n+i)}e_{j,2n+2-i}+\omega^{(j-n-1)i}e_{2n+ 2-j,2n+2-i}\big{)}+\sqrt{\frac{-1}{2}}(\gamma^{-1}e_{2n+1,n+1}-\gamma e_{1,n+ 1}).\] \(S\) _and \(S^{-1}\) suggest the following relabelling of the \(\tilde{\phi}_{i}\)._ **Embedding in \(d_{\infty}\):** _(\(\ell\in\mathbb{Z}\))_ \[\sigma_{\ell} =\sqrt{\frac{-1}{2}}(\gamma^{-1}\tilde{\phi}_{(2n+1)\ell-\frac{1 }{2}}-\gamma\tilde{\phi}_{(2n+1)\ell+\frac{1}{2}})\] \[\tilde{\phi}^{1}_{2n\ell} =\frac{1}{\sqrt{2}}(\gamma^{-1}\phi_{(2n+1)\ell-\frac{1}{2}}+ \gamma\phi_{(2n+1)\ell+\frac{1}{2}}),\quad\tilde{\phi}^{1}_{2n\ell+n}=\gamma^ {-1}\phi_{(2n+1)\ell+n+\frac{1}{2}}\] \[\tilde{\phi}^{1}_{2n\ell-k} =\gamma^{-1}\phi_{(2n+1)\ell-k-\frac{1}{2}},\quad\phi^{1}_{2n \ell+k}=\gamma(-1)^{k}\phi_{(2n+1)\ell+k+\frac{1}{2}}\quad k=1,\dots,n-1. 
\tag{5.42}\] _Define the corresponding generating fields as_ \[\sigma(z)=\sum_{i\in\mathbb{Z}}\sigma_{i}z^{-i},\quad\tilde{\phi}^{1}(z)=\sum _{i\in\mathbb{Z}}\tilde{\phi}^{1}_{i}z^{-i}.\] _For \(\delta,\epsilon=+,-\) and \(1\leq k,\ell\leq n\),_ \[r(a^{\delta,\epsilon}_{k,\ell})(z) =\frac{(-1)^{k+\ell}\gamma^{\delta 1+\epsilon 1}z^{-1}}{2n}:\tilde{\phi}(\delta\omega^{-k}z) \tilde{\phi}(-\epsilon\omega^{-\ell}z):,\] \[r(a^{\delta}_{k})(z) =\frac{(-1)^{k}\gamma^{\delta 1}z^{-1}}{\sqrt{2n}}:\tilde{\phi}( \delta\omega^{-k}z)\sigma(z^{2n}):.\] **Embedding in \(b_{\infty}\):** _(\(\ell\in\mathbb{Z}\))_ \[\sigma_{\ell+\frac{1}{2}} =\sqrt{\frac{-1}{2}}(\gamma^{-1}\tilde{\phi}_{(2n+1)\ell+n+1}- \gamma\tilde{\phi}_{(2n+1)\ell+n}), \tag{5.43}\] \[\tilde{\phi}^{1}_{2n\ell+n} =\frac{1}{\sqrt{2}}(\tilde{\phi}_{(2n+1)\ell+n}+(-1)^{n}\tilde{ \phi}_{(2n+1)\ell+n+1}),\quad\tilde{\phi}^{1}_{2n\ell}=\tilde{\phi}_{(2n+1) \ell}\] \[\tilde{\phi}^{1}_{2n\ell+k} =\tilde{\phi}_{(2n+1)\ell+k},\qquad\tilde{\phi}^{1}_{2n\ell-k}=(-1 )^{k}\tilde{\phi}_{(2n+1)\ell-k}\quad k=1,\dots,n-1,\] _and define the corresponding generating fields as_ \[\sigma(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}\sigma_{i}z^{-i-\frac{1}{2}},\quad \tilde{\phi}^{1}(z)=\sum_{i\in\mathbb{Z}}\tilde{\phi}^{1}_{i}z^{-i},\] _then for \(\delta,\epsilon=+,-\) and \(1\leq k,\ell\leq n\)_ \[r(a^{\delta,\epsilon}_{k,\ell})(z) =\frac{z^{-1}}{2n}:\tilde{\phi}(\delta\omega^{-k}z)\tilde{\phi}( \epsilon\omega^{-\ell}z):,\] \[r(a^{\delta}_{k})(z) =\frac{z^{-1}}{\sqrt{2n}}:\tilde{\phi}(\delta\omega^{-k}z)\sigma( z^{2n}):.\] _In both cases the \(\tilde{\phi}^{1}_{j}\) and \(\sigma_{j}\), defined by (5.42) and (5.43) satisfy_ \[\tilde{\phi}^{1}_{i}\tilde{\phi}^{1}_{j}+\tilde{\phi}^{1}_{j}\tilde{\phi}^{1}_ {i}=(-1)^{i}\delta_{i,-j},\quad\sigma_{i}\sigma_{j}+\sigma_{j}\sigma_{i}= \delta_{i,-j},\quad\tilde{\phi}^{1}_{i}\sigma_{j}+\sigma_{j}\tilde{\phi}^{1}_ {i}=0,\] _and_ \[\sigma_{i}|0\rangle=\tilde{\phi}^{1}_{i}|0\rangle=0,\qquad i>0.\] The general case follows from these two examples and Example 5.8. There are in general both neutral and charged fermionic fields. However, we also have an additional field \(\sigma(z)\). Hence we have to extend the Fock space \(B_{(\lambda,\mu)}\) with extra variables \(\xi\), viz. let for \(x=b\) or \(d\): \[\sigma(F_{x})=B_{(\lambda,\mu)}(\xi)=\mathbb{C}[\theta_{i},\xi_{k},q_{j},q_{j }^{-1},t^{\ell};1\leq j\leq s,\;1\leq i\leq p,\;1\leq\ell\leq s+r,k\in\delta+ \mathbb{Z}_{>0}], \tag{5.44}\] where the \(\xi_{i}\) are again Grassmann variables that anti-commute with the \(\theta_{j}\)'s. **We have 4 cases** 1. \(d_{\infty}\), \(r\) is even, then \(p=\frac{r}{2}\) and \(\delta=-\frac{1}{2}\); 2. \(d_{\infty}\), \(r\) odd, then \(p=\frac{r+1}{2}\) and \(\delta=0\); 3. \(b_{\infty}\), \(r\) odd, then \(p=\frac{r-1}{2}\) and \(\delta=-\frac{1}{2}\); 4. \(b_{\infty}\), \(r\) even, then \(p=\frac{r}{2}\) and \(\delta=0\). It is straightforward to deduce the following proposition. **Proposition 5.12**: _Let \(w_{(\lambda,\mu)}\) be given by (5.20). Set \(\lambda_{s+a}=:\mu_{a}\) and \(\omega_{a}=e^{\frac{2\pi\sqrt{-1}}{\lambda_{a}}}\) if \(1\leq a\leq s\) and \(\omega_{a}=e^{\frac{2\pi\sqrt{-1}}{2\lambda_{a}}}\) is \(s+1\leq a\leq r+s\). Let \([a]\) be the largest integer greater or equal to \(a\) and let \(\eta=1\) if \(r\) is odd and \(\eta=0\) if \(r\) is even. 
Then_ _(a) The element_ \[\tilde{w}_{(\lambda,\mu)}=\sum_{a=1}^{s}(1-2\eta)e_{n+1,n+1}+(-1)^{ \delta_{xb}}\big{(}\omega_{a}^{\frac{\lambda_{a}+1}{2}}e_{M_{a}+1,M_{a+1}}+ \omega_{a}^{-\frac{\lambda_{a}+1}{2}}e_{2n+1-M_{a},2n+1-M_{a+1}+1}+\] \[\qquad\quad+\sum_{i=1}^{\lambda_{a}-1}(\omega_{a}^{\frac{\lambda_ {a}+1}{2}}e_{M_{a}+i+1,M_{a}+i}+\omega^{-\frac{\lambda_{a}+1}{2}}e_{2n+1-M_{a} -i,2n+2-M_{a}-i})\big{)}+\] \[\qquad\quad+\sum_{a=1}^{\left[\frac{\pi}{2}\right]}\big{(}e_{M_{s +2a-1}+1,2n+2-M_{s+2a}}+e_{2n+1-M_{s+2a},M_{s+2a}}+e_{M_{s+2a}+1,2n+2-M_{s+2a +1}}+\] \[\qquad\quad+e_{2n+1-M_{s+2a},M_{s+2a+1}}+\sum_{i=1}^{\lambda_{s+2 a-1}-1}e_{M_{s+2a-1}+i+1,M_{s+2a-1}+i}+e_{2n+1-M_{s+2a-1}-i,2n+2-M_{s+2a-1}-i}+\] \[\qquad\quad+\sum_{i=1}^{\lambda_{s+2a}-1}e_{M_{s}+2a+i+1,M_{s+2a+ i}}+e_{2n+1-M_{s+2a}-i,2n+2-M_{s+2a}-i}.\big{)}\] \[\qquad\quad+\eta\big{(}e_{2n+1-M_{s+r},n}+e_{M_{s+r}+1,n+2}+\sum_ {i=1}^{\mu_{r}-1}(e_{M_{s+r}+i+1,M_{s+r}+i}+e_{2n+1-M_{s+r}-i,2n+2-M_{s+r}-i}) \big{)}\] \[\qquad\quad\in O_{2n+1}\] _is a lift of \(w_{(\lambda,\mu)}\) and \(\tilde{w}_{(\lambda,\mu)}=S^{-1}\exp(2\pi\sqrt{-1}h_{w})S\), where_ \[h_{w}=\sum_{a=1}^{s}\sum_{i=1}^{\lambda_{a}}\frac{(1+\delta_{x, b})\lambda_{a}-2j+1}{2\lambda_{a}}\epsilon_{M_{a}+j}+\sum_{a=1}^{\left[\frac{ \pi}{2}\right]}\sum_{i=1}^{\mu_{2a-1}}\frac{\mu_{2a-1}-i+1}{2\mu_{2a-1}} \epsilon_{M_{s+2a-1}+i}+\] \[\qquad\qquad\quad+\sum_{i=1}^{\mu_{2a}}\frac{\mu_{2a}-i}{2\mu_{ 2a}}\epsilon_{M_{s+2a}+i}+\eta\sum_{i=1}^{\mu_{r}}\frac{\mu_{r}-i+1}{2\mu_{r} }\epsilon_{M_{s+r}+i}\in\mathfrak{h}\] _satisfies (5.2), if \(x=d\) and (5.3) if \(x=b\) and_ \[S= (1-\eta)e_{n+1,n+1}+\sum_{a=1}^{s}\sum_{i=1}^{\lambda_{a}}\sum_{i,j =1}^{\lambda_{a}}(\omega_{a}^{ij}e_{M_{a}+i,M_{a}+j}+\omega_{a}^{-ij}e_{2n+2-M_{ a}-i,2n+2-M_{a}-j})\] \[+\sum_{a=1}^{\left[\frac{r}{2}\right]}\frac{1}{\sqrt{4\mu_{2a-1}} }\sum_{k=1}^{\mu_{2a-1}}[(-1)^{k}(\gamma_{s+2a-1}e_{M_{s+2a-1}+k,M_{s+2a-1}+1}+ \gamma_{s+2a-1}^{-1}e_{2n+2-M_{s+2a-1}-k,M_{s+2a-1}+1}+\] \[+\gamma_{s+2a-1}^{-1}e_{2n+2-M_{s+2a-1}-k,2n+1-M_{s+2a-1}}+\gamma_ {s+2a-1}e_{M_{s+2a-1}+k,2n+1-M_{s+2a-1}})+\] \[+(e_{M_{s+2a-1}+k,M_{s+2a+1}}+e_{2n+2-M_{s-2a-1}-k,M_{s+2a+1}}+e_ {M_{s+2a-1}+k,2n+2-M_{s+2a+1}})]+\] \[+\frac{1}{\sqrt{4\mu_{2a}}}\sum_{k=1}^{\mu_{2a}} \sqrt{-1}[(-1)^{k}(-\gamma_{s+2a}e_{M_{s+2a}+k,M_{s+2a-1}+1}+ \gamma_{s+2a}^{-1}e_{2n+2-M_{s+2a}-k,M_{s+2a-1}+1}\] \[-\gamma_{s+2a}^{-1}e_{2n+2-M_{s+2a}-k,2n+1-M_{s+2a-1}}+\gamma_{s+ 2a}e_{M_{s+2a}+k,2n+1-M_{s-2a-1}})+\] \[+(e_{M_{s+2a}+k,M_{s+2a+1}}+e_{2n+2-M_{s+2a}-k,M_{s+2a+1}}\] \[-e_{2n+2-M_{s+2a}-k,2n+2-M_{s+2a+1}}-e_{M_{s+2a}+k,2n+2-M_{s+2a+1 }})]+\] \[+\frac{1}{\sqrt{2\mu_{2a-1}}}\sum_{i=1}^{\mu_{2a-1}}\sum_{j=2}^{ \mu_{2a-1}}(\omega_{s+2a-1}^{(j-\mu_{2a-1}-1)i}e_{M_{s+2a-1}+i,M_{s+2a-1}+j}+\] \[+\omega_{s+2a-1}^{(j-\mu_{2a-1}-1)(\mu_{2a-1}+i)}e_{2n+2-M_{s+2a-1 }-i,M_{s+2a-1}+j}+\] \[+\omega_{s+2a-1}^{-(j-\mu_{2a-1}-1)(i-\mu_{2a-1})}e_{M_{s+2a-1}+ i,2n+2-M_{s+2a-1}-j}+\] \[+\omega_{s+2a-1}^{-(j-\mu_{2a-1}-1)i}e_{2n+2-M_{s+2a-1}+i,2n+2-M_ {s+2a-1}-j})+\] \[+\frac{1}{\sqrt{2\mu_{2a}}}\sum_{i=1}^{\mu_{2a}}\sum_{j=1}^{\mu_{ 2a}-1}(\omega_{s+2a}^{(j-\mu_{2a})i}e_{M_{s+2a}+i,M_{s+2a}+j}+\omega_{s+2a}^{( j-\mu_{2a})(\mu_{2a}+i)}e_{2n+2-M_{s+2a}-i,M_{s+2a}+j}+\] \[+\omega_{s+2a}^{-(j-\mu_{2a})(i-\mu_{2a})}e_{M_{s+2a}+i,2n+2-M_{s +2a}-j}+\omega_{s+2a}^{-(j-\mu_{2a})i}e_{2n+2-M_{s+2a}-i,2n+2-M_{s+2a}-j})+\] \[+\eta\big{\{} \sum_{i=1}^{\mu_{r}}\big{(}\frac{(-1)^{i}}{\sqrt{4\mu_{r}}}(e_{M_{ 
s+r}+i1}+(-1)^{\mu_{r}}e_{M_{s+r}+i,2n+1-M_{s+r}}+(-1)^{\mu_{r}}e_{2n+2-M_{s+r}-i,M_{ s+r}+1}+\] \[+e_{2n+2-M_{s+r}-i,2n+1-M_{s+r}}\big{)}+\frac{1}{\sqrt{2\mu_{r}}} (e_{i,n+1}+e_{2n+1-M_{s+r}-i,n+1})+\] \[+\frac{1}{\sqrt{2\mu_{r}}}\sum_{j=2}^{\mu_{r}}(\omega^{(j-\mu_{r}- 1)i}e_{M_{s+r}+i,M_{s+r}+j}+\omega^{(\mu_{r}-j+1)(\mu_{r}+i)}e_{M_{s+r}+i,2n+2- M_{s+r}-j}+\] \[+\omega^{(j-\mu_{r}-1)(\mu_{r}+i)}e_{2n+2-M_{s+r}-i,M_{s+r}+j}+ \omega^{(\mu_{r}-j+1)i}e_{2n+2-M_{s+r}-i,2n+2-M_{s+r}-j})\big{)}+\] \[\sqrt{\frac{-1}{2}}(\gamma_{s+r}^{-1}e_{n+1,M_{s+r}+1}-\gamma_{s +r}e_{n+1,2n+1-M_{s+r}})\big{\}},\] _where \(\gamma_{a}=1\) if \(M_{a}\) is even and \(\gamma_{a}=i\) if \(M_{a}\) is odd. (b) The order \(N\) of \(\exp(2\pi\sqrt{-1}\mathrm{ad}\,h_{w})\) is equal to the least common multiple \(N^{\prime}\) of \(\lambda_{1},\lambda_{2},\ldots,\lambda_{s},2\lambda_{s+1},2\lambda_{s+2},\ldots 2 \lambda_{r+s}\) if \(s=0\) or \(N^{\prime}\frac{2\lambda_{a}(1-\frac{\delta_{x,b}}{2})+1}{\lambda_{a}}\in 2\mathbb{Z}\) for all \(1\leq a\leq s\),and \(2N^{\prime}\) otherwise._ _(c) The elements \(a^{\delta a,eb}_{k,\ell}:=S^{-1}e^{\delta a,eb}_{k\ell}S\) and \(a^{\delta a}_{k}:=S^{-1}e^{\delta a,eb}_{k\ell}S\) form a new basis of \(so_{2n+1}\) and the representation \(r\) in \(F_{d}\) and \(F_{b}\) of their generating fields \(a^{\delta a,eb}_{k\ell}(z)\) and \(a^{\delta a}_{k}(z)\) are equal to_ \[r(a^{\delta_{a},eb}_{k,\ell})(z)=C^{\delta a}_{k}C^{eb}_{\ell}:\Psi^{\delta a} _{k}(z)\Psi^{eb}_{\ell}(z):,\qquad r(a^{\delta_{a}}_{k})(z)=C^{\delta a}_{k}z^ {-\frac{1}{2}}:\Psi^{\delta a}_{k}(z)\sigma(z^{N}):,\] _where \(C^{\delta a}_{k}\in\mathbb{C}\) and the fields \(\Psi^{\delta a}_{k}(z)\) and \(\sigma(z)\) are described below. (1) For the embedding in \(d_{\infty}\),_ \[C^{+a}_{k}=\begin{cases}\frac{\omega^{-k}_{k}}{\sqrt{\lambda_{a}}}&1\leq a \leq s,\\ \frac{\gamma_{a}(-1)^{k}}{\sqrt{2\lambda_{a}}}&s<a\leq r+s,\end{cases}\qquad C ^{-a}_{k}=\begin{cases}\frac{1}{\sqrt{\lambda_{a}}}&1\leq a\leq s,\\ \frac{\gamma_{a}^{-1}(-1)^{k}}{\sqrt{2\lambda_{a}}}&s<a\leq r+s,\end{cases}\] _and_ \[\Psi^{\pm a}_{k}(z)=\begin{cases}\psi^{\pm a}(\omega^{-k}_{a}z^{\frac{N}{ \lambda_{a}}})z^{\frac{N}{2\lambda_{a}}-\frac{1}{2}}&1\leq a\leq s,\\ \tilde{\phi}^{a}(\pm\omega^{-k}_{a}z^{\frac{N}{2\lambda_{a}}})z^{-\frac{1}{2}}& s<a\leq s+r,\end{cases}\] _where_ \[\psi^{\pm a}(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}\psi^{\pm a}_{i}z^{-i-\frac{1 }{2}},\quad\tilde{\phi}^{s+b}(z)=\sum_{i\in\mathbb{Z}}\tilde{\phi}^{s+b}_{i}z ^{-i},\quad 1\leq a\leq s,\ 1\leq b\leq r, \tag{5.45}\] _and_ \[\sigma(z)=\begin{cases}\sum_{i\in\mathbb{Z}}\sigma_{i}z^{-i}&r\text{ odd},\\ \sum_{i\in\frac{1}{2}+\mathbb{Z}}\sigma_{i}z^{-i-\frac{1}{2}}&r\text{ even},\end{cases}\] _are the following generating fields for the relabeld generators of the Clifford algebra \(C\ell_{d}\) (\(1\leq a\leq s\), \(1\leq b\leq\left[\frac{r+1}{2}\right]\), with \(2b\neq r+1\) and \(1\leq i\leq\lambda_{a}\), \(1\leq j<\lambda_{s+2b-1}\) or \(\lambda_{s+2b}\)):_ \[\sigma_{\ell+\frac{1-n}{2}} =\begin{cases}\phi_{(\ell+\frac{1}{2})(2n+1)}&r\text{ even},\\ \sqrt{\frac{-1}{2}}(\gamma^{-1}_{r}\phi_{(2n+1)\ell-\frac{1}{2}}-\gamma_{r} \phi_{(2n+1)\ell+\frac{1}{2}})&r\text{ odd},\end{cases}\] \[\psi^{+a}_{\ell\lambda_{a}-i+\frac{1}{2}} =\phi_{\ell(2n+1)-M_{a}-i+\frac{1}{2}},\qquad\qquad\qquad\qquad \psi^{-a}_{\ell\lambda_{a}+i-\frac{1}{2}}=\phi_{\ell(2n+1)+M_{a}+i-\frac{1}{2}},\] \[\tilde{\phi}^{s+2b-1}_{2\ell\mu_{2b-1}-j} =\gamma^{-1}_{s+2b-1}\phi_{\ell(2n+1)-M_{s+2b-1}-j-\frac{1}{2}}, 
\quad\tilde{\phi}^{s+2b-1}_{2\ell\mu_{2b-1}+j}=\gamma_{s+2b-1}(-1)^{j}\phi_{ \ell(2n+1)+M_{s+2b-1}+j+\frac{1}{2}},\] \[\tilde{\phi}^{s+2b-1}_{2\ell\mu_{2b-1}+\mu_{2b-1}} =\begin{cases}\frac{\gamma^{-1}_{s+2b-1}}{\sqrt{2}}(\phi_{\ell(2n +1)+M_{s+2b+1}-\frac{1}{2}}+\phi_{(\ell+1)(2n+1)-M_{s+2b+1}+\frac{1}{2}})&2b-1 \neq r,\\ \gamma^{-1}_{r}\phi_{(2n+1)\ell+n+\frac{1}{2}}&2b-1=r,\end{cases}\] \[\tilde{\phi}^{s+2b-1}_{2\ell\mu_{2b}} =\begin{cases}\frac{1}{\sqrt{2}}(\phi_{\ell(2n+1)+M_{s+2b-1}+ \frac{1}{2}}+\phi_{\ell(2n+1)-M_{s+2b-1}-\frac{1}{2}})&2b-1\neq r,\\ \frac{1}{\sqrt{2}}(\gamma^{-1}_{r}\phi_{(2n+1)\ell-\frac{1}{2}}+\gamma_{r}\phi_ {(2n+1)\ell+\frac{1}{2}})&2b-1=r,\end{cases}\] \[\tilde{\phi}^{s+2b}_{2\ell\mu_{2b}-k} =\gamma^{-1}_{s+2b}\phi_{\ell(2n+1)-M_{s+2b}-k+\frac{1}{2}}, \quad\tilde{\phi}^{s+2b}_{2\ell\mu_{2b}+k}=\gamma_{s+2b}(-1)^{k}\phi_{\ell(2n+1 )+M_{s+2b}+k-\frac{1}{2}},\] \[\tilde{\phi}^{s+2b}_{2\ell\mu_{2b}} =\sqrt{\frac{-1}{2}}(\phi_{\ell(2n+1)-M_{s+2b-1}-\frac{1}{2}}- \phi_{\ell(2n+1)+M_{s+2b-1}+\frac{1}{2}}),\] \[\tilde{\phi}^{s+2b}_{2\ell\mu_{s+2b}+\mu_{s+2b}} =\frac{\gamma^{-1}_{s+2b}\sqrt{-1}}{\sqrt{2}}(\phi_{(\ell+1)(2n+1 )-M_{s+2b+1}+\frac{1}{2}}-\phi_{\ell(2n+1)+M_{s+2b+1}-\frac{1}{2}}).\] _(2) For the embedding in_ \(b_{\infty}\)_,_ \[C_{k}^{+a}=\begin{cases}\frac{(-\omega_{a})^{-k}}{\sqrt{\lambda_{a}}}&1\leq a\leq s,\\ \frac{1}{\sqrt{2\lambda_{a}}}&s<a\leq r+s,\end{cases}\qquad C_{k}^{-a}=\begin{cases} \frac{(-1)^{k}}{\sqrt{\lambda_{a}}}&1\leq a\leq s,\\ \frac{1}{\sqrt{2\lambda_{a}}}&s<a\leq r+s,\end{cases}\] _and_ \[\Psi_{k}^{\pm a}(z)=\begin{cases}\psi^{\pm a}(\omega_{a}^{-k}z^{\frac{N}{ \lambda_{a}}})z^{\frac{N}{2\lambda_{a}}-\frac{1}{2}}&1\leq a\leq s,\\ \tilde{\phi}^{a}(\pm\omega_{a}^{-k}z^{\frac{N}{2\lambda_{a}}})z^{-\frac{1}{2} }&s<a\leq s+r,\end{cases}\] _where_ \[\psi^{\pm a}(z)=\sum_{i\in\frac{1}{2}+\mathbb{Z}}\psi_{i}^{\pm a}z^{-i-\frac{ 1}{2}},\quad\tilde{\phi}^{s+b}(z)=\sum_{i\in\mathbb{Z}}\tilde{\phi}^{s+b}z^{- i},\quad 1\leq a\leq s,\ 1\leq b\leq r, \tag{5.47}\] _and_ \[\sigma(z)=\begin{cases}\sum_{i\in\mathbb{Z}}\sigma_{i}z^{-i}&r\text{ even},\\ \sum_{i\in\frac{1}{2}+\mathbb{Z}}\sigma_{i}z^{-i-\frac{1}{2}}&r\text{ odd}\end{cases}\] _are the generating fields for the relabeled generators of the Clifford algebra \(C\ell_{b}\) (\(1\leq a\leq s\), \(1\leq b\leq\left[\frac{r+1}{2}\right]\), with \(2b\neq r+1\) and \(1\leq i\leq\lambda_{a}\), \(1\leq j\leq\lambda_{s+2b-1}\) or \(\lambda_{s+2b}\)):_ \[\sigma_{\ell+\frac{9}{2}} =\begin{cases}\tilde{\phi}_{\ell(2n+1)}&r\text{ even},\\ \sqrt{\frac{-1}{2}}(\gamma_{r}^{-1}\tilde{\phi}_{(2n+1)\ell+n+1}-\gamma_{r} \tilde{\phi}_{(2n+1)\ell+n})&r\text{ odd},\end{cases}\] \[\psi_{\ell\lambda_{a}\pm(\lambda_{a}+1-i-\frac{1}{2})}^{\pm} =\tilde{\phi}_{\ell(2n+1)\pm(n+1-M_{a}-i)},\] \[\tilde{\phi}_{2\ell\mu_{2b-1}+\mu_{2b-1}-j}^{s+2b-1}\tilde{\phi} _{\ell(2n+1)+n-M_{s+2b-1}-j},\quad\tilde{\phi}_{2\ell\mu_{2b-1}+j-\mu_{2b-1}} ^{s+2b-1}=(-1)^{\mu_{2b-1}-j}\tilde{\phi}_{\ell(2n+1)+M_{s+2b-1}+j-n},\] \[\tilde{\phi}_{2\ell\mu_{2b-1}+\mu_{2b-1}}^{s+2b-1} =\begin{cases}\frac{\gamma_{s+2b-1}}{\sqrt{2}}(\tilde{\phi}_{\ell (2n+1)+n-M_{s+2b-1}}+\tilde{\phi}_{(\ell+1)(2n+1)+n+1+M_{s+2b-1}})&2b-1\neq r, \\ \frac{1}{\sqrt{2}}(\tilde{\phi}_{(2n+1)\ell+n}+(-1)^{\mu_{r}}\tilde{\phi}_{(2 n+1)\ell+n+1})&2b-1=r,\end{cases}\] \[\tilde{\phi}_{2\ell\mu_{2b}}^{s+2b-1} =\begin{cases}\frac{1}{\sqrt{2}}(\tilde{\phi}_{\ell(2n+1)+n+1-M_{ s+2b+1}}+\tilde{\phi}_{\ell(2n+1)-n-1+M_{s+2b+1}})&2b-1\neq r,\\ 
\tilde{\phi}_{\ell(2n+1)}&2b-1=r,\end{cases}\] \[\tilde{\phi}_{2\ell\mu_{2b}+\mu_{2b}-j}^{s+2b} =\tilde{\phi}_{\ell(2n+1)+n+1-M_{s+2b-j}},\quad\tilde{\phi}_{2 \ell\mu_{2b}-\mu_{2b}+k}^{s+2b}=(-1)^{j-\mu_{2b}}\tilde{\phi}_{\ell(2n+1)+M_{ s+2b}+j-n-1},\] \[\tilde{\phi}_{2\ell\mu_{2b}}^{s+2b} =\sqrt{\frac{-1}{2}}(\tilde{\phi}_{\ell(2n+1)-M_{s+2b+1}}-\tilde{ \phi}_{\ell(2n+1)-n+M_{s+2b+1}}),\] \[\tilde{\phi}_{2\ell\mu_{s+2b}+\mu_{s+2b}}^{s+2b} =\frac{\gamma_{s+2b}\sqrt{-1}}{\sqrt{2}}(\tilde{\phi}_{\ell(2n+1 )+n-M_{s+2b-1}}-\tilde{\phi}_{\ell(2n+1)+n+1+M_{s+2b-1}}).\] _In both cases, the relabeled fermions satisfy (5.34) and_ \[\begin{split}\sigma_{i}\sigma_{j}+\sigma_{j}\sigma_{i}& =\delta_{i,-j},\quad\psi_{i}^{\pm a}\sigma_{j}+\sigma_{j}\psi_{i}^{\pm a }=0,\\ \tilde{\phi}_{i}^{s+b}\sigma_{j}+\sigma_{j}\tilde{\phi}_{i}^{s+b}& =0,\qquad\sigma_{i}|0\rangle=0\quad i>0.\end{split} \tag{5.49}\] _and (5.34), for_ \(1\leq a,c\leq s,\ 1\leq b,d\leq r,\quad i,j\in\frac{1}{2}+\mathbb{Z}\)_._ _(d) The representation of_ \(\hat{so}_{2n+1}\) _is given in terms of vertex operators by formulas_ (3.66) _and (_3.73_), by substitution of roots of unity_ \(\omega_{a}\) _and powers of_ \(z\) _as given in the above formulas, and by_ \[\begin{split}\sigma:&\psi^{\pm a}(\omega_{a}^{-k}z^{ \frac{N}{2\mu_{b}}}):\sigma(z^{N}):\sigma^{-1}=(-1)^{\sum_{k=a}^{s}q_{k}\frac{ \partial}{\partial q_{k}}}q_{a}^{\pm 1}(\omega_{a}^{-k}z^{\frac{N}{2\lambda_{a}}})^{ \pm q_{a}\frac{\partial}{\partial q_{a}}}\times\\ &\exp\left(\pm\sum_{j>0}t_{j}^{a}(\omega_{a}^{-k}z^{\frac{N}{ \lambda_{a}}})^{j}\right)\exp\left(\mp\sum_{j>0}\frac{\partial}{\partial t_{j} ^{a}}\frac{(\omega_{a}^{-k}z^{\frac{N}{\lambda_{a}}})^{-j}}{j}\right)\Xi(z^{N }),\end{split} \tag{5.50}\] \[\begin{split}\sigma:\tilde{\phi}^{s+b}(\omega_{s+b}^{-k}z^{ \frac{N}{2\mu_{b}}})\sigma(z^{N}):\sigma^{-1}=&(-1)^{\sum_{k=1}^ {s}q_{k}\frac{\partial}{\partial q_{k}}}R_{b}(\theta)\exp\left(\sum_{j>0,\ \mathrm{odd}}t_{j}^{s+b}(\omega_{s+b}^{-k}z^{\frac{N}{2\mu_{b}}})^{j} \right)\times\\ &\exp\left(-2\sum_{j>0,\ \mathrm{odd}}\frac{\partial}{\partial t_{j} ^{s+b}}\frac{(\omega_{s+b}^{-k}z^{\frac{N}{2\mu_{b}}})^{-j}}{j}\right)\Xi(z^{N }),\end{split} \tag{5.51}\] _where_ \[\Xi[z]=(1+2\delta)R_{r+1}(\theta)+\sum_{j\in\delta+\mathbb{Z}>0}\left(\xi_{j} z^{\delta+j}+\frac{\partial}{\partial\xi_{j}}z^{\delta-j}\right). \tag{5.52}\] _In particular, the Heisenberg algebra (_3.30_) is given by_ \(\alpha_{a}(z)=\lambda_{a}z^{\frac{\lambda_{a}}{N}-1}r(a_{\lambda_{a}\lambda_{ a}}^{+a,-a})(z^{\frac{\lambda_{a}}{N}})\) _for_ \(1\leq a\leq s\)_, respectively_ \(\alpha_{s+b}=\lambda_{s+b}z^{\frac{2\lambda_{s+b}}{N}}r(a_{\lambda_{s+b}\lambda _{s+b}}^{+(s+b),-(s+b)})(z^{\frac{2\lambda_{s+b}}{N}})\) _for_ \(1\leq b\leq r\)_, and their modes satisfy (_3.31_)._ The \(q\)-dimension formulas in this case resemble the formula given in (5.35). See [25], for the explicit formulas. ### Weyl group elements and the twisted realization of \(\hat{\mathfrak{g}}^{(2)}\) Let \(\mathfrak{g}\) be one of the classical Lie algebras \(gl_{n}\) or \(so_{2n}\). We copy the construction of Subsection 5.1, except that we replace the lift of the element of the Weyl group \(\tilde{w}\) by \(\tilde{\sigma}:=\tilde{w}\sigma\), where \(\sigma\) is the outer automorphism that acts as (2.14), i.e. \(\tilde{\sigma}(g)=-\mathrm{Ad}\,\tilde{w}(\mathrm{Ad}\,J_{n}(g^{t}))\) for \(\mathfrak{g}=gl_{n}\) and \(\tilde{\sigma}(g)=\mathrm{Ad}\,\tilde{w}(\mathrm{Ad}\,O(g))\) for \(\mathfrak{g}=so_{2n}\). 
**In the case of \(gl_{n}\)**, this \(\tilde{\sigma}\) produces not only a permutation of the elements \(\epsilon_{i}\), but may also multiply them by \(-1\). Thus we get both positive and negative cycles, i.e. \(\tilde{\sigma}\) acts as \[\epsilon_{i_{1}}\rightarrow-\epsilon_{i_{2}}\rightarrow\epsilon_{i_{3}}\rightarrow\cdots\rightarrow(-1)^{m}\epsilon_{i_{m-1}}\rightarrow(-1)^{m+1}\epsilon_{i_{m}}\rightarrow(-1)^{m}\epsilon_{i_{1}}.\] We observe that if \(m\) is even, we obtain a positive cycle, but if \(m\) is odd we get a negative cycle. Hence, as before, \(\tilde{\sigma}\) is related to a pair of partitions \((\lambda,\mu)=((\lambda_{1},\lambda_{2},\ldots,\lambda_{s}),(\mu_{1},\mu_{2},\ldots,\mu_{r}))\), where all the parts of \(\lambda\) are even and all the parts of \(\mu\) are odd, such that \(|\lambda|+|\mu|=n\). This means that if \(n\) is even (resp. odd), \(r\) must also be even (resp. odd). As before (see [24] for more details), a part \(\lambda_{i}\) is related to a pair of charged free fermionic fields \(\psi^{\pm i}(z)\) and a part \(\mu_{j}\) to a twisted neutral free fermionic field \(\tilde{\phi}^{j}(z)\). This means that we get a realization of \(\hat{gl}_{2m+1}^{(2)}\subset b_{\infty}\) (resp. \(\hat{gl}_{2m}^{(2)}\subset d_{\infty}\)) which has an odd (resp. even) number of twisted neutral free fermionic fields. The \(q\)-dimension formulas in this case are given by (5.35) for \(\hat{gl}_{2m}^{(2)}\subset d_{\infty}\), and by the following formula in the \(\hat{gl}_{2m+1}^{(2)}\subset b_{\infty}\) case (see [24] for more information):
\[2^{[\frac{r}{2}]}\sum_{j_{1},\ldots,j_{s}\in\mathbb{Z}}q^{\frac{N}{2}(\frac{j_{1}^{2}}{\lambda_{1}^{2}}+\frac{j_{2}^{2}}{\lambda_{2}^{2}}+\cdots+\frac{j_{s}^{2}}{\lambda_{s}^{2}})}\prod_{a=1}^{s}\prod_{i=1}^{\infty}\frac{1}{1-q^{\frac{iN}{\lambda_{a}}}}\prod_{b=1}^{r}\prod_{j=1}^{\infty}\frac{1}{1-q^{\frac{(2j-1)N}{\mu_{b}}}}.\]
**In the case of \(so_{2n}\)**, this \(\tilde{\sigma}\) gives a permutation of all the \(\epsilon_{i}\), \(1\leq i\leq n\), together with sign changes. But since \(\operatorname{Ad}O(\epsilon_{i})=(-)^{\delta_{in}}\epsilon_{i}\), the cycle of \(\operatorname{Ad}\tilde{w}\) that contains \(\epsilon_{n}\) changes sign, i.e., if the cycle is positive (resp. negative) for \(\operatorname{Ad}\tilde{w}\) it becomes negative (resp. positive). For instance, if \(\operatorname{Ad}\tilde{w}\) acts on a positive cycle as \[\epsilon_{n}\to\epsilon_{i_{2}}\to\epsilon_{i_{3}}\to\cdots\to\epsilon_{i_{m}}\to\epsilon_{n},\] then \(\tilde{\sigma}\) acts on these elements as \[\epsilon_{n}\to-\epsilon_{i_{2}}\to-\epsilon_{i_{3}}\to\cdots\to-\epsilon_{i_{m}}\to-\epsilon_{n}\to\epsilon_{i_{2}}\to\epsilon_{i_{3}}\to\cdots\to\epsilon_{i_{m}}\to\epsilon_{n}.\] Hence, instead of an even number of negative cycles, as for the conjugacy classes of the Weyl group of \(so_{2n}\), we obtain an odd number of negative cycles. Thus we obtain a realization of \(\hat{so}_{2n}^{(2)}\subset b_{\infty}\) which is again related to a pair of partitions \((\lambda,\mu)=((\lambda_{1},\lambda_{2},\ldots,\lambda_{s}),(\mu_{1},\mu_{2},\ldots,\mu_{r}))\), where a part \(\lambda_{i}\) is related to a pair of charged free fermionic fields \(\psi^{\pm i}(z)\) and a part \(\mu_{j}\) to a twisted neutral free fermionic field \(\tilde{\phi}^{j}(z)\), but in this case the parts are arbitrary, such that \(|\lambda|+|\mu|=n\), and \(r\) is odd. The \(q\)-dimension formulas in this case are again given by (5.35) (see [24]). 
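The bookkeeping of positive and negative cycles above is easy to check by machine. The following is a minimal Python sketch (our own illustration; the function name `signed_cycle_type` and the sign-vector encoding are not taken from the text) computing the signed cycle type of a signed permutation \(\epsilon_{i}\mapsto\pm\epsilon_{\pi(i)}\). It illustrates the two statements above: for the \(gl_{n}\) twist, where every step carries a sign \(-1\), a cycle of even length is positive and a cycle of odd length is negative; and for \(so_{2n}\), flipping the single sign on \(\epsilon_{n}\) (the effect of \(\operatorname{Ad}O\)) turns the positive cycle through \(\epsilon_{n}\) into a negative one.

```python
def signed_cycle_type(perm, signs):
    """Cycle type of a signed permutation w: eps_i -> signs[i] * eps_{perm[i]} (0-indexed).
    A cycle is negative iff the product of the signs along it is -1.
    Returns (positive cycle lengths, negative cycle lengths), each sorted decreasingly."""
    n = len(perm)
    seen, pos, neg = [False] * n, [], []
    for start in range(n):
        if seen[start]:
            continue
        length, sign, i = 0, 1, start
        while not seen[i]:
            seen[i] = True
            sign *= signs[i]
            i = perm[i]
            length += 1
        (pos if sign == 1 else neg).append(length)
    return sorted(pos, reverse=True), sorted(neg, reverse=True)

m = 4
cycle = [(i + 1) % m for i in range(m)]                 # eps_1 -> eps_2 -> ... -> eps_m -> eps_1
print(signed_cycle_type(cycle, [1] * m))                # ([4], []): a positive m-cycle
print(signed_cycle_type(cycle, [-1] * m))               # ([4], []): gl_n twist, m even, so positive
print(signed_cycle_type(cycle, [1] * (m - 1) + [-1]))   # ([], [4]): one sign flip, so negative
```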
**Proposition 5.13**: _The representation of these twisted algebras on the Fock space \(B_{(\lambda,\mu)}\) is given in terms of the following (twisted) fermionic fields. Let \(\omega_{a}=e^{\frac{2\pi\sqrt{-1}}{\lambda_{a}}}\), \(\omega_{b}=e^{\frac{2\pi\sqrt{-1}}{\lambda_{b}}}\), \(\omega_{c}=e^{\frac{\pi\sqrt{-1}}{\mu_{c-s}}}\) and \(\omega_{d}=e^{\frac{\pi\sqrt{-1}}{\mu_{d-s}}}\), and \(1\leq a,b\leq s<c,d\leq r+s\), then (a) For \({\hat{gl}_{n}^{(2)}}\):_ \[\sigma:\psi^{(-)^{j}a}(\omega_{a}^{j}z^{\lambda_{b}})\psi^{(-)^{k+ 1}b}(\omega_{b}^{k}z^{\lambda_{a}}):\sigma^{-1}, 1\leq j\leq\lambda_{a},\ 1\leq k\leq\lambda_{b},\] \[\sigma:\psi^{(-)^{j}a}(\omega_{a}^{j}z^{2\mu_{c-s}})\tilde{\phi}^ {c}((-1)^{k+1}\omega_{c}^{k}z^{\lambda_{a}}):\sigma^{-1}, 1\leq j\leq\lambda_{a},\ 1\leq k\leq\mu_{c-s},\] \[\sigma:\tilde{\phi}^{c}((-1)^{k}\omega_{c}^{k}z^{\lambda_{a}}) \psi^{(-)^{j+1}a}(\omega_{a}^{j}z^{2\mu_{c-s}}):\sigma^{-1}, 1\leq j\leq\lambda_{a},\ 1\leq k\leq\mu_{c-s},\quad\text{and}\] \[\sigma:\tilde{\phi}^{c}((-1)^{j}\omega_{c}^{j}z^{2\mu_{d-s}}) \tilde{\phi}^{d}((-1)^{k+1}\omega_{d}^{k}z^{2\mu_{c-s}}):\sigma^{-1}, 1\leq j\leq\mu_{c-s},\ 1\leq k\leq\mu_{d-s}.\] _(b) For \({\hat{so}_{2n}^{(2)}}\): (\(\delta,\epsilon=+\) or \(-\), \(j,k\in\mathbb{Z}\))_ \[\sigma:\psi^{\epsilon a}(\omega_{a}^{j}z^{\lambda_{b}})\psi^{bb}( \omega_{b}^{k}z^{\lambda_{a}}):\sigma^{-1},\quad\sigma:\psi^{\epsilon a}( \omega_{a}^{j}z^{2\mu_{c-s}})\tilde{\phi}^{c}(\omega_{c}^{k}z^{\lambda_{a}}): \sigma^{-1}\quad\text{and}\] \[\sigma:\tilde{\phi}^{c}(\omega_{c}^{j}z^{2\mu_{d-s}})\tilde{\phi} ^{d}(\omega_{d}^{k}z^{2\mu_{c-s}}):\sigma^{-1},\] _where one uses the formulas (3.66) and (3.73) to express them as vertex operators. In particular, we have the Heisenberg algebra (3.30), which are equal to \(\sigma\alpha^{a}(z)\sigma^{-1}=\sigma:\psi^{+a}(z)\psi^{-a}(z):\sigma^{-1}\), respectively \(\sigma\alpha^{c}(z)\sigma^{-1}=\frac{1}{2}\sigma:\tilde{\phi}^{c}(z)\tilde{\phi }^{c}(-z):\sigma^{-1}\) and their modes satisfy (3.31)._ Here we use the notation \((-)^{j}a\) as upper index in the charged fermionic fields, meaning that \((-)^{j}a=+a\) (resp. \(=-a\)) if \(j\) is even (resp. odd). For more detailed information on vertex operators for these twisted affine Lie algebras we refer to [24]. ## 6 A map from the conjugacy classes of the Weyl group to nilpotent elements in \(\mathfrak{g}\) ### Construction based on Section 5 In the previous chapter we have constructed for every conjugacy class of the Weyl group of a classical Lie algebra \(\mathfrak{g}\) (equivalently, for a partion \(\lambda\) for \(gl_{n}\), and a pair of partitions \((\lambda,\mu)\) for other classical Lie algebras) the corresponding Heisenberg subalgebra in \(\tilde{\mathfrak{g}}\). This Heisenberg subalgebra is the span of \(K\) and the elements \(a^{bb}_{\lambda_{b}\lambda_{b}}\) for \(1\leq b\leq s\) appearing in the modes of the fields for \(gl_{n}\) (see Proposition 5.3), and of \(a^{+b,-b}_{\lambda_{b}\lambda_{b}}\) and \(a^{+(s+c),-(s+c)}_{\mu c\mu_{c}}\) for \(1\leq b\leq s\), \(1\leq c\leq r\), for the other classical algebras (see Subsections 5.3-5.5). 
This Heisenberg algebra is the centralizer in \(\tilde{\mathfrak{g}}\) of the cyclic element \[c_{w}(t)=c_{(\lambda,\mu)}(t): =r^{-1}\left(\sum_{b=1}^{s}\frac{1}{\lambda_{a}}\alpha_{1}^{b}\pm \sum_{c=1}^{r}\frac{1}{\mu_{c}}\alpha_{1}^{s+c}\right)\] \[=\sum_{b=1}^{s}a^{+b,-b}_{\lambda_{b}\lambda_{b}}(\frac{N}{ \lambda_{b}})+\sum_{c=1}^{r}a^{+(s+c),-(s+c)}_{\mu_{c}\mu_{c}}(\frac{N}{2\mu_ {c}})\in\mathfrak{g}(\mathbb{C}[t]).\] Note that this cyclic element \(c_{w}(t)\) is a linear combination of the smallest positive degree Heisenberg element \(\alpha_{1}^{a}\), in each component \(a=1,2,\dots,s+r\) of the decomposition \(n=\lambda_{1}+\dots+\lambda_{s}+\mu_{1}+\dots+\mu_{r}\). The element \(c_{w}(0)=c_{(\lambda,\mu)}(0)\) is a nilpotent element in the classical Lie algebra \(\mathfrak{g}\). Thus we have constructed a map from the conjugacy classes of the Weyl group \(W\) to the nilpotent elements in \(\mathfrak{g}\), viz. \(w\mapsto c_{w}(0)\). Below we describe this map explicitly for all the cases. Let \(w_{\lambda}\in W\) of \(gl_{n}\) be given by (5.12), the nilpotent element corresponding to this element in the Weyl group is \[gl_{n}: f_{\lambda} =\sum_{b=1}^{s}\frac{1}{\lambda_{b}}\sum_{i=1}^{\lambda_{b}-1}e^ {bb}_{i,i+1}\] \[=\sum_{b=1}^{s}\frac{1}{\lambda_{b}}\left(te^{bb}_{\lambda_{b},1} +\sum_{i=1}^{\lambda_{b}-1}e^{bb}_{i,i+1}\right)\Big{|}_{t=0}.\] This means that \(f_{\lambda}\) has the Jordan normal form decomposition with \(s\) Jordan blocks of size \(\lambda_{1}\), \(\lambda_{2}\),..., \(\lambda_{s}\). Let \(w_{(\lambda,\mu)}\) be given by (5.20), where in the case of \(so_{2n}\)\(r\) is even, the nilpotent element corresponding to this element in the Weyl group is \[so_{2n+1}: f_{(\lambda,\mu)}=\sum_{b=1}^{s}\frac{1}{\lambda_{b}}\sum_{i=1}^{ \lambda_{b}-1}e_{i,i+1}^{+b,-b}-\sum_{c=1}^{\frac{r}{2}}\big{(}\frac{1}{\mu_{2c- 1}}\big{(}\frac{\gamma_{r}^{-1}}{\sqrt{2}}e_{12}^{+(s+2c-1),-(s+2c-1)}+\] \[+\frac{1}{\sqrt{2}}e_{\mu_{2c-1}\mu_{2c}}^{+(s+2c-1),-(s+2c)}+ \frac{1}{\sqrt{2}}e_{\mu_{2c-1}\mu_{2c}}^{+(s+2c-1),+(s+2c)}+\sum_{j=2}^{\mu_{ 2c-1}-1}e_{j,j+1}^{+(s+2c-1),-(s+2c-1)}\big{)}+\] \[+\frac{1}{\mu_{2c-1}}\big{(}\frac{\sqrt{-1}}{\sqrt{2}}e_{\mu_{2c -1},\mu_{2c}}^{+(s+2c),-(s+2c)}-\frac{\sqrt{-1}}{\sqrt{2}}e_{\mu_{2c}-1,\mu_{2 c}}^{+(s+2c),+(s+2c)}+\sum_{j=1}^{\mu_{2c-1}-2}e_{j,j+1}^{+(s+2c),-(s+2c)} \big{)}\big{)}-\] \[-\frac{\eta}{\mu_{r}}\big{(}e_{\mu_{r}}^{+(s+r)}+\sum_{j=1}^{\mu _{r}-1}e_{j,j+1}^{+(s+r),-(s+r)}\big{)};\] \[sp_{2n}: f_{(\lambda,\mu)}=\sum_{b=1}^{s}\frac{1}{\lambda_{b}}\sum_{i=1}^{ \lambda_{b}-1}e_{i,i+1}^{+b,-b}+\sum_{c=1}^{r}\frac{1}{\mu_{c}}\big{(}e_{\mu_{ c}\mu_{c}}^{+(s+c),+(s+c)}+\sum_{i=1}^{\mu_{c}-1}e_{i,i+1}^{+(s+c),-(s+c)}\big{)};\] \[so_{2n}: f_{(\lambda,\mu)}=\sum_{b=1}^{s}\frac{1}{\lambda_{b}}\sum_{i=1}^{ \lambda_{b}-1}e_{i,i+1}^{+b,-b}-\sum_{c=1}^{\frac{r}{2}}\big{(}\frac{1}{\mu_{ 2c-1}}\big{(}\frac{\gamma_{r}^{-1}}{\sqrt{2}}e_{12}^{+(s+2c-1),-(s+2c-1)}+\] \[+\frac{1}{\sqrt{2}}e_{\mu_{2c-1}\mu_{2c}}^{+(s+2c-1),-(s+2c)}+ \frac{1}{\sqrt{2}}e_{\mu_{2c-1}\mu_{2c}}^{+(s+2c-1),+(s+2c)}+\sum_{j=2}^{\mu_{ 2c-1}-1}e_{j,j+1}^{+(s+2c-1),-(s+2c-1)}\big{)}+\] \[+\frac{1}{\mu_{2c-1}}\big{(}\frac{\sqrt{-1}}{\sqrt{2}}e_{\mu_{2c -1},\mu_{2c}}^{+(s+2c),-(s+2c)}-\frac{\sqrt{-1}}{\sqrt{2}}e_{\mu_{2c}-1,\mu_{ 2c}}^{+(s+2c),+(s+2c)}+\sum_{j=2}^{\mu_{2c-1}-2}e_{j,j+1}^{+(s+2c),-(s+2c)} \big{)}\big{)}.\] In the case of \(sp_{n}\), the nilpotent element \(f_{(\lambda,\mu)}\) has \(2s+r\) Jordan blocks of size 
\(\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2},\ldots,\lambda_{s},\lambda_{s},\,2\mu_{1},\,2\mu_{2},\ldots,2\mu_{r}\). For \(so_{2n}\) we again have \(2s+r\) Jordan blocks, but now of size \(\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2},\ldots,\lambda_{s},\lambda_{s},\,2\mu_{1}+1,2\mu_{2}-1,2\mu_{3}+1,2\mu_{4}-1,\ldots,2\mu_{r}-1\). For \(so_{2n+1}\) there are \(2s+r+1\) Jordan blocks of size \(\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2},\ldots,\lambda_{s},\lambda_{s},\,2\mu_{1}+1,2\mu_{2}-1,2\mu_{3}+1,2\mu_{4}-1,\ldots,2\mu_{r}-1,1\) if \(r\) is even, and \(2s+r\) Jordan blocks if \(r\) is odd, then of size \(\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2},\ldots,\lambda_{s},\lambda_{s},\,2\mu_{1}+1,2\mu_{2}-1,2\mu_{3}+1,2\mu_{4}-1,\ldots,2\mu_{r}+1\).

### A slightly different map

Unfortunately, the map defined in the previous subsection for \(so_{n}\) is not injective if we restrict it to the set of all elliptic conjugacy classes, i.e. all conjugacy classes with \(\lambda=\emptyset\). For instance, in \(so_{28}\) both \(\mu=(5,5,3,1)\) and \(\mu=(5,4,4,1)\) are mapped onto a nilpotent element with Jordan blocks of size \(11\), \(9\), \(7\), \(1\). We can, however, correct this for \(so_{2n}\) by using the following observation:

**Remark 6.1**: _The construction in Section 5 for \(so_{n}\) also holds if we permute the parts of \(\mu\). This can be seen as follows. Recall that, in order to obtain the construction of \(\tilde{w}_{(\lambda,\mu)}\), we each time combine two blocks of sizes \(2\mu_{2a-1}\) and \(2\mu_{2a}\) (two consecutive parts of \(\mu\)). In principle one can combine any two parts of \(\mu\), hence we can shuffle the parts of \(\mu\). This can be achieved by a permutation \(\pi\) of \(1,2,\ldots,r\), such that \(\pi(\mu)=(\mu_{\pi(1)},\mu_{\pi(2)},\ldots,\mu_{\pi(r)})\); then instead of \(f_{(\lambda,\mu)}\) we can consider \(f_{(\lambda,\pi(\mu))}\). Note that the element in the conjugacy class of the Weyl group of \(so_{n}\) does not change, but the nilpotent element changes. It now has Jordan blocks of size \(\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2},\ldots,\lambda_{s},\lambda_{s},\,2\mu_{\pi(1)}+1,2\mu_{\pi(2)}-1,2\mu_{\pi(3)}+1,2\mu_{\pi(4)}-1,\ldots,2\mu_{\pi(r)}-1\), for \(so_{2n}\) (in this case \(r\) is always even). 
For \(so_{2n+1}\), one thus gets \(\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2},\ldots,\lambda_{s},\lambda_{s},\,2\mu_{\pi(1)}+1,2\mu_{\pi(2)}-1,2\mu_{\pi(3)}+1,2\mu_{\pi(4)}-1,\ldots,2\mu_{\pi(r)}-1,1\), if \(r\) is even, and \(\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2},\ldots,\lambda_{s},\lambda_{s},\,2\mu_{\pi(1)}+1,2\mu_{\pi(2)}-1,2\mu_{\pi(3)}+1,2\mu_{\pi(4)}-1,\ldots,2\mu_{\pi(r)}+1\), if \(r\) is odd._

We now use the above remark and choose the permutation \(\pi\) as follows:
\[\pi(2i-1)=i,\ \mbox{for}\ i=1,\ldots,\left[\frac{r+1}{2}\right],\quad\mbox{and}\quad\pi(2i)=\left[\frac{r+1}{2}\right]+i,\ \mbox{for}\ i=1,\ldots,\left[\frac{r}{2}\right].\]
Then the pair \((\lambda,\mu)\) gets mapped to
\[\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2},\ldots,\lambda_{s},\lambda_{s},2\mu_{1}{+}1,2\mu_{2}{+}1,\ldots,2\mu_{\left[\frac{r+1}{2}\right]}{+}1,2\mu_{\left[\frac{r+3}{2}\right]}{-}1,2\mu_{\left[\frac{r+5}{2}\right]}{-}1,\ldots,2\mu_{r}{-}1, \tag{6.1}\]
in the case of \(so_{2n}\), and \(so_{2n+1}\) with \(r\) odd, and to
\[\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2},\ldots,\lambda_{s},\lambda_{s},2\mu_{1}{+}1,2\mu_{2}{+}1,\ldots,2\mu_{\left[\frac{r+1}{2}\right]}{+}1,2\mu_{\left[\frac{r+3}{2}\right]}{-}1,2\mu_{\left[\frac{r+5}{2}\right]}{-}1,\ldots,2\mu_{r}{-}1,1,\]
for \(so_{2n+1}\) with \(r\) even. This map is surjective, and its restriction to the elliptic conjugacy classes is injective for \(so_{2n}\). Surjectivity can be shown as follows. Suppose we have a Jordan block decomposition \(n_{1}\geq n_{2}\geq\cdots\geq n_{k}\) of \(n\). Then blocks of even size always come with even multiplicity. We construct the pair \((\lambda,\mu)\) with the largest number of parts of \(\lambda\) that is mapped to this Jordan normal form, so that the corresponding \(w\), when acting on the Cartan subalgebra \(\mathfrak{h}\), has the largest subspace of fixed points \(\mathfrak{h}^{w}\). For this we first make pairs of Jordan blocks which have the same size, say \(n_{j}\) and \(n_{j+1}\). Each such pair gives one part \(\lambda_{i}\) of \(\lambda\) with \(\lambda_{i}=n_{j}\). So assume that we have removed all pairs from this Jordan normal form; then only Jordan blocks of odd sizes are left, and their sizes are all different. Assume that these blocks have sizes \(m_{1}>m_{2}>\cdots>m_{\ell}\). Then \(\ell\) is even (resp. odd) for \(n\) even (resp. odd). We construct \(\mu\) as follows:
\[\mu_{j}=\frac{m_{j}-1}{2}\ \mbox{for}\ j=1,\ldots,\left[\frac{\ell+1}{2}\right],\ \mbox{and}\ \mu_{j}=\frac{m_{j}+1}{2}\ \mbox{for}\ j=\left[\frac{\ell+3}{2}\right],\ldots,\ell. \tag{6.2}\]
Then this pair \((\lambda,\mu)\) is mapped to the Jordan normal form with Jordan blocks \(n_{1}\geq n_{2}\geq\cdots\geq n_{k}\). To prove injectivity for \(so_{2n}\) on the elliptic conjugacy classes, i.e. all classes with \(\lambda=\emptyset\), we argue as follows. Let \(\mu^{1}\neq\mu^{2}\) be two partitions of \(n\). If the numbers of parts of the two partitions are different, we clearly obtain different Jordan normal forms. So assume that both \(\mu\)'s have the same number of parts; since we are dealing with \(so_{2n}\), this number of parts must be even, say \(r=2k\). The map which gives the Jordan blocks, given in (6.1), produces blocks of sizes
\[2\mu_{1}+1\geq 2\mu_{2}+1\geq\ldots\geq 2\mu_{k}+1>2\mu_{k+1}-1\geq 2\mu_{k+2}-1\geq\cdots\geq 2\mu_{r}-1. \tag{6.3}\]
Now clearly \(\mu^{1}\) and \(\mu^{2}\), when different, produce different Jordan blocks (6.3). 
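For experimentation, the passage from the pair \((\lambda,\mu)\) to the Jordan type for \(so_{2n}\) can be encoded in a few lines. The following is a minimal Python sketch (our own illustration; the function names `jordan_type_old` and `jordan_type_new` are not taken from the text): the first function pairs consecutive parts of \(\mu\) as in Subsection 6.1, the second uses the reordering that leads to (6.1). On the \(so_{28}\) example above, the map of Subsection 6.1 does not separate the two elliptic classes, while the map (6.1) does.

```python
def jordan_type_old(lam, mu):
    """Map of Subsection 6.1 for so_{2n} (lam, mu partitions, r = len(mu) even):
    consecutive parts of mu are paired, giving blocks
    2*mu_1+1, 2*mu_2-1, 2*mu_3+1, 2*mu_4-1, ..., 2*mu_r-1."""
    blocks = [part for part in lam for _ in range(2)]   # each lambda_a gives two blocks
    blocks += [2 * m + (1 if i % 2 == 0 else -1) for i, m in enumerate(mu)]
    return sorted(blocks, reverse=True)

def jordan_type_new(lam, mu):
    """Map (6.1): the first [(r+1)/2] parts of mu get +1, the remaining ones get -1."""
    k = (len(mu) + 1) // 2
    blocks = [part for part in lam for _ in range(2)]
    blocks += [2 * m + (1 if i < k else -1) for i, m in enumerate(mu)]
    return sorted(blocks, reverse=True)

# The so_28 example: mu = (5,5,3,1) and mu = (5,4,4,1) are not separated by the
# map of Subsection 6.1, but they are separated by the map (6.1).
print(jordan_type_old([], [5, 5, 3, 1]))   # [11, 9, 7, 1]
print(jordan_type_old([], [5, 4, 4, 1]))   # [11, 9, 7, 1]
print(jordan_type_new([], [5, 5, 3, 1]))   # [11, 11, 5, 1]
print(jordan_type_new([], [5, 4, 4, 1]))   # [11, 9, 7, 1]
```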
Unfortunately, this new map is not injective for \(so_{2n+1}\) with \(n>3\). For instance, in \(so_{9}\) both \(\mu=(2,2)\) and \(\mu=(2,1,1)\) give the Jordan normal form with blocks of sizes 5, 3, 1. Note that the above construction in the case of \(so_{2n}\) also shows that in each fibre of the map given in (6.1) there is a unique conjugacy class \([w]\) which has the smallest subspace of fixed points \(\mathfrak{h}^{w}\). One constructs the pair \((\lambda,\mu)\) corresponding to \([w]\) in a similar way as in the proof of surjectivity. The difference is that now we want the element \(w\) with the smallest subspace \(\mathfrak{h}^{w}\). This means that we want the number of parts of \(\lambda\) to be as small as possible. So we produce \(\lambda\) only from the even sized Jordan blocks. If we remove these, we only have an even number of odd sized Jordan blocks left, say \(m_{1}\geq m_{2}\geq\cdots\geq m_{\ell}\). From this we construct \(\mu\) as in (6.2). Since the map is injective on the elliptic conjugacy classes of \(so_{m_{1}+\cdots+m_{\ell}}\), the pair \((\lambda,\mu)\) constructed in this way is unique and has the desired property. Unfortunately, we do not know how to modify the construction of Subsection 6.1 such that it becomes injective when restricted to the set of elliptic conjugacy classes of \(W\) for \(so_{2n+1}\), \(n>3\). A surjective map from the set of conjugacy classes of \(W\) to the set of nilpotent orbits, which is injective on the set of elliptic conjugacy classes, for any simple Lie algebra \(\mathfrak{g}\), was constructed by Lusztig, see [31]. It coincides with our map for \(\mathfrak{g}=sp_{n}\), but not for \(\mathfrak{g}=so_{n}\).

## 7 Hierarchies of KP type related to \(x_{\infty}\) and \(\hat{\mathfrak{g}}\) reductions

### \(\hat{gl}_{n}\) and the \(\lambda\)-KdV hierarchy

Instead of considering the group orbit of \(GL_{\infty}\), we want to describe the orbit of \(\hat{SL}_{n}\), the central extension of the loop group of type \(SL_{n}\), which, as a subgroup of \(A_{\infty}\), leaves the space \(F^{(m)}\) invariant. We find that an \(\hat{SL}_{n}\) tau-function satisfies more Hirota bilinear equations, besides (1.34). Using Lemma 2.4 of [16], an element \(g\in A_{\infty}\) has the following action on the generators of the Clifford algebra \(C\ell\): \[g\psi_{j}^{+}g^{-1}=\sum_{i}a_{ij}\psi_{i}^{+},\quad g\psi_{k}^{-}g^{-1}=\sum_{i}b_{ik}\psi_{i}^{-},\quad\mbox{with}\ \sum_{i}a_{ij}b_{-i,k}=\delta_{j,-k}.\] If \(g\in\hat{GL}_{n}\), then the \(a_{ij}\) and \(b_{ij}\) satisfy \(a_{i+n,j+n}=a_{ij}\) and \(b_{i+n,j+n}=b_{ij}\). 
Using this we can show that \(g\otimes g\) commtes with \(\sum_{i}\psi_{i}^{+}\otimes\psi_{pn-i}^{-}\) for \(p=0,1,2,\ldots\): \[\sum_{i}g\psi_{i}^{+}\otimes g\psi_{pn-i}^{-} =\sum_{i}g\psi_{i}^{+}g^{-1}g\otimes g\psi_{pn-i}^{-}g^{-1}g\] \[=\sum_{i,j,k}a_{ij}\psi_{j}^{+}g\otimes b_{pn-i,k}\psi_{k}g\] \[=\sum_{i,j,k}a_{ij}b_{-i,k-pn}\psi_{j}^{+}g\otimes\psi_{k}g\] \[=\sum_{j}\psi_{j}^{+}g\otimes\psi_{pn-j}g.\] Since \(\sum_{i}\psi_{i}^{+}|0\rangle\otimes\psi_{pn-i}^{-}|0\rangle=0\), we have that for all \(p=0,1,2,\ldots\) (\(p=0\) is the \(s\)-component KP hierarchy (1.34)): \[\sum_{i\in\frac{1}{2}+\mathbb{Z}}\psi_{i}^{+}g|0\rangle\otimes\psi_{pn-i}^{-} g|0\rangle=0,\] which turns into the following equation in the \(\lambda\)-reduction \[\sum_{j=1}^{s}\sum_{i\in\frac{1}{2}+\mathbb{Z}}\psi_{i}^{+j}\tau\otimes\psi_{ p\lambda_{j}-i}^{-j}\tau=0,\quad p=0,1,2,\ldots,\] and, after using the isomorphism \(\sigma\), into \[\begin{split}\operatorname{Res}_{z=0}&\sum_{j=1}^{s }(-1)^{|\underline{k}+\underline{\ell}|_{j-1}}z^{p\lambda_{j}+k_{j}-\ell_{j}-2 +2\delta_{js}}e^{z\cdot(t^{(j)}-\overline{t}^{(j)})}e^{z^{-1}\cdot(\tilde{ \mathfrak{T}}_{\bar{t}^{(j)}}-t^{(j)}-\tilde{\mathfrak{d}}_{t^{(j)}})}\tau_{ \underline{k}+\underline{e}_{s}-\underline{e}_{j}}(t)\times\\ &\tau_{\underline{\ell}+\underline{e}_{j}-\underline{e}_{s}}( \overline{t})dz=0.\end{split} \tag{7.1}\] This is the Hirota bilinear equation for the \(\lambda\)-KdV. The special case when \(s=1\) and \(\lambda_{1}=n\) is the \(n\)-Gelfand-Dickey hierarchy, and \(n=2\) is the Korteweg-de Vries (KdV) hierarchy. **Proposition 7.1**: _Equation (7.1) for \(p=0,1,\ldots\) is equivalent to Equation (1.34), which is equal to (7.1) for \(p=0\), and_ \[\begin{split}\operatorname{Res}_{z=0}&\sum_{j=1}^{s }(-1)^{|\underline{k}+\underline{\ell}|_{j-1}}z^{k_{j}-\ell_{j}-2+2\delta_{js}}e ^{z\cdot(t^{(j)}-\overline{t}^{(j)})}e^{z^{-1}\cdot(\tilde{\mathfrak{d}}_{ \bar{t}^{(j)}}-t^{(j)}-\tilde{\mathfrak{d}}_{t^{(j)}})}\left(\sum_{i=1}^{s} \frac{\partial\tau_{\underline{k}+\underline{e}_{s}-\underline{e}_{j}}(t)}{ \partial t^{(i)}_{p\lambda_{i}}}\right)\times\\ &\tau_{\underline{\ell}+\underline{e}_{j}-\underline{e}_{s}}( \overline{t})dz=0.\end{split} \tag{7.2}\] **Proof** Differentiate Equation (7.1) for \(p=0\) by \(\sum_{j=1}^{s}\frac{\partial}{\partial t^{(j)}_{q\lambda_{j}}}\), for \(q>0\). We thus get that the sum of the left-hand side of (7.1) for \(p=q\) and the left-hand side of (7.2) for \(p=q\) is equal to zero. Hence for all \(p>0\), (7.1) implies (7.2) and vice versa. \(\square\) Note that if equation (7.1) holds for \(p=0\), then the equations \[\sum_{i=1}^{s}\frac{\partial\tau_{m}(t)}{\partial t^{(i)}_{p\lambda_{i}}}=C_{p} \tau_{m}(t),\ \mbox{with}\ C_{p}\in\mathbb{C},\qquad p=1,2,\ldots. \tag{7.3}\] imply (7.2) and hence (7.1) for \(p>1\). In fact one can prove **Proposition 7.2**: _Equation (7.1) is equivalent to equation (7.1) for \(p=0\) together with Equation (7.3)._ Equation (7.1) turns into \((p=0,1,\ldots)\): \[\mbox{Res}_{z=0}\sum_{j=1}^{s}V^{+}(\underline{\alpha};x,t,z)z^{p\lambda_{j}}E _{jj}V^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz=0. \tag{7.4}\] Using the fundamental lemma (see [15], Lemma 4.1), we deduce the following expression for the matrix pseudo-differential operators \[\sum_{j=1}^{s}(P^{+}(\underline{\alpha};x,t,\partial)R^{+}(\underline{\alpha };\partial)\partial^{p\lambda_{j}}E_{jj}R^{-}(\underline{\beta};x,t,\partial) ^{*}P^{-}(\underline{\beta};x,t,\partial)^{*})_{-}=0. 
\tag{7.5}\] This gives that \[\sum_{j=1}^{s}(P^{+}(\underline{\beta};x,t,\partial)\partial^{p\lambda_{j}}E _{jj}P^{+}(\underline{\beta};x,t,\partial)^{-1})_{-}=0, \tag{7.6}\] and thus that \[\sum_{j=1}^{s}(L^{p\lambda_{j}}C^{j})_{-}=0. \tag{7.7}\] From this it is obvious that \({\cal L}=\sum_{j=1}^{s}L^{\lambda_{j}}C^{j}\) is a differential operator that also satisfies \[\frac{\partial{\cal L}}{\partial t^{j}_{i}}=[(L^{i}C^{j})_{+},{\cal L}]. \tag{7.8}\] Let \(D_{p}=\sum_{j=1}^{s}\frac{\partial}{\partial t^{(j)}_{p\lambda_{j}}}\), then \[D_{p}(P^{+})=D_{p}(L)=D_{p}(C^{j})=D_{p}({\cal L})=0. \tag{7.9}\] We now have all the ingredients to prove Proposition 7.2. **Proof of Proposition 7.2** We only have to show that (7.2) implies (7.3). So assume that (7.2) holds. We first take \(\underline{k}=\underline{\ell}\), and \(t=\overline{t}\), this gives, see (7.9), that \[\frac{\partial}{\partial t^{(j)}_{1}}\left(\frac{D_{p}(\tau_{\underline{ \alpha}}(x,t))}{\tau_{\underline{\alpha}}(x,t)}\right)=0. \tag{7.10}\] Next take \(j=s\), \(\underline{\ell}=\underline{k}+\underline{e}_{i}-\underline{e}_{j}\), and \(t=\overline{t}\) in (7.2), this gives \[\frac{D_{p}(\tau_{\underline{\alpha}}(x,t))}{\tau_{\underline{\alpha}}(x,t)}= \frac{D_{p}(\tau_{\underline{\alpha}+\underline{e}_{i}-\underline{e}_{j}}(x,t) )}{\tau_{\underline{\alpha}+\underline{e}_{i}-\underline{e}_{j}}(x,t)},\] which means that \[\frac{D_{p}(\tau_{\underline{\alpha}}(x,t))}{\tau_{\underline{\alpha}}(x,t)}= \frac{D_{p}(\tau_{\underline{\beta}}(x,t))}{\tau_{\underline{\beta}}(x,t)} \quad\text{ for all }\underline{\alpha}\text{ and }\underline{\beta}. \tag{7.11}\] Now take (7.2) for \(\underline{k}=\underline{\alpha}-\underline{e}_{s}+\underline{e}_{i}\) and \(\underline{\ell}=\underline{\alpha}+\underline{e}_{s}-\underline{e}_{m}\) and divide by \(\tau(\underline{\alpha};x,t)\tau(\underline{\alpha};\overline{x},\overline{t})\) and use (7.11), to obtain \[\operatorname{Res}_{z=0}\sum_{j=1}^{s}\exp\left(-\sum_{i=1}^{\infty}\frac{ \partial}{\partial\overline{t}_{i}^{(j)}}\frac{z^{-j}}{j}\right)\left(\frac{D _{p}(\tau_{\underline{\beta}}(x,t))}{\tau_{\underline{\beta}}(x,t)}\right)V^{+ }(\underline{\alpha};x,t)E_{jj}V^{-}(\underline{\alpha};\overline{x},\overline {t})^{t}dz=0. \tag{7.12}\] Subtract a multiple of (7.4) for \(p=0\) and \(\underline{\beta}=\underline{\alpha}\), this gives \[\operatorname{Res}_{z=0}\sum_{j=1}^{s}\left\{\exp\left(-\sum_{i=1}^{\infty} \frac{\partial}{\partial\overline{t}_{i}^{(j)}}\frac{z^{-j}}{j}\right)-1 \right\}\left(\frac{D_{p}(\tau_{\underline{\beta}}(x,t))}{\tau_{\underline{ \beta}}(x,t)}\right)V^{+}(\underline{\alpha};x,t)E_{jj}V^{-}(\underline{\alpha };\overline{x},\overline{t})^{t}dz=0. \tag{7.13}\] Introduce the pseudo-differential operator \(S(\underline{\beta};x,t,\partial)\) by \[S(\underline{\beta};x,t,z)=\sum_{j=1}^{s}\left\{\exp\left(-\sum_{i=1}^{\infty }\frac{\partial}{\partial\overline{t}_{i}^{(j)}}\frac{z^{-j}}{j}\right)-1 \right\}\left(\frac{D_{p}(\tau_{\underline{\beta}}(x,t))}{\tau_{\underline{ \beta}}(x,t)}\right)E_{jj}.\] Since (7.10) holds \(\partial S(\underline{\beta};x,t,\partial)=S(\underline{\beta};x,t,\partial)\partial\). Hence, using the fundamental Lemma, i.e. 
Lemma 4.1 of [15], we deduce that \[0 =(P^{+}(\underline{\alpha};x,t,\partial)R^{+}(\underline{\alpha };\partial)S(\underline{\beta};x,t,\partial)R^{+}(\underline{\alpha}; \partial)^{-1}P^{+}(\underline{\alpha};x,t,\partial)^{-1})_{-}\] \[=(P^{+}(\underline{\alpha};x,t,\partial)S(\underline{\beta};x,t, \partial)P^{+}(\underline{\alpha};x,t,\partial)^{-1})_{-}\] \[=P^{+}(\underline{\alpha};x,t,\partial)S(\underline{\beta};x,t, \partial)P^{+}(\underline{\alpha};x,t,\partial)^{-1}.\] Thus \(S(\underline{\beta};x,t,\partial)=0\), which gives that \[\frac{\partial}{\partial t_{i}^{(j)}}\left(\frac{D_{p}(\tau_{\underline{\beta }}(x,t))}{\tau_{\underline{\beta}}(x,t)}\right)=0,\quad 1\leq j\leq s,\quad i=1,2,\ldots.\] Hence, \[\frac{D_{p}(\tau_{\underline{\beta}}(x,t))}{\tau_{\underline{\beta}}(x,t)}=C_ {p}\in\mathbb{C}\] and because of (7.11), \(C_{p}\) must be the same for all \(\underline{\beta}\). Thus (7.3) holds. ### \(\hat{SO}_{2n}\) and the \((\lambda,\mu)\)-reduced DKP hierarchy The group \(\hat{SO}_{2n}\) leaves the module \(F_{d}^{\overline{a}}\) invariant. It consists of all elements \(g\in D_{\infty}\) that satisfy \(g\phi_{i}g^{-1}=\sum_{i}a_{ji}\phi_{j}\), with \(\sum_{k}a_{ki}a_{-k+1,j}=\delta_{i}\), and hence \(\sum_{k}a_{ik}a_{j,-k+1}=\delta_{i+j,1}\), and \(a_{i+2n,j+2n}=a_{ij}\). Since for \(p=0,1,2,\ldots\) \[(g\otimes g)\sum_{i\in\mathbb{Z}}\phi_{i}\otimes\phi_{2pn-i+1} =\sum_{i\in\mathbb{Z}}g\phi_{i}g^{-1}g\otimes g\phi_{2pn-i+1}g^{-1}g \tag{7.15}\] \[=\sum_{i,j,k\in\mathbb{Z}}a_{ji}a_{k,2pn-i+1}\phi_{j}g\otimes\phi_ {k}g\] \[=\sum_{i,j,k\in\mathbb{Z}}a_{ji}a_{k-2pn,-i+1}\phi_{j}g\otimes\phi _{k}g\] \[=\sum_{j\in\mathbb{Z}}\phi_{j}g\otimes\phi_{2pn-j+1}g\] \[=\sum_{j\in\mathbb{Z}}\phi_{j}\otimes\phi_{2pn-j+1}(g\otimes g),\] and \[\sum_{i\in\mathbb{Z}}\phi_{i}|\overline{a}\rangle\otimes\phi_{2pn-i+1}| \overline{a}\rangle=0,\] we have that elements \(\tau_{a}\in F_{d}^{\overline{a}}\), which belong to the \(\hat{SO}_{2n}\)-group orbit of \(|\overline{a}\rangle\) satisfy \[\sum_{i\in\mathbb{Z}}\phi_{i}\tau_{a}\otimes\phi_{2pn-i+1}\tau_{a}=0,\quad \text{for }p=0,1,2,\ldots.\] This equation can be rewritten, with respect to the relabeling with respect to \((\lambda,\mu)\) to: \[\left(\sum_{b=1}^{s}\sum_{i\in\frac{1}{2}+\mathbb{Z}}\left(\psi_{i}^{+b} \otimes\psi_{p\lambda_{b}-i}^{-b}+\psi_{i}^{-b}\otimes\psi_{p\lambda_{b}-i}^{ +b}\right)+\sum_{c=s+1}^{r+s}\sum_{i\in\mathbb{Z}}(-1)^{i}\tilde{\phi}_{i}^{c }\otimes\tilde{\phi}_{2p\mu_{c}-i}^{c}\right)(\tau_{a}\otimes\tau_{a})=0, \tag{7.16}\] **Note that \(r\) is even in this case.** In terms of the fields we obtain: \[\begin{split}\operatorname{Res}_{z=0}&\left(\sum_{b =1}^{s}z^{p\lambda_{b}}\left(\psi^{+b}(z)\otimes\psi^{-b}(z)+\psi^{-b}(z) \otimes\psi^{+b}(z)\right)+\right.\\ &\qquad\qquad+\left.\sum_{c=s+1}^{r+s}z^{2p\mu_{c}-1}\tilde{\phi}^ {c}(z)\otimes\tilde{\phi}^{c}(-z)\right)(\tau_{a}\otimes\tau_{a})dz=0,\quad \text{for }p=0,1,2,\ldots.\end{split} \tag{7.17}\] Now using \(\sigma\) and substituting the vertex operators (1.32) and (3.71) we get the following hierarchy of differential equations. 
We write \[\sigma(\tau_{a})=\tau^{a}(q,t,\theta)=\sum_{\underline{k}\in\mathbb{Z}^{s}, \underline{\ell}\in\mathbb{Z}^{\frac{r}{2}},\,|\underline{k}|+|\underline{\ell }|=a\operatorname{mod}2}\tau_{\underline{k},\underline{\ell}}(t)q^{\underline{ k}}\theta^{\underline{\ell}},\] where \(q^{\underline{k}}=q_{1}^{k_{1}}q_{2}^{k_{2}}\cdots q_{s}^{k_{s}}\) and \(\theta^{\underline{\ell}}=\theta_{1}^{\ell_{1}}\theta_{2}^{\ell_{2}}\cdots\theta _{\underline{\ell}}^{\ell_{\underline{\ell}}}\). Then (7.15) turns into the following set of equations for all \(\underline{k},\overline{k}\in\mathbb{Z}^{s}\), \(\underline{\ell},\overline{\ell}\in\mathbb{Z}_{2}^{\frac{r}{2}}\), such that \(|\underline{k}|+|\underline{\ell}|+1=|\overline{k}|+|\overline{\underline{ \ell}}|+1=a\mod 2\) and \(p=0,1,2,\ldots\), \[\operatorname{Res}_{z=0} \big{(}\sum_{b=1}^{s}\big{(}(-1)^{|\underline{k}+\overline{k}|_{b -1}}z^{p\lambda_{b}+k_{b}-\overline{k}_{b}-2+2\delta_{bs}}e^{z\cdot t^{(b)}-z \cdot\overline{t}^{(b)}}e^{z-1\cdot\tilde{\alpha}_{\overline{t}^{(b)}}-z^{-1 }\cdot\tilde{\partial}_{t^{(b)}}}\tau_{\underline{k}+\underline{\epsilon}_{s }-\underline{\epsilon}_{b},\underline{\ell}}(t)\tau_{\overline{k}+\underline{ \epsilon}_{b}-\underline{\epsilon}_{s},\underline{\ell}}(\overline{t})+\] \[\qquad\qquad-(-1)^{|\underline{k}+\overline{k}|_{b-1}}(-z)^{p \lambda_{b}-k_{b}+\overline{k}_{b}-2-2\delta_{bs}}e^{(-z)\cdot\overline{t}^{( b)}-(-z)\cdot t^{(b)}}\times\] \[\qquad\qquad\qquad\qquad\qquad e^{(-z)^{-1}\cdot\tilde{\partial }_{t^{(b)}}-(-z)^{-1}\cdot\tilde{\partial}_{\overline{t}^{(b)}}}\tau_{ \underline{k}+\underline{\epsilon}_{s}+\underline{\epsilon}_{b},\underline{ \ell}}(t)\tau_{\overline{k}-\underline{\epsilon}_{b}-\underline{\epsilon}_{s },\underline{\overline{\ell}}}(\overline{t})\big{)}+\] \[+\frac{1}{2}\sum_{c=1}^{r}(-1)^{|\underline{k}+\overline{k}|+[ \underline{\ell}+\overline{\underline{\ell}}]}\big{[}\tfrac{c-1}{2}\big{]}^{+ (c-1)(\ell_{1}\left[\frac{c+1}{2}\right]}+^{\overline{\ell}}\big{[}\tfrac{c +1}{2}\big{]}^{+1})^{2}z^{2p\mu_{c}-1}e^{z\circ t^{(s+c)}-z\circ\overline{t}^{ (s+c)}}\times\] \[\qquad\qquad\qquad\qquad\qquad e^{2(z^{-1}\circ(\tilde{\partial }_{\overline{t}^{(s+c)}}-c\partial_{t^{(s+c)}})}\tau_{\underline{k}+\underline{ \epsilon}_{s},\underline{\ell}+\underline{\epsilon}_{\overline{\ell}}\left[ \frac{c+1}{2}\right]}(t)\tau_{\overline{k}-\underline{\epsilon}_{s},\underline {\overline{\ell}}+\underline{\epsilon}_{\overline{\ell}}\left[\frac{c+1}{2} \right]}(\overline{t})\big{)}dz=0. \tag{7.16}\] Note, that all the equations for \(p=0\) form the DKP hierarchy. **The case \(r=0\)** Assume that \(\mu=0\) thus \(r=0\). The wave operators \(V^{\pm}(\underline{\alpha}:x,t,z)\) also satisfy \((p=0,1,\ldots)\): \[\operatorname{Res}_{z=0}\sum_{j=1}^{s}V^{+}(\underline{\alpha};x,t,z)\left(z^ {p\lambda_{j}}E_{jj}-(-z)^{p\lambda_{j}}E_{s+j,s+j}\right)V^{-}(\underline{ \beta};\overline{x},\overline{t},z)^{t}dz=0. \tag{7.17}\] Using the fundamental lemma (see [15], Lemma 4.1), we deduce the following expression for the matrix pseudo-differential operators \[\sum_{j=1}^{s}(P^{+}(\underline{\alpha};x,t,\partial)R^{+}(\underline{\alpha}; \partial)\left(\partial^{p\lambda_{j}}E_{jj}+(-\partial)^{p\lambda_{j}}E_{s+j, s+j}\right)R^{-}(\underline{\beta};x,t,\partial)^{*}P^{+}(\underline{\beta};x,t, \partial)^{-1})_{-}=0. 
\tag{7.18}\] From this it is obvious that \(\mathcal{L}=\sum_{j=1}^{s}L^{\lambda_{j}}E^{j}\) is a differential operator that satisfies \[\frac{\partial\mathcal{L}}{\partial t_{i}^{j}}=[(L^{i}D^{j})_{+},\mathcal{L}].\] If we define \(\mathcal{M}=\sum_{j=1}^{s}L^{\lambda_{j}}D^{j}\), then \(\mathcal{L}^{2}=\mathcal{M}^{2}\). **The case \(r>0\) (first approach)** Recall that \(r\) is always even. Following the construction of Section 4.4, we set all \(t_{j}^{s+c}=\overline{t}_{j}^{s+c}=0\), for \(1\leq c\leq r\) in Equation (7.16) and replace all other \(t_{1}^{a}\) (resp. \(\overline{t}_{1}^{a}\)) by \(t_{1}^{a}+x\) (resp. \(\overline{t}_{1}^{a}=\overline{x}\)). Let \(V^{\pm}(\alpha;x,t,z)\) be given as before (but of course with the substitution \(t_{j}^{s+c}=0\)) and \(W^{\pm}(\alpha;x,t,z)\) as in (4.26), then (7.16) turns into \[\begin{split}\operatorname{Res}_{z=0}&\sum_{j=1}^{s}V^{+ }(\underline{\alpha};x,t,z)\left(z^{p\lambda_{j}}E_{jj}-(-z)^{p\lambda_{j}}E_{s+ j,s+j}\right)V^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz=\\ &\operatorname{Res}_{z=0}&\frac{1}{2}\sum_{i=1}^{r}W^ {+}(\underline{\alpha};x,t,z)z^{2p\mu_{i}-1}E_{ii}W^{-}(\underline{\beta}; \overline{x},\overline{t},z)^{t}dz.\end{split} \tag{7.19}\] Applying the fundamental lemma gives \[\begin{split}\sum_{j=1}^{s}&(P^{+}(\underline{\alpha}; x,t,\partial)R^{+}(\underline{\alpha};\partial)\left(\partial^{p\lambda_{j}}E_{ jj}-(-\partial)^{p\lambda_{j}}E_{s+j,s+j}\right)R^{-}(\underline{\beta};x,t, \partial)^{*}P^{-}(\underline{\beta};x,t,\partial)^{*})_{-}=\\ &(P^{+}(\underline{\alpha};x,t,\partial)R^{+}(\underline{\alpha}; \partial)\left(\partial^{p\lambda_{j}}E_{jj}+(-\partial)^{p\lambda_{j}}E_{s+ j,s+j}\right)R^{-}(\underline{\beta};x,t,\partial)^{*}P^{+}(\underline{\beta};x,t, \partial)^{-1}J)_{-}=\\ &\frac{1}{2}\sum_{i=1}^{r}\sum_{n=0}^{2p\mu_{i}}W^{+}( \underline{\alpha};x,t)^{n}E_{ii}\partial^{-1}\left(W^{-}(\underline{\beta};x,t)^{2p\mu_{i}-n}\right)^{t}.\end{split} \tag{7.20}\] Note that if \(p=0\) and \(\underline{\alpha}=\underline{\beta}\), the right-hand side of (7.20) is equal to \(0\). Set \(\underline{\alpha}=\underline{\beta}\) in (7.20), and define \(L\) and \(D^{j}\) as before, then \(\mathcal{L}=\sum_{j=1}^{s}L^{\lambda_{j}}E^{j}\) satisfies the Lax equation \(\frac{\partial\mathcal{L}}{\partial t_{j}^{2}}=[(L^{j}D^{a})_{+},\mathcal{L}]\) and is equal to \[\mathcal{L}(\underline{\alpha};x,t)=\mathcal{L}(\underline{\alpha};x,t)_{+}+ \frac{1}{2}\sum_{i=1}^{r}\sum_{n=0}^{2\mu_{i}}W^{+}(\underline{\alpha};x,t)^{ n}E_{ii}\partial^{-1}\left(JW^{-}(\underline{\alpha};x,t)^{2\mu_{i}-n}\right)^{t}.\] **The case \(r>0\) (second approach)** We start with the following observation. **Remark 7.3**: _Note that in principle we can permute the elements of \(\mu\) in such a way that \(\mu_{r}\) is no longer the smallest part of \(\mu\). We will do this allowing \(\mu_{r}\) to have any value greater than zero. We will assume that all the other \(\mu_{i}\) are still in decreasing order, hence that \((\mu_{1},\mu_{2},\ldots,\mu_{r-1})\) is a genuine partition. From now on always whenever we use the second approach we will assume this, viz. 
that \(\mu_{r}\) is no longer the smallest part of \(\mu\)._

As before we assume that \(r\geq 1\); in this case \(s\) can be \(0\) and \(r=2p\). We follow the approach of **The DKP hierarchy (second approach)** and define the \((2s+1)\times(2s+1)\)-matrix valued wave functions by (4.38), where the coefficients \(P^{\pm}(\underline{\alpha};x,t,\pm z)\) are given by (4.24) and (4.39) and the other matrices by (4.25). The \((2s+1)\times(r-1)\)-matrix \(W^{\pm}(\underline{\alpha};x,t,z)\) is again defined as in (4.27) and (4.41), and \(T^{\pm}(\underline{\alpha};x,t)=0\) in this case, because \(r=2p\). Equation (7.16) now translates into the following bilinear identity:
\[\begin{split}\operatorname{Res}_{z=0}&\tilde{V}^{+}(\underline{\alpha};x,t,z)J_{k}(z)\tilde{V}^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz=\\ &\operatorname{Res}_{z=0}\frac{1}{2}W^{+}(\underline{\alpha};x,t,z)M_{k}(z)W^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz,\quad k=0,1,2,\ldots,\end{split} \tag{7.21}\]
where
\[J_{k}(z)=z^{2\mu_{r}k-1}E_{2s+1,2s+1}+\sum_{j=1}^{s}z^{\lambda_{j}k}E_{jj}-(-z)^{\lambda_{j}k}E_{s+j,s+j},\quad M_{k}(z)=\sum_{j=1}^{r-1}z^{2\mu_{j}k-1}E_{j,j}. \tag{7.22}\]
Note that \(J(\partial)=J_{0}(\partial)\). Using the fundamental lemma, we obtain
\[\begin{split}\left(P^{+}(\underline{\alpha};x,t,\partial)J_{k}(\partial)P^{-}(\underline{\alpha};x,t,\partial)^{*}\right)_{-}\\ =\frac{1}{2}\sum_{a=1}^{r-1}\sum_{j=0}^{2k\mu_{a}}W^{+}(\underline{\alpha};x,t)^{j}E_{aa}\partial^{-1}\left(W^{-}(\underline{\alpha};x,t)^{2k\mu_{a}-j}\right)^{t}.\end{split} \tag{7.23}\]
Then for \(k=0\) we again obtain (4.46), hence we define \(\hat{V}^{\pm}\) and \(\hat{P}^{\pm}\) as in (4.47), then we obtain (4.49), (4.51) and (4.53). Hence we can define the Lax operators as in (4.54) and they clearly satisfy (4.55). Define now \(\hat{W}^{\pm}(\underline{\alpha};x,t,z)=(P^{\pm}(\underline{\alpha};x,t,z)^{0})^{-1}W^{\pm}(\underline{\alpha};x,t,z)\), then (7.23) gives
\[\begin{split}&\left(\hat{P}^{+}(\underline{\alpha};x,t,\partial)J_{k}(\partial)J(\partial)^{-1}\hat{P}^{-}(\underline{\alpha};x,t,\partial)^{-1}J(\partial)\right)_{-}J(\partial)^{-1}=\\ =\frac{1}{2}\sum_{a=1}^{r-1}\sum_{j=0}^{2k\mu_{a}}\hat{W}^{+}(\underline{\alpha};x,t)^{j}E_{aa}\partial^{-1}\left(\hat{W}^{-}(\underline{\alpha};x,t)^{2k\mu_{a}-j}\right)^{t}J(\partial)^{-1}\end{split} \tag{7.24}\]
and
\[\mathcal{L}(\underline{\alpha};x,t,\partial)=L(\underline{\alpha};x,t,\partial)^{2\mu_{r}}D^{s+r}(\underline{\alpha};x,t,\partial)+\sum_{a=1}^{s}L(\underline{\alpha};x,t,\partial)^{\lambda_{a}}E^{a}(\underline{\alpha};x,t,\partial).\]
Then clearly \(\mathcal{L}\) satisfies the same Lax equations and
\[\mathcal{L}(\underline{\alpha};x,t,\partial)=\mathcal{L}(\underline{\alpha};x,t,\partial)_{\geq}+\frac{1}{2}\sum_{a=1}^{r-1}\sum_{j=0}^{2\mu_{a}}\hat{W}^{+}(\underline{\alpha};x,t)^{j}E_{aa}\partial^{-1}\left(\hat{W}^{-}(\underline{\alpha};x,t)^{2\mu_{a}-j}\right)^{t}J(\partial)^{-1}.\]

### \(\hat{SO}_{2n+1}\) and the \((\lambda,\mu)\)-reduced DKP hierarchy

The group \(\hat{SO}_{2n+1}\) acts on both \(F_{b}\) and \(F_{d}\) and leaves the module \(F_{d}^{\overline{\alpha}}\) invariant [16]. It consists of all elements \(g\in B_{\infty}\) (resp. \(D_{\infty}\)) that satisfy \(g\tilde{\phi}_{j}g^{-1}=\sum_{i}a_{ji}\tilde{\phi}_{i}\) (N.B. in the \(D_{\infty}\)-case the tildes have to be removed, since in that case this is a genuine field), with \(\sum_{k}a_{ik}a_{j,-k}=\delta_{i,-j}\) and \(a_{i+2n+1,j+2n+1}=a_{ij}\).
Let \(\epsilon=0\) for \(F_{b}\) and \(\frac{1}{2}\) for \(F_{d}\). Since for \(p=0,1,2,\ldots\) \[(g\otimes g)\sum_{i\in\epsilon+\mathbb{Z}}\tilde{\phi}_{i}\otimes \tilde{\phi}_{p(2n+1)-i} =\sum_{i\in\epsilon+\mathbb{Z}}g\tilde{\phi}_{i}g^{-1}g\otimes g \tilde{\phi}_{p(2n+1)-i}g^{-1}g\] \[=\sum_{i,j,k\in\epsilon+\mathbb{Z}}a_{ji}a_{k,p(2n+1)-i}\tilde{ \phi}_{j}g\otimes\tilde{\phi}_{k}g\] \[=\sum_{i,j,k\in\epsilon+\mathbb{Z}}a_{ji}a_{k-p(2n+1)-i}\tilde{ \phi}_{j}g\otimes\tilde{\phi}_{k}g\] \[=\sum_{j\in\epsilon+\mathbb{Z}}\tilde{\phi}_{j}g\otimes\tilde{ \phi}_{p(2n+1)-j}g\] \[=\sum_{i\in\epsilon+\mathbb{Z}}\tilde{\phi}_{i}\otimes\tilde{ \phi}_{p(2n+1)-i}(g\otimes g)\] and \[\sum_{i\in\mathbb{Z}}\tilde{\phi}_{i}|0\rangle\otimes\tilde{\phi}_{p(2n+1)-i} |0\rangle=\frac{\delta_{p0}}{2}|0\rangle\otimes|0\rangle,\quad\sum_{i\in\frac{ 1}{2}+\mathbb{Z}}\phi_{i}|\overline{a}\rangle\otimes\phi_{p(2n+1)-i}| \overline{a}\rangle=0,\] we have that elements \(\tau\in F_{b}\) (resp. \(\tau_{a}\in F_{d}^{\overline{a}}\)), which belong to the \(\hat{SO}_{2n+1}\)-group orbit of \(|0\rangle\) (resp. \(|\overline{a}\rangle\)) satisfy \[\sum_{i\in\mathbb{Z}}\phi_{i}T{\otimes}\phi_{p(2n+1)-i}\tau=\frac{\delta_{p0}} {2}\tau{\otimes}\tau\quad(\text{resp.}\quad\sum_{i\in\frac{1}{2}+\mathbb{Z}} \tilde{\phi}_{i}\tau_{a}{\otimes}\tilde{\phi}_{p(2n+1)-i}\tau_{a}=0),\quad \text{for }p=0,1,\ldots. \tag{7.25}\] In this section we want to obtain this as a reduction of the DKP hierarchy, hence we assume that the second equation of (7.25) holds. In terms of the fermions related to \((\lambda,\mu)\), this is \[\begin{split}&\sum_{b=1}^{s}\sum_{i\in\frac{1}{2}+\mathbb{Z}} \left(\psi_{i}^{+b}\tau_{a}\otimes\psi_{p\lambda_{b}-i}^{-b}\tau_{a}+\psi_{i}^ {-b}\tau_{a}\otimes\psi_{p\lambda_{b}-i}^{+b}\tau_{a}\right)+\\ &+\sum_{c=s+1}^{r+s}\sum_{i\in\mathbb{Z}}(-1)^{i}\tilde{\phi}_{i }^{c}\tau_{a}\otimes\tilde{\phi}_{2p\mu_{c}-i}^{c}\tau_{a}+\sum_{j\in\delta+ \mathbb{Z}}\sigma_{j}\tau_{a}\otimes\sigma_{p-j}\tau_{a}=0.\end{split} \tag{7.26}\] For the value of \(\delta\), see the two possibilities below (5.44). \(\delta=0\) if \(r=2r^{\prime}-1\) is odd and \(\delta=-\frac{1}{2}\) if \(r=2r^{\prime}\) is even. This equation for \(p=0\) is in fact the \((s,R)\)-component DKP Equation (4.7) when \(r=2r^{\prime}-1\) and \(R=r+1=2r^{\prime}\) and Equation (4.8) with \(p\) replaced by \(r^{\prime}+1\) if \(r=2r^{\prime}\) and \(R=r+1=2r^{\prime}+1\). In that case, we have the following relation, for \(R=r+1\): \[r=2r^{\prime}-1:\quad\sigma_{j}=\begin{cases}\tilde{\phi}_{j}^{R}&\text{if $j$ is even,}\\ \sqrt{-1}\tilde{\phi}_{j}^{R}&\text{if $j$ is odd,}\end{cases}\] \[r=2r^{\prime}:\quad\sigma_{j+\frac{1}{2}}=\begin{cases}\frac{\tilde{\phi}_{0} ^{R}-\sqrt{-1}\sigma_{0}}{\sqrt{2}}&\text{if $j=-1$,}\\ \frac{\tilde{\phi}_{0}^{R}+\sqrt{-1}\sigma_{0}}{\sqrt{2}}&\text{if $j=0$,}\\ \tilde{\phi}_{j}^{R}&\text{if $0<j$ is even or $j<-1$ odd,}\\ \sqrt{-1}\tilde{\phi}_{j}^{R}&\text{if $0<j$ is odd or $j<-1$ even.}\end{cases} \tag{7.27}\] We do not bosonize the field \(\sigma(z)=\tilde{\phi}^{R}(z)=\tilde{\phi}^{r+1}(z)\) with a vertex operator, because the corresponding Heisenberg algebra, which is given by \(\frac{1}{2}:\tilde{\phi}^{r+1}(z)\tilde{\phi}^{r+1}(-z):\) does not belong to \(\hat{so}_{2n+1}\). Instead we define \(\sigma\sigma(z)\sigma^{-1}=\Xi(z)\), see (5.52). 
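As a minimal illustration of the identification (7.27) (our own specialization, obtained by simply substituting into the first case): for the smallest odd value \(r=1\), so that \(r^{\prime}=1\) and \(R=2\), it reads
\[\sigma_{j}=\begin{cases}\tilde{\phi}_{j}^{2}&\text{if $j$ is even,}\\ \sqrt{-1}\,\tilde{\phi}_{j}^{2}&\text{if $j$ is odd,}\end{cases}\qquad j\in\mathbb{Z},\]
i.e. \(\sigma(z)\) coincides, up to factors of \(\sqrt{-1}\) in its modes, with the single neutral twisted field \(\tilde{\phi}^{2}(z)\).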
We first rewrite the Equations (7.26) using the fermionic fields \[\begin{split}&\operatorname{Res}_{z=0}\big{\{}\sum_{b=1}^{s}z^{ \lambda_{b}}\left(\psi^{+b}(z)\tau_{a}\otimes\psi^{-b}(z)\tau_{a}+\psi^{-b}(z) \tau_{a}\otimes\psi^{+b}(z)\tau_{a}\right)+\\ &+\sum_{c=s+1}^{r+s}z^{2p\mu_{c}-1}\tilde{\phi}^{c}(z)\tau_{a} \otimes\tilde{\phi}^{c}(-z)\tau_{a}+z^{p-1-2\delta}\sigma(z)\tau_{a}\otimes \sigma(z)\tau_{a}\big{\}}dz=0,\end{split} \tag{7.28}\] and then apply the bosonization \(\sigma\). Note that \[\sigma(\tau_{a})=\sum_{\underline{k}\in\mathbb{Z}^{s},\underline{\ell}\in \mathbb{Z}_{2}^{r^{\prime}},\alpha\in\mathbb{Z}_{2};k+\underline{\ell}+ \alpha=a\operatorname{mod}2}q^{\underline{k}}\theta^{\underline{\ell}}\tau_{ \underline{k},\underline{\ell},\alpha}(t,\xi).\] where \(q^{\underline{k}}=q_{1}^{k_{1}}q_{2}^{k_{2}}\cdots q_{s}^{k_{s}}\), \(\theta^{\underline{\ell}}=\theta_{1}^{\ell_{1}}\theta_{2}^{\ell_{2}}\cdots \theta_{r^{\prime}}^{\ell_{r^{\prime}}}\). Let \(N_{\xi}(f(t)\xi_{m_{1}}\xi_{m_{2}}\cdots\xi_{m_{p}})=p\), Then \((-1)^{N_{\xi}}(\tau_{\underline{k},\underline{\ell},\alpha}(t,\xi)=(-1)^{ \alpha}(\tau_{\underline{k},\underline{\ell},\alpha}(t,\xi)\). This gives for (7.28): \((p=0,1,2,\ldots)\) \[\begin{split}\operatorname{Res}_{z=0}&\big{(}\sum_{b= 1}^{s}\big{(}(-1)^{\lfloor\underline{k}+\overline{k}\rfloor_{b-1}}z^{p\lambda _{b}+k_{b}-\overline{k}_{b}-2+2\delta_{bs}}e^{z(\cdot t^{(b)}-\overline{t}^{(b )})}e^{z-1\cdot(\tilde{\phi}_{\mathrm{T}^{(b)}}-\tilde{\phi}_{t^{(b)}})}\tau_{ \underline{k}+\underline{e}_{s}-\underline{e}_{b},\underline{\ell},\alpha}(t,\xi)\tau_{\overline{k}+\underline{e}_{b}-\underline{e}_{s},\underline{\ell},\overline{\alpha}}(\overline{t},\overline{\xi})\\ &+(-1)^{\lfloor\underline{k}+\overline{k}\rfloor_{b-1}}z^{p\lambda _{b}-k_{b}+\overline{k}_{b}-2-2\delta_{bs}}e^{(-z)\cdot(t^{(b)}-\overline{t}^ {(b)}}e^{(-z)^{-1}(\tilde{\phi}_{t^{(b)}}-\tilde{\phi}_{t^{(b)}})}\tau_{ \underline{k}+\underline{e}_{s}+\underline{e}_{b},\underline{\ell},\alpha}(t,\xi)\tau_{\overline{k}-\underline{e}_{b}-\underline{e}_{s},\overline{\ell},\overline{\alpha}}(\overline{t},\overline{\xi})\\ &+\frac{1}{2}\sum_{c=1}^{r}(-1)^{\lfloor\underline{k}+\overline{k }\rfloor+[\underline{\ell}+\overline{\ell}]\big{[}\frac{c-1}{2}\big{]}^{+(c-1 )}(\ell_{\llbracket\frac{c+1}{2}\rrbracket}^{+\overline{\ell}}\big{[}\frac{c +1}{2}\big{]}^{+1})}z^{2p\mu_{c}-1}e^{z\circ(t^{(s+c)}-\overline{t}^{(s+c)})} \times\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad e^{2(z^{-1}\circ( \tilde{\partial}_{\overline{\xi}(s+c)}-\tilde{\partial}_{t(s+c)}))}\tau_{ \underline{k}+\underline{e}_{s},\underline{\ell}+\underline{e}_{\lceil\frac{c+1 }{2}\rrbracket},\alpha}(t,\xi)\tau_{\overline{k}-\underline{e}_{s},\underline{ \ell}+\underline{e}_{\lceil\frac{c+1}{2}\rrbracket}}\overline{\alpha}(\overline {t},\overline{\xi})\big{)}dz\\ &+(-1)^{\lfloor\underline{k}+\overline{k}\rfloor+[\underline{\ell} +\overline{\ell}]}\big{\{}\sum_{j=1}^{\infty}\xi_{j+\delta}\tau_{\underline{k}+ \underline{e}_{s},\underline{\ell},\alpha+1}(t,\xi)\frac{\partial\tau_{ \overline{k}-\underline{e}_{s},\underline{\ell},\overline{\alpha}+1}( \overline{t},\overline{\xi})}{\partial\overline{\xi}_{j+p+\delta}}\\ &\quad+\frac{\partial\tau_{\underline{k}+\underline{e}_{s}, \underline{\ell},\alpha+1}(t,\xi)}{\partial\xi_{j+\delta}}\overline{\xi}_{j+p +\delta}\tau_{\underline{k}-\underline{e}_{s},\underline{\ell},\overline{\alpha }+1}(\overline{t},\overline{\xi})+\sum_{j=1}^{p-1-2\delta}\frac{\partial\tau_ 
{\underline{k}+\underline{e}_{s},\underline{\ell},\alpha+1}(t,\xi)}{\partial \xi_{j+\delta}}\frac{\partial\tau_{\overline{k}-\underline{e}_{s},\underline{ \ell},\overline{\ell},\overline{\alpha}+1}(\overline{t},\overline{\xi})}{ \partial\overline{\xi}_{p-j-\delta}}\\ &\quad+(1+2\delta)(\delta_{p0}-1)\frac{\sqrt{-1}}{\sqrt{2}}\bigg{(} (-1)^{\ell_{\frac{r+1}{2}}}\tau_{\underline{k}+\underline{e}_{s},\underline{ \ell}+\underline{e}_{\frac{r+1}{2},\alpha}}(t,\xi)\frac{\partial\tau_{\overline{ k}-\underline{e}_{s},\underline{\ell},\overline{\alpha}+1}(\overline{t},\overline{\xi})}{ \partial\overline{\xi}_{p}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad+(-1)^{\tilde{\tau}_{\frac{r+1 }{2}}}\frac{\partial\tau_{\underline{k}+\underline{e}_{s},\underline{\ell}, \alpha+1}(t,\xi)}{\partial\xi_{p}}\tau_{\underline{k}-\underline{e}_{s},\underline{ \ell}+\underline{e}_{\frac{r+1}{2},\overline{\alpha}}(\overline{t},\overline{\xi}) }\bigg{)}\\ &-\frac{\delta_{p0}}{2}(1+2\delta)(-1)^{\ell_{\frac{r+1}{2}}+ \underline{\ell}_{\frac{r+1}{2}}}\tau_{\underline{k}+\underline{e}_{s}, \underline{\ell}+\underline{e}_{\frac{r+1}{2},\alpha}}(t,\xi)\tau_{\overline{ k}-\underline{e}_{s},\underline{\ell}+\underline{e}_{\frac{r+1}{2},\overline{\alpha}}}( \overline{t},\overline{\xi})\big{\}}=0.\end{split} \tag{7.29}\] As for \(\hat{so}_{2n}\), we will consider several cases. **Case 1, assume that \(\mu=\emptyset\), thus that \(r=0\).** In this case \(\delta=-\frac{1}{2}\) and \[\sigma(\tau_{a})=\sum_{\underline{k}\in\mathbb{Z}^{s},\nu\in\mathbb{Z}_{2}; \,\underline{k}+\nu=a\operatorname{mod}2}q^{\underline{k}}\tau_{\underline{k}, \nu}(t,\xi),\quad\text{with }(-1)^{N_{\xi}}(\tau_{\underline{k},\nu}(t,\xi))=(-1)^{\nu}(\tau_{ \underline{k},\nu}(t,\xi)),\] we find when substituting \(\xi=\overline{\xi}=0\), that \(\nu=\overline{\nu}=\overline{0}\): (\(p=0,1,2,\ldots\)) \[\begin{split}\operatorname{Res}_{z=0}&\big{(}\sum_{b= 1}^{s}\big{(}(-1)^{\underline{|k+\overline{k}}\underline{\overline{k}}_{b-1}}z ^{p\lambda_{b}+k_{b}-\overline{k}_{b}-2+2\delta_{bs}}e^{z\cdot(t^{(b)}- \overline{t}^{(b)})}\times\\ &\quad e^{z^{-1\cdot(\tilde{\alpha}_{\overline{t}^{(b)}}-\tilde{ \beta}_{t^{(b)}})}}\tau_{\underline{k+\varepsilon}_{s}-\underline{\varepsilon }_{b},\underline{\ell},\underline{\alpha}}(t,0)\tau_{\overline{k}+ \underline{\varepsilon}_{b}-\underline{\varepsilon}_{s},\overline{\ell}, \underline{\alpha}}(\overline{t},0)+(-1)^{\underline{|k+\overline{k}|}_{b-1} }z^{p\lambda_{b}-k_{b}+\overline{k}_{b}-2-2\delta_{bs}}\times\\ &\quad e^{(-z)\cdot(t^{(b)}-\overline{t}^{(b)})}e^{(-z)^{-1} \cdot\tilde{\beta}_{t^{(b)}}-(-z)^{-1}\cdot\tilde{\alpha}_{\overline{t}^{(b)} }}\tau_{\underline{k+\varepsilon}_{s}+\underline{\varepsilon}_{b},\overline{0 }}(t,0)\tau_{\overline{k}-\underline{\varepsilon}_{b}-\underline{\varepsilon }_{s},\overline{0}}(\overline{t},0)\big{\}}\big{)}dz\\ &\quad\quad\quad=-(-1)^{\underline{|k+\overline{k}|}}\sum_{j=1}^{ p}\frac{\partial\tau_{\underline{k+\varepsilon}_{s},\overline{1}}(t,\xi)}{ \partial\xi_{j-\frac{1}{2}}}\big{|}_{\xi=0}\frac{\partial\tau_{\overline{k}- \underline{\varepsilon}_{s},\overline{1}}(\overline{t},\overline{\xi})}{ \partial\overline{\xi}_{p-j+\frac{1}{2}}}\big{|}_{\overline{\xi}=0}.\end{split} \tag{7.30}\] **Remark 7.4**: _Note that for \(p=0\), the right hand side is equal to 0. One obtains a similar equation in the DKP case when \(r=1\) and one substitutes \(t_{j}^{s+1}=0\), for all \(j=1,3,\ldots\), in Equation (4.13). 
In particular, we have that \(P^{-*}=JP^{+-1}\) for \(J\) as in (4.32)._ Define, now for \(j=\frac{1}{2},\frac{3}{2},\frac{5}{2},\ldots\), \[\begin{split} T^{\pm j}(\underline{\alpha};x,t)_{i}=& \sqrt{-1}\frac{(-1)^{\underline{|\underline{\alpha}|}}}{\tau_{ \underline{\alpha},\overline{0}}(x,t,0)}\frac{\partial\tau_{\underline{\alpha }\pm\underline{\varepsilon}_{i},\overline{1}}(x,t,\xi)}{\partial\xi_{j}}\big{|} _{\xi=0},\\ T^{\pm j}(\underline{\alpha};x,t)_{s+i}=&\sqrt{-1} \frac{(-1)^{\underline{|\underline{\alpha}|}}}{\tau_{\underline{\alpha}, \overline{0}}(x,t,0)}\frac{\partial\tau_{\underline{\alpha}\mp\underline{ \varepsilon}_{s},\overline{1}}(x,t,\xi)}{\partial\xi_{j}}\big{|}_{\xi=0}\end{split} \tag{7.31}\] and \(V^{\pm}(\underline{\alpha},;x,t,z)\) as before. For which one has \[\begin{split}\operatorname{Res}_{z=0}&\sum_{j=1}^{ s}V^{+}(\underline{\alpha};x,t,z)\left(z^{p\lambda_{j}}E_{jj}-(-z)^{p\lambda_{j}}E_{s+j,s+ j}\right)V^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz=\\ &=\sum_{j=\frac{1}{2}}^{p-\frac{1}{2}}T^{+j}(\underline{\alpha};x, t)T^{-(p-j)}(\underline{\beta};\overline{x},\overline{t})^{t}.\end{split} \tag{7.32}\] Note that \[T^{\pm j}(\underline{\alpha};x,t)_{i}=T^{\mp j}(\underline{\alpha};x,t)_{s+i}.\] Thus \[P^{\pm}(\underline{\alpha};x,t,\partial)=NP^{\mp}(\underline{\alpha};x,t, \partial)N,\quad NT^{\pm j}(\underline{\alpha};x,t)=T^{\mp j}(\underline{ \alpha};x,t) \tag{7.33}\] where \(N\) is defined in (4.36). Let \(\partial=\partial_{x}\) and using the fundamental lemma (see [15], Lemma 4.1), we deduce the following expression for the matrix pseudo-differential operators \[\sum_{j=1}^{s}(P^{+}(\underline{\alpha};x,t,\partial)R^{+}(\underline{\alpha}; \partial)\left(\partial^{p\lambda_{j}}E_{jj}-(-\partial)^{p\lambda_{j}}E_{s+j,s+ j}\right)R^{-}(\underline{\beta};x,t,\partial)^{*}P^{-}(\underline{\beta};x,t, \partial)^{*})_{-}\] \[=\sum_{j=\frac{1}{2}}^{p-\frac{1}{2}}T^{+j}(\underline{\alpha};x,t)\partial^{- 1}T^{-(p-j)}(\underline{\beta};x,t)^{t}.\] Take \(p=0\) and \(\underline{\alpha}=\underline{\beta}\) and \(J=\sum_{j=1}^{s}E_{jj}-E_{s+j,s+j}=J^{-1}\), as in (4.32), we obtain again (4.33). However, taking \(p\neq 0\), we get \[\sum_{j=1}^{s}(P^{+}(\underline{\alpha};x,t,\partial)\left( \partial^{p\lambda_{j}}E_{jj}+(-\partial)^{p\lambda_{j}}E_{s+j,s+j}\right)P^{+ }(\underline{\alpha};x,t,\partial)^{-1})_{-}= \tag{7.35}\] \[\sum_{k=\frac{1}{2}}^{p-\frac{1}{2}}T^{+k}(\underline{\alpha};x,t )\partial^{-1}(JT^{-(p-k)}(\underline{\alpha};x,t))^{t}.\] Then \(P^{\pm}(\alpha;x,t,\partial)\), \(L(\alpha;x,t,\partial)\), \(D^{j}(\alpha;x,t,\partial)\), \(E^{j}(\alpha;x,t,\partial)\) and \(\mathcal{L}(\alpha;x,t,\partial)\) do not change and their relations also do not change, except that \(\mathcal{L}(\alpha;x,t,\partial)_{-}\) is no longer equal to zero. More general we have \[\sum_{j=1}^{s}(L^{p\lambda_{j}}E^{j})_{-}=\sum_{k=\frac{1}{2}}^{p-\frac{1}{2} }T^{+k}(\underline{\alpha};x,t)\partial^{-1}(JT^{-(p-k)}(\underline{\alpha}; x,t))^{t}\] and in particular \[\mathcal{L}_{-}=T^{+\frac{1}{2}}(\underline{\alpha};x,t)\partial^{-1}(JT^{- \frac{1}{2}}(\underline{\alpha};x,t))^{t}.\] The adjoint of \(\mathcal{L}\) is \[\mathcal{L}^{*}=K\mathcal{L}K^{-1},\] for \(K\) as in (4.37). 
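For orientation, here is a minimal instance of (7.35) (our own specialization, not stated explicitly in the text): take \(s=1\) and \(p=1\); then the sum on the right-hand side consists of the single term \(k=\frac{1}{2}\), and (7.35) becomes
\[\big(P^{+}(\underline{\alpha};x,t,\partial)\left(\partial^{\lambda_{1}}E_{11}+(-\partial)^{\lambda_{1}}E_{22}\right)P^{+}(\underline{\alpha};x,t,\partial)^{-1}\big)_{-}=T^{+\frac{1}{2}}(\underline{\alpha};x,t)\partial^{-1}\big(JT^{-\frac{1}{2}}(\underline{\alpha};x,t)\big)^{t},\]
so the integral part of this \(2\times 2\) matrix pseudo-differential operator is a single term of the form \(\text{(column)}\,\partial^{-1}\,\text{(row)}\), built from the two-component vectors \(T^{\pm\frac{1}{2}}\).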
If we differentiate in this case (7.32) for \(\underline{\alpha}=\underline{\beta}\) and \(p=1\) by \(t_{i}^{j}\) and use the fundamental Lemma of [15], we obtain
\[\left(\frac{\partial P^{+}}{\partial t_{i}^{j}}(P^{+})^{-1}\mathcal{L}+L^{i}D^{j}\mathcal{L}\right)_{-}=\frac{\partial T^{+\frac{1}{2}}}{\partial t_{i}^{j}}\partial^{-1}T^{-\frac{1}{2}}{}^{t}J.\]
Now substitute the second equation of (4.33); this gives that
\[\frac{\partial T^{+\frac{1}{2}}}{\partial t_{i}^{j}}\partial^{-1}T^{-\frac{1}{2}}{}^{t}J =\left(\left(L^{i}D^{j}\right)_{+}\mathcal{L}\right)_{-}\]
\[=\left(\left(L^{i}D^{j}\right)_{+}\mathcal{L}_{-}\right)_{-}\]
\[=\left(\left(L^{i}D^{j}\right)_{+}T^{+\frac{1}{2}}\partial^{-1}T^{-\frac{1}{2}}{}^{t}J\right)_{-}\]
\[=\left(L^{i}D^{j}\right)_{+}\left(T^{+\frac{1}{2}}\right)\partial^{-1}T^{-\frac{1}{2}}{}^{t}J.\]
Thus
\[\frac{\partial T^{+\frac{1}{2}}}{\partial t_{i}^{j}}=\left(L^{i}D^{j}\right)_{+}\left(T^{+\frac{1}{2}}\right).\]
If we differentiate (7.32) for \(\underline{\alpha}=\underline{\beta}\) and \(p=1\) by \(\overline{t}_{i}^{j}\), we obtain in a similar way that
\[\frac{\partial T^{-\frac{1}{2}}J}{\partial t_{i}^{j}}=-\left(L^{i}D^{j}\right)_{+}^{*}\left(T^{-\frac{1}{2}}J\right).\]
Thus \(T^{+\frac{1}{2}}\), resp. \(T^{-\frac{1}{2}}J\), is an eigenfunction, resp. adjoint eigenfunction, for the operators \(L\) and \(D^{j}\), \(j=1,2,\ldots,s\).

**Second case, \(\mu\neq\emptyset\) (first approach).** In this case,
\[\sigma(\tau_{a})=\sum_{\underline{k}\in\mathbb{Z}^{s},\,\underline{\ell}\in\mathbb{Z}_{2}^{r^{\prime}},\,\nu\in\mathbb{Z}_{2};\,|\underline{k}|+|\underline{\ell}|+\nu=a\operatorname{mod}2}q^{\underline{k}}\theta^{\underline{\ell}}\tau_{\underline{k},\underline{\ell},\nu}(t,\xi),\quad\text{with }(-1)^{N_{\xi}}(\tau_{\underline{k},\underline{\ell},\nu}(t,\xi))=(-1)^{\nu}\tau_{\underline{k},\underline{\ell},\nu}(t,\xi).\]
We substitute \(\xi=\overline{\xi}=t_{j}^{c}=\overline{t}_{j}^{c}=0\), for \(c>s\) in (7.29). As before, replace all \(t_{1}^{a}\) by \(t_{1}^{a}+x\) and define the \(2s\times 2s\)-matrix \(V^{\pm}(\underline{\alpha};x,t,z)\) in the usual way, but clearly only for \(\nu=0\).
In this case we also have a \(2s\times r\)-matrix \(W^{\pm}(\underline{\alpha};x,t,z)\), defined as in (4.26) and (4.27), and the \(2s\times 1\)-matrices
\[\begin{split} T^{\pm(j+\delta)}(\underline{\alpha};x,t)_{i}=&\sqrt{-1}\frac{(-1)^{|\underline{\alpha}|}}{\tau_{\underline{\alpha},\overline{0}}(x,t,0)}\frac{\partial\tau_{\underline{\alpha}\pm\underline{\epsilon}_{i},\overline{1}}(x,t,\xi)}{\partial\xi_{j+\delta}}\big{|}_{\xi,t^{>s}=0},\\ T^{\pm(j+\delta)}(\underline{\alpha};x,t)_{s+i}=&\sqrt{-1}\frac{(-1)^{|\underline{\alpha}|}}{\tau_{\underline{\alpha},\overline{0}}(x,t,0)}\frac{\partial\tau_{\underline{\alpha}\mp\underline{\epsilon}_{i},\overline{1}}(x,t,\xi)}{\partial\xi_{j+\delta}}\big{|}_{\xi,t^{>s}=0}\quad\text{for }j+\delta>0,\end{split} \tag{7.36}\]
and if \(\delta=0\), we also have
\[T^{\pm 0}(\underline{\alpha};x,t)_{i}=\frac{(-1)^{|\underline{\alpha}|}\tau_{\underline{\alpha}\pm\underline{\epsilon}_{i},\overline{0}}(x,t,0)}{\sqrt{2}\tau_{\underline{\alpha},\overline{0}}(x,t,0)}\big{|}_{t^{>s}=0},\quad T^{\pm 0}(\underline{\alpha};x,t)_{s+i}=\frac{(-1)^{|\underline{\alpha}|}\tau_{\underline{\alpha}\mp\underline{\epsilon}_{i},\overline{0}}(x,t,0)}{\sqrt{2}\tau_{\underline{\alpha},\overline{0}}(x,t,0)}\big{|}_{t^{>s}=0}. \tag{7.37}\]
For \(p=0\), we thus obtain
\[\begin{split}\mathrm{Res}_{z=0}\sum_{j=1}^{s}& V^{+}(\underline{\alpha};x,t,z)\left(E_{jj}-E_{s+j,s+j}\right)V^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz=\\ &\frac{1}{2}W^{+}(\underline{\alpha};x,t)^{0}W^{-}(\underline{\beta};\overline{x},\overline{t})^{0t}+(1+2\delta)T^{+0}(\alpha;x,t)T^{-0}(\beta;\overline{x},\overline{t})^{t},\end{split} \tag{7.38}\]
which is similar to (4.29). Now substituting \(\underline{\alpha}=\underline{\beta}\) gives that the right-hand side of (7.38) is equal to \(0\). We are then exactly in the situation of the first approach to the BKP hierarchy of Subsection 4.4, and we can continue as in that subsection and define the Lax operators in the same way. For \(p>0\), we have
\[\begin{split}&\mathrm{Res}_{z=0}\sum_{j=1}^{s}V^{+}(\underline{\alpha};x,t,z)\left(z^{p\lambda_{j}}E_{jj}-(-z)^{p\lambda_{j}}E_{s+j,s+j}\right)V^{-}(\underline{\alpha};\overline{x},\overline{t},z)^{t}dz=\\ &\frac{1}{2}\sum_{a=1}^{r}\sum_{j=0}^{2p\mu_{a}}\hat{W}^{+}(\underline{\alpha};x,t)^{j}E_{aa}\left(\hat{W}^{-}(\underline{\alpha};\overline{x},\overline{t})^{2p\mu_{a}-j}\right)^{t}+\sum_{j=-\delta}^{p+\delta}T^{+j}(\underline{\alpha};x,t)T^{-(p-j)}(\underline{\alpha};\overline{x},\overline{t})^{t}.\end{split} \tag{7.39}\]
This gives that the integral part of the Lax operator \(\mathcal{L}=\sum_{j=1}^{s}L^{\lambda_{j}}E^{j}\) is equal to
\[\mathcal{L}_{-}=\frac{1}{2}\sum_{a=1}^{r}\sum_{j=0}^{2\mu_{a}}\hat{W}^{+}(x,t)^{j}E_{aa}\partial^{-1}\left(\hat{W}^{-}(x,t)^{2\mu_{a}-j}\right)^{t}J+\sum_{j=-\delta}^{1+\delta}T^{+j}(x,t)\partial^{-1}T^{-(1-j)}(x,t)^{t}J.\]

**Third case, \(\mu\neq\emptyset\) (second approach).** As before, we substitute \(\xi=\overline{\xi}=t_{j}^{c}=\overline{t}_{j}^{c}=0\), for \(s<c<s+r\) in (7.29). Here also Remark 7.3 holds. Replace all \(t_{1}^{a}\) by \(t_{1}^{a}+x\), for \(a=1,2,\ldots,s,s+r\), and define the \((2s+1)\times(2s+1)\)-matrix \(V^{\pm}(\underline{\alpha};x,t,z)\) as in (4.38), (4.24) and (4.39), but now with \(\tau_{\underline{\alpha}}\) replaced by \(\tau_{\underline{\alpha},\overline{0}}\).
In this case we also have a \((2s+1)\times(r-1)\)-matrix \(W^{\pm}(\underline{\alpha},x,t,z)\), defined as in (4.26) and \((2s+1)\times 1\)-matrices \(T^{\pm(j+\delta)}(\underline{\alpha},x,t,z)\), defined as in (7.36), (7.37), except that we do not put \(t_{j}^{s+r}=0\), and by \[\begin{split} T^{\pm(j+\delta)}(\underline{\alpha};x,t)_{2s+1}=& \sqrt{-1}\frac{(-1)^{|\underline{\alpha}|}}{\tau_{\underline{ \alpha},\overline{0}}(x,t,0)}\frac{\partial\tau_{\underline{\alpha},\overline {1}}(x,t,\xi)}{\partial\xi_{j+\delta}}\big{|}_{\xi,t^{(s<a<s+r)}=0},\\ T^{\pm 0}(\underline{\alpha};x,t)_{2s+1}=&\frac{(-1)^{| \underline{\alpha}|}\tau_{\underline{\alpha},\overline{0}}(x,t,0)}{\sqrt{2} \tau_{\underline{\alpha},\overline{0}}(x,t,0)}\big{|}_{t^{(s<a<s+r)}=0}, \quad\text{only for $\delta=0$}.\end{split} \tag{7.40}\] This resembles the second approach of the reduction of the BKP hierarchy (cf. Subsection 4.4). We obtain thw following equation for \(p=0\), which is similar to Equation (4.44): \[\begin{split}\big{(}P^{+}(\underline{\alpha};x,t,\partial)R^{+} (\underline{\alpha}-\underline{\beta},\partial)J(\partial)P^{-}(\underline{ \beta};x,t,\partial)^{*}\big{)}_{-}=\\ \frac{1}{2}W^{+}(\underline{\alpha};x,t)^{0}\partial^{-1}W^{-}( \underline{\beta};x,t)^{0t}+(1+2\delta)T^{+0}(\alpha;x,t)\partial^{-1}T^{-0} (\beta;x,t)^{t}.\end{split} \tag{7.41}\] Now take \(\underline{\alpha}=\underline{\beta}\). Again, the right-hand side is equal to \[\frac{1}{2}P^{+}(\underline{\alpha};x,t)^{0}E_{2s+1,2s+1}\partial^{-1}P^{-}( \underline{\alpha};x,t)^{0t},\] which gives Equation (4.46). So we can continue in exactly the same way as in the case of the second approach to the BKP hierarchy in Subsection 4.4 and we refer the reader to that subsection for the details. For \(p=1\) and \(\underline{\alpha}=\underline{\beta}\), we find \[\begin{split}\big{(}P^{+}(\underline{\alpha};x,t,\partial)J_{1}( \partial)P^{-}(\underline{\alpha};x,t,\partial)^{*}\big{)}_{-}=\frac{1}{2} \sum_{a=1}^{r-1}\sum_{j=0}^{2\mu_{a}}W^{+}(\underline{\alpha};x,t)^{j}E_{aa} \partial^{-1}\left(W^{-}(\underline{\alpha};x,t)^{2\mu_{a}-j}\right)^{t}\\ +(1+2\delta)(T^{+0}(\alpha;x,t)\partial^{-1}T^{-1}(\alpha;x,t)^{t} +T^{+1}(\alpha;x,t)\partial^{-1}T^{0}(\alpha;x,t)^{t})\\ -2\delta T^{+\frac{1}{2}}(\alpha;x,t)\partial^{-1}T^{-\frac{1}{ 2}}(\alpha;x,t)^{t},\end{split} \tag{7.42}\] where \(J_{1}(\partial)\) is given by (7.22). Note that \(J(\partial)=J_{0}(\partial)\). 
Define as before \(\hat{W}^{\pm}(\underline{\alpha};x,t,z)=(P^{\pm}(\underline{\alpha};x,t,z)^{0})^{- 1}W^{\pm}(\underline{\alpha};x,t,z)\) and define \(\hat{T}^{\pm}(\underline{\alpha};x,t,z)=(P^{\pm}(\underline{\alpha};x,t,z)^{0}) ^{-1}T^{\pm}(\underline{\alpha};x,t,z)\) then (7.42) gives \[\begin{split}&\Big{(}\hat{P}^{+}(\underline{\alpha};x,t,\partial)J_ {k}(\partial)J(\partial)^{-1}\hat{P}^{-}(\underline{\alpha};x,t,\partial)^{- 1}\Big{)}_{<}=\\ &\qquad\frac{1}{2}\sum_{a=1}^{r-1}\sum_{j=0}^{2\mu_{a}}\hat{W}^{+ }(\underline{\alpha};x,t)^{j}E_{aa}\partial^{-1}\left(\hat{W}^{-}(\underline{ \alpha};x,t)^{2\mu_{a}-j}\right)^{t}J(\partial)^{-1}\\ &\qquad+(1+2\delta)(\hat{T}^{+0}(\alpha;x,t)\partial^{-1}\hat{T}^ {-1}(\alpha;x,t)^{t}+\hat{T}^{+1}(\alpha;x,t)\partial^{-1}\hat{T}^{0}(\alpha; x,t)^{t})J(\partial)^{-1}\\ &\qquad\qquad-2\delta\hat{T}^{+\frac{1}{2}}(\alpha;x,t)\partial^ {-1}\hat{T}^{-\frac{1}{2}}(\alpha;x,t)^{t}J(\partial)^{-1}.\end{split} \tag{7.43}\] Define \[\mathcal{L}(\underline{\alpha};x,t,\partial)=L(\underline{\alpha};x,t,\partial )^{2\mu_{r}}D^{s+r}(\underline{\alpha};x,t,\partial)+\sum_{a=1}^{s}L( \underline{\alpha};x,t,\partial)^{\lambda_{a}}E^{a}(\underline{\alpha};x,t, \partial), \tag{7.44}\] then clearly \(\mathcal{L}\) satisfies the same Lax equations as \(L\) viz. (4.55) and \[\begin{split}\mathcal{L}(\underline{\alpha};x,t,\partial)& =\mathcal{L}(\underline{\alpha};x,t,\partial)_{\geq}+\frac{1}{2} \sum_{a=1}^{r-1}\sum_{j=0}^{2\mu_{a}}\hat{W}^{+}(\underline{\alpha};x,t)^{j}E _{aa}\partial^{-1}\left(\hat{W}^{-}(\underline{\alpha};\overline{x},\overline {t})^{2\mu_{a}-j}\right)^{t}J(\partial)^{-1}\\ &+(1+2\delta)(\hat{T}^{+0}(\alpha;x,t)\partial^{-1}\hat{T}^{-1}( \alpha;x,t)^{t}+\hat{T}^{+1}(\alpha;x,t)\partial^{-1}\hat{T}^{0}(\alpha;x,t)^ {t})J(\partial)^{-1}\\ &\qquad-2\delta\hat{T}^{+\frac{1}{2}}(\alpha;x,t)\partial^{-1} \hat{T}^{-\frac{1}{2}}(\alpha;x,t)^{t}J(\partial)^{-1}.\end{split} \tag{7.45}\] ### \(\hat{so}_{2n+1}\) and the \((\lambda,\mu)\)-reduced BKP hierarchy The group \(\hat{SO}_{2n+1}\) leaves the module \(F_{b}\) invariant. In this case one gets that \(\tau\) satisfies the first equation of (7.25), see Section 7.3. In terms of the fermions related to \((\lambda,\mu)\), this is \[\begin{split}\sum_{b=1}^{s}\sum_{i\in\frac{1}{2}+\mathbb{Z}}& \big{(}\psi_{i}^{+b}\tau\otimes\psi_{p\lambda_{b}-i}^{-b}\tau+\psi_{i}^{-b} \tau\otimes\psi_{p\lambda_{b}-i}^{+b}\tau\big{)}+\sum_{c=s+1}^{r+s}\sum_{i\in \mathbb{Z}}(-1)^{i}\tilde{\phi}_{i}^{c}\tau\otimes\tilde{\phi}_{2p\mu_{c}-i}^ {c}\tau+\\ &\qquad+\sum_{j\in\delta+\mathbb{Z}}\sigma_{j}\tau\otimes\sigma_{ p-j}\tau=\frac{\delta_{p0}}{2}\tau\otimes\tau.\end{split} \tag{7.46}\] For the value of \(\delta\) see the two possibilities below (5.44). Again we have a similar relation between \(\psi\), \(\tilde{\phi}_{j}^{R}\) and \(\sigma_{k}\) as in (7.27), where \(R\) is again equal to \(R=r+1\), but now the first case appears when \(r=2r^{\prime}\) and the latter when \(r=2r^{\prime}-1\). Hence, for \(p=0\) this equation is the same as the BKP Equation (4.4) or (4.5). 
We can rewrite (7.46) using the fermionic fields: \[\begin{split}\text{Res}_{z=0}\big{\{}&\sum_{b=1}^{s}z^{p \lambda_{b}}\left(\psi^{+b}(z)\tau\otimes\psi^{-b}(z)\tau+\psi^{-b}(z)\tau\otimes \psi^{+b}(z)\tau\right)+\\ &+\sum_{c=s+1}^{r+s}z^{2p\mu_{c}-1}\tilde{\phi}^{c}(z)\tau\otimes \tilde{\phi}^{c}(-z)\tau+z^{p-1-2\delta}\sigma(z)\tau\otimes\sigma(z)\tau\big{\}} dz=\frac{\delta_{p0}}{2}\tau\otimes\tau.\end{split} \tag{7.47}\] We write \[\sigma(T)=\sum_{\underline{k}\in\mathbb{Z}^{s},\underline{\ell}\in\mathbb{Z}_{ r}^{s^{\prime}}}q^{\underline{\ell}}\tau_{\underline{k},\underline{\ell}}(t, \xi),\] where \(q^{\underline{k}}=q_{1}^{k_{1}}q_{2}^{k_{2}}\cdots q_{s}^{k_{s}}\), \(\underline{\theta}^{\underline{\ell}}=\theta_{1}^{\ell_{1}}\theta_{2}^{ \ell_{2}}\cdots\theta_{r^{\prime}}^{\ell_{r^{\prime}}}\). Let \(N_{\xi}(f(t)\xi_{m_{1}}\xi_{m_{2}}\cdots\xi_{m_{p}})=p\), then (7.47), for the embedding in \(b_{\infty}\), which is the highest weight module with weight \(\Lambda_{n}\), turns into the following equations (\(p=0,1,2,\dots\)) \[\begin{split}&\text{Res}_{z=0}\big{(}\sum_{b=1}^{s}\big{(}(-1)^{ \lfloor\underline{k}+\overline{k}\rfloor_{b-1}}z^{p\lambda_{b}+k_{b}-\overline{ k}_{b}-2+2\delta_{bs}}e^{z\cdot(t^{(b)}-\overline{t}^{(b)})}e^{z^{-1}\cdot( \tilde{\mathcal{A}}_{\overline{t}^{(b)}}-\tilde{\mathcal{Q}}_{t^{(b)}})} \tau_{\underline{k}+\underline{e}_{a}-\underline{e}_{b},\underline{\ell}} \alpha(t,\xi)\tau_{\overline{k}+\underline{e}_{b}-\underline{e}_{a},\underline{ \ell}}\overline{\alpha}(\overline{t},\overline{\xi})+\\ &+(-1)^{\lfloor\underline{k}+\overline{k}\rfloor_{b-1}}z^{p \lambda_{b}-k_{b}+\overline{k}_{b}-2-2\delta_{bs}}e^{(-z)\cdot(t^{(b)}- \overline{t}^{(b)})}e^{(-z)^{-1}\cdot(\tilde{\mathcal{Q}}_{t^{(b)}}-\tilde{ \mathcal{Q}}_{t^{(b)}})}\tau_{\underline{k}+\underline{e}_{a}+\underline{e}_{b },\underline{\ell}}\alpha(t,\xi)\tau_{\overline{k}-\underline{e}_{b}- \underline{e}_{a},\underline{\ell}}\overline{\alpha}(\overline{t},\overline{ \xi})+\\ &+\frac{1}{2}\sum_{c=1}^{r^{\prime}}(-1)^{\lfloor\underline{k}+ \overline{k}\rfloor+\lfloor\underline{\ell}+\overline{l}\rfloor\left[ \underline{\varepsilon}-1\right]}+(c-1)(\lceil\underline{\varepsilon}+1 \rfloor+\overline{\ell}\lfloor\underline{\varepsilon}+1\rfloor)}z^{2p\mu_{c}- 1}e^{z\circ(t^{(s+c)}-\overline{t}^{(s+c)})}\times\\ &\qquad\qquad\qquad\qquad e^{2(z^{-1}\circ(\tilde{\mathcal{Q}}_{ \overline{t}^{(s+c)}}-\tilde{\mathcal{Q}}_{t^{(s+c)}}))}\tau_{\underline{k}+ \underline{e}_{a},\underline{\ell}+\underline{e}_{\left[\underline{c}+1 \right]},\alpha}(t,\xi)\tau_{\overline{k}-\underline{e}_{a},\underline{\ell }+\underline{e}_{\left[\underline{c}+1\right]}}\overline{\alpha}(\overline{t},\overline{\xi})\\ &-\delta(-1)^{\lfloor\underline{k}+\overline{k}\rfloor+\lfloor \underline{\ell}+\overline{l}\rfloor+N_{\xi}+N_{\overline{\xi}}}z^{2p\mu_{r} }-1e^{z\circ(t^{(s+r)}-\overline{t}^{(s+r)})}\times\\ &\qquad\qquad\qquad\qquad\qquad\qquad e^{2(z^{-1}\circ(\tilde{ \mathcal{Q}}_{\overline{t}^{(s+c)}}-\tilde{\mathcal{Q}}_{t^{(s+c)}}))}\tau_{ \underline{k}+\underline{e}_{a},\underline{\ell}}(t,\xi)\tau_{\overline{k}- \underline{e}_{a},\underline{\ell}}\overline{(\overline{t},\overline{\xi})} \big{)}dz\\ &+(-1)^{\lfloor\underline{k}+\overline{k}\rfloor+\lfloor\underline{ \ell}+\overline{k}\rfloor}\big{\{}\sum_{j=1}^{\infty}\xi_{j+1}\xi_{j+1} \xi_{\underline{e}_{a},\underline{\ell}}(t,\xi)\frac{\partial\tau_{\overline{ k}-\underline{e}_{a},\underline{\ell}}\overline{(t},\overline{\xi})}{ 
\partial\overline{\xi}_{j+p+\delta}}+\frac{\partial\tau_{\overline{k}+ \underline{e}_{a},\underline{\ell}}(t,\xi)}{\partial\xi_{j+\delta}}\overline{ \xi}_{j+p+\delta}\tau_{\underline{k}-\underline{e}_{a},\underline{\ell}} \overline{(\overline{t},\overline{\xi})}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+(1-\delta_{p0} )\frac{1+2\delta}{\sqrt{2}}\big{(}(-1)^{N_{\xi}}\tau_{\underline{k}+ \underline{e}_{a},\underline{\ell}}(t,\xi)\frac{\partial\tau_{\overline{k}- \underline{e}_{a},\underline{\ell}}\overline{(\overline{t},\overline{\xi})}}{ \partial\overline{\xi}_{p}}\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+(-1)^{N_ {\overline{\xi}}}\frac{\partial\tau_{\underline{k}+\underline{e}_{a},\underline{ \ell}}(t,\xi)}{\partial\overline{\xi}_{p}}\tau_{\underline{k}-\underline{e}_{a },\underline{\ell}}\overline{(\overline{t},\overline{\xi})}\big{)}\big{\}}=\\ &=\delta_{p0}[\frac{1}{2}-(\frac{1}{2}+\delta)(-1)^{\lfloor \underline{k}+\overline{k}\rfloor+\lfloor\underline{\ell}+\overline{l}+N_{ \xi}+N_{\overline{\xi}}}]\tau_{\underline{k}+\underline{e}_{a},\underline{ \ell}}(t,\xi)\tau_{\overline{k}-\underline{e}_{a},\underline{\ell}}(t,\overline{ \xi}).\end{split} \tag{7.48}\] We will now turn the \((\lambda,\mu)\)-reduced BKP hierarchy into a hierarchy of pseudo-differential operators. Again, we will consider several cases. **Case 1, \(\mu=\emptyset\).** Thus \(r=0\). We substitute in equation (7.48) for all Grass mann variables \(\xi_{j}=0\). The first equation of (7.48), where \(\delta=0\) and \(r=r^{\prime}=0\) is then equal to \((p=0,1,2,\ldots)\) \[\begin{split}&\operatorname{Res}_{z=0}\sum_{b=1}^{s}\big{(}(-1)^{ \lfloor\underline{k}+\overline{\underline{k}}\rfloor_{b-1}}z^{p\lambda_{b}+k_ {b}-\overline{k}_{b}-2+2\delta_{bs}}e^{z\cdot(t^{(b)}-\overline{t}^{(b)})}e^{ z^{-1}\cdot(\tilde{\underline{\alpha}}_{\overline{t}}(b)-\tilde{\delta}_{t^{(b)}})} \tau_{\underline{k}+\underline{e}_{s}-\underline{e}_{b},\underline{\ell}, \alpha}(t,0)\tau_{\overline{\underline{k}}+\underline{e}_{b}-\underline{e}_{s },\overline{\underline{\ell}},\alpha}(\overline{t},0)+\\ &+(-1)^{\lfloor\underline{k}+\overline{\underline{k}}\rfloor_{b- 1}}z^{p\lambda_{b}-k_{b}+\overline{k}_{b}-2-2\delta_{bs}}e^{(-z)\cdot(t^{(b)} -\overline{t}^{(b)})}e^{(-z)^{-1}\cdot(\tilde{\partial}_{t^{(b)}}-t\tilde{ \underline{\alpha}}_{\overline{t}}(b))}\tau_{\underline{\underline{k}}+ \underline{e}_{s}+\underline{e}_{b},\underline{\ell},\alpha}(t,0)\tau_{ \overline{\underline{k}}-\underline{e}_{b}-\underline{e}_{s},\overline{\underline {\ell}}}\overline{\alpha}(\overline{t},0)\big{)}dz\\ &=-(-1)^{\lfloor\underline{k}+\overline{\underline{k}}\rfloor} \big{\{}\sum_{j=1}^{p-1}\frac{\partial\tau_{\underline{k}+\underline{e}_{s}}(t,\xi)}{\partial\xi_{j}}\big{|}_{\xi=0}\frac{\partial\tau_{\underline{k}- \underline{e}_{s}}(\overline{t},\overline{\xi})}{\partial\overline{\xi}_{p-j }}\big{|}_{\overline{\xi}=0}\\ &\qquad+(1-\delta_{p0})\frac{1}{\sqrt{2}}\big{(}\tau_{\underline{ k}+\underline{e}_{s}}(t,0)\frac{\partial\tau_{\overline{\underline{k}}- \underline{e}_{s}}(\overline{t},\overline{\xi})}{\partial\overline{\xi}_{p}} \big{|}_{\overline{\xi}=0}+\frac{\partial\tau_{\underline{k}+\underline{e}_{s }}(t,\xi)}{\partial\xi_{p}}\big{|}_{\xi=0}\tau_{\overline{\underline{k}}- \underline{e}_{s}}(\overline{t},0)\big{)}\big{\}}\\ 
&\quad+\delta_{p0}[\frac{1}{2}-\frac{1}{2}(-1)^{\lfloor\underline {k}+\overline{\underline{k}}\rfloor}]\tau_{\underline{k}+\underline{e}_{s}}(t,0)\tau_{\overline{\underline{k}}-\underline{e}_{s}}(\overline{t},0).\end{split} \tag{7.49}\] As before, replace all \(t_{1}^{a}\) by \(t_{1}^{a}+x\) and define \(V^{\pm}(\underline{\alpha};x,t,z)\) as before, and define the column vector \(T^{\pm j}(\underline{\alpha};x,t)=(W^{\pm j}(\underline{\alpha};x,t)_{i})_{1 \leq i\leq 2s}\) again by (7.36) and (7.37) Note that again (7.33) holds. Then \[\begin{split}\operatorname{Res}_{z=0}&\sum_{j=1}^{s}V ^{+}(\underline{\alpha};x,t,z)\left(z^{p\lambda_{j}}E_{jj}-(-z)^{p\lambda_{j}} E_{s+j,s+j}\right)V^{-}(\underline{\beta};\overline{x},\overline{t},z)^{t}dz=\\ &=-T^{+0}(\underline{\alpha};x,t)T^{-0}(\underline{\beta}; \overline{x},\overline{t})^{t}+\sum_{j=0}^{p}T^{+j}(\underline{\alpha};x,t)T ^{-(p-j)}(\underline{\beta};\overline{x},\overline{t})^{t}.\end{split} \tag{7.50}\] Let \(\partial=\partial_{x}\) and using the fundamental lemma (see [15], Lemma 4.1), we deduce the following expression for the matrix pseudo-differential operators \[\begin{split}\sum_{j=1}^{s}&(P^{+}(\underline{ \alpha};x,t,\partial)R^{+}(\underline{\alpha};\partial)\left(\partial^{p \lambda_{j}}E_{jj}-(-\partial)^{p\lambda_{j}}E_{s+j,s+j}\right)R^{-}( \underline{\beta};x,t,\partial)^{*}P^{-}(\underline{\beta};x,t,\partial)^{*})_ {-}=\\ &-T^{+0}(\underline{\alpha};x,t)\partial T^{-0}(\underline{\beta}; x,t)^{t}J\ +\sum_{j=0}^{p}T^{+j}(\underline{\alpha};x,t)\partial T^{-(p-j)}(\underline{\beta}; x,t)^{t}J.\end{split} \tag{7.51}\] As in the previous section, if we take take \(p=0\) and \(\underline{\alpha}=\underline{\beta}\) and \(J=\sum_{j=1}^{s}E_{jj}-E_{s+j,s+j}=J^{-1}\), we obtain again (4.33) However, taking \(p\neq 0\), we get \[\begin{split}\sum_{j=1}^{s}&(P^{+}(\underline{ \alpha};x,t,\partial)\left(\partial^{p\lambda_{j}}E_{jj}+(-\partial)^{p\lambda _{j}}E_{s+j,s+j}\right)P^{+}(\underline{\alpha};x,t,\partial)^{-1})_{-}=\\ &\sum_{k=0}^{p}T^{+k}(\underline{\alpha};x,t)\partial^{-1}T^{-(p- k)}(\underline{\alpha};x,t)^{t}J.\end{split} \tag{7.52}\] We deduce from (7.35) that \[\sum_{j=1}^{s}(L^{p\lambda_{j}}E^{j})_{-}=\sum_{k=0}^{p}T^{+k}(\underline{ \alpha};x,t)\partial^{-1}T^{-(p-k)}(\underline{\alpha};x,t)^{t}J. \tag{7.53}\] From this it is obvious that the integral part of \({\cal L}=\sum_{j=1}^{s}L^{\lambda_{j}}E^{j}\) is equal to \[{\cal L}_{-}=T^{+0}(\underline{\alpha};x,t)\partial^{-1}T^{-1}(\underline{ \alpha};x,t)^{t}J+T^{+1}(\underline{\alpha};x,t)\partial^{-1}T^{-0}(\underline {\alpha};x,t)^{t}J. \tag{7.54}\] As before, \[{\cal L}^{*}=K{\cal L}K^{-1}.\] Again we have \[\frac{\partial T^{+k}}{\partial t_{i}^{j}}=\left(L^{i}D^{j}\right)_{+}\left(T ^{+k}\right),\quad\frac{\partial T^{-k}J}{\partial t_{i}^{j}}=-\left(L^{i}D^{ j}\right)_{+}^{*}\left(T^{-k}J\right),\quad\text{for $k=0,1$.} \tag{7.55}\] Thus \(W^{+k}\), resp. \(W^{-k}J\), for \(k=0,1\), is an eigenfunction, resp. adjoint eigenfunction, for the operators \(L\) and \(D^{j}\), \(j=1,2,\ldots,s\). We obtain this from \[\frac{\partial W^{+0}}{\partial t_{i}^{j}}\partial^{-1}W^{-1^{t}}+\frac{ \partial W^{+1}}{\partial t_{i}^{j}}\partial^{-1}W^{-0^{t}}=\left(L^{i}D^{j} \right)_{+}(W^{+0})\partial^{-1}W^{-1^{t}}+\left(L^{i}D^{j}\right)_{+}(W^{+1} )\partial^{-1}W^{-0^{t}}.\] Namely, we can eliminate one of the two terms, by letting the matrices of this equation act on a specific vector of functions of the variables \(t\). 
In this way we obtain the first equation of (7.55). The second formula of (7.55) is obtained in a similar fashion.

**Case 2, \(\mu\neq\emptyset\) (first approach).** This case is similar to the \(\mu\neq\emptyset\) (first approach) case for the embedding in \(d_{\infty}\). The only difference is the value of \(\delta\). If \(r\) is even (resp. odd), then in the reduction of the DKP hierarchy \(\delta=-\frac{1}{2}\) (resp. \(\delta=0\)), while in this case, for the reduction of the BKP hierarchy, \(\delta=0\) (resp. \(\delta=-\frac{1}{2}\)) when \(r\) is even (resp. odd).

**Case 3, \(\mu\neq\emptyset\) (second approach).** Here also Remark 7.3 holds. This case is similar to the \(\mu\neq\emptyset\) (second approach) case for the embedding in \(d_{\infty}\). In this case, however, \(\delta=0\) (resp. \(\delta=-\frac{1}{2}\)) when \(r\) is even (resp. odd). Again \({\cal L}\), given by (7.44), is equal to (7.45) and satisfies the same Lax equations as \(L\), viz. (4.55).

### \(\hat{sp}_{2n}\) and the \((\lambda,\mu)\)-reduced CKP hierarchy

The group \(\hat{SP}_{2n}\) acts on \(F_{c}\) and leaves the module \(F_{c}^{\overline{\alpha}}\) invariant. It consists of all elements \(g\in C_{\infty}\) that satisfy \(gb_{j}g^{-1}=\sum_{i}a_{ij}b_{i}\), with \(\sum_{k}(-1)^{k-\frac{1}{2}}a_{ki}a_{-k,j}=(-1)^{i-\frac{1}{2}}\delta_{i,-j}\), hence \(\sum_{k}(-1)^{k+\frac{1}{2}}a_{ik}a_{j,-k}=(-1)^{i+\frac{1}{2}}\delta_{i,-j}\), and \(a_{i+2n,j+2n}=a_{ij}\). Then
\[(g\otimes g)\sum_{i\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}b_{i}\otimes b_{p(2n)-i} =\sum_{i\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}gb_{i}g^{-1}g\otimes gb_{p(2n)-i}g^{-1}g\]
\[=\sum_{i,j,k\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}a_{ji}a_{k,p(2n)-i}b_{j}g\otimes b_{k}g\]
\[=\sum_{i,j,k\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}a_{ji}a_{k-p(2n),-i}b_{j}g\otimes b_{k}g\]
\[=\sum_{j\in\frac{1}{2}+\mathbb{Z}}(-1)^{j+\frac{1}{2}}b_{j}g\otimes b_{p(2n)-j}g\]
\[=\sum_{j\in\frac{1}{2}+\mathbb{Z}}(-1)^{j+\frac{1}{2}}b_{j}\otimes b_{p(2n)-j}(g\otimes g).\]
Since \(\sum_{i\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}b_{i}|0\rangle\otimes b_{p(2n)-i}|0\rangle=0\), we obtain
\[\sum_{i\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}b_{i}\tau_{0}\otimes b_{p(2n)-i}\tau_{0}=0,\quad p=0,1,2,\ldots \tag{7.56}\]
In a similar way we deduce that
\[\sum_{i\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}b_{i}\tau_{1}\otimes b_{p(2n)-i}\tau_{1}=0,\quad p=1,2,\ldots,\quad\text{and }S_{c}^{2}(\tau_{1}\otimes\tau_{1})=-4T_{1}\otimes T_{1}.\]
In terms of \((\lambda,\mu)\) equation (7.56) turns into
\[\sum_{a=1}^{s}\sum_{i\in\frac{1}{2}+\mathbb{Z}}\left(b_{i}^{-a}\tau_{0}\otimes b_{p\lambda_{a}-i}^{+a}\tau_{0}-b_{i}^{+a}\tau_{0}\otimes b_{p\lambda_{a}-i}^{-a}\tau_{0}\right)+\sum_{c=s+1}^{r+s}\sum_{i\in\frac{1}{2}+\mathbb{Z}}(-1)^{i+\frac{1}{2}}b_{i}^{c}\tau_{0}\otimes b_{2p\mu_{c-s}-i}^{c}\tau_{0}=0. \tag{7.57}\]
In terms of the bosonic fields, this is
\[\begin{split}\operatorname{Res}_{z=0}&\big{\{}\sum_{a=1}^{s}z^{p\lambda_{a}}\left(b^{-a}(z)\tau_{0}\otimes b^{+a}(z)\tau_{0}-b^{+a}(z)\tau_{0}\otimes b^{-a}(z)\tau_{0}\right)+\\ &+\sum_{c=s+1}^{r+s}z^{2p\mu_{c-s}}b^{c}(z)\tau_{0}\otimes b^{c}(-z)\tau_{0}\big{\}}dz=0.\end{split} \tag{7.58}\]
We can rewrite (7.57) for the wave function introduced in (4.61) (\(p=0,1,\ldots\))
\[\operatorname{Res}_{z=0}V(x,t,z)\left(\sum_{a=1}^{s}\left(z^{p\lambda_{a}}E_{aa}+(-z)^{p\lambda_{a}}E_{s+a,s+a}\right)+\sum_{i=1}^{r}z^{2p\mu_{i}}E_{2s+i,2s+i}\right)V(\overline{x},\overline{t},-z)^{t}dz=0.
\tag{7.59}\]
This equation for \(p=1\) is equivalent to the fact that
\[\mathcal{L}:=\sum_{a=1}^{s}L^{\lambda_{a}}E^{a}+\sum_{i=1}^{r}L^{2\mu_{i}}C^{2s+i}=(\mathcal{L})_{+}. \tag{7.60}\]
This \({\cal L}\) also satisfies the Lax equations
\[\frac{\partial{\cal L}}{\partial t_{k}^{a}}=[(L^{k}D^{a})_{+},{\cal L}],\quad\frac{\partial{\cal L}}{\partial t_{k}^{c}}=[(L^{k}C^{s+c})_{+},{\cal L}],\quad a=1,\ldots,s,\ c=s+1,\ldots,s+r.\]
Using (7.60), (4.63), (4.66) and (4.68), we find that
\[{\cal L}^{*}=(N+I){\cal L}(N+I).\]

## 8 Hierarchies of KP type related to \(x_{\infty}\) and \(\hat{\mathfrak{g}}^{(2)}\) reductions

The Lie group that corresponds to \(\hat{gl}_{n}^{(2)}\) (resp. \(\hat{so}_{2n}^{(2)}\)) is the twisted loop group \(\hat{GL}_{n}^{(2)}\) (resp. \(\hat{SO}_{2n}^{(2)}\)) that preserves the bilinear form on \({\mathbb{C}}[t,t^{-1}]^{n}\) (resp. \({\mathbb{C}}[t,t^{-1}]^{2n}\)):
\[({\bf v}(t),{\bf w}(t))={\rm Res}\,{\bf v}(-t)^{t}J_{n}{\bf w}(t)\frac{dt}{t},\quad({\rm resp.}\ ({\bf v}(t),{\bf w}(t))={\rm Res}\,{\bf v}(-t)^{t}J_{2n}O{\bf w}(t)\frac{dt}{t}).\]
Hence, an element \(G(t)\in\hat{GL}_{n}^{(2)}\) (resp. \(G(t)\in\hat{SO}_{2n}^{(2)}\)) satisfies
\[J_{n}G(-t)^{t}J_{n}G(t)=I_{n},\quad({\rm resp.}\ OJ_{2n}G(-t)^{t}J_{2n}OG(t)=I_{2n}).\]
Date, Jimbo, Kashiwara and Miwa obtain in [5], via the embedding of \(\hat{\mathfrak{g}}^{(2)}\subset d_{\infty}\) or \(b_{\infty}\), so-called reduced DKP (in [5] this hierarchy corresponds to the 2-component BKP) and BKP hierarchies. This means in the one component case that one obtains the additional equations
\[\sum_{i\in{\mathbb{Z}}}\sigma^{k}(\tilde{\phi}_{i})\tau\otimes\tilde{\phi}_{kP-i}\tau=0,\quad({\rm resp.}\ \sum_{i\in{\mathbb{Z}}}\tilde{\phi}_{i}\tau\otimes\tilde{\phi}_{kP-i}\tau=0),\quad k=1,2,\ldots \tag{8.1}\]
in the BKP case, with \(P=2m+1\) (resp. \(P=2m\)) for \(\hat{gl}_{2m+1}^{(2)}\) (resp. \(\hat{so}_{2m}^{(2)}\)), and
\[\sum_{i\in\frac{1}{2}+{\mathbb{Z}}}\sigma^{k}(\phi_{i})\tau_{a}\otimes\phi_{2km-i}\tau_{a}=0,\quad k=1,2,\ldots \tag{8.2}\]
in the DKP case for \(\hat{gl}_{2m}^{(2)}\). In the case of \(\hat{so}_{2n}^{(2)}\) this is clear. Because the matrices in \(b_{\infty}\) that correspond to \(\hat{so}_{2n}^{(2)}\) are \(2n\) periodic, the same proof as in the untwisted case can be used. For \(\hat{gl}_{n}^{(2)}\), one can use the following observation. We consider \(\hat{GL}_{2m}^{(2)}\). This group leaves the module \(F_{d}^{\overline{a}}\) invariant. It consists of all elements \(g\in D_{\infty}\) that satisfy \(g\phi_{i}g^{-1}=\sum_{j}a_{ji}\phi_{j}\), with \(\sum_{k}a_{ki}a_{-k,j}=\delta_{i,-j}\), and hence \(\sum_{k}a_{ik}a_{j,-k}=\delta_{i,-j}\), and \(a_{i+2pm,j+2pm}=J^{p}a_{ij}J^{p}=\sigma_{i}^{p}\sigma_{j}^{p}a_{ij}\). Recall from Remark 3.2 that \(\sigma_{i}=\sigma_{-i}\).
Thus, for \(p=1,2,\ldots\), we find that \[(g\otimes g)\sum_{i\in\frac{1}{2}+\mathbb{Z}}J^{p}\phi_{i}\otimes \phi_{2pm-i} =\sum_{i\in\frac{1}{2}+\mathbb{Z}}\sigma_{i}^{p}g\phi_{i}g^{-1}g \otimes g\phi_{2pm-i}g^{-1}g\] \[=\sum_{i,j,k\in\frac{1}{2}+\mathbb{Z}}\sigma_{i}^{p}a_{ji}a_{k,2pm -i}\phi_{j}g\otimes\phi_{k}g\] \[=\sum_{i,j,k\in\frac{1}{2}+\mathbb{Z}}\sigma_{i}^{p}\sigma_{k-2pm }^{p}\sigma_{-i}^{p}a_{ji}a_{k-2pm,-i}\phi_{j}g\otimes\phi_{k}g\] \[=\sum_{i,j,k\in\frac{1}{2}+\mathbb{Z}}\sigma_{k-2pm}^{p}a_{ji}a_{ k-2pm,-i}\phi_{j}g\otimes\phi_{k}g\] \[=\sum_{j\in\frac{1}{2}+\mathbb{Z}}\sigma_{j}^{p}\phi_{j}g\otimes \phi_{2pm-j}g\] \[=\sum_{j\in\frac{1}{2}+\mathbb{Z}}J^{p}\phi_{j}\otimes\phi_{2pm-j }(g\otimes g).\] Since also \[\sum_{i\in\frac{1}{2}+\mathbb{Z}}J^{p}\phi_{i}|\overline{a}\rangle\otimes\phi _{2pm-i}|\overline{a}\rangle=0,\] we have that elements \(\tau_{a}\in F_{d}^{\overline{a}}\), which belong to the \(\hat{GL}_{2m}^{(2)}\)-group orbit of \(|\overline{a}\rangle\) satisfy \[\sum_{i\in\frac{1}{2}+\mathbb{Z}}J^{p}\phi_{i}\tau_{a}\otimes\phi_{2pm-i}\tau_ {a}=0,\quad\text{for $p=0,1,2,\ldots$}\] Finally, let \(J^{p}\otimes 1\) act on this equation, this gives (8.1). In a similar way we find that \(\tau\in F_{b}\), which belong to the \(\hat{GL}_{2m+1}^{(2)}\)-group orbit (resp. \(\hat{SO}_{2n}^{(2)}\)-group orbit) of \(|0\rangle\), satisfies (4.3) and the first equation (resp. second equation) of (8.1). ### \(\hat{gl}_{2m}^{(2)}\) and the \((\lambda,\mu)\)-reduced DKP hierarchy In this case we assume that \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{s})\), where all parts \(\lambda_{i}=2\ell_{i}\) are even, and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{r})\), where all parts \(\mu_{i}=2m_{i}+1\) are odd, such that \(|\lambda|+|\mu|=2m\). This means that \(r\) can only be \(r=2R\) even. Transforming the equations (4.6) and (8.2) to the \((r,s)\) multicomponent description we obtain \[\begin{split}\operatorname{Res}_{z=0}&\big{\{}\sum _{b=1}^{s}z^{k\ell_{b}}\left(\psi^{+b}(z)\tau_{a}\otimes\psi^{-b}(z)\tau_{a}+ \psi^{-b}(z)\tau_{a}\otimes\psi^{+b}(z)\tau_{a}\right)+\\ &\qquad+\frac{1}{z}\sum_{c=s+1}^{2R+s}z^{(2m_{c-s}+1)k}\tilde{ \phi}^{c}(z)\tau_{a}\otimes\tilde{\phi}^{c}(-z)\tau_{a}\big{\}}dz=0,\quad k=0,1, \ldots.\end{split} \tag{8.3}\] This resembles the \(\hat{so}_{2n}\) reduction (7.15) of the DKP hierarchy. However in this case the factor \(z^{(2m_{c-s}+1)k}\) in front of the neutral twisted fields has an odd power of \(z\) while this power in the \(so_{2n}\)-case is always even. This means that the same approach works as in subsection 7.2, but then with all \(\lambda_{b}\) and \(\mu_{i}\)'s replaced by \(\ell_{b}\) and \(\frac{2m_{c-s}+1}{2}\), respectively. We refer the reader to that subsection for the construction of wave function and Lax operators. In the **first approach to the DKP hierarchy**, we find that the operator \[\mathcal{L}(\underline{\alpha};x,t)=\sum_{b=1}^{s}L(\underline{ \alpha};x,t)^{\ell_{b}}E^{b}(\underline{\alpha};x,t)\] \[\quad=\mathcal{L}(\underline{\alpha};x,t)_{+}+\frac{1}{2}\sum_{c =s+1}^{s+r}\sum_{n=0}^{2m_{c-s}+1}W^{+}(\underline{\alpha};x,t)^{n}E_{c-s,c-s} \partial^{-1}\left(JW^{-}(\underline{\alpha};x,t)^{2m_{c-s}-n+1}\right)^{t},\] satisfies \(\frac{\partial\mathcal{L}}{\partial t_{j}^{\alpha}}=[(L^{j}D^{a})_{+}, \mathcal{L}]\). 
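Before turning to the second approach, let us make the reduction data of this subsection concrete (an illustrative choice of ours): for \(m=3\) one may take \(s=1\), \(\lambda=(4)\) and \(r=2\), \(\mu=(1,1)\), so that \(\ell_{1}=2\) and \(m_{1}=m_{2}=0\); then all parts of \(\lambda\) are even, all parts of \(\mu\) are odd, \(|\lambda|+|\mu|=4+1+1=6=2m\), and \(r=2R\) with \(R=1\), as required.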
In the **second approach to the DKP hierarchy**, where again Remark 7.3 holds, we find that \[\mathcal{L}(\underline{\alpha};x,t,\partial)=L(\underline{\alpha};x,t, \partial)^{m_{r}}D^{s+r}(\underline{\alpha};x,t,\partial)+\sum_{b=1}^{s}L( \underline{\alpha};x,t,\partial)^{\ell_{b}}E^{b}(\underline{\alpha};x,t, \partial),\] satisfies the same Lax equation as \(L\), viz. (4.55), and \[\mathcal{L}(\underline{\alpha};x,t,\partial)=\mathcal{L}( \underline{\alpha};x,t,\partial)_{\geq}+\\ +\frac{1}{2}\sum_{c=s+1}^{s+r-1}\sum_{j=0}^{2m_{c-s}+1}\hat{W}^{ +}(\underline{\alpha};x,t)^{j}E_{c-s,c-s}\partial^{-1}\left(\hat{W}^{-}( \underline{\alpha};x,t)^{2m_{c-s}-j+1}\right)^{t}J(\partial)^{-1}.\] (\hat{gl}_{2m+1}^{(2)}\) and \(\hat{so}_{2n}^{(2)}\) and the \((\lambda,\mu)\)-reduced BKP hierarchy In the case of \(\hat{gl}_{2m+1}^{(2)}\) (resp. \(\hat{so}_{2n}^{(2)}\)) we assume that \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{s})\), where all parts \(\lambda_{i}=2\ell_{i}\) are even, and \(\mu=(\mu_{1},\mu_{2},\ldots,\mu_{r})\), where all parts \(\mu_{i}=2m_{i}+1\) are odd (resp. \(\mu_{i}=2m_{i}\) are even), such that \(|\lambda|+|\mu|=2m+1\) (resp. \(|\lambda|+|\mu|=n\) and \(r=2R+1\) odd). This means in the \(\hat{gl}_{2m+1}^{(2)}\) that \(r\) can only be \(r=2R+1\) odd. Transforming the equations (4.3) and (8.1) to the \((r,s)\) multicomponent description we obtain for \(\hat{gl}_{2m+1}^{(2)}\) and \(\hat{so}_{2n}^{(2)}\): \[\begin{split}\mathrm{Res}_{z=0}\big{\{}&\sum_{b=1}^{s} z^{pn_{b}}\left(\psi^{+b}(z)\tau\otimes\psi^{-b}(z)\tau+\psi^{-b}(z)\tau\otimes \psi^{+b}(z)\tau\right)+\\ &+\frac{1}{z}\sum_{c=s+1}^{2R+s+1}z^{pn_{c}}\tilde{\phi}^{c}(z) \tau\otimes\tilde{\phi}^{c}(-z)\tau\big{\}}dz=\frac{\delta_{p,0}}{2}\tau \otimes\tau,\quad p=0,1,\ldots,\end{split} \tag{8.4}\] where \(n_{b}=\ell_{b}\) and \(n_{c}=2m_{c-s}+1\) for \(\hat{gl}_{2m+1}^{(2)}\), and \(n_{b}=\lambda_{b}\) and \(n_{c}=2\mu_{c-s}\) for for \(\hat{so}_{2n}^{(2)}\), with \(1\leq b\leq s<c\leq r+s\). This equation resembles (7.47), the \(\hat{so}_{2n+1}\) reduction of the BKP hierarchy, however in this case we have odd powers of \(z\), in front of the neutral twisted fields, viz. we have \(z^{(2m_{c-s}+1)k}\), for \(\hat{gl}^{(2)}_{2m+1}\) while in the \(\hat{so}_{2n+1}\) case these powers are even. In the \(\hat{so}^{(2)}_{2n}\) case, this resembles also the \(\hat{so}_{2n+1}\) reduction of the BKP hierarchy. However, in this case there can only be an odd number of neutral twisted fields. **Remark 8.1**: _The automorphism \(\operatorname{Ad}O\) which defines the twisted Lie algebra \(\hat{so}^{(2)}_{2n}\), has as fixed points the Lie algebra \(so_{2n-1}\). Hence \(\hat{so}_{2n-1}\) appears as subalgebra inside of \(\hat{so}^{(2)}_{2n}\), viz. as_ \[\mathbb{C}K\oplus\bigoplus_{j\in\mathbb{Z}}t^{2j}so_{2n-1}.\] _Equation (8.4) is in fact the same as the description of the reduced BKP equation, when we reduce to this \(\hat{so}_{2n-1}\), with \(s\) pairs of charged fermion fields and \(2R\) neutral twisted fermion fields. The additional field \(\sigma(z)\), which is needed in that case, is proportional to \(\hat{\phi}^{2R+1}(z)\). However, for \(\hat{so}_{2n-1}\), the corresponding to \(\sigma(z)\) Heisenberg algebra does not lie in \(\hat{so}_{2n-1}\). 
This is no problem now: the modes of \(\alpha^{2R+1}(z)=\frac{1}{2}:\tilde{\phi}^{2R+1}(z)\tilde{\phi}^{2R+1}(-z):\) lie in \(\hat{so}^{(2)}_{2n}\); they correspond to the elements \(t^{2k+1}(e_{nn}-e_{nn})\)._ We again find the two cases of the BKP hierarchy of Subsection 4.4. However, since \(r\) is odd in both cases, the case \(\mu=\emptyset\) does not hold. In the **first approach to the BKP hierarchy**, we find that the operator \(\mathcal{L}=\sum_{b=1}^{s}L^{n_{b}}E^{b}\) satisfies the Lax equation \(\frac{\partial\mathcal{L}}{\partial t_{j}^{b}}=[(L^{j}D^{b})_{+},\mathcal{L}]\) and is equal to \[\mathcal{L}(\underline{\alpha};x,t)=\mathcal{L}(\underline{\alpha};x,t)_{+}+\frac{1}{2}\sum_{c=s+1}^{s+2R+1}\sum_{j=0}^{n_{c}}W^{+}(\underline{\alpha};x,t)^{j}E_{c-s,c-s}\partial^{-1}\left(JW^{-}(\underline{\alpha};x,t)^{n_{c}-j}\right)^{t}.\] In the **second approach to the BKP hierarchy**, where again Remark 7.3 holds, we find that the operator is equal to \[\mathcal{L}(\underline{\alpha};x,t,\partial)=L(\underline{\alpha};x,t,\partial)^{n_{s+2R+1}}D^{s+2R+1}(\underline{\alpha};x,t,\partial)+\sum_{b=1}^{s}L(\underline{\alpha};x,t,\partial)^{n_{b}}E^{b}(\underline{\alpha};x,t,\partial).\] Then clearly \(\mathcal{L}\) satisfies the same Lax equation as \(L\), viz. (4.55), and \[\mathcal{L}(\underline{\alpha};x,t,\partial)=\mathcal{L}(\underline{\alpha};x,t,\partial)_{\geq}+\frac{1}{2}\sum_{c=s+1}^{s+2R}\sum_{j=0}^{n_{c}}\hat{W}^{+}(\underline{\alpha};x,t)^{j}E_{c-s,c-s}\partial^{-1}\left(\hat{W}^{-}(\underline{\alpha};x,t)^{n_{c}-j}\right)^{t}J(\partial)^{-1}.\]
2304.02058
A Compositional Resilience Index for Computationally Efficient Safety Analysis of Interconnected Systems
Interconnected systems such as power systems and chemical processes are often required to satisfy safety properties in the presence of faults and attacks. Verifying safety of these systems, however, is computationally challenging due to nonlinear dynamics, high dimensionality, and combinatorial number of possible faults and attacks that can be incurred by the subsystems interconnected within the network. In this paper, we develop a compositional resilience index to verify safety properties of interconnected systems under faults and attacks. The resilience index is a tuple serving the following two purposes. First, it quantifies how a safety property is impacted when a subsystem is compromised by faults and attacks. Second, the resilience index characterizes the needed behavior of a subsystem during normal operations to ensure safety violations will not occur when future adverse events occur. We develop a set of sufficient conditions on the dynamics of each subsystem to satisfy its safety constraint, and leverage these conditions to formulate an optimization program to compute the resilience index. When multiple subsystems are interconnected and their resilience indices are given, we show that the safety constraints of the interconnected system can be efficiently verified by solving a system of linear inequalities. We demonstrate our developed resilience index using a numerical case study on chemical reactors connected in series.
Luyao Niu, Abdullah Al Maruf, Andrew Clark, J. Sukarno Mertoguno, Radha Poovendran
2023-04-04T18:12:35Z
http://arxiv.org/abs/2304.02058v1
A Compositional Resilience Index for Computationally Efficient Safety Analysis of Interconnected Systems ###### Abstract Interconnected systems such as power systems and chemical processes are often required to satisfy safety properties in the presence of faults and attacks. Verifying safety of these systems, however, is computationally challenging due to nonlinear dynamics, high dimensionality, and combinatorial number of possible faults and attacks that can be incurred by the subsystems interconnected within the network. In this paper, we develop a compositional resilience index to verify safety properties of interconnected systems under faults and attacks. The resilience index is a tuple serving the following two purposes. First, it quantifies how a safety property is impacted when a subsystem is compromised by faults and attacks. Second, the resilience index characterizes the needed behavior of a subsystem during normal operations to ensure safety violations will not occur when future adverse events occur. We develop a set of sufficient conditions on the dynamics of each subsystem to satisfy its safety constraint, and leverage these conditions to formulate an optimization program to compute the resilience index. When multiple subsystems are interconnected and their resilience indices are given, we show that the safety constraints of the interconnected system can be efficiently verified by solving a system of linear inequalities. We demonstrate our developed resilience index using a numerical case study on chemical reactors connected in series. ## I Introduction Safety-critical interconnected systems are widely seen in real-world applications such as power systems [1] and chemical processes [2]. Safety violations can lead to significant economic losses and severe damage to the system and/or human operators engaged with the system [3, 1, 4, 5, 6]. Therefore, it is of critical importance to verify safety properties for such large-scale or even societal-scale systems. One approach to verify safety is to use reachability analysis. Computing reachable sets for nonlinear systems is known to be undecidable [7]. Alternatively, solutions to safety verification by ensuring forward invariance of safety sets [8, 9, 10, 11] or approximating reachable sets [12, 13, 14] have been developed. However, these approaches do not scale to interconnected systems of high dimensions. Large-scale systems such as power systems generally consist of multiple interconnected subsystems, motivating the development of compositional approaches [15, 16, 17, 18]. These approaches decompose the safety verification problem into a set of problems of smaller scales formulated on the subsystems, and thus are more tractable. The approaches in [15, 16, 17, 18] assume that the systems are operated under benign environments, making the verified safety properties invalid for systems under faults and attacks. For interconnected systems, an error from one faulty or compromised subsystem could propagate and accumulate through interconnections and impact the safety of other subsystems. A naive approach to safety verification for interconnected systems operated under adversarial environments is to enumerate all possible faulty or compromised subsystems, and perform safety analysis. However, the number of possible faults or attacks that can be incurred by the interconnected system is combinatorial. At present, scalable safety verification of large-scale interconnected systems under faults and attacks has been less studied. 
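The abstract above reduces safety verification of the interconnection, once each subsystem's resilience index is available, to checking a system of linear inequalities. The snippet below is only a schematic sketch of that final step: the matrix `A` and vector `b` are made-up placeholders standing in for the interconnection-dependent inequalities, not the construction derived in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder inequality system A x <= b, assumed to encode the coupling between
# subsystems and their resilience indices (illustrative values only).
A = np.array([[ 1.0, -0.5,  0.0],
              [-0.3,  1.0, -0.2],
              [ 0.0, -0.4,  1.0]])
b = np.array([0.8, 0.6, 0.9])

# Feasibility check: minimize a zero objective subject to A x <= b, x >= 0.
result = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                 bounds=[(0, None)] * A.shape[1], method="highs")

print("compositional safety certificate found" if result.success
      else "verification inconclusive for this index assignment")
```

The appeal of such a compositional certificate is that a feasibility check of this kind is cheap compared to reachability analysis of the full interconnected nonlinear dynamics.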
In this paper, we develop a compositional safety verification approach for large-scale interconnected systems whose subsystems can be faulty or compromised by attacks. Each subsystem is subject to a safety constraint. We derive a set of conditions on the dynamics of a subsystem to guarantee its safety. We parameterize these conditions using a tuple of real numbers, termed resilience index. Our resilience index defines the amount of time that the system can safely remain in a faulty state, the amount of time required to recover from faults, and constraints on the system dynamics that must be satisfied during faulty as well as normal operation. The resilience index allows us to convert the problem of safety verification of large-scale interconnected systems to a set of algebraic computations, and thus makes safety verification feasible for large-scale systems. To summarize, this paper makes the following contributions. * We formulate a resilient index for a subsystem that experiences faults or attacks. We prove safety guarantees for a subsystem based on the resilience index. We develop a sum-of-squares optimization to compute the resilience index for a subsystem. * We derive a system of linear inequalities to quantify how the resilience index of a subsystem changes due to interconnections. Using the derived linear inequalities, we develop the conditions on the interconnections so that all subsystems are safe under faults and attacks. * We demonstrate the proposed resilience indices and their usage for safety analysis by using a numerical case study on chemical process. The rest of this paper is organized as follows. Section II
2308.13280
AtmoRep: A stochastic model of atmosphere dynamics using large scale representation learning
The atmosphere affects humans in a multitude of ways, from loss of life due to adverse weather effects to long-term social and economic impacts on societies. Computer simulations of atmospheric dynamics are, therefore, of great importance for the well-being of our and future generations. Here, we propose AtmoRep, a novel, task-independent stochastic computer model of atmospheric dynamics that can provide skillful results for a wide range of applications. AtmoRep uses large-scale representation learning from artificial intelligence to determine a general description of the highly complex, stochastic dynamics of the atmosphere from the best available estimate of the system's historical trajectory as constrained by observations. This is enabled by a novel self-supervised learning objective and a unique ensemble that samples from the stochastic model with a variability informed by the one in the historical record. The task-independent nature of AtmoRep enables skillful results for a diverse set of applications without specifically training for them and we demonstrate this for nowcasting, temporal interpolation, model correction, and counterfactuals. We also show that AtmoRep can be improved with additional data, for example radar observations, and that it can be extended to tasks such as downscaling. Our work establishes that large-scale neural networks can provide skillful, task-independent models of atmospheric dynamics. With this, they provide a novel means to make the large record of atmospheric observations accessible for applications and for scientific inquiry, complementing existing simulations based on first principles.
Christian Lessig, Ilaria Luise, Bing Gong, Michael Langguth, Scarlet Stadtler, Martin Schultz
2023-08-25T10:02:26Z
http://arxiv.org/abs/2308.13280v2
# AtmoRep: A stochastic model of atmosphere dynamics using large scale representation learning ###### Abstract The atmosphere affects humans in a multitude of ways, from loss of life due to adverse weather effects to long-term social and economic impacts on societies. Computer simulations of atmospheric dynamics are, therefore, of great importance for the well-being of our and future generations [1, 2]. Classical numerical models of the atmosphere, however, exhibit biases due to incomplete process descriptions and they are computationally highly demanding [1]. Very recent AI-based weather forecasting models [3, 4, 5, 6, 7] reduce the computational costs but they lack the versatility of conventional models and do not provide probabilistic predictions. Here, we propose AtmoRep, a novel, task-independent stochastic computer model of atmospheric dynamics that can provide skillful results for a wide range of applications. AtmoRep uses large-scale representation learning from artificial intelligence [8, 9] to determine a general description of the highly complex, stochastic dynamics of the atmosphere from the best available estimate of the system's historical trajectory as constrained by observations [10]. This is enabled by a novel self-supervised learning objective and a unique ensemble that samples from the stochastic model with a variability informed by the one in the historical record. The task-independent nature of AtmoRep enables skillful results for a diverse set of applications without specifically training for them and we demonstrate this for nowcasting, temporal interpolation, model correction, and counterfactuals. We also show that AtmoRep can be improved with additional data, for example radar observations, and that it can be extended to tasks such as downscaling. Our work establishes that large-scale neural networks can provide skillful, task-independent models of atmospheric dynamics. With this, they provide a novel means to make the large record of atmospheric observations accessible for applications and for scientific inquiry, complementing existing simulations based on first principles. atmospheric dynamics, large scale representation learning, stochastic dynamical systems, foundational models ## Main The atmosphere and its dynamics have a significant impact on human well-being. Adverse weather effects led to the loss of over 2 million lives in the last 50 years and caused economic damages of more than 4.3 trillion dollars [11]. The weather also influences many daily aspects of our societies, such as agricultural decision making, the efficiency of industrial processes, or the availability of renewable energies such as solar and wind power. The atmosphere, furthermore, plays a critical role for Earth's climate and hence for our understanding of and adaptation to climate change. An accurate and equitable modeling of atmospheric dynamics is consequently of critical importance to allow for evidence-based decision making that improves human well-being and minimizes adverse impacts for current and future generations [12]. Classical models for atmospheric dynamics are based on the fundamental laws of physics, e.g. conservation of mass and energy [13, 14]. Because the resulting equations cannot be solved analytically, computer simulations play a central role in describing the dynamics [15, 16, 17, 18]. 
Despite tremendous progress in the last decades [1], current simulations still exhibit deficiencies in describing relevant physical processes [19, 20, 21, 22], leading, for example, to inaccurate representations of extreme events with strong adverse impacts. Current simulations also suffer from high computational and energy costs. In our work, we develop a novel yet powerful approach to modeling atmospheric dynamics that combines the large observational record with the rapid advances in artificial intelligence [23, 24]. We use large scale representation learning [8, 9], a state-of-the-art methodology in machine learning, to train a statistical model of atmospheric dynamics that is task-agnostic and can be used for a wide range of applications with either no or only little task-specific additional training. The model, called AtmoRep, is given by a large generative neural network with 3.5 billion parameters and trained using the ERA5 reanalysis [10], which provides the most complete assimilation of observations into an estimate of the historical trajectory of the atmosphere that is available. Through the training with observation-based data, AtmoRep can learn effects and dynamics that are present in these but very complex or computationally expensive to model using traditional approaches. We demonstrate the versatility and utility of AtmoRep through skillful nowcasting, temporal interpolation, and model correction as well as the generation of counterfactuals, for example how an atmospheric state would have evolved in a different year or region. These intrinsic capabilities of AtmoRep can be achieved without task-specific training and they hence provide an analogue of the zero-shot abilities that have first been observed for foundational models in natural language processing [25]. AtmoRep generalizes these for the first time to Earth system science. We also demonstrate how our model can be extended for other tasks, e.g. with a task-specific tail network, by using it for downscaling where we achieve highly competitive results. Finally, we show that AtmoRep can be bias corrected using observational data to further improve the representation of the dynamics in the network. AtmoRep uses a flexible and versatile neural network architecture that can be employed regionally or globally and with different physical fields. This improves over existing large-scale AI-based weather forecasting models that are inherently global Figure 1: The AtmoRep model provides a numerical representation \(p_{\theta}(y|x,\alpha)\) of the conditional probability \(p(y|x,\alpha)\) for atmospheric states \(x\),\(y\) subject to external conditions \(\alpha\), e.g. the time of \(x\),\(y\) or their location on the globe. It is implemented as a transformer neural network with 3.5 billion parameters and trained from the ERA5 reanalysis (top left). For training, local space-time neighborhoods are randomly sampled. The neighborhoods are subdivided into smaller patches, called tokens, and the self-supervised learning task is to reconstruct randomly masked or distorted patches (bottom left, gray patches). An ensemble of prediction heads is used to sample from the AtmoRep core model and provide probabilistic predictions for possible states consistent with the un-masked tokens (bottom right). The ensemble spread that is learned during training arises from the intrinsic variability of the data, i.e. that similar atmospheric states \(x(t)\) have different associated states \(y\), for example with a fixed offset in time (top right). 
and use a fixed number of variables. An accurate and robust representation of the dynamics is learned in AtmoRep using a novel self-supervised training protocol that extends existing ones [8, 26] to four-dimensional space-time and that is one of the keys to AtmoRep's intrinsic capabilities. A further innovation is AtmoRep's ensemble that has a variability that derives from those in the training data. Through this, it differs in an essential way from existing, perturbation-based ensemble methods in both conventional and AI-based models, e.g. [1, 20, 27, 28]. AtmoRep demonstrates, for the first time, the principle and potential of AI-based, task-agnostic atmospheric models as a complement to traditional ones, such as general circulation models. As a statistical approach, AtmoRep generates samples from the learned distribution, which is derived from the available observational record and hence can include effects and phenomena that are not modeled by existing equation-based simulations. We believe that a further development of AtmoRep will allow for the methodology to become an important tool in a wide range of applications where atmospheric dynamics play a role. ### Stochastic modeling of atmospheric dynamics For our work, we build on the description of the atmosphere as a stochastic dynamical system, cf. [19, 29, 30, 31, 32, 33]. The dynamics are, in principal, determined by the deterministic laws of classical mechanics and thermodynamics. A stochastic modeling is, however, appropriate because of the strong sensitivity of the time evolution on the initial conditions, practical limits on the availability of observations to constrain these [34], and a wide range of small scale process whose feedback is best represented stochastically [30, 31]. Given an approximate atmospheric input state \(\tilde{x}\), we thus model physically consistent states \(\tilde{y}\) with the probability distribution \[\tilde{p}\big{(}\tilde{y}|\tilde{x},\tilde{\alpha}\big{)}. \tag{1}\] For example, \(\tilde{y}\) can be a future state for a given initial condition \(\tilde{x}\); alternatively, \(\tilde{y}\) and \(\tilde{x}\) can be local states defined at the same time but at different locations. The external conditions \(\tilde{\alpha}\) complement \(\tilde{x}\) and can describe, for instance, its year or boundary conditions such as global forcings. Eq. 1 is more abstract than, for example, models based on partial differential equations. However, this allows the model to also include processes that are difficult to capture with other approaches. Since Eq. 1 is a highly complex, instationary probability distribution with no known analytic description, we introduce the approximation \[p_{\theta}\big{(}y|x,\alpha\big{)}\approx\tilde{p}\big{(}\tilde{y}|\tilde{x},\tilde{\alpha}\big{)} \tag{2}\] where \(p_{\theta}(y\,|x,\alpha)\) is a large, generative transformer neural network [35] with 3.5 billion parameters \(\theta\). The network provides a general, task-agnostic, stochastic model of atmospheric dynamics that we refer to as AtmoRep, see Fig. 1 for an overview. The AtmoRep model \(p_{\theta}(y\,|x,\alpha)\) is determined by pre-training on observation-based data, specifically the ERA5 reanalysis [10]. This enables \(p_{\theta}(y\,|x,\alpha)\) to include effects and dynamical behavior that are contained in the data but are difficult or computationally expensive to model using first principles. To learn a physical and stochastically consistent model, AtmoRep provides ensemble predictions for the state \(y\). 
The ensemble is trained from only the single, high-resolution trajectory in ERA5 so that its spread reflects the intrinsic variability in the training data (Fig. 1 top-right). For AtmoRep to be an unbiased estimate of the true distribution, we employ a self-supervised pre-training objective that minimizes the distance \(\mathcal{D}(\tilde{p}(y,x,\alpha),p_{\theta}(y,x,\alpha))\) between the data distribution \(\tilde{p}(y,x)\) and \(p_{\theta}(y,x,\alpha)\) with a Monte Carlo estimate over the training data set. The input to AtmoRep is an atmospheric state \(x\) given by wind velocity (or vorticity and divergence), vertical velocity, temperature, specific humidity and total precipitation in a local space-time neighborhood of, for example, \(36\,\mathrm{h}\times 5\,\mathrm{vertical\ levels}\times 1800\,\mathrm{km}\times 3600\, \mathrm{km}\), respectively (Fig. 1, left). In applications, the network can operate on different neighborhood sizes than during pre-training and the modular design of AtmoRep allows for task-specific configurations with different physical fields, see the Methods section. For processing by the transformer-based neural network, the space-time neighbourhood is tiled into smaller patches, which are known as tokens. The label-free, self-supervised pre-training objective is to provide ensemble predictions for a randomly selected subset of the tokens that are masked or distorted (see Fig. 1, bottom-left). The 4-dimensional masking with large masking ratios of up to 0.9 enables the network to learn the general relationship of local atmospheric information in space and time, and hence of atmospheric dynamics. Further details on the network architecture of AtmoRep and its training are presented in the Methods section and in the supplementary material. ### Intrinsic Capabilities AtmoRep's model formulation as \(p_{\theta}\big{(}y\left|x,\alpha\right)\), i.e. as a numerical approximation for the probability distribution \(\tilde{p}\big{(}\tilde{y}|\tilde{x},\tilde{\alpha}\big{)}\) over atmospheric states, intrinsically includes a variety of relevant applications that can be implemented directly using a pre-trained model. For example, when \(y\) is in the future with respect to \(x\) then \(p_{\theta}\big{(}y\left|x,\alpha\right)\) becomes a forecasting model; when \(y\) corresponds to missing information within \(x\) in space or time, then the model performs spatio-temporal interpolation; and when \(x\) is output from an equation-based simulation, AtmoRep can be used to correct it towards the observationally better-constrained ERA5. The task is in each case implemented through the masked token model by specifying the tokens corresponding to the sought after information \(y\) as masked in the input to AtmoRep, see cf. Fig. 1, bottom left. The model's prediction then provides the estimate for \(y\). When an input state \(x\) is used with incorrect but statistically consistent external information \(\alpha\), then AtmoRep also allows for the generation of counterfactuals, that is, for example, a prediction of how \(x\) would have evolved in a different historical regime or at a different location. Atmorep serves in this case as a statistical sample generator whose distribution is controlled by the external conditions \(\alpha\). 
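The correspondence between tasks and masking patterns described above can be made concrete with a small sketch. The token-grid shape, task names, and masking ratio below are illustrative assumptions rather than AtmoRep's actual interface; only the underlying idea, choosing which space-time tokens are hidden and letting the model reconstruct them, is taken from the text.

```python
import numpy as np

def make_task_mask(n_t: int, n_y: int, n_x: int, task: str) -> np.ndarray:
    """Boolean mask over a (time, lat, lon) token grid; True marks a masked token.

    Illustrative only: token layout and task names are assumptions, not the
    AtmoRep API.
    """
    mask = np.zeros((n_t, n_y, n_x), dtype=bool)
    if task == "forecast":           # nowcasting: mask the future-most time step
        mask[-1] = True
    elif task == "interpolation":    # temporal interpolation: mask a middle time step
        mask[n_t // 2] = True
    elif task == "pretraining":      # random masking; the paper quotes ratios up to ~0.9
        mask = np.random.rand(n_t, n_y, n_x) < 0.75
    return mask

# Example: a 12 x 6 x 12 token neighbourhood configured for nowcasting.
print(make_task_mask(12, 6, 12, "forecast").sum(), "tokens masked")
```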
The foregoing applications can be realized with AtmoRep with only a pre-trained model and without task-specific training since they are implicitly contained in the pre-training objective, which is designed to learn \(p_{\theta}(y\left|x,\alpha\right)\) using the extended masked token model. We therefore refer to these tasks as intrinsic capabilities. They are the analogue of the zero-shot abilities of large language models [25] that are also tasks implicitly contained in the training objective (e.g. translation or text completion). A summary of AtmoRep's skill for different intrinsic capabilities is presented in Fig. 2 and they are discussed below. Experimental protocols and more detailed evaluations are provided in the Extended Data section and the supplementary material. #### 3.2.2 Nowcasting AtmoRep can be used for probabilistic nowcasting, i.e. short-term forecasting, when \(y\) is a future state with respect to \(x\). This is implemented by masking all tokens at the last time step(s) in the space-time cube that forms the input to the network, see Fig. 2. AtmoRep has skill for the task directly after pre-training through the masked token model but the skill can be improved by fine-tuning. To quantify AtmoRep's nowcasting abilities, we compared to ECMWF's Integrated Forecasting System (IFS) and Pangu-Weather [4]. For deterministic forecasting skill, root mean square error (RMSE) and the anomaly correlation coefficient (ACC) were computed. Fig. 2 shows that AtmoRep attains performance comparable to Pangu-Weather with better performance in particular for very short forecast horizons and in selected variables such as specific humidity. Zero-shot performance is thereby worse than after fine-tuning but still improves over the IFS at very short times. Fig. 2 also shows the continuous ranked probability score (CRPS) for AtmoRep and ERA5, the latter computed using the ERA5 ensemble that is available at 3-hour time resolution. The results demonstrate that AtmoRep has comparable or slightly better probabilistic nowcasting skill. Results for other variables as well as for spread-skill ratio (SSR) are available in the Extended Data section in Figs. 7- 9 where we also show visualizations of a forecast. Overall, our results demonstrate that AtmoRep has state-of-the-art nowcasting performance with no or very little task specific training. Compared to the IFS, AtmoRep has computational and energy costs that are significantly lower. #### 3.2.3 Temporal interpolation Temporal interpolation refers to the task of (re-)creating atmospheric state data with a higher temporal resolution than the input. It is of importance, for example, for the compression of weather and climate datasets. With AtmoRep, it can be realized by masking tokens within the space-time cube that is the input to the network. As presented in Fig. 2, the model shows substantially better skill, quantified as one order of magnitude lower RMSE, in reconstructing the 3 time steps within a 3 h wide token compared to linear interpolation. In the supplementary material we show a comparison for additional variables and also for different Multiformer configurations. #### 3.2.4 Model correction AtmoRep's internal representation of atmospheric dynamics is sufficiently robust and general that the model can take as input data from a related but different distribution than those seen during pre-training. We demonstrate this with data from ECMWF's **Fig. 
2**: AtmoRep can be used for a diverse set of applications without task-specific training (shaded areas depict one standard deviation). _Nowcasting:_ Short-term forecasting can be realized by masking tokens at the future-most time step(s) (bottom right inset). Skill is compared to Pangu-Weather and ECWMF's IFS for zonal velocity, temperature and specific humidity. AtmoRep results are shown for a pre-trained model and one with modest fine-tuning for the task. _Model correction:_ AtmoRep is robust for out-of-distribution input. We exploit it for model correction by using output from IFS as input to AtmoRep. Our model faithfully handles the data, preserving the higher frequency content (top left), and shifts the distribution towards the ERA5 one (right). _Temporal interpolation:_ Temporal interpolation is accomplished by masking tokens in the middle of the temporal domain. Performance is compared to linear interpolation. _Counterfactuals:_ Using initial conditions from, e.g., the period (2017, 2022) but prescribed as being from (1979, 1984) by using the external conditions \(\alpha\) allows for the generation of counterfactuals. The plot shows the difference between the original and the counterfactual distributions, as well as the shape of the full distributions. operational Integrated Forecast System (IFS), which has a substantially higher resolution than the training data and whose distribution differs also in other aspects. Since AtmoRep is trained to predict ERA5, it will as output provide data that is consistent with ERA5 given the input. This amounts to model correction of IFS data towards the observationally better constrained ERA5 reanalysis. In Fig. 2, bottom left, we show that the AtmoRep prediction with IFS input is corrected towards ERA5. The correction has deficiencies, e.g. due to the imprint of the initial conditions and since the training is imperfect, but a clear trend can be observed. The figure also shows the substantially higher frequency content of IFS data and that AtmoRep partially reproduces these higher frequencies, despite not having encountered such high-frequency content during pre-training. Further results are provided in the Extended Data section in Fig. 4. #### Counterfactuals In weather and climate research, counterfactuals are a methodology to answer "what if" questions. They play a central role for example for the attribution of human impacts on extreme weather events [36] or to obtain more robust statistics on the possible evolution and outcomes of such events [37]. In AtmoRep, next to an initial condition \(x\) also the external conditions \(\alpha\) are provided to the model. This can be used for the generation of counterfactual scenarios by using together with a given physical \(x\) (e.g. from ERA5) an alternative external information \(\hat{\alpha}\). For AtmoRep \(p_{\theta}(y|x,\alpha)\) to be applicable, the initial condition has to be statistically consistent with \(\hat{\alpha}\), i.e. it should be possible that \(x\) occurred in, for example, the year specified by \(\hat{\alpha}\). Furthermore, the AtmoRep network must have learned a robust representation of the dependence of atmospheric dynamics on \(\alpha\), which is not an explicit training objective. To demonstrate the generation of counterfactuals with AtmoRep, we consider vorticity close to the surface, i.e. at model level 137. This variable shows a clear distributional shift in the ERA5 dataset between the early ERA5 years, i.e. 1979-1984, and the later ones, i.e. 
2017-2022, but without a fundamental change in the support of the distribution. We perform the counterfactual experiment with nowcasting with initial conditions from the late years and \(\hat{\alpha}\) that prescribes the earlier time range. We denote this as \((2017,2022)\rightarrow(1979,1984)\). As control experiment, we also perform nowcasting for both time ranges with the correct \(\alpha\), see Extended Data Fig. 5 for a visualization of the methodology. If AtmoRep had not learned a dependence on \(\alpha\), the result of the counterfactual experiments would be statistically identical to the control experiments for \((2017,2022)\), i.e. no distributional shift could be observed; in a histogram difference plot, as shown, the difference would vanish. In contrast, Fig. 2, bottom right, shows that the counterfactual experiment leads to a distributional shift which is similar to the actual one in the ERA5 data, albeit with a smaller magnitude. The deficiencies are likely due to the imprint from the initial conditions that cannot be fully removed in a short-term forecast and from the learned model that not perfectly captures the target distribution. Nonetheless, our results demonstrate, for the first time, that AI-based models can be used for counterfactuals within the learned distribution. ### Extension of AtmoRep to other applications Next to the tasks that AtmoRep can perform intrinsically, the model can also be extended to accomplish other applications, e.g. by adding a task-specific tail network. This still allows to exploit the pre-trained model and its skill and, for example, reduces the task-specific training time. To demonstrate the principle, we consider downscaling, i.e. mapping a low resolution spatio(-temporal) distribution to a higher resolution one. With AtmoRep we can realize downscaling by factoring the target distribution as \[p_{\theta}\big{(}y_{h}|x\big{)}=p_{\theta^{\prime}}\big{(}y_{h}|y\big{)}\,p_{ \theta}\big{(}y|x,\alpha\big{)}. \tag{3}\] where \(p_{\theta}\big{(}y|x,\alpha\big{)}\) is the pre-trained AtmoRep model and \(p_{\theta^{\prime}}\big{(}y_{h}|y\big{)}\) is the downscaling-specific network that maps to samples \(y_{h}\) from the high-resolution target distribution. We also use a transformer for \(p_{\theta^{\prime}}\big{(}y_{h}|y\big{)}\), see the supplement for details. To demonstrate the approach, we consider \(2\,\mathrm{m}\) temperature from the COSMO REA6 reanalysis [38] that has 4-times the resolution of ERA5 and that shows improved physics in particular over steep terrain. As baseline we use the GAN-based statistical downscaling model by Stengel et al. [39]. The results in Fig. 3 show that AtmoRep outperforms the GAN by Stengel et al. substantially in RMSE although its spectrum is slightly too low for very high frequencies. The three examples in the figure, furthermore, establish that AtmoRep not only increases the resolution but also adjusts the distribution, see e.g. the left most example in Fig. 3 where the front over Eastern Europe is substantially further East in ERA5 than in COSMO REA6 and AtmoRep corrects this to high accuracy. ### Bias correction of AtmoRep with observational data ERA5 has known deficiencies and biases, e.g. [40, 41, 42, 10], and this limits the potential of AI-based models that are trained on reanalyses or simulation data [43]. With AtmoRep, we can use observational data to improve the model and remove biases introduced through the ERA5 training distribution. 
To demonstrate this, we use precipitation radar data from the RADKLIM dataset [44], pre-processed to match the ERA5 resolution. Since the RADKLIM domain is smaller than the one used during pre-training, we use \(12\times 6\times 6\) tokens instead of \(12\times 6\times 12\) as input per level, exploiting the flexibility of AtmoRep's token-sized input. Missing values in the observational data were ignored in the loss computation for the bias correction and the training task was the prediction of the RADKLIM precipitation field given ERA5 data as input. This is equivalent to training for precipitation nowcasting. We evaluate the model by comparing the output distribution to the RADKLIM one using different metrics. As shown in Fig. 4, left, the precipitation fields of AtmoRep show enhanced skill compared to the original ERA5 predictions and the areal extent and shape of the precipitation fields that are forecast by AtmoRep are much closer to RADKLIM than the original ERA5 data. The maximum rainfall intensity remains lower than in RADKLIM, but it is improved compared to ERA5. **Fig. 3**: Results for downscaling from ERA5 to temperature at 2 m in the COSMO REA6 dataset for a region in central-eastern Europe. For AtmoRep, zonal and meridional velocities as well as temperature were used as input (at model level 137, approximately 1000 hPa). The top row shows the RMSE as well as the spectrum compared to the results obtained with the GAN proposed by Stengel et. al [39] and retrained for our setup (see the supplementary material for details). At the bottom we show three examples for downscaled fields (third row) as well as the ERA5 input (top row) and the COSMO REA6 reference (second row). Also shown is the difference between COSMO REA6 and the downscaled field provided by AtmoRep (bottom row). ### Conclusion AtmoRep is a novel, task-independent stochastic model of atmospheric dynamics. It provides an alternative methodology to make the observational record available and has skill for a range of applications with no or only little task-specific training. It is realized by a large generative neural network pre-trained on the ERA5 reanalysis using a new self-supervised training objective. AtmoRep, furthermore, employs a novel ensemble that samples from the stochastic model and whose spread reflects the variability of the training data distribution. We demonstrated the intrinsic zero-shot capabilities of AtmoRep with nowcasting, temporal interpolation, and model correction. With moderate fine-tuning, our nowcasting performance is comparable to existing forecasting models, including ECMWF's IFS and Pangu-Weather. We also demonstrated, for the first time, the ability to perform counterfactuals with an AI-based model, exemplifying AtmoRep's ability to serve as sample generator from the highly complex and instationary learned distribution. AtmoRep opens up many avenues for future work. We believe that, through their generality and computational efficiency, large scale representation models can play a significant role in the next generations of Earth system models, complementing existing ones for example when large ensembles are needed or for tasks such as counterfactuals. Figure 4: _Left: precipitation forecast for ERA5 (left), AtmoRep fine-tuned (center) and RADKLIM (right) for a 3h forecast in 2019. 
Right: Comparison between the mean square error (MSE), Equitable Threat Score (ETS), Peirce Skill Score (PSS) and Frequency Bias Indicator (FBI) in ERA5 and the fine-tuned AtmoRep, using RADKLIM data as ground truth obtained averaging yearly predictions from 2019. The bottom right part shows the distribution of hourly accumulated total precipitations for AtmoRep, ERA5 and RADKLIM._ AtmoRep can also be extended as a parameterization for general circulation models where, through its training on observation-based data, it has the potential to help address the closure problem that is a major source of uncertainties in existing weather and climate simulations [1, 22]. Its training on data makes AtmoRep also amenable for the assimilation of different datasets into a coherent representation. This is a particular promising direction when AtmoRep is developed further so that learning from nearly unprocessed observations becomes possible, extending what we already demonstrated for precipitation bias correction. The masked token model used for pre-training requires AtmoRep to fill in missing values in the physical fields that are input to the model. When the masking is modified so that it is no longer per-token, this amounts to data assimilation as required, for example, for the initialisation of numerical weather prediction models. We also believe that AtmoRep can become an important tool for scientific inquiry [45], similar to how general circulation models are currently a central tool in atmospheric science. For example, the counterfactuals introduced in the present work provide a means to study how the temporal distributional shift in the training dataset affects specific weather patterns. AtmoRep demonstrates the potential of large scale representation learning in atmospheric science and the results in the present work are, in our opinion, only a first step towards the possibilities that are enabled by the methodology. ## 1 Methods ### Datasets To train the AtmoRep model, the ERA5 reanalysis dataset [10] was used with an hourly temporal resolution and ERA5's default equi-angular grid with \(721\times 1440\) grid points in space. In the vertical dimension, we employed model levels 96, 105, 114, 123, 137, corresponding approximately to pressure levels 546, 693, 850, 947, and 1012 hPa. We used model levels so that the physical fields are valid everywhere and they do not cut through orographic features. As variables, zonal and meridional wind components, vorticity, divergence, vertical velocity, temperature, specific humidity, and total precipitation were employed. For model correction, we employed output of the operational Integrated Forecasting System (IFS) by ECMWF as of 2020. The COSMO REA6 [38] provided higher resolutions data for downscaling. This dataset was remapped onto an equiangular grid with 4-times the resolution of the ERA5 reanalysis dataset. For precipitation bias correction, we employed the RADKLIM dataset [46], which represents gauge-adjusted precipitation estimates from the German radar network. It was remapped to the ERA5 equiangular grid on its domain with an hourly temporal resolution. For the remappings we employed a first order, conservative re-mapping method. All datasets were normalized to zero mean and unit variance either globally per month or on a per grid point basis. See the supplementary material for further details on data handling and pre-processing. 
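The normalization mentioned at the end of the Datasets section is straightforward to express; the following is a simplified sketch that normalizes per grid point over the full time axis, while the alternative monthly global statistics used in the paper are omitted.

```python
import numpy as np

def normalize_field(field: np.ndarray, per_grid_point: bool = True) -> np.ndarray:
    """Normalize a field of shape (time, lat, lon) to zero mean and unit variance.

    Simplified sketch of the pre-processing described in the Datasets section;
    the per-month global variant is not implemented here.
    """
    if per_grid_point:
        mean = field.mean(axis=0, keepdims=True)   # statistics per grid point
        std = field.std(axis=0, keepdims=True)
    else:
        mean, std = field.mean(), field.std()      # single global statistics
    return (field - mean) / (std + 1e-8)
```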
### Model formulation AtmoRep provides a task-independent, numerical stochastic model \(p_{\theta}(y|x,\alpha)\) of the dynamics of the atmosphere, i.e. a foundational model [47] for atmospheric dynamics. It is realized by an encoder-decoder transformer neural network with dense attention [35], see Extended Data Fig. 1 for an overview of the architecture. The input to the model is an atmospheric state \(x\) in a local neighborhood in \(4D\) space-time, subdivided into smaller sub-regions that form the tokens the transformer operates on (Fig. 1, bottom-left). Working with a local input from anywhere on the globe allows the model to learn position-independent, general principles of atmospheric dynamics. Through the external information \(\alpha\) that include the global position, the network is, nonetheless, able to also learn location-dependent effects, see Extended Data Fig. 2 and Fig. 3 for specific examples of such local effects. The temporal information in \(\alpha\) enables the model to, furthermore, learn instationary behavior in time, for example shifts in the training data distribution or seasonal effects. Because the network input consists of a set of tokens, the trained AtmoRep model can be flexibly used for space-time regions that are smaller or larger than the training ones. For example, the spatial extent of the RADKLIM radar dataset is smaller than those spanned by the \(6\times 12\) tokens used during pre-training. Therefore, when fine-tuning for bias correction we use \(6\times 6\) tokens in space. Similarly, since different vertical levels correspond to different tokens, also the number of levels at evaluation time can differ from that during training. We exploit this for downscaling where we only employ the model level closest to the surface. The supplementary material contains a quantitative evaluation of how changes in the neighborhood size effect the model performance; improvements are possible by fine-tuning for a change in size. Global forecasts, such as the ones shown in Extended Data Fig. 9, can be realized with the local AtmoRep model by tiling the globe. With overlap between adjacent regions, we can avoid artifacts in forecasts due to the tiling. The flexibility to employ AtmoRep as a local or a global model is a unique feature of our approach compared to other large scale AI-based forecasting models in the literature. In the AtmoRep network architecture, we use one encoder-decoder transformer per physical field to respect the different properties of the fields that comprise an atmospheric state. For instance, temperature in ERA5 changes much more slowly and has less spatial variability than vorticity so that we use a larger token size and smaller embedding dimension for it. The individual per-field transformers are coupled through cross-attention to allow for interactions between fields in the model (Extended Data Fig. 1). We call this architecture the Multiformer. Our approach to couple individual per-field transformers provides the advantage that fields can be pre-trained independently and then combined into a multi-field model. This is more efficient than training a multi-field one from the outset because the computational costs of dense attention scale quadratically with the number of tokens. The Multiformer design also creates flexibility to combine pre-trained per-field transformers into application-specific models. 
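The coupling of per-field transformers through cross-attention can be sketched as below. The embedding size, number of heads, residual update, and class name are illustrative assumptions; the actual Multiformer couples its per-field encoder-decoder transformers in this way (see Extended Data Fig. 1), but its implementation is not reproduced here.

```python
import torch
import torch.nn as nn

class CrossFieldCoupling(nn.Module):
    """Sketch of the Multiformer idea: each physical field has its own token
    stream, and streams exchange information through cross-attention.
    Dimensions, heads, and the residual update are illustrative assumptions."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, field_tokens: torch.Tensor,
                other_field_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from one field; keys and values from another field.
        coupled, _ = self.cross_attn(field_tokens,
                                     other_field_tokens,
                                     other_field_tokens)
        return field_tokens + coupled   # residual update of the field's token stream
```

Because each field keeps its own transformer, fields can be pre-trained independently and the couplings added afterwards, which keeps the quadratic cost of dense attention manageable.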
For example, for downscaling we use a 3-field configuration with only the wind components and temperature, which are the most relevant variables for this problem. Usually a few training epochs are sufficient for individually pre-trained per-field transformers to cohere into a skillful Multiformer model. Further details on the model can be found in the supplementary material. #### Ensemble AtmoRep uses an ensemble to provide a nonparametric representation of the conditional probability distribution over the output state \(y\). For each physical field, the ensemble is generated by a set of prediction heads, each consisting of only a linear layer. These heads map from the latent, internal representation in the AtmoRep core model \(p_{\theta}(y|x,\alpha)\) to the grid representation of the physical field in space-time, see Fig. 1, bottom right. Conceptually, the prediction heads hence sample states \(y\) from \(p_{\theta}(y|x,\alpha)\) to provide the nonparametric representation of their distribution. In all of our experiments, we used an ensemble size of 16 except for downscaling where it was 4. The ensemble is trained with a novel statistical loss function (see below) on only the single, deterministic ERA5 high-resolution trajectory. The ensemble spread consequently derives solely from the spatio-temporal variability in the ERA5 training data and hence, at least partially, from the intrinsic one in the observational record, see Fig. 1, top-right. The training methodology with an ensemble is an integral aspect of our approach to obtain a stochastic model that provides an unbiased estimator for the probability distribution of the physical system. AtmoRep's ensemble differs in an essential way from existing ones in numerical weather prediction where perturbations of the initial conditions [1, 3, 28] or the model parameters are employed [1, 27]. The AtmoRep ensemble is thereby computationally inexpensive in both training and inference since it is only generated in the prediction heads, which are defined as simple linear layers. This is in contrast to many ensemble methods in machine learning where a set of different models is combined. #### Training and Loss AtmoRep's training task is the prediction of randomly masked and distorted tokens, see Fig. 1, bottom left. This is an extension of masked token models used in natural language processing [8, 9] and computer vision [26]. In addition to complete masking, we distort some tokens by either adding noise or reducing their resolution. The distortions were inspired by [8] and they encourage the model not to rely on the physical correctness of the non-masked tokens and instead learn a robust and probabilistic representation of the relationship between atmospheric states \(x\) and \(y\). The masking ratio was increased during training from \(0.25\) to values between \(0.5\) and \(0.9\) depending on the field. The increase led to a more difficult training task over time and to better representations, improving, e.g., the zero-shot forecasting performance of the model. The loss used to train the AtmoRep model is a distance function \(\mathcal{D}(\tilde{p}(y,x;\alpha),p_{\theta}(y,x;\alpha))\) between the instationary data distribution \(\tilde{p}(y,x;\alpha)\) and the distribution modeled by AtmoRep. 
With a Monte Carlo estimate over the training data \(\bar{\mathcal{X}}=\{(\bar{x},\bar{y})\}\), it can be approximated by \[\mathcal{D}\big{(}\tilde{p}(y,x;\alpha),p_{\theta}(y,x;\alpha)\big{)}\approx \frac{1}{N}\sum_{\bar{x}\in\bar{\mathcal{X}}}d\big{(}\bar{y},p_{\theta}(y|\bar {x},\alpha)\big{)} \tag{4}\] where AtmoRep's ensemble prediction provides an nonparametric estimate of \(p_{\theta}(y|x)\), see the supplementary material for the derivation and details. The distance function \(d(\cdot,\cdot)\) above measures the quality of the model's predictions for each individual training example. It includes our novel statistical loss function \(d_{s}(\cdot,\cdot)\) given by \[d_{s}\big{(}\bar{y},\hat{y}\big{)}=\Big{|}1-\int\delta_{\bar{y}}(y)\,G_{\mu, \sigma}(y)\,dy\Big{|}^{2}=\Big{|}1-G_{\mu,\sigma}(\bar{y})\Big{|}^{2} \tag{5}\] where \(\bar{y}\) is the single observed value for each field and grid point, formally described by the Dirac-delta \(\delta_{\bar{y}}(y)\), and \(G_{\mu,\sigma}\) is the unnormalized Gaussian whose mean \(\mu\) and variance \(\sigma\) is given by those of AtmoRep's ensemble prediction \(\hat{y}\). We hence currently only consider the first two statistical moments of the ensemble in the loss computation. We complement the statistical loss with a regularization term that controls the variance \(\sigma\) as well as an MSE loss term per ensemble member \(\hat{y}_{k}\). Therefore \[d\big{(}\bar{y},\hat{y}\big{)}=\sum_{k}|\bar{y}-\hat{y}_{k}|^{2}+d_{s}\big{(}y,\hat{y}\big{)}+\sqrt{\sigma}. \tag{6}\] Training was first performed on individual fields with field-specific transformers. Subsequently, when individual fields were largely converged, the fields were combined into the Multiformer and training was continued. We thereby trained three different Multiformer configurations, one with velocity and all other fields, one with vorticity, divergence and all other fields, and a 3-field configuration with wind and temperature. The improvement through the coupling of the single-field transformers is quantified in the supplementary material. ### Evaluation Each application has been analysed with common and suitable metrics for quantifying AtmoRep's skill. Where applicable, comparisons with existing approaches have been included to relate our results to the state-of-the-art in literature. #### 1.3.1 Nowcasting Results have been obtained using forecasts at 0 and 12 UTC for the entire year 2018. The predictions are compared to the IFS as standard reference for classical numerical weather prediction models as well as Pangu-Weather [28] as an example for a state-of-the-art AI-based model. ERA5 on model levels was used as ground truth for AtmoRep and IFS and ERA5 on pressure levels for Pangu-Weather. For the ensemble analysis, we compared against ERA5 (since IFS ensemble data was not available to us) using the CRPS and assuming a Gaussian distribution. For AtmoRep we employed the 5-field velocity Multiformer fine-tuned for 6 h forecasting as described in the supplementary material as well as the 5-field velocity Multiformer without fine-tuning, the latter to determine the zero-shot performance. To avoid tiling artifacts, an overlap of 18 and 54 grid points between adjacent neighborhoods has been used for both Multiformer configurations. #### 1.3.2 Counterfactuals For the counterfactual experiment, we randomly sampled from the early and late time range, generating in total approximately 8 billion samples for each of the early and late year range and the counterfactual run. 
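Returning to the training objective, the per-sample loss of Eqs. (5) and (6) above translates almost directly into code. In the sketch below the ensemble members are assumed to lie along the leading axis, \(\sigma\) is taken to be the ensemble variance as stated in the text, and the final reduction over grid points is a simplifying assumption.

```python
import torch

def atmorep_sample_loss(y_true: torch.Tensor, y_ens: torch.Tensor,
                        eps: float = 1e-6) -> torch.Tensor:
    """Sketch of the per-sample loss d(y, y_hat) of Eqs. (5)-(6).

    y_true: target field values; y_ens: ensemble predictions with the ensemble
    members along dim 0. sigma is interpreted as the ensemble variance.
    """
    mu = y_ens.mean(dim=0)
    sigma = y_ens.var(dim=0, unbiased=False) + eps

    # MSE term, summed over ensemble members (first term of Eq. (6)).
    mse = ((y_ens - y_true.unsqueeze(0)) ** 2).sum(dim=0)

    # Statistical term |1 - G_{mu,sigma}(y)|^2 with the unnormalized Gaussian (Eq. (5)).
    gauss = torch.exp(-0.5 * (y_true - mu) ** 2 / sigma)
    stat = (1.0 - gauss) ** 2

    # Regularization of the ensemble spread (last term of Eq. (6)).
    reg = torch.sqrt(sigma)

    return (mse + stat + reg).mean()
```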
The experiments were run with the vorticity-divergence 5-field Multiformer with short term forecasts with 3 h lead time (no fine-tuning for forecasting was applied). The difference histogram in Fig. 2, bottom right, is with respect to normalized distributions while the distributions shown below are unnormalized. #### 1.3.3 Downscaling The downscaling analysis has been performed using hourly predictions for the year 2018 and with the entire downscaled domain, i.e. \([-1.25^{\circ},25.75^{\circ}]\times[42.125^{\circ},55.625^{\circ}]\) in latitude and longitude. The AtmoRep network was the 3-field Multiformer with both velocity components as well as temperature and using only model level 137 as input. The downscaling network had 6 transformer blocks and used an embedding dimension twice the size as for ERA5 for temperature due to the much larger token size in terms of grid points in the predictions. #### 1.3.4 Bias corrections For bias correction, we evaluate the metrics as shown in Fig. 4. These have been computed averaging hourly spaced predictions from 2019. The data from 2018 was used as validation set to determine the best bias corrected model. As reported above, the generator architecture is the 5-field vorticity-divergence Multiformer used with \(6\times 6\) tokens, which aligns well with the spatial extent of the RADKLIM dataset (\([44.5^{\circ},57.75^{\circ}]\times[3.5^{\circ},16.75^{\circ}]\) latitude-longitude). ## 2 Code availability The AtmoRep model code is Open Source under an MIT license ([https://opensource.org/license/mit/](https://opensource.org/license/mit/)). The code used to generate the results presented in the paper as well as pre-trained model weights and analysis code for generating plots will be made publicly available (with persistent identifier) upon acceptance of this manuscript. ## 3 Data availability ERA5 data are openly and freely available from ECMWF ([https://cds.climate.copernicus.eu/cdsapp#](https://cds.climate.copernicus.eu/cdsapp#)!/dataset/reanalysis-era5-complete). For use with the pre-trained AtmoRep model, hourly data for the selected variables and model levels is required; a pre-computed subset of ERA5 data in the format that is used in AtmoRep can also be obtained from the Julich meteocloud (upon acceptance of the paper). The code for data normalization is provided with the AtmoRep source code. IFS data that has been used for model correction experiments were retrieved from ECMWF's MARS archive ([https://confluence.ecmwf.int/display/UDOC/MARS+user+documentation](https://confluence.ecmwf.int/display/UDOC/MARS+user+documentation)). Pangu-Weather data used for the comparisons have been generated using the ai-models tool from ECMWF ([https://github.com/ecmwf-lab/ai-models](https://github.com/ecmwf-lab/ai-models)) COSMO REA6 data were obtained from the open data archive of the German Weather Service (DWD) ([https://opendata.dwd.de/climate_environment/REA/COSMO_REA6/](https://opendata.dwd.de/climate_environment/REA/COSMO_REA6/)). This open data archive also provides the RADKLIM data ([http://dx.doi.org/10.5676/DWD/RADKLIM_YW_V2017.002](http://dx.doi.org/10.5676/DWD/RADKLIM_YW_V2017.002)). Pre-processing scripts for COSMO REA6 and RADKLIM are also available with the AtmoRep source code. ## Acknowledgments IL acknowledges funding by the CERN Knowledge Transfer Fund and the CERN Initiative for Environmental Applications (CIPEA). MGS and SS acknowledge funding from the EU under grant ERC-Adv-787576 "IntelliAQ". 
BG and ML received funding from the EuroHPC project MAELSTROM (EU grant id 955513 and BMBF grant id 6HPCO29). This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958 through the CLIMATE21 workshop in November/December 2021 at the Kavli Institute for Theoretical Physics where initial ideas for the project were developed. Compute time was provided by the Julich Supercomputing Centre under the project atmo-rep. David Hidary contributed the visualisations of the attention maps. The authors are grateful to Matthew Chantry, Mariana Clare, Yi Deng, Robert Brunstein, Maike Sonnewald, Olaf Stein, Aneesh Subramanian, and Max for providing valuable data and discussions. We thank ECMWF for producing ERA5 and the German Weather Service DWD for generating the COSMO REA6 reanalysis and the RADKLIM dataset. Kaifeng Bi is acknowledged for providing the Pangu-Weather source code and data and ECMWF for the ai-models tool. ## Extended Data Figure 2: Spatial distribution of the mean error for 2018 as a function of the season for zonal velocity (top) and specific humidity (bottom). Seasonal changes of the error can be observed and these are well aligned with changes in the atmospheric dynamics. For example, in the North Atlantic the errors are larger in the winter months where one has larger wind velocities compared to the summer months. Also topographic features are apparent in the error maps. The detailed structures were learned solely from the dynamics in the training data and no land-sea mask or orographic information was provided to the network. Comparing to Fig. 3, a strong correlation between the error and the standard deviation can be observed. **Fig. 3**: Spatial distribution of the AtmoRep ensemble standard deviation for 2018 as a function of the season for zonal velocity (top) and specific humidity (bottom). The ensemble spread shows a strong correlation with the physical variability of the system, for example with seasonal storms in the North Atlantic. The ensemble spread is also strongly correlated with the prediction error, cf. Fig. 2. Figure 4: Additional results for model correction. _Left:_ Zoom-ins for AtmoRep predictions with ERA5 and IFS model data as input. The AtmoRep predictions for the IFS data have much finer details than with ERA5, see for example the filaments originating from the storm in the center. This reflects the higher frequency content in the input (top-right), see also the inset spectrum in Fig. 2, bottom left. _Right:_ Histogram for divergence. Analogous to the results for vorticity, a clear shift in the model output towards ERA5 can be observed when the input is IFS data. Figure 5: Depiction of the experimental setup for counterfactuals. A large set of fixed initial conditions, e.g. from the year 2018, are the input to AtmoRep. These are used once with the physical external conditions, i.e. \(\alpha_{y}=2018\), and once with modified but statistically plausible once, e.g. \(\alpha_{y}=1980\). The output by AtmoRep are two different distributions, since the model learned the instationary behavior of the training data. For analysis, we show a difference of the histograms since this makes distributional shifts more apparent (in particular, the difference of the normalized histograms is used so that initial conditions can be sampled). **Fig. 6**: The same methodology as for counterfactuals can be used to study the temporal extrapolation abilities of AtmoRep \(p_{\theta}(y|x,\alpha)\), i.e. 
to what extent it can provide predictions beyond the training period which are consistent with the ERA5 distribution for the extended time range. For this, we have sampled the AtmoRep vorticity/divergence Multiformer configuration with initial conditions \(x\) from 2017 and the external conditions \(\alpha\) prescribed as 2022 (denoted as ’2017 \(\to\) 2022’). The distribution of the predictions is compared to the true ERA5 distribution in 2022. Top left: the averaged distributions of vorticity for the three evaluations: 2017, 2022, and 2017 \(\to\) 2022. The other plots show the difference between the 2022 (or simulated 2022) and 2017 distributions for each vertical level. Shaded areas depict one standard deviation. No perfect match between distributions can be observed but a clear trend is visible for all vertical levels. The results show the robustness and generality of AtmoRep’s learned distribution \(p_{\theta}(y|x,\alpha)\). Note that AtmoRep would not able to extrapolate under more significant and nonlinear shifts in the data distribution, e.g. those that can be expected from climate change on longer time horizons. Figure 7: Detailed forecast evaluation compared to Pangu-Weather (black) and IFS (blue). The top four rows show ACC for several variables and levels and the bottom four rows RMSE. Also indicated is the standard deviation in each case (shaded areas). **Fig. 8**: Evaluation of the AtmoRep ensemble using SSR (blue) the CRPS (red) for short-term forecasting. The dashed lines represent the respective metrics for the ERA5 forecast ensemble, which is available only with a time step of 3 h. The CRPS has been computed from the ensemble mean and spread assuming a Gaussian distribution. Shaded areas depict one standard deviation. Figure 9: Example for 1h short-term forecast for 15 June 2018 at 12:00 UTC for four variables at model level 137. Left is the ERA5 ground truth and right the AtmoRep prediction in each case. **Fig. 10**: Attention maps provide a direct means to gain insight into what a trained network has learned and this has been used before both in natural language processing (e.g. [35]) and computer vision (e.g. [48]). AtmoRep’s attention maps are well aligned with physically relevant features, which we demonstrate with two examples. To our knowledge, this has not been established before in Earth system science. The top figure shows attention maps of a large wave propagating through the atmosphere (vorticity, model level 96) for multiple time steps. Shown is the average attention over all keys for a fixed head in the final decoder block in the model. The last two source images are masked since the model was evaluated with zero-shot forecasting. The bottom image shows attention for hurricane Katrina on August 26th, 2005 (vorticity, model level 137). The physical field is depicted at the top and the attention maps for the 16 attention heads underneath. Shown is again the attention averaged over all keys and from the final decoder block of the model. Further details on the attention maps can be found in the supplementary material and more examples and the option to interactively explore these at [https://www.atmorep.org/attention/](https://www.atmorep.org/attention/). Supplementary Material ## 1 Related work In the following, we put our work into a wider context with respect to the literature on atmospheric science and machine learning. 
### Stochastic modeling of atmospheric dynamics Atmospheric dynamics are, in principle, governed by well-understood equations determined by the fundamental laws of classical physics, such as conservation of mass and energy and the laws of thermodynamics. However, due to the vast range of spatial and temporal scales involved, from tens of thousands of kilometers down to the scale of meters and below, it is impossible to resolve all atmospheric processes explicitly in numerical models. Furthermore, especially on smaller scales, we do not have the necessary data to constrain the initial conditions well enough and there are physical processes for which we do not have a complete understanding, for example for cloud lifecycles, aerosol formation, or the interaction with the biosphere [49]. This closure problem is a major source for the forecast and projection uncertainties in current weather and climate models [1]. Already in 1976, the closure problem was a principle motivation for Hasselmann [50] to introduce his concept of stochastic modeling in climate science. In his work, he described the long-term behavior of the atmosphere by a two-scale stochastic dynamical system with the long-term climate system driven by the "integral response to continuous random excitation by short period 'weather' disturbances." [50]. This view has been refined in recent work and the observed power spectra of atmospheric variables are now understood as a result of cascade processes [49]. This means that memory effects are relevant and climate should be modeled with non-Markovian processes [49]. In numerical weather prediction, stochastic modeling was introduced by Palmer [31] in 2001 but motivated by reasoning similar to the one by Hasselmann, i.e. that a stochastic representation of unresolved, small-scale physical processes is required to obtain the correct large scale behavior. In particular, the abstract continuous dynamical system \[\dot{\tilde{X}}=\tilde{F}\big{[}\tilde{X}\big{]}\] (A1) of states \(\tilde{X}\), which corresponds to the governing partial differential equations, is numerically represented by \[\dot{X}=F\big{[}X\big{]}+P[X,\alpha]\] (A2) where \(X\) is a finite dimensional representation of \(\tilde{X}\), \(F\) is the finite number of retained terms from the Galerkin projection of Eq. A1, and \(P[X,\alpha]\) corresponds to the residual of the projection [31, Sec. 2]. Classically, \(P[X,\alpha]\) is modeled by heuristic formulae, such as parametrizations. Palmer argued, however, that a stochastic model of \(P[X,\alpha]\) is required to obtain physically consistent long term dynamics. This led to the concept of stochastic parametrizations that play an important role in operational numerical weather prediction today [1, 19]. AtmoRep \(p_{\theta}(y|x,\alpha)\) can be seen as a data-driven representation of \(P[X,\alpha]\) that is learned from the processed observations in the ERA5 reanalysis and that includes the large-scale dynamics. However, with modifications, AtmoRep can also be used to only represent the small scale processes. ### Deep learning for weather forecasting The use of neural networks in Earth system science goes back to the early 2000s, e.g. [51, 52, 53]. With the tremendous progress of deep learning methods beginning around 2010 [23], efforts to exploit the methodology also in atmospheric science increased substantially around 2018. 
Two earlier studies on the use of deep learning to Earth system modeling will be highlighted before briefly discussing recent works that have shown forecasting performance close to or on par with the best operational weather forecasting models. Ham et al. [54] trained a CNN-based model and used transfer learning on simulation datasets and reanalysis data. They generated El Nino-Southern Oscillation projections with a lead time of up to one and a half years based on sea surface temperature and heat content anomaly maps, outperforming state-of-the-art dynamical prediction systems for lead times beyond six months. In 2021, Ravuri et al. [55] proposed a data-driven approach for probabilistic precipitation nowcasting with a lead time of up to two hours. Their deep generative model was trained on radar observations from the UK. The model's performance was comprehensively assessed using various verification metrics and subjective evaluations by operational forecasters. In comparison to earlier approaches based on CNNs, their generative model showed better nowcasting skill and much better performance with respect to capturing the local variability of precipitation. A significant breakthrough in AI-based weather forecasting was achieved by Pathak et al. [56]. Their model, FourCastNet, is based on adaptive Fourier neural operator (AFNO) with a vision transformer [57] as backbone. FourCastNet delivers comparable forecasting results to the IFS model with a lead time of 3 days. Shortly after FourCastNet, Bi et al. [4] introduced Pangu-Weather, a 3D Swin-Trasnformer-based neural network. The model was trained on 39 years of ERA5 reanalysis data and obtained better deterministic 7-day forecasting results than the operational IFS model, while time-to-solution is 10,000 times faster than for the IFS. Concurrently to Pangu-Weather, Lam et al. [58] introduced a graph neural network-based model, GraphCast, for weather forecasting. This model can generate forecasts with six-hour intervals for ten surface variables and six atmospheric variables on 37 vertical pressure levels. The training data spanned 39 years of historical weather data, again from ERA5. GraphCast performed on par and at times better than the IFS with respect to almost all forecasted fields [see also 43]. The work by Chen et al.[7] presents the Fuxi neural network, designed to enhance the global ensemble weather forecasting system's capabilities in generating 15-day forecasts at a spatial resolution of 0.25. This deep-learning neural network utilizes a Swin Transformer-based model with 48 repeated blocks. The results indicate that Fuxi's performance is comparable to that of ECMWF's enhanced range model in the context of 15-day forecasting. In a similar vein, Chen et al. [59] introduced the FengWu deep learning neural network. This model utilizes model-specific encoder-decoder structures and a cross-modal fusion transformer. These innovations further enhance the forecasting capabilities, extending FengWu's deterministic skillful forecast lead time to 10.75 days. The results also demonstrate a superiority over GraphCast in predicting 80% of the 880 reported predictands. Gao et al. [60] proposed the EarthFormer, which is a space-time transformer model for weather forecasting. It uses a cuboid attention mechanism that is adapted for space-time data. 
The EarthFormer model demonstrates strong performance in both forecasting sea surface temperature anomalies and precipitation nowcasting although it has not been used for high-resolution numerical weather prediction. Another noteworthy recent contribution to the literature is ClimaX [61] that developed a generalized deep learning model for weather and climate science through self-supervised learning. The pre-trained model was fine-tuned for various downstream tasks, in particular forecasting, climate projecting, and climate downscaling. The network used in the work is transformer-based and trained on the CMIP6 climate datasets with fine-tuning on ERA5 reanalysis. While conceptually closest to AtmoRep in that a foundation-type model is developed, ClimaX also differs in fundamental aspects. For example, no zero-shot, intrinsic capabilites were demonstrated in [61]. The results obtained with AtmoRep are also superior to those of ClimaX although we do not yet demonstrate medium-range forecasting with roll-out. ### Representation learning and generative machine learning AtmoRep builds on a substantial amount of work in the machine learning literature on large scale representation learning. The resulting models are sometimes referred to as foundation [47] or frontier models. Representation learning [62] is a machine learning methodology whose primary objective is not to obtain a model that is effective for a specific task but one that provides an effective encoding, or representation, of the data distribution. Next to being of scientific interest, such an encoding can be used for a variety of applications, e.g. by fine-tuning or appending task-specific tail networks. While representation learning has a long history [62], it recently became central to many efforts in machine learning through the introduction of large language models [8, 9, 25]. These are domain-specific but task-independent neural networks for natural language that can be specialized, for example, for translation, as chat bots, or for text auto-completion. Large language models also popularized the use of self-supervised training protocols because a labelling of the very large training data sets would be impractical. Instead, the pre-training objective, i.e. the one used to learn the task-independent representation, is defined based on the dataset itself. A common approach is to mask part of the information and predict it based on unmasked ones, although alternatives are possible [48]. For transformer-based large language models, masking is most commonly used in the form of masked token models [8, 9]. Since transformers are a highly generic architecture once a token has been defined [57], this approach has also been used in computer vision [26] and AtmoRep extended it to space-time neighborhoods. Traditionally, representation learning required additional computations to make a pre-trained model applicable for a specific task. Brown et al. [25] showed that sufficiently large models have skill for many tasks directly after pre-training and without task-specific refinement. This is referred to as zero-shot abilities in the literature [25]. An extension are few-shot abilities where a few examples are input to the model together with the task, again without update to the model weights, and this typically substantially improves performance. Zero-shot abilities are enabled by pre-training on a large and diverse data set and through the pre-training task. 
For example, when pre-training uses a multi-lingual data set, then translation is a special case of the masked token model [25]. Interestingly, even interspersed foreign words, as are common in many everyday texts, are sufficient for zero-shot translation abilities [63]. A second key for the power of modern large language models is the size of their neural networks with the most advanced ones containing more than a trillion parameters today [64]. The network architecture plays thereby only a secondary role. Instead, the number of trainable parameters is the most significant factor determining the performance of the trained model (assuming sufficient computational resources for training and a sufficiently large training data set). This observation led to the introduction of scaling laws [65, 66] that allow to predict the effectiveness of a trained model a priori. Although large language models are powerful in the zero- and few-shot setting, they also have limitations, for example, in the generation of long coherent outputs, such as texts that are not repetitive and follow a coherent story line. This can, in principle, be addressed by sampling from the discrete probability distribution that is the output of large language models. How to obtain long-term coherent sequences remained, however, unclear for some time. Fine-tuning with human feedback reinforcement learning [67, 68] provided substantial improvements in this respect. Our mathematical formulation of AtmoRep follows closely those for generative AI models, e.g. [9, 69, 70], which include large language models and generative image models. These are commonly also formulated as joint or conditional probability distributions over known and unknown information. Interestingly, the theoretical approach for generative AI has strong similarities to the representation of unresolved processes in the stochastic weather and climate model formulations by Hasselmann and Palmer, cf. Sec. 1.1. AtmoRep, for the first time, makes this connection explicit and the training on observation-based data allows us to capture these processes since, at least in an integrated form, they are present in the observational record. We consider this connection between generative AI models and stochastic atmospheric modeling of significant importance since it provides a physical grounding for our machine learning and its results. ## 2 Detailed model description Below we provide detailed information on the model architecture and discuss design choices. The source code is available with the submission and should serve as a final reference. ### Model architecture The network architecture of AtmoRep is based on transformers [35] since these are known to scale well to very large model sizes, training data sets and to exploit compute hardware very efficiently. Architecturally, transformers differ from other neural networks by processing a sequence of inputs, known as tokens, and relating these to each other in the network through the so-called attention mechanism [35]. In AtmoRep, a token is a small space-time neighbourhood, which extends the conceptualization of tokens that has been introduced for the vision transformer [71]. The entire input of AtmoRep is therefore a local spacetime hypercube subdivided into smaller regions representing the tokens (see Fig. 1 in Main). 
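The subdivision of such a neighbourhood into tokens can be sketched as follows (a minimal illustration, not AtmoRep's data pipeline; the token size of 3 x 9 x 9 grid points corresponds to the pre-training configuration listed further below):

```python
# Illustrative sketch of subdividing a local space-time cube of one physical
# field into tokens; vertical levels are kept as a separate dimension.
import torch

def to_tokens(field, t_size=3, lat_size=9, lon_size=9):
    """field: tensor of shape (n_levels, n_time, n_lat, n_lon) for one physical field.
    Returns tokens of shape (n_levels, nt, nlat, nlon, t_size*lat_size*lon_size)."""
    n_lev, n_time, n_lat, n_lon = field.shape
    nt, nlat, nlon = n_time // t_size, n_lat // lat_size, n_lon // lon_size
    x = field.reshape(n_lev, nt, t_size, nlat, lat_size, nlon, lon_size)
    # group the per-token grid points into the last dimension
    x = x.permute(0, 1, 3, 5, 2, 4, 6).contiguous()
    return x.reshape(n_lev, nt, nlat, nlon, t_size * lat_size * lon_size)

# Example: a 36 h x 5 level x 54 x 108 grid-point neighbourhood
# yields 5 x 12 x 6 x 12 tokens of 3 x 9 x 9 grid points each.
cube = torch.randn(5, 36, 54, 108)
tokens = to_tokens(cube)   # shape (5, 12, 6, 12, 243)
```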
The model levels are thereby treated as an independent dimension so that one has in total \(n_{v}\times n_{t}\times n_{\theta}\times n_{\phi}\) tokens where \(n_{v}\) is the number of vertical levels and \(n_{t}\), \(n_{\theta}\), \(n_{\phi}\) are, respectively, the number of tokens in time, latitude, and longitude. In contrast to related work [3, 4, 5], our model is spatially local but works on a significantly longer time span. Our primary motivation for the use of a local input domain is to learn general principles of atmospheric dynamics across different regions and to generalize better. It also alleviates memory pressure in the implementation of the neural network. With a local approach, care is, however, required to obtain globally coherent predictions. AtmoRep uses a coupled stack of encoder-decoder transformers, which we call the Multiformer; an overview is provided in the Extended Data section in Fig. 1. The Multiformer consists of one transformer per physical field and the per-field encoders are coupled through the cross-attention mechanism that is classically used to couple the encoder and the decoder [35]. The choice to represent each field with an individual transformer is motivated by the different physical mechanisms that drive the time evolution of the different fields and the resulting widely different signal characteristics. It allows us, for example, to use a smaller internal embedding dimension and a larger token size for smoother fields, such as temperature. Having one transformer per field also leads to a modular design where fields can be combined flexibly to yield an overall Multiformer configuration suitable for the task at hand, as we demonstrated for downscaling. Another advantage is that it allows for a field-specific pre-training that speeds up the overall training process substantially since the dense attention used in our Multiformer scales quadratically in the number of tokens. The transformer-encoders for the different physical fields are coupled through cross-attention to allow different fields to interact in the neural network. In particular, pre-training is performed with a fixed number of self-attention heads, as in a standard transformer-encoder. When a Multiformer is assembled, we preserve these heads and add a user-defined number of cross-attention heads per coupled field. Which fields are coupled is also user-specified, cf. Fig. 10. For example, one can allow for fields to only interact indirectly through a common coupled field, which reduces computational costs. The per-field decoders use cross-attention with the encoder from the same field but are not coupled to the other fields. In contrast to transformers as used in natural language processing, we do not use the output of the last layer of the encoder but a U-Net type coupling, see again the depiction in Extended Data Fig. 1. For all attentions, we employ qk-layernorms. These are relevant for scaling to very large network sizes [63, 72] and in a Multiformer they are also important to stabilize the coupling of different fields with different characteristics. _Positional embeddings_ Vaswani et al. [35] developed a linear positional encoding based on trigonometric functions of different frequencies given by \[\operatorname{PE}^{(k)}_{2i} =\sin(k/10000^{2i/d_{\text{model}}})\] \[\operatorname{PE}^{(k)}_{2i+1} =\cos(k/10000^{2i/d_{\text{model}}})\] where \(i\) is the index along the embedding dimension, \(k\) the linear, 1-dimensional token position, and \(d_{\text{model}}\) the embedding dimension of the model.
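For reference, this standard encoding can be written as the following minimal sketch (assuming an even embedding dimension):

```python
# Standard sinusoidal positional encoding for a linear sequence of tokens.
import torch

def sinusoidal_encoding(num_tokens, d_model):
    k = torch.arange(num_tokens, dtype=torch.float32).unsqueeze(1)   # token positions
    i = torch.arange(0, d_model, 2, dtype=torch.float32)             # even embedding indices
    angle = k / (10000.0 ** (i / d_model))
    pe = torch.zeros(num_tokens, d_model)
    pe[:, 0::2] = torch.sin(angle)   # even embedding dimensions
    pe[:, 1::2] = torch.cos(angle)   # odd embedding dimensions
    return pe
```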
We found that this embedding has limitations in that it creates aliasing when the number of tokens exceeds \(\approx 100\) and it makes only inefficient use of the embedding space. Furthermore, it is designed for a linear sequence of tokens, as one has in natural language processing, and does not respect the \(4D\) structure of the token sequence in AtmoRep. Therefore, we developed a modified harmonic positional embedding for AtmoRep's \(4D\) token space that uses frequency modulation to encode all information. It is given by \[\operatorname{PE}^{(k)}_{2i} =\sin(i\,k_{t})+\frac{1}{2}\sin(8\,i\,k_{\theta})\] \[\operatorname{PE}^{(k)}_{2i+1} =\cos(i\,k_{v})+\frac{1}{2}\cos(8\,i\,k_{\phi})\] where the multi-index \(k=(k_{t},k_{v},k_{\theta},k_{\phi})\) provides the local, relative token position in time, vertical level, latitude and longitude, respectively. Note that the local positional encoding is complemented by the external information \(\alpha\) that contains global time, level as well as latitude and longitude. ### Training and Loss The observational record provides the joint occurrence \((x,y)\) of atmospheric states through samples (or instantiations) from the instationary joint distribution \(\tilde{p}(x,y;\alpha)\), cf. Fig. 1, top-right, in Main.1 The parameter \(\alpha\) controls again the instationarity of the distribution, e.g. its shift over time. AtmoRep's training objective is the minimization of the distance measure \[\mathcal{D}\big{(}\tilde{p}(y,x;\alpha),p_{\theta}(y,x;\alpha)\big{)}=\int_{\mathcal{X}_{x}}\int_{\mathcal{X}_{y}}d\big{(}\,\tilde{p}(y,x;\alpha)\,,\,p_{\theta}(y,x;\alpha)\,\big{)}\,\mathrm{d}x\,\mathrm{d}y\] for a suitable distance function \(d(\cdot,\cdot)\), see below. Footnote 1: For notational simplicity, we do not distinguish here between physical atmospheric states, denoted as \(\tilde{x}\), \(\tilde{y}\) in Main, and their discrete representations \(x\), \(y\). The joint distributions \(\tilde{p}(y,x;\alpha)\) and \(p_{\theta}(y,x;\alpha)\) factor as \(\tilde{p}(y,x;\alpha)=\tilde{p}(y|x;\alpha)\,p(x;\alpha)\) and \(p_{\theta}(y,x;\alpha)=p_{\theta}(y|x;\alpha)\,p(x;\alpha)\) where \(p_{\theta}(y|x;\alpha)\) is the AtmoRep model. Thus, \[\mathcal{D}\big{(}\tilde{p}(y,x;\alpha),p_{\theta}(y,x;\alpha)\big{)}=\int_{\mathcal{X}_{x}}\int_{\mathcal{X}_{y}}d\big{(}\,\tilde{p}(y|x;\alpha)\,,\,p_{\theta}(y|x;\alpha)\big{)}\,p(x;\alpha)\,\mathrm{d}x\,\mathrm{d}y\] where we assumed that \(d(\cdot,\cdot)\) only acts on \(y\) and hence \(p(x;\alpha)\) can be factored out (which is satisfied in our case, see again below). The last equation equals \[\mathcal{D}\big{(}\tilde{p}(y,x;\alpha),p_{\theta}(y,x;\alpha)\big{)}=\mathbb{E}_{p(x;\alpha)}\Bigg{[}\int_{\mathcal{X}_{y}}d\big{(}\tilde{p}(y|x;\alpha),p_{\theta}(y|x;\alpha)\big{)}\,\mathrm{d}y\Bigg{]}.\] The expected value over \(p(x;\alpha)\) can be estimated by the empirical mean over the large but finite observational record \(\bar{\mathcal{X}}\) that consists of discrete samples \((\bar{y},\bar{x})\).
The available distribution \(\tilde{p}(y,x;\alpha)\) is thus a sequence of Dirac-deltas and we therefore have \[\mathcal{D}\big{(}\tilde{p}(y,x;\alpha),p_{\theta}(y,x;\alpha)\big{)}\approx\frac{1}{N}\sum_{\bar{x}\in\bar{\mathcal{X}}}d\big{(}\bar{y},p_{\theta}(y|\bar{x};\alpha)\big{)}.\] As an approximation for the observational record \(\bar{\mathcal{X}}\), we employ in our work the ERA5 reanalysis and due to computational constraints we use a random subset of it in practice, i.e. we use a Monte Carlo estimate of the last equation in the actual computations (as is standard when stochastic gradient descent or one of its variants are employed for training). As discussed in Methods, AtmoRep employs an ensemble to provide a non-parametric representation of the conditional probability distribution over the output state \(y\) and a novel statistical loss function for its training. These are motivated by the cross-entropy loss, which is the de facto standard for the training of discrete probability distributions [73]. In the discrete case, an explicit representation of the probabilities over valid outputs is possible (even when the number of classes is very large, as for large language models, e.g. [768, 32]). In the continuous, regression case, however, this is not possible. One remedy is to work with a parametric probability distribution with a finite number of parameters that then can be learned. Inspired by the success of ensemble methods for numerical weather prediction [74, 19], we instead generalize the discrete case using an ensemble. It provides a sample from a non-parametric distribution and therefore no assumptions on the true distribution are required. As discussed in Methods, the ensemble is realized by a set of prediction heads and it is therefore computationally inexpensive. To compute the loss based on the ensemble prediction \(\hat{y}\), we currently consider its first two statistical moments, i.e. mean \(\mu\) and standard deviation \(\sigma\). The corresponding Gaussian \(G_{\mu,\sigma}\) has a value of 1 at its mean when normalized accordingly. With a single observation \(\bar{y}\) given by a per grid point value for a single physical field, a loss function is \[d_{s}\big{(}\bar{y},\hat{y}\big{)}=\Big{|}1-\int\delta_{\bar{y}}(y)\,G_{\mu,\sigma}(y)\,dy\Big{|}^{2}=\Big{|}1-G_{\mu,\sigma}(\bar{y})\Big{|}^{2}.\] (B4) The above loss \(d_{s}(\,\cdot,\cdot\,)\) is very similar to the Gaussian continuous ranked probability score (CRPS), a proper scoring rule; in particular, it can be seen as a probability density function form of the CRPS, which is formulated in terms of the cumulative distribution function, cf. [75]. However, in our experiments, Eq. B4 performed significantly better as a loss function than the CRPS, see Fig. B12. The figure also shows that our statistical ensemble loss leads to a lower test-set MSE than training with the MSE directly. As already discussed in Methods, the full loss function per grid point per physical field is \[d\big{(}\bar{y},\hat{y}\big{)}=\sum_{k}|\bar{y}-\hat{y}_{k}|^{2}+d_{s}\big{(}\bar{y},\hat{y}\big{)}+\sqrt{\sigma}\] (B5) where the \(\hat{y}_{k}\) are the individual ensemble members. The loss is computed independently for each predicted grid point and for each physical field. We thereby use a field-specific weighting but a uniform weight across the vertical levels. The weights are given in Table B1.
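A minimal sketch of this per-grid-point loss computation (illustrative only; it omits the field-specific weighting and uses placeholder tensor shapes) reads:

```python
# Sketch of Eqs. (B4)-(B5): per-ensemble-member MSE, the statistical term based
# on the unnormalized Gaussian of the ensemble, and the sqrt(sigma) regularization.
import torch

def statistical_loss(y_obs, y_ens, eps=1e-6):
    """y_obs: observations, shape (num_points,); y_ens: ensemble, shape (n_ens, num_points)."""
    mu = y_ens.mean(dim=0)
    sigma = y_ens.std(dim=0) + eps
    # unnormalized Gaussian, value 1 at its mean, evaluated at the observation
    gauss = torch.exp(-0.5 * ((y_obs - mu) / sigma) ** 2)
    d_stat = (1.0 - gauss) ** 2
    mse = ((y_obs.unsqueeze(0) - y_ens) ** 2).sum(dim=0)   # sum over ensemble members
    return (mse + d_stat + torch.sqrt(sigma)).mean()        # mean over grid points

# Example with a 16-member ensemble over 1024 masked grid points
loss = statistical_loss(torch.randn(1024), torch.randn(16, 1024))
```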
The states \(x\),\(y\) we use in the training of AtmoRep are based on the tokens that are the input to the transformer-based Multiformer and which are given by tiles of a larger space-time neighborhood, e.g. \(12\times 5\times 6\times 12\) tokens for a \(36\,\mathrm{h}\times 5\,\mathrm{vertical\ levels}\times 1800\,\mathrm{km} \times 3600\,\mathrm{km}\) neighborhood for each physical field. In particular, we randomly designate a subset of tokens as \(y\) and the remainder of tokens as \(x\), see Fig. 1 in Main. The \(y\)- or target tokens are probabilistically masked or distorted. For masking, we set them to 0, which is for all fields a physically valid value due to the data normalization. For distortions, we either add noise with a magnitude that is consistent with the standard deviation of the values in the token or we coarsen it by a factor of 2 or 4. The choice which tokens are masked and which are distorted is made fully randomly by sampling in each case from a uniform distribution over a linear indexing of the tokens. The user-specified masking ratio, cf. Table B1, is thereby a maximum ratio and the actual one used per training example is also sampled randomly. The loss is computed over all tokens in \(y\), which in general includes unmasked tokens due to the random sampling. The masking is thereby performed independently for different physical fields and different levels. Our training strategy with masking and distorting tokens is an analogue of the masked token models used in natural language processing, e.g. [8, 25], which recently have also been adopted to computer vision, e.g. [26]. The distortions are inspired by work in natural language processing [8] that used random word permutations to encourage the learning of a robust and probabilistic representation. We employ our distortions towards the same end. Processing the entire ERA5 training set in multiple epochs was beyond the scope of the compute resources available to us. We therefore formulate the training as a Monte Carlo approximation where in each epoch we processed a randomly sampled sub-set of the full data set. The purpose of the epochs used in our work were therefore to have a manageable number of training periods that can be used to adjust the learning rate and to evaluate the test set for monitoring progress during training. We chose the amount of data for each epoch hence accordingly so that the training would complete within a small number of hours. ### Implementation The AtmoRep model was implemented with PyTorch [76]. Data parallel training was used extensively throughout the project. We employed PyTorch's DDP library for it. The final model configuration was chosen so that a complete Multiformer with six fields can be placed on one large compute node with four A100 GPUs. Different fields thereby reside on different GPUs with temperature and total precipitation on one since they use a smaller embedding dimension and also process a smaller number of tokens. We implemented a custom hierarchical sampling of the per-batch data. This was critical since random access to small parts of a large data set is highly inefficient on standard file systems when implemented naively. A good randomization per batch is, however, required for effective stochastic gradient descent and to obtain an unbiased estimator with AtmoRep. 
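Such a two-level sampling can be sketched as follows (all names and numbers are illustrative and not those of the AtmoRep data loader; the actual batch assembly is described in the next paragraph):

```python
# Sketch of hierarchical sampling: draw a few (year, month) chunks that are read
# from disk, then sample many local neighbourhoods on the fly from each chunk.
import random

def sample_batch_positions(year_range, num_chunks, slices_per_chunk, neighbourhoods_per_slice):
    chunks = random.sample([(y, m) for y in year_range for m in range(1, 13)], num_chunks)
    positions = []
    for (year, month) in chunks:
        # the data chunk for (year, month) would be loaded from disk here
        for time_slice in range(slices_per_chunk):
            for _ in range(neighbourhoods_per_slice):
                lat = random.uniform(-90.0, 90.0)
                lon = random.uniform(0.0, 360.0)
                positions.append((year, month, time_slice, lat, lon))
    random.shuffle(positions)
    return positions

# e.g. 2 chunks per task, 8 time slices each, 2 neighbourhoods per slice
batch_positions = sample_batch_positions(range(1990, 2018), 2, 8, 2)
```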
The formulation of training as a Monte Carlo method allowed us thereby to perform the batch assembly per parallel task fully independent from other tasks so that only gradients needed to be communicated in the data parallel training. Concretely, we sampled on each data parallel task first a small number of year-month pairs with the number being controlled by the available CPU RAM. For each time slice in these, a user-specified number of local neighborhoods was sampled. The data chunks corresponding to the year-month pairs were loaded from disk and the spatial sampling from these was performed on the fly in multiple parallel data loaders. The variable number of local neighborhoods per time slice allowed us to hide latencies in disk I/O. ### Model configuration The results presented in the main text were obtained with three different configurations of the AtmoRep Multiformer (Table B2). All of these configurations build on the same pre-trained single-field models, which were used with the configuration given in Table B1. These parameters were not changed for the Multiformers. Relevant parameters can be summarized as follows: * Physical fields: velocity u, v (or vorticity and divergence), vertical velocity, temperature, specific humidity, total precipitation * Model levels: 96, 105, 114, 123, 137 * Resolution: \(0.25^{\circ}\) equi-angular grid (\(721\times 1440\) grid points) * 2017, test period: 2018 * Neighbourhood: 12 x 6 x 12 tokens with 3 x 9 x 9 grid points for pre-training * UNet-like encoder-decoder architecture with 10 transformer layers in each branch * Self-attention: 16 heads in the encoder, 8 in the decoder * Cross-attention (same field): 8 heads in the decoder * Cross attention (inter-field): 2 heads per field in the encoder (in addition to self-attention heads) * Ensemble tail networks: 16 linear layers per field for ensemble generation * \(2\cdot 10^{-5}\), five epochs warm-up, then exponential decay to \(2\cdot 10^{-5}\) * Dropout rate: 0.05 * Optimizer: AdamW (weight decay: 0.05) * Masked token model with distortions; up to 90 % of modified tokens are masked, up to 20 % get noise added, and up to 5 % are smoothed * Multiformer with five fields: 3.5 billion parameters, parallelized across the 4 GPUs available in one node * Training on up to 32 nodes with 4 \(\times\) A100 GPUs on JUWELS-BOOSTER at the Julich Supercomputing Center * Training time: 4.5 hours per epoch, where an epoch is per node given by 2 spatial samples per time step for all time steps in 1 randomly sampled months. * Inference time: \(\approx 10s\) for a global 3-hour forecast on 1 node Due to the large model size, no hyper-parameter optimization was possible. The selected parameters were those that showed the best performance and trade-off in preliminary experiments with smaller scale models and training runs. A summary of the model configurations used for the different application together with an overview of the datasets and the metrics used in the analysis is reported in Table B3. ### Attention maps Attention maps have been used before in natural language processing and computer vision to understand what a transformer neural network has learned and to visualize and exemplify their generalization abilities, e.g. [8, 48, 77, 78]. An advantage compared to other introspection methods is that these are immediately available when evaluating a model without further computations. To the best of our knowledge, attention maps have so far not been studied for large scale transformer models in Earth system science. 
An attention map \(A_{h}\in R^{N\times N}\) is the matrix derived from multiplying queries \(Q_{h}\in\mathbb{R}^{N\times E_{h}}\) by keys \(K_{h}\in\mathbb{R}^{N\times E_{h}}\) within an attention block, i.e. \[A_{h}=Q_{h}K_{h}^{T}.\] (B6) where \(N\) is the number of tokens and \(E_{h}\) the per-head embedding dimension. The factors are thereby given by \[Q_{h}=W_{Q}^{h}\,T\qquad K_{h}=W_{K}^{h}\,T\qquad V_{h}=W_{V}^{h}\,T\] (B7) with \(T\in\mathbb{R}^{N\times E}\) being the matrix formed by stacking the tokens \(t\in R^{E}\) as rows. \(W_{Q}^{h}\), \(W_{K}^{h}\), \(W_{V}^{h}\) are learnable, head-specific projection matrices for queries, keys, and values, respectively. In cross-attention, a different token matrix \(T\) is used for queries and for keys and values. In the transformer, the per-head attention map \(A_{h}\) is employed as \[T_{h}^{\prime}=\text{softmax}\left(\frac{A_{h}}{\sqrt{dk}}\right)V_{h}=\text{ softmax}\left(\frac{Q_{h}K_{h}^{T}}{\sqrt{E}}\right)V_{h}\] where \(T_{h}^{\prime}\) is the matrix of updated tokens that, after passing through the MLP of the attention block, is processed by the subsequent blocks in the transformer. By the last equation, an attention map hence provides a direct measure of the token mixing within the block, i.e. how much information from the \(j^{\text{th}}\) token is used to update the \(i^{\text{th}}\) one. \(A_{h}\) is thereby determined by the scalar product, i.e. a standard, linear measure for the similarity of two vectors from linear algebra. To reveal space-time structures in AtmoRep's attention maps, we arrange the \(N\) tokens again in their space-time form, e.g. the \(12\times 5\times 6\times 12\) used during pre-training, so that \(A\in\mathbb{R}^{(n_{t}\times n_{v}\times n_{\theta}\times n_{\phi})\times(n_{ t}\times n_{v}\times n_{\theta}\times n_{\phi})}\). We can then generate spatio-temporal plots by either fixing one token that is attended to, i.e. fixing an index in one of the "legs" \(n_{t}\times n_{v}\times n_{\theta}\times n_{\phi}\), or by averaging over one of them. In either case, this enables us to visualize \(n_{t}\times n_{v}\) latitude-longitude plots that are directly interpretable. In Fig. 10 in the Extended Data section, for the top plot we also averaged over the different attention heads in the last layer of the decoder, while in the bottom plot we show the different heads for a fixed time step and level. ### Pre-training results Training was performed on JUWELS-BOOSTER at the Julich Supercomputing Centre. The per-field transformers were pre-trained on 4 nodes each for multiple weeks with a per-node batch size of 6 (i.e. with an effective data-parallel batch size of 96). For the multiformers, we employed 32 nodes with a per-node batch size of 2 (i.e. with an effective data-parallel batch size of 64). The batch size was in all cases controlled by the available GPU memory. To monitor progress during pre-training, we used the validation loss for the masked token model training task. This provides, however, only limited insight into the generalization abilities of a model. We therefore complemented it by the zero-shot forecasting performance. We also regularly evaluated error plots and similar metrics to monitor the progress of the training. To evaluate the suitability of AtmoRep as a statistical model, we also extensively used histograms. Two examples are shown in Fig. 14 and Fig. 15. Fig. 14 shows the global histogram for vorticity at model level 137 for a selected number of training epochs. 
In early training, deficiencies are in particular present in the tails of the distribution but these improve with more training. Fig. 15 shows histograms for two selected but representative point locations. The highly non-Gaussian nature of the distributions is well captured by our model, even in the tails. For all models, we increased the masking rate during pre-training to make the training task more difficult and this led to a measurable improvement of the zero-shot generalization abilities of AtmoRep. We observed substantially different behavior for different fields during pre-training. For example, for the velocity components the statistical loss was sufficient and no MSE term was needed whereas no convergence was obtained for vorticity without it. For divergence, obtaining convergence was difficult even with both MSE and statistical loss. Using a model pre-trained for vorticity yielded substantially better results although the loss remained higher than for vorticity. We also observed that we obtained sub-optimal predictions with visually apparent artifacts for temperature, despite a very small MSE loss. Using a larger token size helped to alleviate the problem, likely since small tokens are essentially constant and result in an uninformative attention computation. We believe that most of the above observations can be understood through the widely different frequency spectra of the different physical fields, cf. Fig. 13, but we leave a more systematic study to future work. The ability to adapt parameters and computational protocols to the properties of the physical fields is, in our opinion, a significant advantage of the Multiformer architecture. We did not observe convergence for any of the models during pre-training and the loss was continuously decreasing. The pre-training was terminated when the available computational resources were exhausted. ### Design Choices Below, we discuss some of the design choices we made for AtmoRep as well possible alternatives. * _Use of dense attention:_ Dense attention is computationally expensive since its computational costs scale quadratically with the number of tokens. This problem is exaggerated by the four-dimensional domain AtmoRep works on. One alternative to dense attention is axial attention, which has been used before for problems in Earth system science e.g. in [79]. We also implemented it in AtmoRep but it lead to substantially worse results for the intrinsic zero-shot abilities. We believe that sparse attention, as used for example in large language models [25], might provide an alternative that improves computational costs without negatively impacting the skill. * _Training with forecasting instead of a bidirectional masked token model:_ The masked token model in our work is inspired by [8]. An alternative is to train with a forecasting task, which would preserve causality. Preliminary experiments led to worse performance but we believe that also forecasting is a suitable pre-training task when properly tuned. (The difference is analogous to BERT-type pre-training [8] and next-word-prediction pre-training [25] in natural language processing; both provide comparable performance.) * _Masking value:_ In our work, we mask tokens with \(0\). Due to the data normalization, this is leads to the masked tokens being physically valid, at least in a statistical sense. For the training, a statistically valid token is, in our opinion, desirable, e.g. because it facilitates the learning of a robust representations as needed for model correction. 
It also leads, however, to some downsides. For instance, the reconstruction of a field only from other fields or only from a given external condition \(\alpha\) is not possible in this case, cf. Sec. 5. * _Statistical loss formulation:_ The choice of using the difference to an unnormalized Gaussian is unorthodox from the point of view of probability theory where, e.g., the area between the observation and the mean under a normalized Gaussian would be a more natural choice. However, in our experiments the current loss performed better than the alternatives. We leave a theoretical investigation of this to future work. * _Number of statistical moments in loss computation:_ We currently employ only the first two statistical moments in the loss computation. This is not equivalent to assuming a Gaussian distribution but means that we only control the first two moments of the output (somewhat similar to a weak formulation in finite elements). Using the first two moments worked well in all of our experiments and we believe that one reason for this is that these are considered independently per grid point and hence arbitrarily complex inter-grid point distributions are possible. However, with a sufficient number of ensemble members one could also consider higher order moments, e.g. kurtosis. * _Velocity versus Vorticity and Divergence:_ The wind velocity vector field can either be specified through its u-v components or through two potentials, which are equivalent to vorticity and divergence. We therefore trained single-field models for both variants and also assembled them into two Multiformer configurations (see Table 2). In our experience, vorticity thereby had the best performance and was in particular better than the velocity components. However, divergence had the worst performance of the four fields so that we did not reach a conclusion as to whether the velocity components or vorticity and divergence are better suited. Note, however, that a direct comparison of, e.g., loss values for the velocity components on the one side and vorticity and divergence on the other is not possible since they are related by differential operators. Instead, a prediction for vorticity and divergence needs to be converted to velocity space, or vice versa, for a fair comparison. In Fig. 16 we present preliminary results for an alternative way to compare the two configurations. Specifically, there we consider the performance of the two large Multiformer configurations for temporal interpolation on a common field, in this case specific humidity. There we see a significant benefit for the vorticity/divergence Multiformer, which is also consistent with other preliminary results. A more detailed investigation would, however, be required to obtain a more complete understanding. ## 3 Detailed Experimental Protocols and Further Results Below we provide further details on experimental protocols and evaluation. ### Definition of Metrics The most common metric used by multiple applications to evaluate model skill is the root mean square error (RMSE), defined as \[\text{RMSE}=\frac{1}{N}\sum_{n=1}^{N}\sqrt{\frac{1}{W\cdot H}\sum_{w=1}^{W}\sum_{h=1}^{H}\Lambda_{w,h}\cdot(x_{w,h}^{n}-\tilde{x}_{w,h}^{n})^{2}}\] (C8) with \(\Lambda_{w,h}\) either being the identity or \[\Lambda_{w,h}=W\cdot\frac{\cos(\alpha_{w,h})}{\sum_{w^{\prime}=1}^{W}\cos(\alpha_{w^{\prime},h})}\] (C9) for the latitude-weighted root mean square error [59]. Here, \(\tilde{x}\) denotes a prediction and \(x\) its ground truth (target) value.
\(N\) is a sequence length, \(w\) and \(h\) are the latitude and longitude indices of each grid point in the given region, and \(\alpha_{w,h}\) is the latitude value at point \((w,h)\). The (latitude-weighted) anomaly correlation coefficient (ACC) is defined as \[\text{ACC}=\frac{1}{N}\sum_{n=1}^{N}\frac{\sum_{w=1}^{W}\sum_{h=1}^{H}\Lambda_ {w,h}(x_{w,h}^{n}-C_{w,h})(\tilde{x}_{w,h}^{n}-C_{w,h})}{\sqrt{\sum_{w=1}^{W} \sum_{h=1}^{H}\Lambda_{w,h}(x_{w,h}^{n}-C_{w,h})^{2}\cdot\sum_{w=1}^{W}\sum_{h =1}^{H}\Lambda_{w,h}(\tilde{x}_{w,h}^{n}-C_{w,h})^{2}}}.\] A classical measure for ensemble skill is the continuous ranked probability score (CRPS). It is mathematically defined as \[\text{CRPS}=\int_{-\infty}^{+\infty}(F(x)-1_{\{x\geq y\}})^{2}dx\] (C10) and represents a quadratic measure of the difference between the forecast cumulative distribution function, \(F(x)\), and an empirical observation represented by a delta function. The (latitude-weighted) spread-skill ratio (SSR) represents a measure of the correlation between the ensemble spread and the prediction error. It is the ratio between the ensemble spread and the RMSE, with the ensemble spread calculated as \[\text{SSR}=\sqrt{\frac{\sum_{w=1}^{W}\sum_{h=1}^{H}\Lambda_{w,h}\cdot\sigma_{ w,h}^{2}}{W\cdot H}}\] and the RMSE as in Eq. C8. Here \(\sigma_{w,h}^{2}\) is the variance of the ensemble spread. For a well-calibrated ensemble, a larger error on the prediction corresponds to a larger ensemble spread. The SSR is usually between 0 and 1 with values close to zero indicating an undersampling of the ensemble distribution while values above 1 indicate that the ensemble spread is larger than the error on the prediction. To evaluate the bias-corrected precipitation forecasts, we made use of common metrics for dichotomous events [80]. To separate dry and rainy events, a threshold of \(0.1\,\mathrm{mm}/3\,\mathrm{h}\) for the precipitation rates was applied. Letting \(a\), \(b\), \(c\) and \(d\) denote the number of hits, false alarms, misses and correct negatives of the \(2\times 2\) contingency table, the equitable threat score (ETS), the Pierce skill score (PSS) and the frequency bias (FBI) are given by \[\mathrm{ETS}=\frac{1}{N}\sum_{n=1}^{N}\frac{a_{n}-a_{n}^{ref}}{a_{n}-a_{n}^{ ref}+b_{n}+c_{n}}\quad\mathrm{with}\quad a_{n}^{ref}=\frac{(a_{n}+b_{n})(a_{n}+c_{n}) }{a_{n}+b_{n}+c_{n}+d_{n}}\] \[\mathrm{PSS}= \frac{1}{N}\sum_{n=1}^{N}\frac{a_{n}d_{n}-b_{n}c_{n}}{(a_{n}+c_{n })(b_{n}+d_{n})}\] \[\mathrm{FBI}= \frac{1}{N}\sum_{n=1}^{N}\frac{a_{n}+b_{n}}{a_{n}+c_{n}}.\] The ETS thereby constitutes a skill score formulation for which the performance of a random forecast in terms of the treat score serves as reference. The Peirce skill score is based on the proportion of correct events and takes an unbiased random forecast as reference (\(a_{n}^{ref}\)). By contrast, the FBI is not an accuracy measure itself, since it evaluates the marginal event frequency of the forecast and ground truth data. However, the FBI provides information on systematical biases of a forecasting system with \(\mathrm{FBI}>1\) (\(\mathrm{FBI}<1\)) indicating over-confidence (under-confidence). ### Nowcasting The fine-tuning for the \(6\,\mathrm{h}\) forecasting was performed for 4 days, i.e. for approximately 8 epochs where each of them used data from \(32\times 1\) months (distributed across the 32 nodes used in training). The training task was a modified masked token model with always all tokens in the last two rows in the temporal dimension masked (and no randomness). 
Since each token has a width of \(3\,\mathrm{h}\) this corresponds to training for \(6\,\mathrm{h}\) lead time. CRPS for specific humidity has not been reported since the Gaussian approximation for the ensemble distribution does not hold. Other details can be found in the Methods section. ### Temporal interpolation For temporal interpolation, we report some additional results in Fig. 16. In particular, there we provide a comparison between the single-field models and results obtained with the two different 6-field Multiformers. The results show the clear advantages of the coupled models. By evaluating the skill for temporal interpolation for specific humidity, which is a common field in both large Multiformers, we can also compare the effect of either working with the wind components or with vorticity and divergence, i.e. with the different equivalent ways to represent the wind vector field. A substantially lower RMSE can be observed for the configuration with vorticity and divergence. Experiments for other applications are, however, needed to establish a clear result. The differences in the linear interpolation in the bottom plot in Fig. 16(e) can be attributed to differences in the random sampling for the experiment. ### Counterfactuals For the counterfactuals, we report results for vorticity since the ERA5 distribution has a relatively simple shape (in contrast, e.g., for temperature and specific humidity) and a clear shift between the early and late ERA5 years exists. Furthermore, vorticity is rapidly changing in time so that a 3 h short-term forecasts is more likely to remove imprints from the initial conditions, e.g. compared to temperature that is changing much more slowly. Further details on the experimental setup can be found in Methods and in Fig. 5 in the Extended Data section. ### Extrapolation to 2022 The experimental setup was very similar to that of the counterfactuals in Sec. 3.4, see also Fig. 5 in the Extended Data section. In particular, we used a large, random set of initial conditions from 2017 and evaluated short-term forecasts with these once with the correct external conditions \(\alpha\) and once with modified \(\hat{\alpha}\) where \(\hat{\alpha}_{y}=2022\). To avoid that results are biased by differences between ERA5 and model predictions, the reference distribution for 2022 was also obtained through short-term forecasting with AtmoRep. ### Model correction ECMWF's operational Integrated Forecasting System (IFS) is a state-of-the-art numerical weather prediction model. A version from 2016 with a spectral resolution of T639 was used for the production of the ERA5 reanalysis. The operational IFS, in contrast, works with T1220, which corresponds to an approximately 4-times finer resolution. The additional fine-scale spatial details in IFS data are not fully representable on the \(721\times 1440\) grid used for ERA5 but one still has a substantially higher effective signal resolution compared to ERA5, see the zoom-ins in Fig. 4 in the Extended Data section. Furthermore, the current IFS differs also in other respects (e.g. the parametrizations used) from those from 2016 so that next to the resolution also the overall data distribution is different. With data from the IFS distribution as input to AtmoRep, it is a priori not clear that the model will produce meaningful predictions. While "blow-up" as in conventional models is not typical for AI-based ones, these can show other artefcats, e.g. regular patterns. 
No such artifacts are observed in AtmoRep's predictions with IFS data as input. Furthermore, our model is able to partially preserve the higher frequency content, despite never having seen data with such a high resolution during training. The robustness of AtmoRep is broadly consistent with what has been observed for other large scale representation models, e.g. large language models, and a sufficiently large and diverse training dataset seems key to achieve these properties. Since AtmoRep is trained to produce data from the ERA5 distribution as output, it will do so also for IFS input. This leads to the distributional shifts towards ERA5 that are presented in Fig. 2 in Main and Fig. 5 in the Extended Data section. We thereby used data from a single month only, namely February, since averaging over months led to a less robust shift in the background distribution. For computing the distributions, we used AtmoRep with the masked token model while for the spectra in Fig. 1 in Main global forecasts where computed analogous to the nowcasting experiments. Spectra have been computed with the spharm python package. ### Downscaling For downscaling, we used the 3-field multiformer configuration with the two wind velocity components and temperature as base model. As discussed in Main, the target field was 2 m temperature from the COSMO REA6 reanalysis. As downscaling network we employed a transformer with six blocks and an embedding dimension of 2048, i.e. twice the size as for temperature in the pre-trained model. The larger embedding dimension was motivated by the much larger number of grid points in the COSMO REA6 target data set for each token (whose spatial extent was preserved). A linear layer was used to map the token for temperature in the AtmoRep core model to the larger embedding dimensions in the downscaling network. In the ensemble tail, we employed only 4 members since each of them had a large number of parameters. The wind components contributed to the downscaling only through the inter-field cross-attention in the encoder. The overall network had about 1.8 billion parameters and it was trained end-to-end for the training task, although fine-tuning of, e.g., only the decoder and the downscaling network is an interesting direction for future work. The task was the direct prediction of the downscaled field, fixed at the center of the COSMO REA6 domain and with an extent determined by the \(6\times 12\) spatial tokens used during pre-training. The loss was the same as during pre-training. A noteworthy observation during training was that a small batch size was critical to obtain convergence. Also, results improved considerably when we changed from a global to local data normalization for the \(2\,\mathrm{m}\) temperature target field. We used the recent work by Stengel et al. [39] as baseline for our downscaling experiments. We retrained the GAN used in this work for our domain and data, using only ERA5 temperature at model level 137 as input. In contrast to Stengel et al., we also performed a single downscaling step since the super-resolution factor was smaller than in the original work. ### Bias correction The RADKLIM dataset contains missing values. These are coherent regions at the boundary of the domain without any observations but also individual grid points within the domain at times. We treated these values by masking them for the loss computation. Despite this masking, the obtained predictions were spatially and temporally coherent, see Fig. 4 in Main. 
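The exclusion of missing observations from the loss can be realized as in the following minimal sketch (illustrative only, assuming missing values are encoded as NaN; the same masking can be applied to the statistical loss term):

```python
# Sketch of a loss that ignores missing target values, e.g. unobserved
# RADKLIM grid points, so that they contribute neither a gradient nor a bias.
import torch

def masked_mse(pred, target):
    valid = torch.isfinite(target)          # False at missing observations
    if valid.sum() == 0:
        return pred.sum() * 0.0             # no observed values in this sample
    diff = pred[valid] - target[valid]
    return (diff ** 2).mean()
```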
For training we used an MSE loss together with a loss that provided a stronger emphasis on large values to better capture the tails of the highly skewed data distribution. In particular, instead of squaring the per-ensemble-member difference between prediction and truth and the difference in the statistical loss, we took these values to the sixth power. The training task was the direct prediction of the RADKLIM data from the ERA5 data. The 6-field vorticity and divergence multiformer was used as base model and the entire model was fine-tuned. Data from the year 2018 was used as the validation set and data from 2019 as the test set. This separation was necessary for bias correction since the loss used for training and the evaluation scores were substantially different.

## 4 Additional experiments

### Effect of using different domain sizes than during training

As described in Main and Methods, AtmoRep's Multiformer is a highly flexible neural network architecture that allows one to change the number of tokens after training and through this to adapt the domain size as well as the number of vertical levels. In Table D4 we show the effect of changing the number of tokens in the input along different dimensions away from the 5 vertical levels and \(12\times 6\times 12\) tokens in time, latitude and longitude used during pre-training. The results demonstrate that reducing the domain size leads to only a limited degradation in the performance, in particular along the vertical dimension. A larger number of tokens, however, incurs a more substantial loss in skill. Note that small amounts of fine-tuning allow one to substantially reduce the degradation with little additional computation.

### Effect of coupling individually pre-trained transformers

In Table D5 we show the test error for the extended masked token model task used during training for individual per-field transformers and compare it to the performance when the fields are evaluated together in a Multiformer. The results demonstrate that the model combines the information from the different fields effectively in a Multiformer and that this leads to a substantial improvement of the results. Similar results are obtained for forecasting.

## 5 Directions for future work

There are several other potential applications of AtmoRep, some of which might be possible with the existing model and without task-specific training, i.e. they would be further intrinsic capabilities. Others will require some, or even substantial, fine-tuning and/or model extensions. To illustrate the generality of AtmoRep's concept, we discuss a selected set of potential applications and their realization with AtmoRep below. Possible extensions of the AtmoRep model are also presented.

_Medium range forecasting_

Several recent studies focused on medium-range weather forecasting and demonstrated that sufficiently large deep learning models can produce weather forecasts with skill comparable to state-of-the-art conventional models when trained on ERA5 [3, 4, 5, 6, 81]. While AtmoRep does not specifically target medium-range weather forecasting, the model can be easily extended to it. More specifically, forecasting can be formulated as an auto-regressive problem \[p(x_{N}|x_{0})=\prod_{i=1}^{N}p(x_{t_{i}}|x_{t_{i-1}})\] (E11) where the \(t_{i}\) form a sequence of discrete time steps. Hence, by representing \(p(x_{t_{i}}|x_{t_{i-1}})\) with our numerical stochastic model \(p_{\theta}(y|x,\alpha)\), i.e.
\(p(x_{t_{i}}|x_{t_{i-1}})=p_{\theta}(x_{t_{i}}|x_{t_{i-1}},\alpha)\), we can realize forecasting by iterating it. This is known as roll-out in the machine learning literature. For medium-range weather forecasting, AtmoRep's intrinsic ensemble provides an interesting avenue to develop an ensemble forecasting system. This is critical to achieve skillful and practically applicable medium-range forecasts [1, 19], although none of the existing large-scale AI-based forecasting models currently supports this natively. A central question for ensemble forecasting is how to achieve coherent long-term predictions with a practical number of ensemble members and with a physical spread [82]. In natural language processing, a similar challenge has been met using reinforcement learning [67, 68] to generate long, coherent model output. The principal applicability of reinforcement learning for forecasting has already been demonstrated in [6] and we therefore believe that reinforcement learning-based ensemble forecasting methods provide a promising direction for further exploration.

_Error estimates through the ensemble_

A challenge with machine learning-based methods is often their unknown reliability, for example since many of the guarantees from classical numerical analysis are not available. Fig. E17 above as well as Figs. 2 and 3 in the Extended Data section show that for AtmoRep, the error in a prediction is strongly correlated with the standard deviation of the ensemble. In a practical application, the error is not available, while the standard deviation is provided with any prediction. We hence believe that the standard deviation could be used as a proxy for the error to provide, e.g., information on the reliability of AtmoRep predictions.

#### Missing field reconstruction

For text and images, generative AI models can be used for conditional state generation, e.g. to obtain an image from a label or a descriptive text only [64, 69]. In the context of AtmoRep \(p_{\theta}(y|x;\alpha)\), analogous capabilities would allow one to reconstruct a statistically consistent atmospheric state from just the external conditions \(\alpha\), e.g. by specifying a date and a location. In fact, AtmoRep's ability to reconstruct states with up to 90% of the information being masked is already close to such a state reconstruction. A special case of relevance would also be to reconstruct one physical field from other ones that are provided. Such a field reconstruction would be of great utility for model compression, which is a major challenge for state-of-the-art weather and climate models.

#### Extensions of the AtmoRep model

An important and exciting direction for future work is a further extension of the AtmoRep model itself. Through the training on the ERA5 reanalysis, AtmoRep inherits the biases of the dataset, e.g. [10, 40, 41, 42]. As we demonstrated with the bias correction in Main, observational data can be used to reduce these. Pursuing this at scale and with a wide range of data streams from the observational record could lead to a substantially improved model \(p_{\theta}(y|x,\alpha)\). In turn, this would likely also lead to substantial benefits in applications that build on AtmoRep. A principal challenge is thereby how to train a model like AtmoRep with multiple potentially incoherent and often local data sources to obtain an effective improvement in the learned representations.
The heterogeneity of the data used for training very large text and image models suggests, however, that this can be achieved. A long-term objective would be to continuously update AtmoRep, or a similar model, as new observations become available. Such an ingestion of the very large amounts of data (on the order of terabytes) that today's observational network produces would provide a means to process the data into a coherent and informationally rich form that is readily amenable to applications. The current AtmoRep model works on the \(721\times 1440\) equi-angular grid that is the default for the ERA5 reanalysis. Due to the strong distortions at the poles, this is sub-optimal. Furthermore, having the ability to process data from different (grid) representations and at different resolutions would provide a significant benefit for AtmoRep and its applicability. This is challenging in particular at very high resolutions, e.g. with km-scale data, due to the quadratic scaling in the dense attention that is currently used with AtmoRep. Likely, a multi-resolution approach is required for a model that can flexibly handle data up to this resolution. Working with different grids is, in our opinion, significantly easier and can be achieved with representation-specific embedding networks and a suitable training protocol. This would likely also be required for the training on observations, where a remapping to a standard grid can already introduce substantial artifacts. A question that is important to all of the above is the size of the AtmoRep model and how it has to scale with the volume of the training data set. A first important aspect in this context would be the required model size to absorb all the information in ERA5 into AtmoRep and how this depends on the accuracy required in the output.

## 6 Data

In this section we provide more detailed information on the most important datasets used in our work.

### Data normalization

In our work, we used both global and local data normalization, in each case with the aim of mapping the data for a physical field to a standardized form with zero mean and unit variance. For global normalization, the mean and variance over the entire per-field data set per month was computed, irrespective of the spatial location, and the data was normalized using these values. This removed, e.g., seasonal variations from the data. For the local normalization, mean and variance were computed independently per grid point in the data but irrespective of the time. Global normalization worked well for vorticity and divergence but when used, for example, with temperature, one obtained predictions with biases that did not improve with further training. The use of the local normalization alleviated the problem substantially. We believe that the different behavior results from the different frequency characteristics of the physical fields, see Fig. 13, but more work is required to obtain a fuller understanding of the observations.

### ERA5

ERA5 is the fifth generation global atmospheric reanalysis produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) [10]. It provides the most consistent and coherent global estimate for the state of the atmosphere, land surface, and ocean waves that is currently available. ERA5 is based on ECMWF's operational Integrated Forecast System (IFS) model version CY41R2. This was the operational weather model of ECMWF in 2016.
Observations are assimilated with a hybrid incremental 4D-Var system into the atmospheric state that is represented on a reduced Gaussian grid with an approximate grid spacing of 31 km on 137 model levels for the high resolution realisation (HRES). The Copernicus Climate Data Store service provides the ERA5 reanalysis data on a regular (lat,lon)-grid with 0.25\({}^{\circ}\) resolution. In addition to the analysis product, short-range forecasts with a lead time of 18 hours are initialized twice a day at 06 and 18 UTC. These short-range forecasts provide estimates of surface fluxes, accumulated precipitation, and other subgrid-scale variables. ERA5 data ranges from the year 1940 to almost real time. For AtmoRep, only data after 1979 was used because these are better constrained by the inclusion of satellite observations. ERA5 has a comprehensive documentation [10] and has been extensively evaluated in the scientific literature, for example [40, 41, 42, 83, 84, 85, 86, 87, 88, 89]. While mean statistics of several atmospheric variables are generally captured well, some biases have also been noted. For example, [42] found an underestimation of daily maximum temperatures and an overestimation of daily minimum temperatures over Australia as well as an underestimation of the decadal warming trend. With respect to precipitation, ERA5 has a tendency to overestimate low and medium rainfall, while it underestimates heavy precipitation [83]. As described in [10], the quality of ERA5's data assimilation varies over time because of the changing global observations that are available. In combination with certain model errors (ERA5 uses an IFS model version from 2016), biases may vary over time. It is noteworthy for assessing the quality of AtmoRep and other deep learning models that have been trained on ERA5 data that improvements in model physics, resolution, and the data assimilation scheme of ECMWF result in better-quality atmospheric state analyses in recent IFS analyses and forecasts (i.e. after 2019) compared to ERA5 [43].

### COSMO REA6

COSMO REA6 constitutes a reanalysis dataset of the atmospheric state for the European CORDEX-domain [38]. The smaller spatial domain allows for much higher spatial resolution of the reanalysis product. The COSMO REA6 grid spacing \(\Delta x_{CREA6}\simeq 6\,\mathrm{km}\) is about five times finer than the ERA5-grid (\(\Delta x_{ERA5}\simeq 31\,\mathrm{km}\)). The regional NWP model COSMO v4.25 [90], operationally deployed from September to December 2012 at the German Weather Service DWD, was used to generate the dataset. The initial and boundary conditions were provided by the ERA-Interim reanalysis, the predecessor of ERA5. The COSMO REA6 reanalysis data is available between 1995 and August 2019. A continuous nudging scheme (as opposed to the incremental 4D-Var in ERA5) is used for data assimilation, which relaxes the model's prognostic variables towards observations over a predefined time window. Due to the rotated pole grid of the COSMO REA6 data, a first-order conservative remapping was applied to align the data with the unrotated ERA5 grid. Specifically, we used the Climate Data Operators (CDO) software [91] to reproject the COSMO data from the rotated pole grid with \(\Delta x_{CREA6}^{rot}=0.055^{\circ}\) onto an unrotated (lat,lon)-grid with \(\Delta x_{CREA6}=0.0625^{\circ}\).
While the horizontal wind vector components \((u,v)\) are directly processed, the 2 m temperature \(T_{2m}\) is expressed in terms of the dry static energy \(s=c_{pd}T+g(z_{sfc}+2\,\mathrm{m})\) to account for the change in the surface topography \(z_{sfc}\) during remapping. ERA5 and COSMO REA6 have been compared in a few studies. For example, [92] investigated irradiance estimates from both products and found an overestimation in ERA5 and an underestimation in COSMO REA6. Added value of the COSMO REA6 reanalysis product has been diagnosed mainly over mountainous regions such as the Alps due to the increased spatial resolution, see [93] for a comprehensive evaluation. Notable examples in this context are the 2 m temperature [94] and the near-surface wind [86, 93], confirming the suitability of COSMO REA6 as a target dataset for statistical downscaling.

### RADKLIM

The RADKLIM dataset provides precipitation observations derived from the German radar network operated since 2001 by the German Weather Service DWD. The reflectivity data measured at 17 ground-based C-Band radar stations of the network is converted to precipitation rates based on the well-known Z-R relation [95]. A sophisticated calibration procedure based on rain gauge observations is deployed to remove systematic observation errors, cf. [96]. Initially, the RADar OnLine AdjustmeNT (RADOLAN) procedure [97] applies several corrections by eliminating clutter pixels due to backscattered signals from non-meteorological targets (e.g. insects, buildings or other solid objects), reducing noise with gradient filters and compensating for orographic shading effects. The RADOLAN procedure is complemented by further processing techniques to improve correction of clutter artifacts and to account for signal reduction with increasing distance and altitude of the radar beam [46]. Evaluations of RADKLIM have concluded that it provides a reliable dataset for gridded precipitation [98]. In this study, we make use of the so-called YW-product, which provides precipitation rates at a temporal resolution of 5 minutes. The precipitation rates are then accumulated to a 3h-precipitation product. Similar to the COSMO REA6 data, first-order conservative remapping with CDO was performed to map the RADKLIM data from a polar-stereographic grid with \(\Delta x_{RDK}^{polar}=1\,\mathrm{km}\) onto the regular, spherical \(0.25^{\circ}\) grid of the ERA5-data.
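As an illustration of the dry-static-energy conversion used above for remapping the COSMO REA6 2 m temperature, the forward and inverse transforms can be written as follows (a minimal sketch; the constants are standard textbook values and may differ slightly from the ones actually used in the preprocessing):

```python
C_PD = 1004.709   # specific heat of dry air at constant pressure [J/(kg K)] (assumed value)
G = 9.80665       # gravitational acceleration [m/s^2]

def t2m_to_dry_static_energy(t2m_kelvin, z_sfc_metres):
    """s = c_pd * T + g * (z_sfc + 2 m); s is remapped instead of T."""
    return C_PD * t2m_kelvin + G * (z_sfc_metres + 2.0)

def dry_static_energy_to_t2m(s, z_sfc_remapped_metres):
    """Invert the relation on the target grid with its own surface topography."""
    return (s - G * (z_sfc_remapped_metres + 2.0)) / C_PD
```

Remapping \(s\) together with \(z_{sfc}\) and inverting the relation on the target grid keeps the 2 m temperature consistent with the changed surface topography.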
2306.14043
Machine Learning needs Better Randomness Standards: Randomised Smoothing and PRNG-based attacks
Randomness supports many critical functions in the field of machine learning (ML) including optimisation, data selection, privacy, and security. ML systems outsource the task of generating or harvesting randomness to the compiler, the cloud service provider or elsewhere in the toolchain. Yet there is a long history of attackers exploiting poor randomness, or even creating it -- as when the NSA put backdoors in random number generators to break cryptography. In this paper we consider whether attackers can compromise an ML system using only the randomness on which they commonly rely. We focus our effort on Randomised Smoothing, a popular approach to train certifiably robust models, and to certify specific input datapoints of an arbitrary model. We choose Randomised Smoothing since it is used for both security and safety -- to counteract adversarial examples and quantify uncertainty respectively. Under the hood, it relies on sampling Gaussian noise to explore the volume around a data point to certify that a model is not vulnerable to adversarial examples. We demonstrate an entirely novel attack, where an attacker backdoors the supplied randomness to falsely certify either an overestimate or an underestimate of robustness by a factor of up to 81. We demonstrate that such attacks are possible, that they require very small changes to randomness to succeed, and that they are hard to detect. As an example, we hide an attack in the random number generator and show that the randomness tests suggested by NIST fail to detect it. We advocate updating the NIST guidelines on random number testing to make them more appropriate for safety-critical and security-critical machine-learning applications.
Pranav Dahiya, Ilia Shumailov, Ross Anderson
2023-06-24T19:50:08Z
http://arxiv.org/abs/2306.14043v2
# Machine Learning needs its own Randomness Standard: Randomised Smoothing and Prng-based attacks ###### Abstract Randomness supports many critical functions in the field of machine learning (ML) including optimisation, data selection, privacy, and security. ML systems outsource the task of generating or harvesting randomness to the compiler, the cloud service provider or elsewhere in the toolchain. Yet there is a long history of attackers exploiting poor randomness, or even creating it - as when the NSA put backdoors in random number generators to break cryptography. In this paper we consider whether attackers can compromise an ML system using only the randomness on which they commonly rely. We focus our effort on Randomised Smoothing, a popular approach to train certifiably robust models, and to certify specific input datapoints of an arbitrary model. We choose Randomised Smoothing since it is used for both security and safety - to counteract adversarial examples and quantify uncertainty respectively. Under the hood, it relies on sampling Gaussian noise to explore the volume around a data point to certify that a model is not vulnerable to adversarial examples. We demonstrate an entirely novel attack against it, where an attacker backdoors the supplied randomness to falsely certify either an overestimate or an underestimate of robustness. We demonstrate that such attacks are possible, that they require very small changes to randomness to succeed, and that they can be hard to detect. As an example, we hide an attack in the random number generator and show that the randomness tests suggested by NIST fail to detect it. We advocate updating the NIST guidelines on random number testing to make them more appropriate for safety-critical and security-critical machine-learning applications. ## 1 Introduction Randomness is crucial in machine learning (ML), serving a number of purposes across different areas. One use case is in federated learning, where it helps with user selection to ensure a diverse representation of participants and prevent bias [34]; it can also provide privacy amplification in the process [3]. It is very heavily used in optimisation, providing the foundation for Stochastic Gradient Descent [56], as well as generally for data sampling, enabling the selection of representative subsets for model training [47]. It enables Monte Carlo methods [48]. It is essential in active learning, a process where an algorithm actively selects the most informative samples for labelling, in order to reduce the labelling cost [31]. It forms the basis of differential privacy, the de facto standard for quantifying privacy in ML, where noise is added to data in such a way as to protect individual privacy while still allowing for accurate analysis [20]. It contributes to the generation of synthetic data, expanding the training set and enhancing the generalisation capabilities of machine learning models [36]. In short, randomness is widely used and highly significant for ML. Yet little thought has been given to the vulnerabilities that might result from weak randomness. We therefore explore the extent to which an attacker can change the safety-critical decision-making of a target system, purely by exploiting or tinkering with random number generators. 
We focus our efforts on Randomised Smoothing, a standard technique for quantifying uncertainty in a given blackbox model prediction [13]. This is heavily used in practice to combat adversarial examples; it can even be used to provide robustness certification. It samples isotropic Gaussian noise and adds it to a critical datapoint, in order to measure the probability that the addition of noise may cause the model prediction to change, leading to a spurious decision. Sampling high-quality noise is crucial for this purpose, yet in practice we rarely check how normal our Gaussian noise is. In this paper, we construct two attacks against Randomised Smoothing that assume an attacker can influence the random number generator on which the model relies. The first is a naive attack that simply replaces a Gaussian distribution with a different distribution, _e.g._ Laplace noise. This disrupts confidence quantification significantly for both over- and under-estimation, but is relatively easy to detect. We present the attack intuition visually in Figure 1. Following this naive proof of concept, we construct a more powerful and covert attack: a bit-flipping PRNG attacker that changes only a single bit out of every 64 bits deep in the implementation of the random number generator. We show that this can cause mis-quantification of the true confidence by up to a factor of 81 and is extremely hard to detect. This change to the PRNG is covert in that it does not cause it to fail the official NIST randomness suite of tests. We argue that similar randomness-based attacks can be devised against other ML techniques, such as differential privacy. It follows that the standards are insufficient to guarantee the security and privacy of machine learning systems in the face of sophisticated adversaries. We then discuss ways of tackling these vulnerabilities in practice. We aim to empower researchers and practitioners to understand the extent to which they place their trust in their toolchain's source of randomness, and develop more robust defences against the abuse of this trust. By developing an attack that exploits randomness and demonstrating its impact on the mechanisms most widely used to certify safety properties in ML, we expose the need for improved standards. By exploring potential mitigations, we hope to enable people to build more secure and resilient machine-learning systems. In this paper, we make the following contributions:

* We demonstrate a new class of attacks against Randomised Smoothing based on the substitution of an underlying noise distribution.
* We show how an attacker can change the underlying randomness generators to defeat it more covertly and even more effectively.
* We show that NIST's randomness tests with default parameters fail to catch our attacks and argue for updated randomness standards.

## 2 Related work

In this section we cover the work that relates to the attack we developed. Section 2.1 covers adversarial examples and Section 2.3 discusses how Randomised Smoothing can use randomness to counteract them. It is followed by a section on randomness.

### Adversarial Examples

Research into adversarial examples took off in 2014 when Szegedy et al. [63] discovered that deep learning models are very susceptible to such attacks. This led to the formulation of an evasion attack as an optimisation problem by Biggio et al. [6].
To determine the optimal adversarial perturbation on a given input \(x^{0}\), an attacker aims to minimise \(L:\;\mathcal{X}\rightarrow\mathbb{R}\), where \(L\) is the model's discriminant function: \[x^{*}=\operatorname*{argmin}_{x:\;d(x,x^{0})\leq d_{max}}L(x). \tag{1}\] The choice of the distance function \(d:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}\) is domain-specific, and the maximum size of the perturbation is limited to \(d_{max}\). More efficient attacks now exist [10, 14].

### Certification in Machine Learning

Certification techniques attempt to invert Equation (1) to determine the maximum perturbation radius around an input, inside which no adversarial examples exist. This is different from what the security community means by certification in the context of formal verification [16]: this refers to programmatic proof that a piece of software performs exactly as intended, matching some high-level abstraction. For example, a certified compiler will always translate legal code correctly into its compiled binary or throw an appropriate error [39]. If the high-level abstraction of a machine learning model is considered to be the function \(f:\mathcal{X}\rightarrow\mathcal{Y}\), mapping inputs to their true classes, then certification here would refer to making sure that, for a given volume around a point from \(\mathcal{X}\), the prediction in \(\mathcal{Y}\) does not change. Many certification techniques in current use are approximate and, despite the fact that theorems may be offered around their behaviour, they only provide probabilistic guarantees for the absence of adversarial examples in an \(\epsilon\) ball around a given input [14]. Randomised Smoothing is not the only certification technique. Other mechanisms exist, _e.g._ by constructing a bounding polytope instead of a hypersphere, or using either linear relaxation or interval bound propagation [69]. However, these require changes to the training process and model re-engineering, which in turn makes them hard to scale up to more complex model architectures and higher dimensional data [16]. Therefore, Randomised Smoothing is considered the state-of-the-art certification algorithm for machine learning models. It can be applied to any underlying model in a black-box manner, and scaled up to larger model architectures. It is used both for security, _i.e._ to combat adversarial examples, and safety, _i.e._ to provide confidence in predictions. That is why we made it the target of this work.

### Randomised Smoothing

Any classifier \(f\) can be used to construct a smoothed classifier \(g\) such that: \[g(x)=\operatorname*{arg\,max}_{c\in\mathcal{Y}}\mathbb{P}\left\{f(x+\epsilon)=c\right\},\text{ where }\epsilon\sim\mathcal{N}(0,\sigma^{2}I). \tag{2}\] This formulation allows easy approximation of \(g\) for a given confidence bound using a Monte Carlo algorithm by sampling \(\epsilon\) from \(\mathcal{N}(0,\sigma^{2}I)\) [13]. While previous work used differential privacy [19, 37] and Renyi divergence [66, 40] to determine the radius of the hypersphere around an input inside which the absence of adversarial examples can be provably verified, Cohen et al. [13] used the Neyman-Pearson lemma [50] to obtain a certification radius which is also provably maximal. The certified radius obtained from randomised smoothing is: \[R=\frac{\sigma}{2}\left(\Phi^{-1}(p_{A})-\Phi^{-1}(p_{B})\right), \tag{3}\]

Figure 1: A visual example of how manipulated randomness can result in mis-perception of real model confidence.
A normal Randomised Smoothing procedure is shown on the left – the original datapoint \(d\) is perturbed to get points in the dotted region, ultimately deriving a true \(\epsilon\) confidence region. The attacker biases the sampled noise and changes the estimation of Randomised Smoothing, as shown on the right. This overestimates the confidence and leads to an incorrect prediction. Noise can also be biased to reduce the \(\epsilon\) instead.

where \(\Phi^{-1}\) is the inverse of the standard normal cumulative distribution function, \(p_{A}\) is the probability of the most probable class \(c_{A}\in\mathcal{Y}\) and \(p_{B}\) is the probability of the next most probable class. This result holds true even if \(p_{A}\) and \(p_{B}\) are replaced with \(\underline{p_{A}}\), a lower bound on \(p_{A}\), and \(\overline{p_{B}}\), an upper bound on \(p_{B}\), such that \(p_{A}\geq\underline{p_{A}}\geq\overline{p_{B}}\geq p_{B}\). The mathematical proof for Equation (3) and its maximality can be found in the appendices of the full version of the original paper by Cohen et al. [13]. Using these results, an algorithm for certifying the predicted class from an input \(x\) can be constructed as follows. For a base classifier \(f\) and input \(x\), \(n_{0}\) counts of noise are sampled from \(\mathcal{N}(0,\sigma^{2}I)\), and the modal class \(c_{A}\) is selected as the target class. In order to perform the certification, the output classes are sampled under noise from \(f\), \(n\) times. A lower confidence bound for \(p_{A}\) with confidence \(\alpha\) is obtained using the Clopper-Pearson method [12]. \(\overline{p_{B}}\) is estimated as \(1-\underline{p_{A}}\). Finally, the radius of certification can be computed using Equation (3). While \(n_{0}\) can be relatively small to determine \(c_{A}\) effectively, \(n\) needs to be quite large. Approximately \(10^{5}\) samples are required to certify a radius of \(4\sigma\) with 99.9% confidence [13].

### Randomness in (Adversarial) ML

Randomness is heavily used in machine learning and there are many applications. Stochastic Gradient Descent, the randomised optimisation algorithm that arguably enabled modern deep learning, relies on randomness for data sampling [56]; randomised dropout improves generalisation [22]; active learning approaches rely on biased samplers to enable faster learning [31]; randomised transformations are used to introduce trained invariance [41]; while randomised sampling of data can lead to improved privacy [3], and even bound privacy attacks [65]. Adversarial ML uses randomness in both attack and defence. Adversarial examples benefit significantly from random starts [14], while many ML system engineers advocate for defences based on random pre-processing, where inputs are stochastically transformed before inference [52, 23, 2]. We are aware of only two attacks so far that only use randomness: the batch reordering attack [62] and the randomised augmentation attack [54], where the order of data and the randomness of the augmentation are manipulated respectively to introduce backdoors into a target model. Instances of randomness failure are not uncommon, especially in the context of differential privacy, where they have occasionally led to safety and security issues. A noteworthy example dates back to 2012 when Mironov discovered that the textbook floating-point implementations of Laplace noise sampling resulted in an inaccurate distribution of sampled values [49].
This led to a violation of privacy guarantees and highlighted the significance of the problem. More recently, a timing side-channel was discovered in the implementation of randomness samplers, once more violating privacy [29]. Finally, randomness plagues reproducibility in ML - model training is highly stochastic [70], which is in tension with repeatable model training [28]. ## 3 Prior on Randomness Monte Carlo algorithms like the one described in Section 2.3 require random sampling. This immediately leads an inquisitive mind to question, "what is a random number, and how can one be generated?" The mathematical definition of a random sequence of numbers has been the subject of much debate since before computer science came into existence. For an in-depth discussion of random sequences and their limitations, the reader is referred to chapter 3.5 of The Art of Computer Programming by Knuth [33]. What follows is a summary of some points relevant to this paper. ### Random Number Generators While many distributions can be sampled to generate random sequences, in the context of computer programming, generally the uniform distribution \(\mathcal{U}(a,b)\) is used, _i.e._ every number between \(a\) and \(b\) has an equal probability of selection. As discussed later in Section 3.3, the standard uniform distribution (\(\mathcal{U}(0,1)\)) can be transformed into any other random distribution, and is therefore a natural starting point for random number generators (RNG). There are quite a few sources of entropy that a computer can rely on to generate random numbers such as the timing of keystrokes or mouse movements [21]. For example, consider an RNG relying on timing of keystrokes as the source of entropy. Generating one random bit from this RNG can be modelled as an experiment phrased as "Is the time taken between two keystrokes in milliseconds an even number?". The outcome of this experiment can either be 0 or 1, thereby generating a random bit. The function that assigns a real value to each outcome of a random experiment such as the one described here is called a _random variable_[57]. The _distribution function_ of a 1D random variable \(X\) over the real line can be defined as \[F(x)=\mathbb{P}\{X\leq x\}=\mathbb{P}\{X\in(-\infty,x]\}. \tag{4}\] The _probability mass function_\(p\) for a discrete random variable \(X\) is \[p(x)=\mathbb{P}\{X=x\}. \tag{5}\] At the dawn of computing, a lot of research effort was spent on developing efficient ways of generating random numbers. There were mechanical devices such as ERNIE [30] which was used to generate random numbers for an investment lottery and could possibly be attached to computers. Modern CPUs now often feature built-in hardware RNGs that use minuscule natural fluctuations in current to generate random bits [64]. However, the limitations of using mechanical RNGs in the 1950s are applicable to these modern hardware RNGs as well. First, a hardware RNG makes it difficult to reproduce the functioning of a randomised program to check if it is behaving as expected. There is also a possibility that the machine will fail in ways that would be difficult to detect by a program using it [33]. Another problem is that it is difficult to judge the level of entropy of an RNG relying on real-world properties [21]. 
Finally, of special interest to the problem being tackled in this work is the fact that it is very tricky to determine the distribution function of a hardware RNG, as this can depend on environmental factors outside the control of the system designer. These issues led to the development of pseudo-random number generators (PRNGs), which rely on deterministic calculations to generate sequences of numbers that are not truly random, yet appear to be. Von Neumann proposed the first PRNG [68] in 1946 using the middle-square method. PRNGs use the current number in the sequence or some other counter to generate the next number, but they need a starting point - a seed - to generate the first number or counter value. The sequence of numbers is thus a function of the seed, and two instances of the same PRNG will produce the same sequence if the seeds are identical2. The following sections present the process of generating random numbers, and transforming them to sample from the normal distribution. This is followed by an overview of statistical tests to determine the quality of random numbers produced by a PRNG, and finally a discussion of possible attacks. Footnote 2: Not all PRNGs work like this, but this is a desirable property to ensure reproducibility. Cryptographic PRNGs typically combine hardware and pseudorandom components in such a way that both have to fail to make key material easily predictable by an opponent.

### The Linear Congruential Method and PCG64

Most popular random number generators use the linear congruential method, first introduced by Lehmer in 1949 [38]. This produces a sequence of numbers \(X_{1},X_{2},\ldots\) as follows [32]: \[X_{n+1}=(aX_{n}+c)\mod m,\ n\geq 0, \tag{6}\] where: \[m>0\text{ is the modulus},\] \[0\leq a\leq m\text{ is the multiplier},\] \[0\leq c\leq m\text{ is the increment, and}\] \[0\leq X_{0}\leq m\text{ is the seed}.\] Equation (6) can be used to generate random numbers between 0 and \(m\). Different choices of \(a\) and \(c\) affect the performance. The permuted congruential generator (PCG) [51] is widely used in ML and uses an adaptation of this method. The 64-bit version has a 128-bit state variable, which is advanced according to Equation (6), _i.e._ \(X_{n}\) gives the \(n\)th state and is a 128-bit integer. A 64-bit random number can be generated from this 128-bit state as:

output = rotate64(((uint64_t) state ^ (uint64_t)(state>>64)), state>>122)

First, the state is bit shifted right by 64 bits and XORed with itself to improve the randomness in high bits. Then a clockwise rotational bit shift of \(r\) bits is applied to the lower 64 bits of the resulting value, where the value of \(r\) (between 0 and \(2^{6}-1\)) comes from the 6 leftmost bits of the PRNG state3. For more details on the design choices and an empirical analysis of its performance, the reader is referred to O'Neill [51]. The PCG64 generator is the default choice for the Numpy library [26] and is the target of the attack presented in this work. Other popular random number generators include the Mersenne Twister [25, 42], SFC64 (Small Fast Chaotic) [18] and Philox [59]. Pytorch uses Philox, which is a counter passed through a block cipher. As it was designed to achieve high performance in HPC applications, the cipher was deliberately weakened [59].
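As a toy illustration of the linear congruential update in Equation (6), rather than the actual numpy/PCG64 implementation (whose 128-bit state and output permutation are only sketched above), a minimal generator might look as follows (the constants are illustrative):

```python
class ToyLCG:
    """Minimal linear congruential generator X_{n+1} = (a*X_n + c) mod m, cf. Equation (6)."""

    def __init__(self, seed, a=6364136223846793005, c=1442695040888963407, m=2**64):
        self.a, self.c, self.m = a, c, m
        self.state = seed % m

    def next64(self):
        # Advance the state and return it as a 64-bit integer.
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

rng = ToyLCG(seed=12345)
print([rng.next64() for _ in range(3)])
```

PCG64 additionally applies the XOR-and-rotate output permutation described above before releasing the 64-bit output, which is what gives it its better statistical quality.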
### Sampling from a Normal Distribution

The numbers generated by PCG64 can be assumed to be sampled from the discrete uniform distribution \(\mathcal{U}(0,2^{64})\). However, most randomised algorithms need this to be transformed into a different distribution, usually \(\mathcal{U}(0,1)\) or \(\mathcal{N}(0,1)\), which can then be transformed into any uniform or normal distribution trivially. Transforming discrete \(\mathcal{U}(0,2^{64})\) to continuous \(\mathcal{U}(0,1)\) can be done by simply discarding the first 11 bits of the random number4, typecasting as double, and dividing by \(2^{53}\) to effectively shift the decimal point from after the least significant bit to before the most significant bit. Analytically, a \(\mathcal{U}(0,1)\) sample can be transformed to correspond to a sample from any other distribution with a distribution function \(F\) defined in Equation (4) by inverting it: \[x^{\prime}=F^{-1}(x),\;x\in(0,1). \tag{7}\] Footnote 4: Note that this may result in quantisation artefacts, _e.g._ a pixel with value 245 noised to 235.3 gets quantised to 236, and potentially changes the performance of Randomised Smoothing. However, for some distributions, including the normal distribution, it is impossible to invert \(F\) analytically, and other means must be employed. The rectangle-wedge-tail method, first proposed by Marsaglia in 1961 [44, 46, 45], is an ingenious way to compute the absolute value of a normally distributed random number. A simplified version is presented here, and the reader is directed to Knuth's 'The Art of Computer Programming' [33] for a more detailed discussion. The normal distribution is split into 32 regions, and the first 8 bits of the random number are used to determine which region the target number should fall in. Every region can be thought of as the uniform distribution lying between 0 and an upper bound, which increases with every subsequent region (see Figure 5). These uniform distributions, when stacked on top of each other, approximate the normal distribution. Within each region, 55 bits of the random number are transformed into a float lying between zero and the upper bound by multiplying with a constant stored separately. The last bit is used to assign a sign to the target number. This method can efficiently generate a normally distributed number 99.3% of the time. If the target number is determined to fall in the outermost region, the polar method [9] can be used. How the random integer generated by PCG64 is transformed to a normally distributed random floating point number will come into play when the attack is discussed below.

**Attack:** Detection of an enemy tank is being performed from a satellite image. To make sure that the prediction is not a spurious correlation and to counteract camouflage paint, Randomised Smoothing is used.
A PRNG generates the noise for Randomised Smoothing. Randomness can come from the inference platform, _i.e._ the cloud hardware or ML-as-a-Service API; alternatively, the user can apply noise themselves.

**Normal user:** A user attempting to certify the robust l2 radius that does not contain adversarial examples around an input in the above setting. Here, our user may be an analyst who aims to find enemy tanks to target with missiles.

**Attacker:** An adversary with the objective of manipulating the certified radius obtained by the user. An attacker may increase it to make the prediction appear more robust than it actually is, _i.e._ convince the user that a tank is present at a given location, or decrease it to make the user uncertain about their prediction or even abstain from it, _i.e._ force them to not attack a tank at a given location.

**Defender:** The defender's goal is to detect the presence of a randomness-based adversary, i.e. to determine whether the noise function used for certification deviates significantly from white noise or not.

The NIST SP 800-90 standard was first developed at the start of the 21st century. The EC-DRBG PRNG featured in the first draft of the standard, which was published in 2004. The research community expressed concerns about EC-DRBG not being cryptographically sound by 2005 [24] - before the first official version of the standard was published [4] - leading to conjecture of an NSA backdoor [60]. Despite these concerns, EC-DRBG was adopted as the default random number generator by RSA in their BSAFE cryptography library [15]. The Snowden leaks in 2013 confirmed the existence of project Bullrun, which aimed "to covertly introduce weaknesses into the encryption standards followed by hardware and software developers worldwide" and successfully injected a backdoor into the default PRNG used for all cryptographic encryption between 2006 and 2013. This is a useful reminder of the tactics, tools, and procedures that may be used by a capable, motivated adversary to carry out a backdoor attack against pseudo-random number generators.

## 4 Methodology

### Threat Model

It is assumed that the victim is attempting to use ML to get predictions in a safety-sensitive or security-sensitive setting. Therefore it is important to accurately gauge the robustness of every prediction, and we assume that Randomised Smoothing is being employed for this purpose. By definition, the probability that an adversarial example can be found within the certified radius obtained from randomised smoothing is low: less than 0.1% if \(\alpha\) is set to \(0.001\) as suggested by Cohen et al. [13] when they proposed this technique. The knowledge that the victim is employing randomised smoothing can in itself be very useful to an adversary. This was demonstrated by Cullen et al. [16], who used this information to only search for adversarial examples with l2 distance greater than the certified radius, achieving better than the state-of-the-art success rate at finding adversarial examples against ML models using randomised smoothing. Furthermore, in their evaluation of randomised smoothing, Cohen et al. [13] found that the probability of finding adversarial examples increases rapidly as the upper bound on the l2 norm of the adversarial example set by the adversary increases beyond the certified radius, \(R\). For the ImageNet dataset [17], they found that the probability of finding an adversarial example against a smoothed classifier is 0.17 at an upper bound of \(1.5R\) and 0.53 at \(2R\).
Hence, confidence serves as a good proxy for the reliability of a prediction, and any compromise of Randomised Smoothing will decrease its trustworthiness. The objectives of an attacker attempting the class of attacks presented in this paper are twofold, depending on whether the certified radius obtained from randomised smoothing is being manipulated to be higher or lower. A higher certified radius can make the victim believe that the prediction for a given input is more robust than it actually is, so that the victim ignores adversarial examples that exist within the spoofed radius. On the other hand, certification is costly, requiring approximately \(10^{5}\) inferences from the smoothed model to certify a radius of \(4\sigma\). So reducing the certified radius can cause the victim to waste time and compute resources, forcing them to generate more samples of predictions under noise to obtain the desired radius - and thus providing a service-denial attack. The setup used by the victim to perform training and certification can itself be quite complicated, leaving multiple avenues for an attacker to gain access and manipulate the noise being used to perform the sampling. Modern practices such as ML-as-a-service, and outsourcing of training and data generation to third parties, have opened a Pandora's box of new attacks. The attacks discussed here focus on modifying the noise distribution being used, first by using a different noise function directly and later by modifying the bitstream generated by the underlying pseudo-random number generator (PRNG). Of course, where the victim has little or no control over the software and hardware used for training and certification, there are even more attack vectors to worry about. The objective of the attacks presented in the following sections is to spoof the certified radius, while escaping detection by the victim, whether by analysing the performance of the model, or by using statistical tests (discussed in more detail in Section 4.4). This class of attacks can be carried out by targeting the hardware and software layers as well. In addition, with recent developments in denoising diffusion probabilistic models [27], such attacks can also be carried out in an ML-as-a-Service scenario. If the victim is submitting inputs to a cloud API for prediction and certification, an attacker can effectively remove the noise introduced by the victim, and replace it.

### Naive noise distribution replacement

In order to check the feasibility of the attacks described in the previous section, an initial test was done by explicitly changing the distribution that is sampled during certification. The GitHub repository released by Cohen et al.5 was used as a starting point. Cohen et al. [13] reported that the variance of the distribution that the additive noise is sampled from controls the trade-off between accuracy and robustness. A lower value of \(\sigma\) can be used to certify smaller radii but with a high degree of accuracy, whereas a higher value of \(\sigma\) is needed to certify large radii, resulting in lower accuracy. \(\sigma=0.25\) was found to achieve an acceptable trade-off between certification radius and accuracy on the CIFAR 10 [35] dataset, which was used to run all the experiments in this paper.
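For concreteness, the quantity that both the naive and the bit-flipping attacks aim to distort, i.e. the certified radius of Equation (3) computed with a Clopper-Pearson lower bound as described in Section 2.3, can be sketched as follows (the function and parameter names are ours, not those of the reference implementation by Cohen et al.):

```python
from scipy.stats import beta, norm

def certified_radius(k: int, n: int, sigma: float = 0.25, alpha: float = 0.001):
    """Certified L2 radius given k out of n noisy predictions of the modal class (sketch)."""
    if k == 0:
        return None
    p_a_lower = beta.ppf(alpha, k, n - k + 1)   # one-sided Clopper-Pearson lower bound on p_A
    p_b_upper = 1.0 - p_a_lower                  # upper bound on the runner-up probability
    if p_a_lower <= p_b_upper:                   # cannot certify, so the procedure abstains
        return None
    return sigma / 2 * (norm.ppf(p_a_lower) - norm.ppf(p_b_upper))

print(certified_radius(k=9900, n=10000))
```

Any bias in the sampled noise shifts \(k\) and therefore the reported radius, which is exactly what the attacks below exploit.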
Using a base classifier trained with additive noise sampled from \(\mathcal{N}(0,0.25^{2})\), the naive attack swaps the noise distribution for one of the following when a certification is performed: the Laplace distribution, \(\mathcal{L}(0,0.25)\); the absolute value of the normal distribution, \(|\mathcal{N}(0,0.25^{2})|\); the uniform distribution, \(\mathcal{U}(-0.25,0.25)\); and the Bernoulli distribution, \(\mathcal{B}(0.5)\). This naive attack was used to demonstrate the feasibility of this class of attacks on PRNGs. The next section will describe a more sophisticated attack.

### Bit-flipping PRNG attacker

The objective of the bit-flipping PRNG attacker is to modify the stream of bits being generated by the random number generator to alter the distribution when normal variates are sampled. An overview of the algorithm used to transform random integers to floats in the standard normal distribution is presented in Section 3.3. Taking a 64-bit random integer, the rightmost 8 bits determine the rectangle in which the normally distributed random number will fall. These are called the _index bits_. The 9th bit is the _sign bit_. Finally, the remaining 55 bits are transformed into a floating point number falling within the limits of the uniformly distributed rectangle chosen by the index bits. These are called the _distribution bits_. The following attacks modify one of these three categories of bits to alter the resulting distribution. Altering a distribution can be done by either introducing kurtosis or skewness. Kurtosis is a measure of the tailedness of a distribution whereas skewness is a measure of the asymmetry of the distribution about its mean. All attacks described in this section were performed on the PCG64 random number generator [51] in the numpy library [26], which was then used to sample noise when performing randomised smoothing. PCG64 is a NIST-certified pseudo-random number generator. For an overview of how it works, the reader is referred to Section 3. The following attack was performed by modifying the next64 function, which is used to generate a random 64-bit integer, in the pcg64.h file. The original version of the function is shown in Listing 1.

#### 4.3.1 Negative Kurtosis Attack

The uniform distribution increased the relative certified radius the most and this builds the intuition behind the negative kurtosis attack. By modifying the distribution bits, so that they are no longer uniformly distributed, but positively skewed, the kurtosis in the normal distribution is reduced, as there are fewer random numbers near the mean, and more towards the tails. The modification to the PCG64 code is shown in Listing 2. Consider a uniformly distributed random variable, \(X\sim\mathcal{U}(0,1)\). The probability density function (pdf) for \(X\) is: \[p(x)=\begin{cases}1&0<x<1\\ 0&\text{otherwise}\end{cases}. \tag{8}\] This can be skewed to a random variable \(X^{\prime}\) by altering the pdf as follows (for \(b\) in Equation (14)): \[p^{\prime}(x)=\begin{cases}ax+b&0<x<1\\ 0&\text{otherwise}\end{cases}. \tag{9}\] In order to transform \(X\) to \(X^{\prime}\), first, the cumulative distribution function \(F^{\prime}\) of \(X^{\prime}\) must be derived. \[F^{\prime}(x)=\int_{0}^{x}p^{\prime}(y)\ dy=\int_{0}^{x}ay+b\ dy \tag{10}\] \[=\left[\frac{ay^{2}}{2}+by\right]_{0}^{x}=\frac{ax^{2}}{2}+bx. \tag{11}\] Since \(F^{\prime}(x):(0,1)\rightarrow(0,1)\), and \(0<X<1\), \(X\) can be transformed to \(X^{\prime}\) by inverting \(F^{\prime}\).
For \(x\in X\), the corresponding \(x^{\prime}\in X^{\prime}\) can be computed as follows: \[x^{\prime}=\frac{-b+\sqrt{b^{2}+4\frac{a}{2}x}}{2\frac{a}{2}} \tag{12}\] \[=\frac{-b+\sqrt{b^{2}+2ax}}{a}. \tag{13}\] The degree of skewness in \(X^{\prime}\) can be controlled by changing \(a\). Since \(F^{\prime}(1)=1\) from the definition of \(X^{\prime}\), \[b=1-\frac{a}{2}. \tag{14}\] In Listing 2, the tunable parameter is \(\alpha\), such that \(a=1/\alpha\). First, the 64-bit random integer is converted to a 64-bit double precision floating point number. Then, the transformation from \(X\) to \(X^{\prime}\) is applied. Next, the number is converted back to a 64-bit integer. This is finally bit-shifted left by 9 bits and the least significant 9 bits are copied over from the original random integer generated by the PRNG. This is so that the index and sign bits remain random, and only the distribution bits are modified. A lower value of \(\alpha\) results in higher skewness in the distribution bits, and more negative kurtosis in the resulting distribution. The sampled probability densities are plotted in Figure 2(a), along with a reference curve of the probability density function of the normal distribution.

#### 4.3.2 Skewness Attack

The absolute normal distribution achieved a significant success rate in manipulating the certified radius to be high enough such that evasion attacks could be successful, while still maintaining an acceptable level of performance compared to the original model. Therefore, this distribution formed the basis of the skewness attack as shown in Listing 3. An 8-bit integer is used as a counter, initialised to 1. Every time a random integer with the sign bit set to 1 is encountered, the counter is incremented. When the value of the counter reaches the tunable parameter, \(\beta\), the sign bit is flipped to zero, keeping the rest of the random number the same. This results in the final normally distributed random number being positive instead of negative, thereby skewing the distribution. The sampled probability distributions for \(\beta=0,1,2\), and \(4\) are shown in Figure 2(b). For \(\beta=0\), the resulting distribution is the same as \(|\mathcal{N}(0,0.25^{2})|\).

#### 4.3.3 Positive Kurtosis Attack

The motivation for this attack comes from the increase in certified radius observed when the normal distribution was replaced with the Laplace distribution. While the relative increase in radius achieved was not as high as the others, the attack is further motivated by the reasoning that it is more likely that the sampled prediction is the same as the modal class if the l2 norm of the additive noise is lower, thereby increasing the certified radius. The modified function in PCG for this attack is shown in Listing 4. A counter is initialised to 1 at first and is incremented by 1 every time a random number is generated. Every time the value of the counter is equal to the tunable parameter \(\gamma\), the index bits in the random number are shifted right by 1. This halves the value of the index and reduces the size of the rectangular region chosen by the algorithm used for transformation to the normal distribution. It moves the random normal numbers closer to the mean, and increases the kurtosis of the resulting distribution, with the maximum kurtosis achieved for \(\gamma=0\), and becoming closer to the standard normal distribution as \(\gamma\) increases. The sampled probability densities of the distributions resulting from \(\gamma=0,1,2\) and \(4\) are shown in Figure 2(c).
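Listing 3 itself is not reproduced here, but the sign-bit manipulation it performs can be re-expressed in Python along the following lines (a sketch over a stream of raw 64-bit integers, using the bit layout described above; the names and the exact reset behaviour are ours):

```python
SIGN_BIT = 1 << 8   # bits 0-7 are the index bits, bit 8 is the sign bit

def skewness_attack(raw_integers, beta=2):
    """Force every beta-th would-be-negative draw to be positive (sketch of Listing 3)."""
    counter = 1
    for r in raw_integers:
        if r & SIGN_BIT:            # this integer would produce a negative normal variate
            if counter >= beta:
                r &= ~SIGN_BIT      # clear the sign bit, leave all other bits untouched
                counter = 1
            else:
                counter += 1
        yield r
```

With \(\beta=0\) every negative draw is flipped and the sampled distribution collapses to \(|\mathcal{N}(0,\sigma^{2})|\), matching Figure 2(b).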
Figure 2: These figures show the sampled probability densities against the normal pdf in orange for all considered values of the tunable parameters for each of the bit-flipping PRNG attacks proposed. The negative-kurtosis, skewness and positive-kurtosis attacks all alter the sampled distribution in a self-explanatory manner.

### Defences

#### 4.4.1 NIST Test Suite

The National Institute of Standards and Technology (NIST) first published the SP800-22 test suite in 2001. Its purpose is to ensure that random number generators used for cryptography are secure. This is done via a set of 15 statistical tests that operate on bitstreams from a random number generator, aiming to determine any deviation from a truly random sequence of bits. Brief descriptions of all 15 tests are given in Appendix C, including recommendations for overall size of the bitstream and test parameter choice. The reader is directed to Bassham et al. for a more in-depth description [5].

#### 4.4.2 Test for Normality

We employ a number of statistical tests to detect finer fluctuations in the resulting distribution after transformation, and to quantify the deviation. In the main body of the paper we use the Shapiro-Wilk test - empirically one of the highest-power statistical tests to determine whether a set of samples is normally distributed [55]. We list the performance of other tests in Appendix F. The test statistic for this test [61] is defined as: \[W=\frac{(\sum_{i=1}^{n}a_{i}x_{i})^{2}}{\sum_{i=1}^{n}(x_{i}-\overline{x})^{2}}, \tag{15}\] where \(x_{i}\) is the \(i\)'th order statistic, \(\overline{x}\) is the sample mean, \(m=(m_{1},\dots,m_{n})^{T}\) are the expected values of the order statistics, \(a=m^{T}V^{-1}/C\) with \(V\) the covariance matrix of \(m\), and \(C=\|V^{-1}m\|\). The distribution of \(W\) does not have a name, and the cutoff values for it are computed using Monte-Carlo simulations. \(W\) lies between zero and one - the closer it is to one, the more likely it is that the samples belong to the normal distribution. The \(p\)-value determines the confidence in the test statistic.

## 5 Evaluation

### Experimental Setting

**Setting:** The certification is performed on a subsample of 500 images from the CIFAR 10 test set [35], each with 100 noise samples for selection (\(n_{0}\)), 10,000 noise samples for certification (\(n\)), and \(\alpha=0.001\). We use the same base model as Cohen et al. [13], with a ResNet-110 architecture.

**Measurements:** Two types of results are collected: the first is the accuracy of the smoothed classifier, and the second is the size of the certification radius. The accuracy results attempt to determine if a naive defender will be able to detect the attack by simply observing the performance of the model. These are presented as a relative confusion matrix compared to the baseline unperturbed classifier, presenting the number of predictions that are correct, incorrect or abstain. The results for the impact of each attack on the certified radius were collected for the images where both the attacked model and the baseline predicted the same class. The fraction of images for which the relative certified radius (\(R^{\prime}/R\), where \(R^{\prime}\) is the manipulated radius and \(R\) is the original radius) falls in each of the bins: (0, 1.0], (1, 1.1], (1.1, 1.25], (1.25, 1.5], (1.5, 2.0] and (2.0, \(\infty\)), is reported in Table 2.
The higher the value of \(R^{\prime}/R\), the easier it will be to find adversarial examples within the manipulated radius \(R^{\prime}\). According to Cohen et al. [13], there is a 17% probability of finding an adversarial example with \(d_{max}=1.5R\) and a 53% probability for \(d_{max}=2R\), where \(d_{max}\) is as defined in Equation (1).

**Defences:** More sophisticated defences going beyond observing the change in performance of the classifier were also evaluated and are considered in Section 5.4. First, the NIST test suite for certifying cryptographically secure PRNGs was run for various lengths of bit streams, ranging from \(10^{2}\) to \(10^{6}\), with 1000 tests in each run. The official version of the NIST test suite was used. For each PRNG tested, raw 64-bit integers are generated, and the binary representation is saved as ASCII-encoded strings in a text file. This is used as an input for the test utility. The next defence evaluated is the Shapiro-Wilk test for normality. First, raw 64-bit integers were transformed to the normal distribution using the default implementation in numpy [26], and then tested using the AS R94 algorithm [58] from the scipy library [67]. This algorithm is limited to a maximum of 5000 samples, which is therefore the number of samples used to run our test.

### Naive Attacker

The change in classifier accuracy with the naive attack is presented in Table 1. As highlighted earlier, a smoothed model will never perform as well on the classification problem, since it becomes less accurate as more noise is introduced. Therefore, the attacker can get away with slightly reducing the accuracy and still escape detection, since this is expected behaviour. \(\mathcal{L}(0,0.25)\) and \(|\mathcal{N}(0,0.25^{2})|\) perform well as attack noise distributions in this regard, reducing model accuracy only nominally. The other two attack distributions, \(\mathcal{U}(-0.25,0.25)\) and \(\mathcal{B}(0.5)\), result in significantly reduced accuracy, which will raise red flags for the defender. Based on the discussion in Section 4.1, the probability of finding an adversarial example increases considerably as the relative manipulated radius increases. The focus here is on values of \(R^{\prime}/R\) that lie in (1.5, 2.0] and (2.0, \(\infty\)), which give the attacker a high probability of performing an evasion attack. \(|\mathcal{N}(0,0.25^{2})|\) performs particularly well in this regard, with 28% of input images having a spoofed radius that falls in the highly vulnerable categories. \(\mathcal{U}(-0.25,0.25)\) achieves the best attack success rate, but it is also the easiest attack distribution for the defender to detect by simply monitoring the performance of the classifier.

\begin{table} \begin{tabular}{c|c c c} \multicolumn{4}{c}{\(\mathcal{L}(0,0.25)\)} \\ & correct & incorrect & abstain \\ \hline correct & **282** & 48 & 44 \\ incorrect & 9 & **52** & 19 \\ abstain & 6 & 20 & **20** \\ \multicolumn{4}{c}{Laplace distribution} \\ \end{tabular} \end{table} Table 1: The above tables show a relative confusion matrix of the performance of Randomised Smoothing subject to naive attacks, where the normal distribution is swapped for a different distribution. The uniform distribution leads to the greatest deviation from baseline performance, followed by the Bernoulli distribution, then the absolute normal distribution, and finally the Laplace distribution.
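For concreteness, the naive replacement can be sketched as follows: the smoothing pipeline expects \(\mathcal{N}(0,0.25^{2})\) noise, while a compromised sampler silently substitutes one of the four distributions above. This is an illustrative sketch rather than the attack code used in the paper; the function name, the `attack` switch and the Bernoulli scaling are assumptions.

```python
import numpy as np

rng = np.random.default_rng()

def smoothing_noise(shape, sigma=0.25, attack=None):
    """Return 'Gaussian' smoothing noise, optionally swapped for another
    distribution of comparable scale by a compromised sampler."""
    if attack == "laplace":          # L(0, 0.25)
        return rng.laplace(0.0, sigma, size=shape)
    if attack == "abs_normal":       # |N(0, 0.25^2)|
        return np.abs(rng.normal(0.0, sigma, size=shape))
    if attack == "uniform":          # U(-0.25, 0.25)
        return rng.uniform(-sigma, sigma, size=shape)
    if attack == "bernoulli":        # B(0.5); scaling assumed for illustration
        return sigma * rng.binomial(1, 0.5, size=shape).astype(float)
    return rng.normal(0.0, sigma, size=shape)   # honest N(0, sigma^2)
```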
### Bit-Flipping PRNG Attacker

In this subsection we evaluate the three attacks described in Section 4.3: negative-kurtosis, skewness and positive-kurtosis. The parameter values chosen for evaluation are \(\alpha\in\{1,2,3,4\}\), \(\beta\in\{0,1,2,4\}\) and \(\gamma\in\{0,1,2,4\}\) for each of the attacks respectively. Note that none of these attacks can be detected by a naive defender. Appendix D shows the relative accuracy results, which do not deviate significantly from the baseline. Relative certification radii for each attack are shown in Table 2, with the skewness attack achieving the highest manipulated radii, followed by the positive-kurtosis attack. The negative-kurtosis attack performed worst, significantly altering the radius for only 4% of images, even with \(\alpha=1\).

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \(\frac{R^{\prime}}{R}<1\) & \(1.0<\frac{R^{\prime}}{R}<1.1\) & \(1.1<\frac{R^{\prime}}{R}<1.25\) & \(1.25<\frac{R^{\prime}}{R}<1.5\) & \(1.5<\frac{R^{\prime}}{R}<2.0\) & \(\frac{R^{\prime}}{R}>2.0\) & max\(\left(\frac{R^{\prime}}{R}\right)\) \\ \hline \multicolumn{8}{l}{_Naive noise distribution replacement attack_} \\ \(\mathcal{L}(0,0.25)\) & 0.80 & 0.07 & 0.02 & 0.02 & **0.02** & **0.07** & 6.03 \\ \(|\mathcal{N}(0,0.25^{2})|\) & 0.24 & 0.31 & 0.09 & 0.08 & **0.10** & **0.18** & **76.25** \\ \(\mathcal{U}(-0.25,0.25)\) & 0.32 & 0.22 & 0.03 & 0.08 & **0.07** & **0.28** & **13.99** \\ \(\mathcal{B}(0.5)\) & 0.88 & 0.02 & 0.01 & 0.02 & **0.03** & **0.04** & 9.58 \\ \hline \multicolumn{8}{l}{_Negative-Kurtosis attack_} \\ \(\alpha=1\) & 0.58 & 0.31 & 0.05 & 0.03 & **0.02** & **0.02** & 2.10 \\ \(\alpha=2\) & 0.57 & 0.35 & 0.04 & 0.02 & **0.02** & 0.00 & 1.38 \\ \(\alpha=3\) & 0.52 & 0.41 & 0.04 & 0.01 & **0.01** & 0.00 & 1.06 \\ \(\alpha=4\) & 0.55 & 0.40 & 0.03 & 0.01 & 0.00 & 0.00 & 1.95 \\ \hline \multicolumn{8}{l}{_Skewness attack_} \\ \(\beta=0\) & 0.24 & 0.31 & 0.09 & 0.08 & **0.10** & **0.18** & **81.24** \\ \(\beta=1\) & 0.24 & 0.35 & 0.10 & 0.08 & **0.09** & **0.13** & **36.27** \\ \(\beta=2\) & 0.30 & 0.33 & 0.12 & 0.10 & **0.06** & **0.08** & **27.43** \\ \(\beta=4\) & 0.34 & 0.39 & 0.12 & 0.06 & **0.04** & **0.05** & **18.11** \\ \hline \multicolumn{8}{l}{_Positive-Kurtosis attack_} \\ \(\gamma=0\) & 0.14 & 0.32 & 0.14 & 0.14 & **0.10** & **0.16** & **40.70** \\ \(\gamma=1\) & 0.18 & 0.38 & 0.17 & 0.13 & **0.06** & **0.08** & **15.79** \\ \(\gamma=2\) & 0.20 & 0.42 & 0.20 & 0.09 & **0.04** & **0.06** & **10.49** \\ \(\gamma=4\) & 0.22 & 0.51 & 0.16 & 0.04 & **0.03** & **0.04** & 6.05 \\ \hline \hline \end{tabular} \end{table} Table 2: This table shows the relative certified radius, \(R^{\prime}/R\), where \(R^{\prime}\) is the manipulated radius under attack, and \(R\) is the baseline radius. The fraction of images for which \(R^{\prime}/R\) falls in different bins is shown for the naive attacks and all three of the bit-flipping PRNG attacks. The maximum value of the relative certified radius achieved for each attack is also reported. The values in bold indicate instances when the attack managed to successfully manipulate the radius by a factor of at least 1.5. Out of the naive attacks, \(\mathcal{U}(-0.25,0.25)\) achieves the best performance, followed by \(|\mathcal{N}(0,0.25^{2})|\), \(\mathcal{L}(0,0.25)\), and finally \(\mathcal{B}(0.5)\). Among the bit-flipping PRNG attacks, the skewness attack performs the best, then the positive-kurtosis, followed by the negative-kurtosis.
Table 3: This table shows the results of the NIST test suite for cryptographically secure RNGs on the three bit-flipping PRNG attacks with varying values of \(\alpha\), \(\beta\) and \(\gamma\). The results for four popular modern PRNGs, MT19937, PCG64, Philox and SFC64, are also shown. Tests were run for bit streams of length \(10^{6}\) and the reported values are the number of instances of each test that passed out of 1000. The minimum pass rate for a good RNG recommended by NIST is 980. The frequency, cumulative sums and runs tests managed to detect all attacks with high confidence.

### Defences

**NIST tests -** Table 3 reports the number of bitstreams that passed each test out of 1000, where each bitstream was of length \(10^{6}\). This excludes the random excursions and random excursions variant tests, for which the NIST utility decides the number of bit streams to consider based on previous test results. The recommendation from NIST is that a good PRNG should pass at least 980 out of 1000 runs of each test. For the random excursion tests, the pass rate is reported based on the number of tests run. In addition to the attacked PRNGs, the results are also reported for the four most popular PRNGs currently in use: MT19937, PCG64, Philox and SFC64. While the other three manage to pass all tests, the MT19937 PRNG fails most of them, which is reassuring, since MT19937 is not recommended for any use case where a secure PRNG is required. Focusing on the results for \(\beta=4\) and \(\gamma=4\), the tests that were able to successfully detect both the skewness and positive-kurtosis attacks with high confidence are the frequency test, the cumulative sums test and the runs test. It must be noted that the minimum recommended input size by NIST for these three tests is 100, which is significantly lower than the input size of \(10^{6}\) used for generating the results in Table 3. Figure 3 shows the pass rate for these tests for different sample sizes ranging from \(10^{2}\) to \(10^{6}\) for the attacked PRNGs with \(\beta=4\) and \(\gamma=4\). Results show a significant drop in pass rate only after the input size is increased above \(10^{5}\), demonstrating a clear need to re-evaluate the recommended parameters and input sizes suggested by NIST for safety-critical ML. Another important result concerns the negative-kurtosis attack with \(\alpha=3\), which achieved a significantly higher pass rate than the PRNGs attacked with the other values of \(\alpha\).
This goes to show that statistical tests cannot always be relied upon, and that bad PRNGs sometimes achieve high pass rates.

Figure 4: These figures show the quantile-quantile (QQ) plots for \(10^{5}\) samples collected from PRNGs subject to the negative-kurtosis, skewness and positive-kurtosis attacks, showing the difference in CDF relative to the normal distribution. While attacks that significantly alter the distribution can be detected visually, this is not always apparent, as is the case with the skewness attack with \(\beta=4\) and positive-kurtosis attacks with \(\gamma=2\) and \(\gamma=4\).

Figure 3: These figures show the NIST test pass rates for the frequency, cumulative sums and runs tests, the three tests that managed to detect all the attacked PRNGs with high confidence at an input size of \(10^{6}\). These plots show how the pass rates vary as the input size is increased from the minimum value recommended by NIST, \(10^{2}\), to \(10^{6}\) for the two attacks that were the hardest to detect: the skewness attack with \(\beta=4\) and the positive-kurtosis attack with \(\gamma=4\). The attacks become detectable only after the input size is increased to \(>10^{5}\).

**Shapiro test** The NIST results demonstrate that existing tests, developed to ensure that PRNGs are cryptographically secure, struggle to detect attacks whose effect only appears once the output is transformed to the normal distribution. While attacks such as the negative-kurtosis attack, which significantly alters the distribution of raw random integers produced by the PRNG, can be detected, more sophisticated attacks such as the skewness and positive-kurtosis attacks escape detection, since they modify specific bits that have a greater impact on the normal distribution transformation. Therefore, the next class of defences explored here focuses on detecting the attack after the transformation and is only performed for the skewness and positive-kurtosis attacks. The first step in verifying whether a sampled distribution is close to normal is to manually look at the quantile-quantile (QQ) plot of the samples. It shows the difference between the sample cumulative distribution and the expected CDF. A QQ plot of the normal distribution sampled from the baseline PCG PRNG is shown in Figure 6 in the Appendix. The QQ plots of PRNGs attacked with the negative-kurtosis, skewness and positive-kurtosis attacks are shown in Figure 4. It is clear from the plots that with lower values of \(\beta\) and \(\gamma\), it is fairly easy to identify that the PRNG is not behaving as expected. However, with higher values of \(\beta\) and \(\gamma\), which do not alter the distribution as much, it is a non-trivial task. Shapiro-Wilk test results are reported in Table 4. While the rest of the PRNGs have extremely low \(p\)-values and therefore can be considered to have significantly deviated from the normal distribution, the PRNG subject to the positive-kurtosis attack with \(\gamma=4\) achieved a \(W\) value that is the same as the baseline within three significant digits and also manages to get a \(p\)-value of 0.045 - _i.e._ it passes the NIST-recommended confidence threshold. Just as the NIST test suite was not a foolproof method of detecting deviations from the uniform distribution, the Shapiro-Wilk test, while useful, is not a foolproof technique for detecting deviation from the normal distribution.
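The defence itself amounts to a few lines; the sketch below is illustrative, with the sample count and significance level following the setup described earlier and a fresh generator standing in for the platform PRNG under test.

```python
import numpy as np
from scipy import stats

# Draw 5000 values from the (possibly compromised) uniform->normal pipeline
# and apply the Shapiro-Wilk normality test.
samples = np.random.default_rng().standard_normal(5000)  # stand-in for the PRNG under test
w_stat, p_value = stats.shapiro(samples)
print(f"W = {w_stat:.3f}, p = {p_value:.3g}")
if p_value < 0.01:   # the 0.01 significance level referenced in the paper
    print("normality rejected: PRNG output looks suspicious")
```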
## 6 Discussion This work successfully demonstrates the feasibility of an attack on the pseudo-random number generator in an ML library to manipulate the certified radius obtained for the industry standard Randomised Smoothing technique. By altering the bit stream produced by the random number generator, kurtosis or skewness can be introduced into the noise distribution used. This can significantly affect the certified radius and thus make the end user over-estimate the real robustness. This work adds to a small but growing area of research into machine learning security that looks beyond the attack surfaces of the models themselves. **Technical stack** ML security is just like traditional security, in that practitioners need to consider the whole technical stack and not just limit themselves to the 'ML' piece of data collection, training and inference-time attacks. An ML model runs on the same kind of hardware and software layers as every other piece of software and is therefore just as vulnerable to these 'traditional' attack surfaces. The literature already notes that attacks can come from ML compilers [11], underlying platforms [53, 8], model architectures [7], or even quantisation granularity [43]. Yet there is no standard guide for secure ML deployment. **Protocols and guidelines** New protocols and guidelines must be designed specifically for current machine learning practices. These should recognise that firms outsource parts of the ML pipeline during design, training and inference. \begin{table} \begin{tabular}{l c c} & Test Statistic & \(p\)-value \\ \hline PCG Baseline & 0.999 & 0.203 \\ \hline \(\beta=0\) & 0.925 & \(2.66\times 10^{-44}\) \\ \(\beta=1\) & 0.988 & \(5.71\times 10^{-20}\) \\ \(\beta=2\) & 0.992 & \(1.09\times 10^{-15}\) \\ \(\beta=4\) & 0.997 & \(2.34\times 10^{-6}\) \\ \hline \(\gamma=0\) & 0.983 & \(4.93\times 10^{-24}\) \\ \(\gamma=1\) & 0.996 & \(1.04\times 10^{-9}\) \\ \(\gamma=2\) & 0.998 & \(3.17\times 10^{-5}\) \\ \(\gamma=\textbf{4}\) & **0.999** & \(\textbf{0.045}\) \\ \end{tabular} \end{table} Table 4: This table shows the value of the Shapiro-Wilk test statistic and p-value for tests conducted on 5000 samples collected from the baseline PCG PRNG, and PRNGs subject to the skewness and positive-kurtosis attacks that were difficult to detect with the NIST test suite. While other attacks could be detected with the Shapiro-Wilk test, the positive-kurtosis attack with \(\gamma=4\) managed to achieve a \(p\)-value that is greater than 0.01 – the threshold recommended by NIST for its tests. Current industry protocols trust the software and hardware on which these models run, without a clear threat model. Our paper helps show that this leads to exploitable vulnerabilities. **Randomness in ML** The meaning of a _secure PRNG_ needs to be updated for ML models. As these become more complex and place more reliance on non-deterministic algorithms, the random number generator is becoming as important in these systems as it is in cryptography. Some of the generators commonly used in ML are derived from those used in cryptography, but have been weakened to make them run faster. But even a cryptographically secure generator is not necessarily secure for ML, as the output is often shaped to have a distribution other than the uniform one, and this shaping provides a new attack vector. So statistical tests for the actual distributions commonly used in machine learning should be explicitly incorporated into official benchmarks. **On passing tests** Bassham et al. 
stress [5] that by the very nature of statistical tests, some good PRNGs will fail them, while some bad PRNGs will pass. Therefore, the design of a new PRNG should be mathematically and architecturally sound, allow for external scrutiny, and its implementation must be studied carefully for backdoors. Other applications of randomness in ML are also likely to have specific vulnerabilities and will need explicit tests designed to target them, _e.g._ in differential privacy.

**New Standards** We argue that ML needs its own standard for randomness. This must be informed by a better threat model, as well as application domain knowledge. For example, if a given ML application uses the PRNG to sample Gaussian noise, then this property must be explicitly verified. Appendix F shows that each normality test detects different attacks, while no test detects all of the attacks reliably. A comprehensive set of tests should be mandated and should become a part of the standard. ML brings up novel issues that were never a concern of cryptographers. In particular, we need to revisit issues around floating-point representations of random numbers - Mironov noted these in 2011 [49], and Zhuang et al. found that similar attacks work in 2022 [70]. Our attacks further demonstrate the importance of these issues. In cryptographic applications, key generation is performed relatively rarely, so an application developer who is suspicious of the quality of random numbers supplied by the platform will pass them through a PRNG of their own construction, which typically maintains a pool of randomness and uses one-way functions both to update this pool and to draw key material from it. However, this uses several kilobytes of memory and several thousand instructions for every pseudorandom value drawn. Many ML applications cannot afford to do this, as they make intensive use of computation, draw many random values, and can waste neither compute cycles nor GPU-adjacent RAM. This rules out the use of application-specific PRNGs, and also rules out continuous testing of the random numbers supplied by the platform - running the NIST suite takes hours. As such testing is impractical at runtime, we may therefore need separate test suites for system certification and to detect runtime attacks, as in some cryptographic systems; but that may not be enough. In critical applications, designers may need to understand that mechanisms for shaping bitstream statistics are within the trusted computing base. System evaluation will also have to take account of this, and demand assurance against the kind of attacks discussed here.

## 7 Conclusion

Machine learning systems rely on randomness for many purposes, ranging from differential privacy, through selecting representative subsets of data for training, to privacy amplification in federated learning and the generation of synthetic data. Yet the consequences of poor random number generators have remained unstudied; common tools optimise random number generation for speed rather than security, and no attention is paid to the possibility that adversaries might manipulate random number sources to undermine model safety or security.
In this paper we presented two proof-of-concept attacks on randomised smoothing, showing first that replacing a Gaussian random generator with a Laplacian one is effective, albeit detectable, and second that flipping targeted bits in a Gaussian random generator is also effective but much more covert, in the sense of being very hard to detect using standard NIST randomness tests with default parameters. The consequences, and the future directions of research, are threefold. First of all, there is a real need to define what security means for an ML PRNG, just as there has been extensive study of the requirements for cryptographic PRNGs. Second, we need to put more effort into exploring attacks that modify the random bit-stream in less predictable ways to improve attack performance and reduce the probability of detection. Finally, we need proper standards for randomness in ML systems, and we believe that a careful study of both attack and defence is the necessary foundation.
2303.06299
Reconfiguration of Minimum Independent Dominating Sets in Graphs
The independent domination number $i(G)$ of a graph $G$ is the minimum cardinality of a maximal independent set of $G$, also called an $i(G)$-set. The $i$-graph of $G$, denoted $\mathcal{I}(G)$, is the graph whose vertices correspond to the $i(G)$-sets, and where two $i(G)$-sets are adjacent if and only if they differ by two adjacent vertices. We show that not all graphs are $i$-graph realizable, that is, given a target graph $H$, there does not necessarily exist a source graph $G$ such that $H$ is isomorphic to $\mathcal{I}(G)$. Examples of such graphs include $K_{4}-e$ and $K_{2,3}$. We build a series of tools to show that known $i$-graphs can be used to construct new $i$-graphs and apply these results to build other classes of $i$-graphs, such as block graphs, hypercubes, forests, cacti, and unicyclic graphs.
R. C. Brewster, C. M. Mynhardt, L. E. Teshima
2023-03-11T04:08:39Z
http://arxiv.org/abs/2303.06299v1
# Reconfiguration of Minimum Independent Dominating Sets in Graphs ###### Abstract The independent domination number \(i(G)\) of a graph \(G\) is the minimum cardinality of a maximal independent set of \(G\), also called an \(i(G)\)-set. The \(i\)-graph of \(G\), denoted \(\mathscr{I}(G)\), is the graph whose vertices correspond to the \(i(G)\)-sets, and where two \(i(G)\)-sets are adjacent if and only if they differ by two adjacent vertices. We show that not all graphs are \(i\)-graph realizable, that is, given a target graph \(H\), there does not necessarily exist a source graph \(G\) such that \(H\cong\mathscr{I}(G)\). Examples of such graphs include \(K_{4}-e\) and \(K_{2,3}\). We build a series of tools to show that known \(i\)-graphs can be used to construct new \(i\)-graphs and apply these results to build other classes of \(i\)-graphs, such as block graphs, hypercubes, forests, cacti, and unicyclic graphs. **Keywords:** independent domination number, graph reconfiguration, \(i\)-graph **AMS Subject Classification Number 2020:** 05C69 Introduction The \(i\)-graph \(H\) of a graph \(G\) is an example of a "reconfiguration graph". It has as its vertex set the minimum independent dominating sets of \(G\), and two vertices of \(H\) are adjacent whenever the symmetric difference of their corresponding sets consists of two vertices that are adjacent in \(G\). We consider the following realizability question: for which graphs \(H\) does there exist a graph \(G\) such that \(H\) is the \(i\)-graph of \(G\)? Following definitions and general discussions in the remainder of this section, we begin our investigation into \(i\)-graph realizability in Section 2 by composing a series of observations and technical lemmas concerning the adjacency of vertices in an \(i\)-graph and the structure of their associated \(i\)-sets in the seed graph. In Section 3, we present the three smallest graphs which are not \(i\)-graphs, and in Section 4, we show that several common graph classes, like trees and cycles, are \(i\)-graphs. We conclude by examining, in Section 5, how new \(i\)-graphs can be constructed from known ones. ### Reconfiguration In general, a reconfiguration problem asks whether it is possible to transform a given source (or seed) solution to a given problem into a target solution through a series of incremental transformations (called reconfiguration steps) under some specified rule, such that each intermediate step is also a solution. The resulting chain of the source solution, intermediate solutions, and target solution is a reconfiguration sequence. In graph theory, reconfiguration problems are often concerned with solutions that are vertex/edge subsets or labellings of a graph. In particular, when the solution is a vertex (or edge) subset, the reconfiguration problem can be viewed as a token manipulation problem, where a solution subset is represented by placing a token at each vertex or edge of the subset. The reconfiguration step for vertex subsets can be of one of three variants (edge subsets are handled analogously): * **Token Slide (TS) Model**: A single token is slid along an edge between adjacent vertices. * **Token Jump (TJ) Model**: A single token jumps from one vertex to another (without the vertices necessarily being adjacent). * **Token Addition/Removal (TAR) Model:** A single token can either be added to a vertex or be removed from a vertex. 
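As an illustrative aside (not from the paper; the networkx representation and function names are assumed), the three reconfiguration steps can be written as operations on a token set \(S\subseteq V(G)\):

```python
import networkx as nx

def token_slide(G: nx.Graph, S: set, u, v) -> set:
    """TS model: slide the token on u to an adjacent, unoccupied vertex v."""
    assert u in S and v not in S and G.has_edge(u, v)
    return (S - {u}) | {v}

def token_jump(S: set, u, v) -> set:
    """TJ model: move the token on u to any unoccupied vertex v."""
    assert u in S and v not in S
    return (S - {u}) | {v}

def token_add_remove(S: set, v) -> set:
    """TAR model: toggle a token at v (add it if absent, remove it if present)."""
    return S ^ {v}
```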
To represent the many possible solutions in a reconfiguration problem, each solution can be represented as a vertex of a new graph, referred to as a _reconfiguration graph_, where adjacency between vertices follows one of the three token adjacency models, producing the _slide graph_, the _jump graph_, or the _TAR graph_, respectively. More formally, given a graph \(G\), the _slide graph_ of \(G\) under some specified reconfiguration rule is the graph \(H\) such that each vertex of \(H\) represents a solution of some problem on \(G\), and two vertices \(u\) and \(v\) of \(H\) are adjacent if and only if the solution in \(G\) corresponding to \(u\) can be transformed into the solution corresponding to \(v\) by sliding a single token along an edge of \(G\). ### \(\gamma\)-Graphs We use the standard notation of \(\gamma(G)\) for the cardinality of a minimum dominating set of a graph \(G\). The _private neighbourhood_ of a vertex \(v\) with respect to a vertex set \(S\) is the set \(\mbox{pn}(v,S)=N[v]-N[S-\{v\}]\); therefore, a dominating set \(S\) is minimal dominating if, for each \(u\in S\), \(\mbox{pn}(u,S)\) is nonempty. The _external private neighbourhood_ of \(v\) with respect to \(S\) is the set \(\mbox{epn}(v,S)=\mbox{pn}(v,S)-\{v\}\). The _independent domination number_\(i(G)\) of \(G\) is the minimum cardinality of a maximal independent set of \(G\), or, equivalently, the minimum cardinality of an independent domination set of \(G\). An independent dominating set of \(G\) of cardinality \(i(G)\) is also called an \(i\)-_set_ of \(G\), or an \(i(G)\)-_set_. In general, we follow the notation of [7]. In particular, the disjoint union of two graphs \(G\) and \(H\) is denoted \(G\cup H\), whereas the join of \(G\) and \(H\), denoted \(G\lor H\), is the graph obtained from \(G\cup H\) by joining every vertex of \(G\) with every vertex of \(H\). For other domination principles and terminology, see [14, 15]. First defined by Fricke, Hedetniemi, Hedetniemi, and Hutson [10] in 2011, the \(\gamma\)-_graph of a graph_\(G\) is the graph \(G(\gamma)=(V(G(\gamma)),E(G(\gamma)))\), where each vertex \(v\in V(G(\gamma))\) corresponds to a \(\gamma\)-set \(S_{v}\) of \(G\). The vertices \(u\) and \(v\) in \(G(\gamma)\) are adjacent if and only if there exist vertices \(u^{\prime}\) and \(v^{\prime}\) in \(G\) such that \(u^{\prime}v^{\prime}\in E(G)\) and \(S_{v}=(S_{u}-u^{\prime})\cup\{v^{\prime}\}\); this is a token-slide model of adjacency. An initial question of Fricke et al. [10] was to determine exactly which graphs are \(\gamma\)-graphs; they showed that every tree is the \(\gamma\)-graph of some graph and conjectured that every graph is the \(\gamma\)-graph of some graph. Later that year, Connelly, Hutson, and Hedetniemi [8] proved this conjecture to be true. For additional results on \(\gamma\)-graphs, see [3, 8, 9, 10]. Mynhardt and Teshima [18] investigated slide model reconfiguration graphs with respect to other domination parameters. Subramanian and Sridharan [21] independently defined a different \(\gamma\)-_graph of a graph_\(G\), denoted \(\gamma\cdot G\). The vertex set of \(\gamma\cdot G\) is the same as that of \(G(\gamma)\); however, for \(u,w\in V(\gamma\cdot G)\) with associated \(\gamma\)-sets \(S_{u}\) and \(S_{w}\) in \(G\), \(u\) and \(w\) are adjacent in \(\gamma\cdot G\) if and only if there exist some \(v_{u}\in S_{u}\) and \(v_{w}\in S_{w}\) such that \(S_{w}=(S_{u}-\{v_{u}\})\cup\{v_{w}\}\). 
This version of the \(\gamma\)-graph was dubbed the "single vertex replacement adjacency model" by Edwards [9], and is sometimes referred to as the "jump \(\gamma\)-graph" as it follows the TJ-Model for token reconfiguration. Further results concerning \(\gamma\cdot G\) can be found in [16, 19, 20]. Notably, if \(G\) is a tree or a unicyclic graph, then there exists a graph \(H\) such that \(\gamma\cdot H=G\)[20]. Conversely, if \(G\) is the (jump) \(\gamma\)-graph of some graph \(H\), then \(G\) does not contain any induced \(K_{2,3},\ P_{3}\lor K_{2}\), or \((K_{1}\cup K_{2})\lor 2K_{1}\)[16]. Using a token addition/removal model, Haas and Seyffarth [11] define the \(k\)-_dominating graph_\(D_{k}(G)\) of \(G\) as the graph with vertices corresponding to the \(k\)-dominating sets of \(G\) (i.e., the dominating sets of cardinality at most \(k\)). Two vertices in the \(k\)-dominating graph are adjacent if and only if the symmetric difference of their associated \(k\)-dominating sets contains exactly one element. Additional results can be found in [1, 2, 12, 13, 22], and a survey on reconfiguration of colourings and dominating sets of graphs in [17]. ### \(i\)-Graphs The _\(i\)-graph_ of a graph \(G\), denoted \(\mathscr{I}(G)=(V(\mathscr{I}(G)),E(\mathscr{I}(G)))\), is the graph with vertices representing the minimum independent dominating sets of \(G\) (that is, the _\(i\)-sets_ of \(G\)). As in the case of \(\gamma\)-graphs as defined in [10], adjacency in \(\mathscr{I}(G)\) follows a slide model where \(u,v\in V(\mathscr{I}(G))\), corresponding to the \(i(G)\)-sets \(S_{u}\) and \(S_{v}\), respectively, are adjacent in \(\mathscr{I}(G)\) if and only if there exists \(xy\in E(G)\) such that \(S_{u}=(S_{v}-x)\cup\{y\}\). We say _\(H\) is an \(i\)-graph_, or is _\(i\)-graph realizable_, if there exists some graph \(G\) such that \(\mathscr{I}(G)\cong H\). Moreover, we refer to \(G\) as the _seed graph_ of the \(i\)-graph \(H\). Going forward, we mildly abuse notation to denote both the \(i\)-set \(X\) of \(G\) and its corresponding vertex in \(H\) as \(X\), so that \(X\subseteq V(G)\) and \(X\in V(H)\). Imagine that there is a token on each vertex of an \(i\)-set \(S\) of \(G\). Then \(S\) is adjacent, in \(\mathscr{I}(G)\), to an \(i(G)\)-set \(S^{\prime}\) if and only if a single token can be slid along an edge of \(G\) to transform \(S\) into \(S^{\prime}\). Notice that the _token jump model_ of reconfiguration for independent domination is identical to the token-slide model. On a graph \(G\) a token may only "jump" from a vertex \(v\) in the \(i\)-set \(S_{1}\) to another vertex \(w\) (to form the \(i\)-set \(S_{2}\)) if \(w\) is dominated only by \(v\) in \(S_{1}\). Otherwise, if \(w\) is dominated by some other \(u\neq v\) in \(S_{1}\), then \((S_{1}-u)\cup\{w\}\) is not an independent set as it contains the adjacent vertices \(u\) and \(w\). A token is said to be _frozen_ (in any reconfiguration model) if there are no available vertices to which it can slide/jump. In acknowledgment of the slide-action in \(i\)-graphs, given \(i\)-sets \(X=\{x_{1},x_{2},\ldots,x_{k}\}\) and \(Y=\{y_{1},x_{2},\ldots x_{k}\}\) of \(G\) with \(x_{1}y_{1}\in E(G)\), we denote the adjacency of \(X\) and \(Y\) in \(\mathscr{I}(G)\) as \(X\stackrel{{ x_{1}y_{1}}}{{\sim}}Y\), where we imagine transforming the \(i\)-set \(X\) into \(Y\) by sliding the token at \(x_{1}\) along an edge to \(y_{1}\). 
When discussing several graphs, we use the notation \(X\stackrel{{ x_{1}y_{1}}}{{\sim}}Y\) to specify that the relationship is on \(G\). More generally, we use \(x\sim y\) to denote the adjacency of vertices \(x\) and \(y\) (and \(x\not\sim y\) to denote non-adjacency); this is used in the context of both the seed graph and the target graph. Although every graph is the \(\gamma\)-graph of some graph, there is no such tidy theorem for \(i\)-graphs; as we show in Section 3, not every graph is an \(i\)-graph, and determining which classes of graphs are (or are not) \(i\)-graphs has proven to be an interesting challenge. ## 2 Observations To begin, we propose several observations about the structure of \(i\)-sets within given \(i\)-graphs which we then use to construct a series of useful lemmas. **Observation 2.1**: _Let \(G\) be a graph and \(H=\mathscr{I}(G)\). A vertex \(X\in V(H)\) has \(\deg_{H}(X)\geq 1\) if and only if for some \(v\in X\subseteq V(G)\), there exists \(u\in\operatorname{epn}(v,X)\) such that \(u\) dominates \(\operatorname{pn}(v,X)\)._ From a token-sliding perspective, Observation 2.1 shows that a token on an \(i\)-set vertex \(v\) is frozen if and only if \(\operatorname{epn}(v)=\varnothing\) or \(G[\operatorname{epn}(v,X)]\) has no dominating vertex. For some path \(X_{1},X_{2},\ldots,X_{k}\) in \(H\), only one vertex of the \(i\)-set is changed at each step, and so \(X_{1}\) and \(X_{k}\) differ on at most \(k\) vertices. This yields the following observation. **Observation 2.2**: _Let \(G\) be a graph and \(H=\mathscr{I}(G)\). Then for any \(i\)-sets \(X\) and \(Y\) of \(G\), the distance \(d_{H}(X,Y)\geq|X-Y|\)._ **Lemma 2.3**: _Let \(G\) be a graph with \(H=\mathscr{I}(G)\). Suppose \(XY\) and \(YZ\) are edges in \(H\) with \(X\stackrel{{ xy_{1}}}{{\sim}}Y\) and \(Y\stackrel{{ yz_{2}}}{{\sim}}Z\), with \(X\neq Z\). Then \(XZ\) is an edge of \(H\) if and only if \(y_{1}=y_{2}\)._ **Proof.** Let \(X=\{x,v_{2},v_{3}\ldots,v_{k}\}\) and \(Y=\{y_{1},v_{2},,v_{3}\ldots,v_{k}\}\) so that \(X\stackrel{{ xy_{1}}}{{\sim}}Y\). To begin, suppose \(y_{1}=y_{2}\). Then \(Y\stackrel{{ yz_{2}}}{{\sim}}Z\) and \(Z=\{z,v_{2},,v_{3}\ldots,v_{k}\}\), hence \(|X-Z|=1\). Since \(X\) is dominating, \(z\) is adjacent to a vertex in \(\{x,v_{2},v_{3}\ldots,v_{k}\}\); moreover, since \(Z\) is independent, \(z\) is not adjacent to any of \(\{v_{2},v_{3},\ldots v_{k}\}\). Thus \(z\) is adjacent to \(x\) in \(G\) and \(X\stackrel{{ xz}}{{\sim}}Z\), so that \(XZ\in E(H)\). Conversely, suppose \(y_{1}\neq y_{2}\). Then, without loss of generality, say \(y_{2}=v_{2}\) and so \(X=\{x,y_{2},v_{3}\ldots,v_{k}\}\), \(Y=\{y_{1},y_{2},v_{3}\ldots,v_{k}\}\), and \(Z=\{y_{1},z,v_{3}\ldots,v_{k}\}\). Notice that \(x\neq z\) since \(x\sim y_{1}\) and \(z\not\sim y_{1}\). Thus \(|X-Z|=2\), and it follows that \(XZ\notin E(H)\). Combining Observation 2.2 and Lemma 2.3 yields the following observation for vertices of \(i\)-graphs at distance two. **Observation 2.4**: _Let \(G\) be a graph and \(H=\mathscr{I}(G)\). Then for any \(i\)-sets \(X\) and \(Y\) of \(G\), if \(d_{H}(X,Y)=2\), then \(|X-Y|=2\)._ **Lemma 2.5**: _Let \(G\) be a graph and \(H=\mathscr{I}(G)\). Suppose \(H\) contains an induced \(K_{1,m}\) with vertex set \(\{X,Y_{1},Y_{2},\ldots,Y_{m}\}\) and \(\deg_{H}(X)=m\). Let \(i\neq j\). Then in \(G\),_ * \(X-Y_{i}\neq X-Y_{j}\)_,_ * \(|Y_{i}\cap Y_{j}|=i(G)-2\)_, and_ * \(m\leq i(G)\)_._ **Proof.** Suppose \(X\stackrel{{ x_{i}y_{i}}}{{\sim}}Y_{i}\) and \(X\stackrel{{ x_{j}y_{j}}}{{\sim}}Y_{j}\). 
Then \((X-Y_{i})=\{x_{i}\}\) and \((X-Y_{j})=\{x_{j}\}\). From Lemma 2.3, since \(Y_{i}\not\sim Y_{j}\), we have that \(x_{i}\neq x_{j}\), which establishes Statement (i). Moreover, \(Y_{i}\cap Y_{j}=X-\{x_{i},x_{j}\}\), and so as these are \(i\)-sets, Statement (ii) also follows. Finally, for Statement (iii), again applying Lemma 2.3, we see that \(|\bigcap_{1\leq i\leq m}Y_{i}|=|X|-m=i(G)-m\geq 0\) Induced \(C_{4}\)'s in a target graph \(H\) play an important role in determining the \(i\)-graph realizability of \(H\) and determine a specific relationship among \(i\)-sets of a potential source graph \(G\), as we show next. **Proposition 2.6**: _Let \(G\) be a graph and \(H=\mathscr{I}(G)\). Suppose \(H\) has an induced \(C_{4}\) with vertices \(X,A,B,Y\), where \(XY,AB\notin E(H)\). Then, without loss of generality, the set composition of \(X,A,B,Y\) in \(G\), and the edge labelling of the induced \(C_{4}\) in \(H\), are as in Figure 1._ **Proof.** Suppose that the \(i\)-set \(X\) of \(G\) has \(X=\{x_{1},x_{2},v_{3},\ldots,v_{k}\}\). Then by Lemma 2.3, without loss of generality, the edge from \(X\) to \(A\) can be labelled as \(X\stackrel{{ x_{1}y_{1}}}{{\sim}}A\) for some \(y_{1}\in V(G)-X\), so that \(A=\{y_{1},x_{2},v_{3},\ldots,v_{k}\}\), while the edge from \(X\) to \(B\) can be labelled \(X\stackrel{{ x_{2}y_{2}}}{{\sim}}B\) for some \(y_{2}\) and \(B=\{x_{1},y_{2},v_{3},\ldots,v_{k}\}\). Consider the edge \(AY\in E(H)\) labelled \(A\stackrel{{ ay^{*}}}{{\sim}}Y\). From Lemma 2.3, since \(XY\notin E(G)\), \(a\neq y_{1}\). If, say, \(a=v_{3}\), then \(Y=\{y_{1},x_{2},y^{*},\ldots,v_{k}\}\). However, neither \(y_{1}\) nor \(x_{2}\) is in \(B\), so \(|Y-B|\geq 2\), contradicting Observation 2.2. Thus, \(a\neq v_{i}\) for any \(3\leq i\leq k\). This leaves \(a=x_{2}\), and \(Y=\{y_{1},y^{*},v_{3}\ldots,v_{k}\}\). Since \(|Y-B|=1\), \(y^{*}=y_{2}\) and \(Y=\{y_{1},y_{2},v_{3},\ldots,v_{k}\}\) as required. ## 3 Realizability of \(i\)-Graphs Having now established a series of observations and lemmas about the structures of \(i\)-graphs and the composition of their associate \(i\)-sets, we demonstrate that not all graphs are \(i\)-graphs by presenting three counterexamples: the diamond graph \(\mathfrak{D}\), \(K_{2,3}\) and \(\kappa\), as pictured in Figure 2. Figure 1: Reconfiguration structure of an induced \(C_{4}\) subgraph from Proposition 2.6. **Proposition 3.1**: _The diamond graph \(\mathfrak{D}=K_{4}-e\) is not \(i\)-graph realizable._ **Proof.** Suppose to the contrary there is some graph \(G\) with \(\mathscr{I}(G)=\mathfrak{D}\). Let \(V(\mathfrak{D})=\{X,A,B,Y\}\) where \(AB\notin E(\mathfrak{D})\). Say \(X\stackrel{{ xy}}{{\sim}}Y\). Then by Lemma 2.3, without loss of generality, the edges incident with \(A\) can be labelled as \(X\stackrel{{ xa}}{{\sim}}A\) and \(A\stackrel{{ ay}}{{\sim}}Y\). Likewise, \(X\stackrel{{ xb}}{{\sim}}B\) and \(B\stackrel{{ by}}{{\sim}}Y\) (see Figure 2). However, since \(B\stackrel{{ bx}}{{\sim}}X\) and \(X\stackrel{{ xa}}{{\sim}}A\), Lemma 2.3 implies that \(AB\in E(\mathfrak{D})\), a contradiction. **Proposition 3.2**: _The graph \(K_{2,3}\) is not \(i\)-graph realizable._ **Proof.** Suppose \(K_{2,3}=\mathscr{I}(G)\) for some graph \(G\). Let \(\{\{X,Y\},\{A,B,C\}\}\) be the bi-partition of \(K_{2,3}\). Apply the exact labelling from Proposition 2.6 and Figure 1 to the \(i\)-sets and edges of \(X,A,B,\) and \(Y\). We attempt to extend the labelling to \(C\). 
By Lemma 2.3, since \(C\) is adjacent to \(X\), but not \(A\) or \(B\), without loss of generality, \(X\stackrel{{ v_{3}c}}{{\sim}}C\) and \(C=\{x_{1},x_{2},c,v_{4},\ldots,v_{k}\}\). As \(A\) is an \(i\)-set, \(y_{1}v_{3}\notin E(G)\). Since \(v_{3}c\in E(G)\), \(c\neq y_{1}\). Similarly, \(c\neq y_{2}\). Now \(|C-Y|=3\) and \(d(C,Y)=1\), contradicting Observation 2.2. **Proposition 3.3**: _The graph \(\kappa\) is not \(i\)-graph realizable._ **Proof.** Suppose \(\kappa=\mathscr{I}(G)\) for some graph \(G\) and let \(V(\kappa)=\{X,A,B,C_{1},C_{2},Y\}\) as in Figure 2, and to the subgraph induced by \(X,A,B,Y\), apply the labelling of Proposition 2.6 and Figure 1. Through additional applications of Proposition 2.6, we can, as in the proof of Proposition 3.2, assume without loss of generality that \(X\stackrel{{ x_{3}y_{3}}}{{\sim}}C_{1}\). However, \(d(C_{1},Y)=2\) but \(|Y-C_{1}|=3\), contradicting Observation 2.2. It follows that no such \(G\) exists and \(\kappa\) is not an \(i\)-graph. The observant reader will have undoubtedly noticed the common structure between the graphs in the previous three propositions - they are all members of the class of _theta graphs_ (see [4]), graphs that are the union of three internally disjoint nontrivial paths with the Figure 2: Three graphs not realizable as \(i\)-graphs. same two distinct end vertices. The graph \(\Theta\left\langle j,k,\ell\right\rangle\) with \(j\leq k\leq\ell\), is the theta graph with paths of lengths \(j\), \(k\), and \(\ell\). In this notation, our three non \(i\)-graph realizable examples are \(\mathfrak{D}\cong\Theta\left\langle 1,2,2\right\rangle\), \(K_{2,3}\cong\Theta\left\langle 2,2,2\right\rangle\), and \(\kappa\cong\Theta\left\langle 2,2,3\right\rangle\). Further rumination on the similarity in structure suggests that additional subdivisions of the central path in \(\kappa\) could yield more theta graphs that are not \(i\)-graphs. However, the proof technique used for \(\kappa\) no longer applies when, for example, a path between the degree 3 vertices has length greater than 4. In [6], we explore an alternative method for determining the \(i\)-graph realizability of theta graphs. ## 4 Some Classes of \(i\)-Graphs Having studied several graphs that are not \(i\)-graphs, we now examine the problem of \(i\)-graph realizability from the positive direction. To begin, it is easy to see that complete graphs are \(i\)-graphs; moreover, as with \(\gamma\)-graphs, complete graphs are their own \(i\)-graphs, i.e., \(\mathscr{I}(K_{n})\cong K_{n}\). **Proposition 4.1**: _Complete graphs are \(i\)-graph realizable._ Hypercubes \(Q_{n}\) (the Cartesian product of \(K_{2}\) taken with itself \(n\) times) are also straightforward to construct as \(i\)-graphs, with \(\mathscr{I}(nK_{2})\cong Q_{n}\). Each \(K_{2}\) pair can be viewed as a \(0-1\) switch, with the vertex of the \(i\)-set in each component sliding between the two states. **Proposition 4.2**: _Hypercubes are \(i\)-graph realizable._ Hypercubes are a special case of the following result regarding Cartesian products of \(i\)-graphs. **Proposition 4.3**: _If \(\mathscr{I}(G_{1})\cong H_{1}\) and \(\mathscr{I}(G_{2})\cong H_{2}\), then \(\mathscr{I}(G_{1}\cup G_{2})\cong H_{1}\,\Box\,H_{2}\)_ **Proof.** Let \(\{X_{1},X_{2},\ldots,X_{k}\}\) be the \(i\)-sets of \(G_{1}\) and let \(\{Y_{1},Y_{2},\ldots,Y_{\ell}\}\) be the \(i\)-sets of \(G_{2}\). Then, the \(i\)-sets of \(G_{1}\cup G_{2}\) are of the form \(X_{i}\cup Y_{j}\). 
Clearly \(X_{i}\cup Y_{j}\sim_{G_{1}\cup G_{2}}X_{i}^{*}\cup Y_{j}^{*}\) if and only if \(X_{i}\sim_{G_{1}}X_{i}^{*}\) and \(Y_{j}=Y_{j}^{*}\), or \(Y_{j}\sim_{G_{2}}Y_{j}^{*}\) and \(X_{i}=X_{i}^{*}\). This gives a natural isomorphism to \(H_{1}\,\Box\,H_{2}\), where \(X_{i}\cup Y_{j}\) is the vertex \((X_{i},Y_{j})\). Moving to cycles, the constructions become markedly more difficult.

**Proposition 4.4**: _Cycles are \(i\)-graph realizable._

**Proof.** The constructions for each cycle \(C_{k}\) for \(k\geq 3\) are as described below.

**(i)**: \(\mathscr{I}(C_{3})\cong C_{3}\) From Proposition 4.1.

**(ii)**: \(\mathscr{I}(2K_{2})\cong C_{4}\) From Proposition 4.2.

**(iii)**: \(\mathscr{I}(C_{5})\cong C_{5}\) Recall that \(i(C_{5})=2\). A labelled \(C_{5}\) and the resulting \(i\)-graph with \(\mathscr{I}(C_{5})\cong C_{5}\) are given in Figure 3 below.

**(iv)**: \(\mathscr{I}(K_{2}\mathbin{\Box}K_{3})\cong C_{6}\) Label the vertices of \(K_{2}\mathbin{\Box}K_{3}\) as in Figure 4 below. The set \(\{x_{i},y_{j}\}\) is an \(i\)-set of \(K_{2}\mathbin{\Box}K_{3}\) if and only if \(i\neq j\), so that \(|V(\mathscr{I}(K_{2}\mathbin{\Box}K_{3}))|=6\), and adjacencies are as in Figure 4.

**(v)**: _For any \(k\geq 7\), construct the graph \(H\) with \(V(H)=\{v_{0},v_{1},\ldots,v_{k-1}\}\), and \(v_{i}v_{j}\in E(H)\) if and only if \(j\not\equiv i-2,i-1,i+1,i+2\ (\text{mod }k)\). Then \(\mathscr{I}(H)\cong C_{k}\)._

For convenience, we assume that all subscripts are given modulo \(k\). Thus in \(H\), for all \(0\leq i\leq k-1\), we have the following:

(I) \(N[v_{i}]\backslash N[v_{i+1}]=\{v_{i},v_{i+3}\}\)

(II) \(N[v_{i+1}]\backslash N[v_{i}]=\{v_{i-2},v_{i+1}\}\).

Since \(H\) is vertex-transitive, suppose that \(v_{i}\) is in some \(i\)-set \(S\). Then \(v_{i-2},v_{i-1},v_{i+1}\), and \(v_{i+2}\) are not dominated by \(v_{i}\). To dominate \(v_{i+1}\), either \(v_{i+1}\) or \(v_{i-2}\) is in \(S\), because all other vertices in \(N[v_{i+1}]\) are also adjacent to \(v_{i}\), as in (II). Begin by assuming that \(v_{i+1}\in S\). Now since \(\{v_{i},v_{i+1}\}\) dominates all of \(H\) except \(v_{i+2}\) and \(v_{i-1}\), and \(N(\{v_{i+2},v_{i-1}\})\subseteq N(\{v_{i},v_{i+1}\})\), either \(v_{i+2}\) or \(v_{i-1}\) is in \(S\). Thus \(S=\{v_{i},v_{i+1},v_{i+2}\}\) or \(S=\{v_{i-1},v_{i},v_{i+1}\}\). Suppose now instead that \(v_{i-2}\in S\). Now, only \(v_{i-1}\) is not dominated by \(\{v_{i},v_{i-2}\}\); moreover, since \(N(v_{i-1})\subseteq N(\{v_{i-2},v_{i}\})\), we have that \(v_{i-1}\in S\), and so \(S=\{v_{i-2},v_{i-1},v_{i}\}\). Combining the above two cases yields that \(i(H)=3\) and that every \(i\)-set of \(H\) consists of three consecutive vertices; write \(S_{i}=\{v_{i-1},v_{i},v_{i+1}\}\) for each \(0\leq i\leq k-1\). Moreover, as there are \(k\) unique such sets, it follows that \(|V(\mathscr{I}(H))|=k\). We now consider the adjacencies of \(\mathscr{I}(H)\). From our set definitions, \(S_{i}\stackrel{{ v_{i-1}v_{i+2}}}{{\sim}}S_{i+1}\), and \(S_{i}\stackrel{{ v_{i+1}v_{i-2}}}{{\sim}}S_{i-1}\). To see that \(S_{i}\) is not adjacent to any other \(i\)-set in \(H\), notice that the token at \(v_{i}\) is frozen; \(N(v_{i})\subseteq N(v_{i-1})\cup N(v_{i+1})\). Moreover, by (II), the token at \(v_{i+1}\) can only slide to \(v_{i-2}\), and likewise, the token at \(v_{i-1}\) can only slide to \(v_{i+2}\). Thus \(S_{i}\sim S_{i+1}\sim\cdots\sim S_{i-1}\sim S_{i}\), and so \(\mathscr{I}(H)\cong C_{k}\) as required. This completes the \(i\)-graph constructions for all cycles.
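For small cases, constructions such as these can be checked by brute force. The sketch below is illustrative (it assumes networkx and is not from the paper): it enumerates the \(i\)-sets of a graph via the maximal cliques of its complement and joins two \(i\)-sets exactly when their symmetric difference is a pair of adjacent vertices, i.e. when a single token slide transforms one into the other.

```python
import itertools
import networkx as nx

def i_graph(G: nx.Graph) -> nx.Graph:
    """Brute-force sketch of the i-graph of a small graph G.
    Maximal independent sets of G are the maximal cliques of its complement."""
    max_ind_sets = [frozenset(c) for c in nx.find_cliques(nx.complement(G))]
    i_number = min(len(s) for s in max_ind_sets)
    i_sets = [s for s in max_ind_sets if len(s) == i_number]
    H = nx.Graph()
    H.add_nodes_from(i_sets)
    for A, B in itertools.combinations(i_sets, 2):
        diff = A ^ B                      # symmetric difference of the two i-sets
        if len(diff) == 2 and G.has_edge(*diff):
            H.add_edge(A, B)              # one token slides along an edge of G
    return H

# Construction (iii): the i-graph of C_5 is again a 5-cycle.
H = i_graph(nx.cycle_graph(5))
print(H.number_of_nodes(), H.number_of_edges())   # 5 5
print(nx.is_isomorphic(H, nx.cycle_graph(5)))     # True
```

Running it on \(C_{5}\) reproduces construction (iii): the five \(i\)-sets of the 5-cycle again induce a 5-cycle.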
The constructions presented in Proposition 4.4 are not unique. Brewster, Mynhardt and Teshima show in [5] that for \(k\geq 5\) and \(k\equiv 2\) (mod 3), \(\mathscr{I}(C_{k})\cong C_{k}\), and in [6] they use graph complements to construct graphs with \(i\)-graphs that are cycles. We now present three lemmas with the eventual goal of demonstrating that all forests are \(i\)-graphs. When considering the \(i\)-graph of some graph \(H\), if a vertex \(v\) of some \(i\)-set \(S\) of \(H\) has no external private neighbours, then the token at \(v\) is frozen. In the first of the three lemmas, Lemma 4.5, we construct a new seed graph for a given target graph, where each vertex of the seed graph's \(i\)-set has a non-empty private neighbourhood. **Lemma 4.5**: _For any graph \(H\), there exists a graph \(G\) such that \(\mathscr{I}(G)\cong\mathscr{I}(H)\) and for any \(i\)-set \(S\) of \(G\), all \(v\in S\) have \(\operatorname{epn}(v,S)\neq\varnothing\)._ **Proof.** Suppose \(S\) is an \(i\)-set of \(H\) having some \(v\in S\) with \(\operatorname{epn}(v,S)=\varnothing\). Construct the graph \(G_{1}\) from \(H\) by joining new vertices \(a\) and \(b\) to each vertex of \(N[v]\). To begin, we show that the \(i\)-sets of \(G_{1}\) are exactly the \(i\)-sets of \(H\). Let \(R\) be some \(i\)-set of \(H\) and say that \(v\) is dominated by \(u\in R\). Then \(u\in N[v]\), so \(u\) also dominates \(a\) and \(b\) in \(G_{1}\); therefore, \(R\) is independent and dominating in \(G_{1}\), and so \(i(G_{1})\leq i(H)\). Conversely, suppose that \(Q\) is an \(i\)-set of \(G_{1}\). If neither \(a\) nor \(b\) is in \(Q\), then \(Q\) is an independent dominating set of \(H\), and so \(i(H)\leq i(G_{1})\). Hence, suppose instead that \(a\in Q\). Notice that since \(Q\) is independent and \(a\) is adjacent to each vertex in \(N(v)\), \(N_{H}[v]\cap Q=\varnothing\). Some vertex in \(Q\) dominates \(b\); however, since \(N_{H}(a)=N_{H}(b)\) and \(Q\) is independent, it follows that \(b\) is self-dominating and so \(b\in Q\). However, since \(N_{G_{1}}[\{a,b\}]=N_{G_{1}}[v]\), the set \(Q^{\prime}=(Q-\{a,b\})\cup\{v\}\) is an independent dominating set of \(G_{1}\) such that \(|Q^{\prime}|<|Q|\), a contradiction. Thus, \(i(G_{1})=i(H)\) and the \(i\)-sets of \(H\) and \(G_{1}\) are identical. In particular, \(S\) is an \(i\)-set of \(G_{1}\), and moreover, \(\operatorname{epn}_{G_{1}}(v,S)=\{a,b\}\). By repeating the above process for each \(i\)-set of \(G_{j}\), \(j\geq 1\), that contains a vertex \(v\) with \(\operatorname{epn}(v,S)=\varnothing\), we eventually obtain a graph \(G=G_{k}\) such that for each \(i\)-set of \(S\) of \(G\) and each vertex \(v\in S\), \({\rm epn}_{G}(v,S)\neq\varnothing\). Since the \(i\)-sets of \(H\) and \(G\) are identical and \(H\) is a subgraph of \(G\), \(\mathscr{I}(G)=\mathscr{I}(H)\) as required. Next on our way to constructing forests, we demonstrate that given an \(i\)-graph, the graph obtained by adding any number of isolated vertices is also an \(i\)-graph. **Lemma 4.6**: _If \(H\) is the \(i\)-graph of some graph \(G\), then there exists some graph \(G^{*}\) such that \(\mathscr{I}(G^{*})=H\cup\{v\}\)._ **Proof.** First assume that \(i(G)\geq 2\). Let \(V(G)=\{v_{1},v_{2},\ldots,v_{n}\}\) and let \(W\) be an independent set of size \(i(G)=k\) disjoint from \(V(G)\), say \(W=\{w_{1},w_{2},\ldots,w_{k}\}\). Construct a new graph \(G^{*}\) by taking the join of \(G\) with the vertices of \(W\), so that \(G^{*}=G\lor W\). 
Notice that \(W\) is independent and dominating in \(G^{*}\). Moreover, if an \(i\)-set \(S\) of \(G^{*}\) contains any vertex \(w_{i}\) of \(W\), since \(W\) is independent and each vertex of \(W\) is adjacent to all of \(\{v_{1},v_{2},\ldots,v_{n}\}\), it follows that \(S\) contains all of \(W\), and so, \(S=W\). That is, if an \(i\)-set of \(G^{*}\) contains any vertex of \(W\), it contains all of \(W\). Thus, \(i(G)=i(G^{*})\). Furthermore, any \(i\)-set of \(G\) is also an \(i\)-set of \(G^{*}\), and so the \(i\)-sets of \(G^{*}\) comprise of \(W\) and the \(i\)-sets of \(G\). That is, \(V(\mathscr{I}(G^{*}))=V(\mathscr{I}(G))\cup\{W\}=V(H)\cup\{W\}\). If \(S\) is an \(i\)-set of \(G\), then \(S\cap W=\varnothing\). Thus, \(W\) is not adjacent to any other \(i\)-set in \(\mathscr{I}(G^{*})\). Relabelling the vertex representing the \(i\)-set \(W\) in \(G^{*}\) as \(v\) in \(\mathscr{I}(G^{*})\) yields \(\mathscr{I}(G^{*})=H\cup\{v\}\) as required. If \(i(G)=1\), then \(G\) has a dominating vertex; begin with \(G\cup K_{1}\), which has \(\mathscr{I}(G)=\mathscr{I}(G\cup K_{1})\) and \(i(G\cup K_{1})=2\), and then proceed as above. As a final lemma before demonstrating the \(i\)-graph realizability of forests, we show that a pendant vertex can be added to any \(i\)-graph to create a new \(i\)-graph. **Lemma 4.7**: _If \(H\) is the \(i\)-graph of some graph \(G\), and \(H_{u}\) is the graph \(H\) with some pendant vertex \(u\) added, then there exists some graph \(G_{u}\) such that \(\mathscr{I}(G_{u})=H_{u}\)._ **Proof.** By Lemma 4.5 we may assume that for any \(i\)-set \(S\) of \(G\), \({\rm epn}(v,S)\neq\varnothing\) for all \(v\in S\). To construct \(G_{u}\), begin with a copy of \(G\). If \(w\) is the stem of \(u\) in \(H_{u}\), then consider the \(i\)-set \(W=\{v_{1},v_{2},\ldots,v_{k}\}\) in \(G\) corresponding to \(w\). To each \(v_{i}\in W\), attach a new vertex \(x_{i}\) for all \(1\leq i\leq k\). Then join each \(x_{i}\) to a new vertex \(y\), and then to \(y\), add a final pendant vertex \(z\). Thus \(V(G_{u})=V(G)\cup\{x_{1},x_{2},\ldots,x_{k},y,z\}\) as in Figure 5. Figure 5: The construction of \(G_{u}\) from \(G\) in Lemma 4.7. It is easy to see that if \(S\) is an \(i\)-set of \(G\), then \(S_{y}=S\cup\{y\}\) is an independent dominating set of \(G_{u}\). The set \(W_{z}=W\cup\{z\}\) is also an independent dominating set of \(G_{u}\). Thus, \(i(G_{u})\leq i(G)+1\). It remains only to show that these are \(i\)-sets and the only \(i\)-sets of \(G_{u}\). We claim that no \(x_{i}\) in \(X=\{x_{1},x_{2},\ldots,x_{k}\}\) is in any \(i\)-set of \(G_{u}\). To show this, suppose to the contrary that \(S^{*}\) is an \(i\)-set with \(S^{*}\cap X=\{x_{1},x_{2},\ldots,x_{\ell}\}\) for some \(1\leq\ell\leq k\). Then, \(y\notin S^{*}\); that is, \(\{y,z\}\cap S^{*}=\{z\}\). To dominate the remaining \(\{x_{\ell+1},x_{\ell+2},\ldots,x_{k}\}\), we have that \(S^{*}=\{x_{1},x_{2},\ldots,x_{\ell}\}\cup\{v_{\ell+1},v_{\ell+2},\ldots,v_{k} \}\cup\{z\}\). Recall from our initial assumption on \(G\) that there exists some \(v_{1}^{*}\in{\rm epn}_{G}(v_{1},W)\). Thus, \(v_{1}^{*}\notin(N_{H_{u}}[\{v_{\ell+1},v_{\ell+2},\ldots,v_{k}\}]\cap V(G))\), and so \(v_{1}^{*}\) is undominated by \(S^{*}\), which implies that \(S^{*}\) is not an \(i\)-set. Thus in every \(i\)-set of \(G_{u}\), \(y\) is dominated either by itself or by \(z\). If \(y\) is not in a given \(i\)-set \(S\) (and so \(z\in S\)), then to dominate \(X\), \(W\subseteq S\), and so \(S=W\cup\{z\}\). 
Conversely, if \(y\in S\) (and \(z\notin S\)), then since the vertices of \(G\) can only be dominated internally, \(S\) is an \(i\)-set of \(G_{u}\) if and only if \(S-\{y\}\) is an \(i\)-set of \(G\), which completes the proof of our claim. If \(u^{*}\) and \(w^{*}\) are the vertices in \({\mathscr{I}}(G_{u})\) associated with \(W_{z}\) and \(W_{y}=W\cup\{y\}\) respectively, then clearly \({\mathscr{I}}(G_{u})-\{u^{*}\}\cong{\mathscr{I}}(G)\). Furthermore, since \(W_{y}\) is the only \(i\)-set with \(|W_{y}-W_{z}|=1\) and \(yz\in E(G_{u})\), it follows that \(\deg(u^{*})=1\) and \(u^{*}w^{*}\in E({\mathscr{I}}(G_{u}))\), and we conclude that \({\mathscr{I}}(G_{u})\cong H_{u}\). Finally, we amalgamate the previous lemmas on adding isolated and pendant vertices to \(i\)-graphs to demonstrate that forests are \(i\)-graphs. **Theorem 4.8**: _All forests are \(i\)-graph realizable._ **Proof.** We show by induction on the number of vertices that if \(F\) is a forest with \(m\) components, then \(F\) is \(i\)-graph realizable. For a base, note that \({\mathscr{I}}(\overline{K_{2}})=K_{1}\). Construct the graph \(\overline{K_{m}}\) by repeatedly applying Lemma 4.6. Suppose that all forests on \(m\) components on at most \(n\) vertices are \(i\)-graph realizable. Let \(F\) be some forest with \(|V(F)|=n+1\) and components \(T_{1},T_{2},\ldots,T_{m}\). If all vertices of \(F\) are isolated, we are done, so assume there is some leaf \(v\) with stem \(w\) in component \(T_{1}\). Let \(F^{*}=F-\{v\}\). By induction there exists some graph \(G^{*}\) with \({\mathscr{I}}(G^{*})\cong F^{*}\). Applying Lemma 4.7 to \(G^{*}\) at \(w\) constructs a graph \(G\) with \({\mathscr{I}}(G)\cong F\). Moreover, by adding Proposition 4.4 to the previous results, we obtain the following immediate corollary. **Corollary 4.9**: _Unicyclic graphs are \(i\)-graph realizable._ With the completion of the constructions of forests and unicyclic graphs as \(i\)-graphs, we have now determined the \(i\)-graph realizability of many collections of small graphs. In particular, we draw the reader's attention to the following observation. **Observation 4.10**: _Every graph on at most four vertices except \(\mathfrak{D}\) is an \(i\)-graph._ Building \(i\)-Graphs In this section, we examine how new \(i\)-graphs can be constructed from known ones. We begin by presenting three very useful tools for constructing new \(i\)-graphs: the **Max Clique Replacement Lemma**, the **Deletion Lemma**, and the **Inflation Lemma**. The first among these shows that maximal cliques in \(i\)-graphs can be replaced by arbitrarily larger maximal cliques. **Lemma 5.1** (Max Clique Replacement Lemma): _Let \(H\) be an \(i\)-graph with a maximal \(m\)-vertex clique, \({\cal K}_{m}\). Then, the graph \(H_{w}\) formed by adding a new vertex \(w^{*}\) adjacent to all of \({\cal K}_{m}\) is also an \(i\)-graph._ **Proof.** Suppose \(G\) is a graph such that \({\mathscr{I}}(G)=H\) and \(i(G)=k+1\) where \(k\geq 1\), and let \({\cal K}_{m}=\{V_{1},V_{2},\ldots,V_{m}\}\) be a maximal clique in \(H\). From Lemma 2.3, the corresponding \(i\)-sets \(V_{1},V_{2},\ldots,V_{m}\) of \(G\) differ on exactly one vertex, so for each \(1\leq i\leq m\), let \(V_{i}=\{v_{i},z_{1},z_{2},\ldots,z_{k}\}\subseteq V(G)\), so that \(Z=\{z_{1},z_{2},\ldots,z_{k}\}=\bigcap_{1\leq i\leq m}V_{i}\). Notice also from Lemma 2.3, for each \(1\leq i<j\leq m\), \(v_{i}v_{j}\in E(G)\), and so \(Q_{m}=\{v_{1},v_{2},\ldots,v_{m}\}\) is a (not necessarily maximal) clique of size \(m\) in \(G\). 
In addition to \(Q_{m}\) and \(Z\) defined above, we further weakly partition (i.e., some of the sets of the partition may be empty) the vertices of \(G\) as follows:

* \(X=N(Q_{m})\backslash N(Z)\), the vertices dominated by \(Q_{m}\) but not by \(Z\);
* \(Y=N(Q_{m})\cap N(Z)\), the vertices dominated by both \(Q_{m}\) and \(Z\);
* \(A=N(Z)\backslash N(Q_{m})\), the vertices dominated by \(Z\) but not by \(Q_{m}\).

This partition (as well as the construction of \(G_{w}\) defined below) is illustrated in Figure 6. Before proceeding with the construction, we state the following series of claims regarding the set \(X\):

**Claim 1:** _Each \(x\in X\) is dominated by every vertex of \(Q_{m}\)._ Otherwise, if some \(x\in X\) is not adjacent to some \(v_{j}\in Q_{m}\), then \(x\) is undominated in the \(i\)-set \(V_{j}=\{v_{j}\}\cup Z\).

**Claim 2:** _\(|X|\neq 1\)._ If \(|X|=1\), say \(X=\{x\}\), then \(X^{*}=\{x\}\cup Z\) is independent, dominating, and has \(|X^{*}|=i(G)\); that is, \(X^{*}\) is an \(i\)-set of \(G\). However, since \(x\) is adjacent to all of \(Q_{m}\) in \(G\), \(X^{*}\stackrel{xv_{j}}{\sim}V_{j}\) for each \(1\leq j\leq m\), contradicting the maximality of the clique \(\mathcal{K}_{m}\) in \(H\).

**Claim 3:** _No \(x\in X\) dominates all of \(X\)._ If \(x\in X\) dominates \(X\), then \(\{x\}\cup Z\) is an \(i\)-set of \(G\). By an argument similar to that of Claim 2, this contradicts the maximality of \(\mathcal{K}_{m}\) in \(H\).

**Claim 4:** _For any \(v\in(X\cup Y\cup A)\), \(\{v\}\cup Z\) is not an \(i\)-set._ Combining Claims 2 and 3, if \(v\in X\), then there exists some \(x_{i}\in X\) such that \(v\not\sim x_{i}\), and thus \(\{v\}\cup Z\) does not dominate \(x_{i}\). If \(v\in(Y\cup A)\), then \(v\in N(Z)\), and so \(\{v\}\cup Z\) is not independent.

We construct a new graph \(G_{w}\) from \(G\) by joining a new vertex \(w\) to each vertex in \(V(G)-Z\), as in Figure 6. We claim that \(\mathscr{I}(G_{w})\cong H_{w}\). Let \(S\) be some \(i\)-set of \(G_{w}\). If \(w\notin S\), then \(S\subseteq V(G)\) and so \(S\) is also independent dominating in \(G\), implying \(|S|=i(G)=k+1\). However, if \(w\in S\), then since \(w\) is adjacent to all of \(V(G)-Z\) and \(Z\) is independent, we have that \(S=\{w\}\cup Z\). It follows that \(i(G_{w})=i(G)\). Moreover, any \(i\)-set of \(G\) is also an \(i\)-set of \(G_{w}\), and so \(W:=\{w\}\cup Z\) is the only new \(i\)-set generated in \(G_{w}\). Thus \(V(\mathscr{I}(G_{w}))=V(\mathscr{I}(G))\cup\{W\}\). Consider now the edges of \(\mathscr{I}(G_{w})\). Since \(w\) is adjacent to all of \(Q_{m}\) in \(G_{w}\), \(W\stackrel{wv_{j}}{\sim}V_{j}\) for each \(1\leq j\leq m\), and thus \(\mathcal{K}_{m}\cup W\) is a clique in \(\mathscr{I}(G_{w})\). Finally, we demonstrate that \(W\) is adjacent only to the \(i\)-sets of \(\mathcal{K}_{m}\). Consider some \(i\)-set \(S\notin\mathcal{K}_{m}\), and suppose to the contrary that \(W\sim S\). As \(W\) is the only \(i\)-set containing \(w\), we have that \(w\notin S\), and hence \(W\stackrel{wu}{\sim}S\) for some vertex \(u\). Since \(w\sim u\), \(u\notin Z\). Moreover, since \(W\) and \(S\) differ at exactly one vertex and \(Z\subseteq W\), it follows that \(Z\subseteq S\); that is, \(S=\{u\}\cup Z\). If \(u\in Q_{m}\) then \(S\in\mathcal{K}_{m}\), a contradiction. If \(u\in(X\cup Y\cup A)\), then by Claim 4, \(S\) is not an \(i\)-set, which is again a contradiction. 
We conclude that \(W\not\sim S\) for any \(i\)-set \(S\notin\mathcal{K}_{m}\), and therefore \(E(\mathscr{I}(G_{w}))=E(\mathscr{I}(G))\cup\left(\bigcup_{v_{i}\in Q_{m}}wv_{ i}\right)\). If follows that \(\mathscr{I}(G_{w})\cong H_{w}\). Our next result, the Deletion Lemma, shows that the class of \(i\)-graphs is closed under vertex deletion. It is unique among our other constructions; unlike most of our results which demonstrate how to build larger \(i\)-graphs from smaller ones, the Deletion Lemma instead shows that every induced subgraph of an \(i\)-graph is also an \(i\)-graph. Figure 6: Construction of \(G_{w}\) from \(G\) in Lemma 5.1. **Lemma 5.2** (The Deletion Lemma): _If \(H\) is a nontrivial \(i\)-graph, then any induced subgraph of \(H\) is also an \(i\)-graph._ **Proof.** Let \(G\) be a graph such that \(H=\mathscr{I}(G)\) and \(i(G)=k\). To prove this result, we show that for any \(X\in V(H)\), there exists some graph \(G_{X}\) such that \(\mathscr{I}(G_{X})=H-X\). To construct \(G_{X}\), take a copy of \(G\) and add to it a vertex \(z\) so that \(z\) is adjacent to each vertex of \(G-X\) (see Figure 7). Observe first that since \(H\) is nontrivial, there exists an \(i\)-set \(S\neq X\) of \(G\). Then, \(S\) is also an independent dominating set of \(G_{X}\), and so \(i(G_{X})\leq k\). Consider now some \(i\)-set \(S_{X}\) of \(G_{X}\). Clearly \(S_{X}\neq X\) because \(X\) does not dominate \(z\). If \(z\in S_{X}\), then as \(S_{X}\) is independent, no vertex of \(G-X\) is in \(S_{X}\). Moreover, since \(X\) is also independent and its vertices have all of their neighbors in \(G-X\), this leaves each vertex of \(X\) to dominate itself. That is, \(X\subseteq S_{X}\), implying that \(S_{X}=X\cup\{z\}\) and \(|S_{X}|=k+1\). This contradicts that \(i(G_{X})\leq k\), and thus we conclude that \(z\) is not in any \(i\)-set of \(G_{X}\). It follows that each \(i\)-set of \(G_{X}\) is composed only of vertices from \(G\) and so \(i(G_{X})=k\). Thus, \(S_{X}\neq X\) is an \(i\)-set of \(G_{X}\) if and only if it is an \(i\)-set of \(G\). Given that \(V(\mathscr{I}(G_{X}))=V(\mathscr{I}(G))-\{X\}=V(H)-\{X\}\), we have that \(\mathscr{I}(G_{X})=H-X\) as required. The following corollary is immediate as the contrapositive of Lemma 5.2. **Corollary 5.3**: _If \(H\) is not an \(i\)-graph, then any graph containing an induced copy of \(H\) is also not an \(i\)-graph._ This powerful corollary, although simple in statement and proof, immediately removes many families of graphs from \(i\)-graph realizability. For example, all wheels, 2-trees, and maximal planar graphs on at least five vertices contain an induced copy of the Diamond graph \(\mathfrak{D}\), which was shown in Proposition 3.1 to not be an \(i\)-graph. Moreover, given that \(i\)-graph realizability is an inherited property, this suggests that there may be a finite-family forbidden subgraph characterization for \(i\)-graph realizability. We now alter course to examine how one may construct new \(i\)-graphs by combining several known \(i\)-graphs. Understandably, an immediate obstruction to combining the constructions of \(i\)-graphs of, say, \(\mathscr{I}(G_{1})=H_{1}\) and \(\mathscr{I}(G_{2})=H_{2}\) is that it is possible (and indeed, likely) that \(i(G_{1})\neq i(G_{2})\). Figure 7: Construction of \(G_{X}\) in Lemma 5.2. Two solutions to this quandary are presented in the following lemmas. 
In the first, Lemma 5.4, given a graph \(G\), we progressively construct an infinite family of seed graphs \({\cal G}\) with the same number of components as \(G\), and such that \({\mathscr{I}}(G)={\mathscr{I}}(G_{j})\) for each \(G_{j}\in{\cal G}\). The second, Lemma 5.5 or the **Inflation Lemma**, offers a more direct solution: given an \(i\)-graph \(H\), we demonstrate how to "inflate" a seed graph \(G\) to produce a new graph \(G^{*}\) such that \({\mathscr{I}}(G^{*})={\mathscr{I}}(G)\) and the \(i\)-sets of \(G^{*}\) are arbitrarily larger than the \(i\)-sets of \(G\). **Lemma 5.4**: _If \(G\) is a graph with \({\mathscr{I}}(G)\cong H\), then there exists an infinite family of graphs \({\cal G}\) such that \({\mathscr{I}}(G_{j})\cong H\) for each \(G_{j}\in{\cal G}\). Moreover, the number of components of \(G_{j}\in{\cal G}\) is the same as \(G\) (\(k(G)=k(G_{j})\))._ **Proof.** Suppose \(v\in V(G)\), and let \(G^{*}\) be the graph obtained by attaching a copy of the star \(K_{1,3}\) with \(V(K_{1,3})=\{x,y_{1},y_{2},y_{3}\}\) (\(\deg(x)=3\)) by joining \(v\) to \(y_{1}\). As \(y_{2}\) and \(y_{3}\) are pendant vertices, \(i(G^{*})\geq i(G)+1\). If \(S\) is an \(i\)-set of \(G\), then \(S^{*}=S\cup\{x\}\) is dominating and independent, and so \(i(G^{*})=i(G)+1\). Thus, \(x\) is in every \(i\)-set of \(G^{*}\), and we can conclude that \(S^{*}\) is an \(i\)-set of \(G^{*}\) if and only if \(S^{*}-\{x\}\) is an \(i\)-set of \(G\). It follows that \({\mathscr{I}}(G^{*})\cong{\mathscr{I}}(G)\) as required. Attaching additional copies of \(K_{1,3}\) as above at any vertex of \(H\) similarly creates the other graphs of \({\cal G}\). **Lemma 5.5** (Inflation Lemma): _If \(H\) is the \(i\)-graph of some graph \(G\), then for any \(k\geq i(G)\) there exists a graph \(G^{*}\) such that \(i(G^{*})=k\) and \({\mathscr{I}}(G^{*})\cong H\)._ **Proof.** Begin with a copy of \(G\) and add to it \(\ell=k-i(G)\) isolated vertices, \(S=\{v_{1},v_{2},\ldots,v_{\ell}\}\). Immediately, \(X\) is an \(i\)-set of \(G\) if and only if \(X\cup S\) is an \(i\)-set of \(G^{*}\). Moreover, if \(X\) and \(Y\) are \(i\)-sets of \(G\) such that \(X{\sim}_{G}Y\) in \(H\), then \((X\cup S){\sim}_{G^{*}}(Y\cup S)\), and so \({\mathscr{I}}(G^{*})\cong H\). Now, when attempting to combine the constructions of \({\mathscr{I}}(G_{1})=H_{1}\) and \({\mathscr{I}}(G_{2})=H_{2}\) and \(i(G_{1})<i(G_{2})\), we need only inflate \(G_{1}\) until its \(i\)-sets are the same size as those in \(G_{2}\). A powerful construction tool, the Inflation Lemma is used repeatedly in almost all of the following results of this section. In the next result we show that, given \(i\)-graphs \(H_{1}\) and \(H_{2}\), a new \(i\)-graph \(H\) can be formed by identifying any two vertices in \(H_{1}\) and \(H_{2}\). The proof here uses Proposition 4.3, the Deletion Lemma (Lemma 5.2), and the Inflation Lemma (Lemma 5.5); a proof in which a source graph of \(H\) is given can be found in [23, Proposition 3.30]. This result provides an alternative proof for Theorem 4.8. **Proposition 5.6**: _Let \(H_{1}\) and \(H_{2}\) be \(i\)-graphs. Then the graph \(H_{x=y}\), formed by identifying a vertex \(x\) of \(H_{1}\) with a vertex \(y\) of \(H_{2}\), is also an \(i\)-graph._ **Proof.** Suppose \(G_{1}\) and \(G_{2}\) are graphs such that \({\mathscr{I}}(G_{1})=H_{1}\) and \({\mathscr{I}}(G_{2})=H_{2}\). Applying the Inflation Lemma we may assume that \(i(G_{1})=i(G_{2})=k\geq 2\). 
By Proposition 4.3 there is a graph \(G\) such that \(\mathscr{I}(G)=H_{1}\,\Box\,H_{2}\). Since \(H_{x=y}\) is an induced subgraph of \(H_{1}\,\Box\,H_{2}\), we may apply the Deletion Lemma and delete all other vertices of \(H_{1}\,\Box\,H_{2}\) until only \(H_{x=y}\) remains.

We use Proposition 5.6 to show that two \(i\)-graphs may be connected by an edge between any two vertices to produce a new \(i\)-graph. A proof that gives a source graph for this new \(i\)-graph is given in [23, Proposition 3.26].

**Proposition 5.7**: _Let \(H_{1}\) and \(H_{2}\) be disjoint \(i\)-graphs. Then the graph \(H_{xy}\), formed by connecting \(H_{1}\) to \(H_{2}\) by an edge between any \(x\in V(H_{1})\) and any \(y\in V(H_{2})\), is also an \(i\)-graph._

**Proof.** Let \(H_{3}\cong K_{2}\) with \(V(H_{3})=\{u,v\}\). Applying Proposition 5.6 twice, we see that the graph \(H_{xu}\) obtained by identifying \(x\in V(H_{1})\) with \(u\in V(H_{3})\), and the graph \(H_{xy}\) obtained by identifying \(v\in V(H_{xu})\) with \(y\in V(H_{2})\), are \(i\)-graphs.

The following corollary provides a way to connect two \(i\)-graphs with a clique rather than a bridge. A constructive proof in which a source graph for the resulting \(i\)-graph is provided can be found in [23, Corollary 3.27].

**Corollary 5.8**: _Let \(H_{1}\) and \(H_{2}\) be \(i\)-graphs, and let \(H\) be the graph formed from them as in Proposition 5.7 by creating a bridge \(xy\) between them. Then the graph \(H_{m}\) formed by replacing \(xy\) with a \(K_{m}\) for \(m\geq 2\) is also an \(i\)-graph._

**Proof.** Apply the Max Clique Replacement Lemma (Lemma 5.1) to the edge \(xy\) in Proposition 5.7.

The next proposition provides a method for combining two \(i\)-graphs without connecting them by an edge.

**Proposition 5.9**: _If \(H_{1}\) and \(H_{2}\) are \(i\)-graphs, then \(H_{1}\cup H_{2}\) is an \(i\)-graph._

**Proof.** Suppose \(G_{1}\) and \(G_{2}\) are graphs such that \(\mathscr{I}(G_{1})=H_{1}\) and \(\mathscr{I}(G_{2})=H_{2}\). We assume that \(i(G_{1})=i(G_{2})\geq 2\); otherwise, apply the Inflation Lemma (Lemma 5.5) to obtain graphs with \(i\)-sets of equal size at least \(2\). Let \(G=G_{1}\lor G_{2}\), the join of \(G_{1}\) and \(G_{2}\). We claim that \(\mathscr{I}(G)=H_{1}\cup H_{2}\). We proceed similarly to the proof of Proposition 5.7; namely, if \(S\) is an \(i\)-set of \(G_{1}\) or of \(G_{2}\), then \(S\) is an independent dominating set of \(G\). Likewise, we observe that any \(i\)-set of \(G\) is a subset of \(V(G_{1})\) or of \(V(G_{2})\), and so \(S\) is an \(i\)-set of \(G\) if and only if it is an \(i\)-set of \(G_{1}\) or of \(G_{2}\). Suppose \(X\sim_{G_{1}}^{xy}Y\). Then in \(G\), the sets \(X\) and \(Y\) are still \(i\)-sets, and likewise, the vertices \(X\) and \(Y\) are still adjacent, and so \(X\sim_{G}^{xy}Y\). Now suppose instead that \(X\) is an \(i\)-set of \(G_{1}\) and \(Y\) is an \(i\)-set of \(G_{2}\). Within \(G\), \(X\cap Y=\varnothing\) and \(|X|=|Y|\geq 2\), so \(X\) and \(Y\) are not adjacent in \(\mathscr{I}(G)\). Therefore, \(X\sim_{G}Y\) if and only if \(X\sim_{G_{1}}Y\) or \(X\sim_{G_{2}}Y\). It follows that \(\mathscr{I}(G)=\mathscr{I}(G_{1})\cup\mathscr{I}(G_{2})=H_{1}\cup H_{2}\) as required.

Applying these new tools in combination yields some unexpected results. 
For example, the following corollary, which makes use of the previous Proposition 5.9 in partnership with the Deletion Lemma (a construction for combining \(i\)-graphs and a construction for vertex deletions) gives our first result on \(i\)-graph edge deletions. **Corollary 5.10**: _Let \(H\) be an \(i\)-graph with a bridge \(e\), such that the deletion of \(e\) separates \(H\) into components \(H_{1}\) and \(H_{2}\). Then_ 1. \(H_{1}\) _and_ \(H_{2}\) _are_ \(i\)_-graphs, and_ 2. _the graph_ \(H^{*}=H-e\) _is an_ \(i\)_-graph._ **Proof.** Part (i) follows immediately from Lemma 5.2. For (ii), by Part (i), \(H_{1}\) and \(H_{2}\) are \(i\)-graphs. Proposition 5.9 now implies that \(H_{1}\cup H_{2}=H-e\) is also an \(i\)-graph. Combining the results of Proposition 5.6 with Proposition 5.7 and Corollary 5.10, yields the following main result. **Theorem 5.11**: _A graph \(G\) is an \(i\)-graph if and only if all of its blocks are \(i\)-graphs._ As observed in Corollary 5.3, graphs with an induced \(\mathfrak{D}\) subgraph are not \(i\)-realizable. If we consider the family of connected chordal graphs excluding those with an induced copy of \(\mathfrak{D}\), we are left with the family of block graphs (also called clique trees): graphs where each block is a clique. As cliques are their own \(i\)-graph, the following is immediate. **Proposition 5.12**: _Block graphs are \(i\)-graph realizable._ Cacti are graphs whose blocks are cycles or edges. Thus, we have the following immediate corollary. **Corollary 5.13**: _Cactus graphs are \(i\)-graph realizable._ While the proof of Proposition 5.6 does provide a method for building block graphs, it is laborious to do so on a graph with many blocks, as the construction is iterative, with each block being appended one at a time. However, when we consider that the blocks of block graphs are complete graphs, and that complete graphs are their own \(i\)-graphs (and thus arguably the easiest \(i\)-graphs to construct), it is logical that there is a simpler construction. We offer one such construction below. An example of this process is illustrated in Figure 8. **Construction 5.14**: _Let \(H\) be a block graph with \(V(H)=\{v_{1},v_{2},\ldots,v_{n}\}\) and let \(\mathcal{B}_{H}=\{B_{1},B_{2},\ldots,B_{m}\}\) be the collection of maximal cliques of \(H\). To construct a graph \(G\) such that \(\mathscr{I}(G)=H\):_ 1. _Begin with a copy of each of the maximal cliques of_ \(H\)_, labelled_ \(A_{1},A_{2},\ldots,A_{m}\) _in_ \(G\)_, where_ \(A_{i}\) _of_ \(G\) _corresponds to_ \(B_{i}\) _of_ \(H\) _for each_ \(1\leq i\leq m\)_, and the_ \(A_{i}\) _are pairwise disjoint. Notice that each cut vertex of_ \(H\) _has multiple corresponding vertices in_ \(G\)_._ 2. _Let_ \(v\in V(H)\) _be a cut vertex and_ \(\mathcal{B}_{v}\) _be the collection of blocks containing_ \(v\) _in_ \(H\)_; for notational ease, say_ \(\mathcal{B}_{v}=\{B_{1},B_{2},\ldots,B_{k}\}\)_, and suppose that_ \(W=\{w_{1},w_{2},\ldots,w_{k}\}\subseteq V(G)\) _are the_ \(k\) _vertices corresponding to_ \(v\)_, where_ \(w_{i}\in A_{i}\) _for all_ \(1\leq i\leq k\)_._ _For each distinct pair_ \(w_{i}\) _and_ \(w_{j}\) _of_ \(W\)_, add to_ \(G\) _three internally disjoint paths of length two between_ \(w_{i}\) _and_ \(w_{j}\)_. Since_ \(v\) _is in_ \(k\) _blocks of_ \(H\)_,_ \(3\binom{k}{2}\) _vertices are added in this process. These additions are represented as the green vertices in Figure_ 8_._ 3. 
_Repeat Step (ii) for each cut vertex of_ \(H\)_._ To see that the graph \(G\) from Construction 5.14 does indeed have \(\mathscr{I}(G)=H\), notice that \(i(G)=m\), where \(m\) is the number of blocks in \(H\); if \(X\) is an \(i\)-set of \(G\), then \(|X\cap A_{i}|\)=1 for each \(A_{i}\in\{A_{1},A_{2},\ldots A_{m}\}\). Moreover, no \(i\)-set of \(G\) has vertices in the added green vertices, because, as with the proof of Proposition 5.6, the inclusion of any one of these green vertices in an independent dominating set necessitates the addition of them all. In Figure 8(b), the five yellow vertices form the \(i\)-set corresponding to the yellow vertex of \(G\) in Figure 8(a). Only the token on the purple \(K_{5}\) can move in \(G\); the other four tokens remain frozen, thereby generating the corresponding purple \(K_{5}\) of \(H\). It is only when the token on the purple \(K_{5}\) is moved to the vertex \(x_{G}\) that the tokens on the orange \(K_{4}\), and the brown and green \(K_{2}\)'s, unfreeze one clique at a time. This corresponds to the cut vertex Figure 8: The construction of \(G\) from \(H\) in the proof of Proposition 5.12. \(i\)-set \(X_{H}\) of \(H\). The freedom of movement now transfers from the purple \(K_{5}\) to any of the three other cliques, allowing for the generation of their associated blocks in \(G\) as required. Finally, before we depart from block graphs, as chordal graphs are among the most well-studied families of graphs, we offer one additional reframing of this block graph result from the chordal graph perspective. **Corollary 5.15**: _A chordal graph is \(i\)-graph realizable if and only if it is \(\mathfrak{D}\)-free._ With the addition of Proposition 5.12 to the results used to build Observation 4.10, this leaves only the house graph (see Figure 9(b)) as unsettled with regard to its \(i\)-graph realizability among the \(34\) non-isomorphic graphs on five vertices. Although not strictly a result concerning the construction of larger \(i\)-graphs from known results, we include the following short proposition here for the sake of completeness. **Proposition 5.16**: _The house graph \(\mathcal{H}\) is an \(i\)-graph._ To demonstrate Proposition 5.16, we provide an exact seed graph for the \(i\)-graph: the graph \(G\) in Figure 9(a) (\(K_{3}\) with a \(P_{3}\) tail) has \(\mathscr{I}(G)=\mathcal{H}\). The \(i\)-sets of \(G\) and their adjacency are overlaid on \(\mathcal{H}\) in Figure 9(b). ## 6 Conclusion As we observed above, although not every graph is \(i\)-graph realizable, every graph does have an \(i\)-graph. The exact structure of the resulting \(i\)-graph can vary among families of graphs from the simplest isolated vertex to surprisingly complex structures. To illustrate this point, we determine the \(i\)-graphs of paths and cycles in [5]. We showed in Section 3 that the theta graphs \(\mathfrak{D}\cong\Theta\left\langle 1,2,2\right\rangle\), \(K_{2,3}\cong\Theta\left\langle 2,2,2\right\rangle\), and \(\kappa\cong\Theta\left\langle 2,2,3\right\rangle\) are not \(i\)-graph realizable. In [6] we investigate the class of theta graphs and determine exactly which ones fail to be \(i\)-graph realizable - there are only finitely many such graphs. We also present a graph that is neither a theta graph nor \(i\)-graph realizable. The following question remains open. 
**Question 1**: _Does there exist a finite forbidden subgraph characterization of \(i\)-graph realizable graphs?_ **Acknowledgement** We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2014-04760 and RGPIN-03930-2020. Cette recherche a ete financee par le Conseil de recherches en sciences naturelles et en genie du Canada (CRSNG), RGPIN-2014-04760 and RGPIN-03930-2020.
2305.02371
Spectral cyclicality of networks
We introduce the spectral influence and spectral cyclicality based on the largest eigenvalue of a graph adjacency matrix, two novel concepts of centrality capturing diffusion and interdependence from a local and a global point of view respectively. We define a new clustering algorithm to distinguish communities with high cyclicality and interdependence, allowing overlapping, and we conclude our study with an application to the input-output analysis in the case of the Moroccan economy.
Nizar Riane
2023-05-03T18:16:47Z
http://arxiv.org/abs/2305.02371v1
# Spectral cyclicality of networks ###### Abstract We introduce the spectral influence and spectral cyclicality based on the largest eigenvalue of a graph adjacency matrix, two novel concepts of centrality capturing diffusion and interdependence from a local and a global point of view respectively. We define a new clustering algorithm to distinguish communities with high cyclicality and interdependence, allowing overlapping, and we conclude our study with an application to the input-output analysis in the case of the Moroccan economy. \({}^{\dagger}\) Universite Mohammed V de Rabat, Maroc1 Footnote 1: [email protected] \({}^{\ddagger}\) Sorbonne Universite CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, 4, place Jussieu 75005, Paris, France **Keywords**: Spectral influence; Spectral cyclicality; Cyclicality clustering; Input-output analysis. **AMS Classification**: 05C22 - 05C38 - 05C85 - 05C90 **JEL Classification**: ###### Contents * 1 Introduction * 2 Basics of graph theory * 3 Spectral influence and cyclicality * 3.1 Spectral influence: a local approach * 3.2 Spectral influence: a global approach * 4 Spectral influence and cyclicality clustering * 4.1 Divisive cyclicality clustering * 4.2 Agglomerative cyclicality clustering * 4.3 Overlapping clustering * 4.4 Discussion * 5 Application to input-output analysis * 5.1 Moroccan interindustrial cyclicality analysis * 5.2 Propulsive industries and key cyclicality components * 5.3 Discussion * 6 Conclusion * 7 Appendix * 7.1 Industrial poles of the OECD harmonised national input-output tables * 7.2 Spectral influence evolution by industrial poles ## 1 Introduction The idea of centrality in network analysis can be traced back to early work in graph theory, and many definitions of centrality have been proposed, each one bringing different insights to the connections in a graph. For example, in 1987, Phillip Bonacich [1] introduced a centrality measure of a vertex based on the power centrality of its neighbors, as a way of addressing some of the limitations of other centrality measures, such as degree centrality and eigenvector centrality, where a high score means that a vertex is connected to another vertices who themselves have high scores ([12] ch.7). Bonacich argued that "in bargaining situations, it is advantageous to be connected to those who have few options; power comes from being connected to those who are powerless. Being connected to powerful others who have many potential trading partners reduces one's bargaining power" ([1]). Linton Freeman ([13]) believed that "the development of measures should help to clarify a concept by specifying its components and their interrelationships. In the case of centrality, however, the opposite effect seems to have been achieved. In a sense, the introduction of new measures at this stage is inappropriate. Ideally, measures should grow out of advanced theoretical efforts; they should be defined in the context of explicit process models", despite this opposition, new definitions of centrality didn't stop appearing. We believe that the reservation regarding multiplicity of centrality definitions comes from the confusion that the term "centrality" induces, it did not designate anymore a unique measure of "importance" but a field of study regrouping different definitions for different purposes. Following this momentum, we introduce a new definition of centrality, based on the spectral radius of the network adjacency matrix, which we call the **spectral influence**. 
This new concept characterizes vertices importance from a diffusion and interdependence perspective: a vertex score will be higher as much as his role in the diffusion, through closed walks, across the network is important. The spectral influence is a local measure of a global one we call the **spectral cyclicality**, which characterize the cyclicality over the graph. We go further by developing a cyclicality based clustering algorithm. Following a divisive or an agglomerative philosophies, one can isolate communities with highest diffusion property in a graph. A cycle is the simplest structure that brings redundant paths in network connectivity and feedback effects in network dynamics ([10]). Cycles in a networks are synonym of interdependence, redundancy, diffusion, cascade effect, contagion, amplification and attenuation. Cyclical patterns and phenomena are the subject of investigation in various fields. For instance, in the field of biology, cyclicality is studied in the context of epidemic spread [20], while in biochemistry, it is explored as a possible explanation for biochemical oscillations [1]. In ecology, researchers investigate the robustness of food webs to biodiversity loss in the presence of cyclical patterns [14], while in finance, cyclicality is examined for its role in the propagation of shocks within financial systems [13], and so on. In the field of economics, numerous studies have focused on the importance of network structures in understanding economic agent interactions. These network-based studies emphasize significance of network structure in determining economic agents interaction outcomes. For example, Acemoglu et al. [1] showed that microeconomic idiosyncratic shocks may lead to aggregate fluctuations in the presence of intersectoral asymmetric interconnections, in opposite to Robert Lucas ([1]) argument that microeconomic shocks would average out, and thus, would only have negligible aggregate effects, but the authors conclude that in economies with balanced intersectoral network structures, such as cycle, the aggregate volatility decays at a constant rate. In an opposite direction, the authors of [2] expose a different conclusion in the case of banking system, where it is shown that a cycle structure increase risk of domino effect. In order to give a different perspective, we have opted to apply our definition of centrality to investigate interindustrial flows, taking advantage of the amplification effect of cycles to highlight propulsive industries, in line with Francois Perroux's theory ([1] page 380): "the actions by which the increase in the rate of growth of the product or the productivity of a simple or complex unit A, causes the increase in the rate of growth of the product or the productivity of another simple or complex unit B. The propulsive unit, thus understood, acts theoretically, either by an effect of productivity or innovation. Often combined, in practice, these effects can however be distinguished conceptually.". Our application could be further motivated by the work of Roland Lantner [1], who highlighted the interdependence implied by the interindustrial flows: "The significance of a transaction between a supplier and a demander is measured less by its absolute value than by the degree of vulnerability it implies for one or the other". 
Interdependence of industrial poles is accompanied by shock diffusion between poles and the resulting amplification effects, as highlighted by Francois Perroux ([1] page 68): "The dominance effect of A can be transmitted to B, C or D, directly or indirectly. If it is transmitted through an intermediary unit (T), the latter can amplify or dampen (or even completely stop) the initial effect".

We apply our spectral cyclicality analysis to the Moroccan input-output tables (IOTs) [2]. We bring to light the profound mutations of the Moroccan economy. By calculating the spectral influence of each industry we identify the propulsive industries, and we highlight the unique cyclical character of the Moroccan interindustrial flows. We conclude by extracting the key cyclicality components of the industries that form the core of the Moroccan economy.

Our paper is organized as follows:

1. In Section 2, we recall the rudiments of graph topology and of the spectral theory of non-negative matrices.
2. In Section 3, we introduce the spectral influence of a vertex using the spectral radius, explore its properties, and introduce the spectral cyclicality as a measure of global diffusion on a graph.
3. In Section 4, we introduce our diffusion-based clustering algorithm, following a divisive/agglomerative approach and allowing overlapping.
4. In Section 5, we apply our technique to the Moroccan input-output tables, from 1995 to 2018.

## 2 Basics of graph theory

Throughout this paper, our results concern weighted directed graphs; they could be adapted to undirected graphs as well. We recall next some fundamental results from graph topology and non-negative matrix theory; the reader may consult, for example, [11] and [12] for more details.

**Definition 2.1** (**Graph**).: A **graph** \(G\) is an ordered pair \(G=\left(V,A\right)\) where:

1. \(V=\left\{s_{1},\ldots,s_{n}\right\}\) is the set of **vertices**,
2. \(A\subset V^{2}\) is the set of unordered pairs of vertices, called **edges**.

When the pairs of the set \(A\) are directed, the graph is called a **directed graph** and the pairs are called **arcs**. The graph \(G=\left(V,A,W\right)\), endowed with a weight matrix \(W=\left(w_{ij}\right)_{1\leqslant i,\,j\leqslant n}\), is called a **weighted graph**.

**Definition 2.2** (**Graph topology** [11]).: Let us denote by \(G=\left(V,A\right)\) a directed graph. We define

1. a **walk** in \(G\) as an alternating sequence of vertices and arcs, \(s_{0},a_{1},s_{1},\ldots,a_{n},s_{n}\), where, for \(1\leqslant i\leqslant n\), \(a_{i}=s_{i-1}\,s_{i}\). The length of such a walk is \(n\), which is also equal to the number of arcs.
2. a **closed walk** as a walk with the same first and last vertex.
3. a **path** as a walk where all vertices are distinct.
4. a **trail** as a walk with no repeated arcs.
5. a **circuit** as a nontrivial closed trail.
6. a **cycle** as a nontrivial closed path.
7. a **Hamiltonian cycle** as a cycle that includes every vertex of \(V\).
8. a **loop** as an arc from a vertex to itself.

A **strongly connected graph** is a graph where there is a path (in each direction) between each pair of vertices of the graph. 
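To fix ideas, the small sketch below (in Python with numpy; the four-vertex example matrix and the helper name are ours, not taken from the text) encodes a weighted digraph by its adjacency matrix and tests strong connectivity through reachability, that is, the existence of a path in each direction between every pair of vertices.

```python
import numpy as np

# Weighted digraph on {s_1, s_2, s_3, s_4}: w[i, j] > 0 encodes an arc s_{i+1} -> s_{j+1}.
W = np.array([
    [0.0, 2.0, 0.0, 0.0],   # s_1 -> s_2
    [0.0, 0.0, 1.5, 0.0],   # s_2 -> s_3
    [0.5, 0.0, 0.0, 1.0],   # s_3 -> s_1 and s_3 -> s_4
    [0.0, 0.0, 0.0, 0.7],   # a loop on s_4
])

def is_strongly_connected(W: np.ndarray) -> bool:
    """True iff there is a path in each direction between every pair of vertices."""
    n = W.shape[0]
    A = (W > 0).astype(int)
    # (I + A)^(n-1) counts walks of length at most n-1; a positive entry means reachability.
    reach = np.linalg.matrix_power(np.eye(n, dtype=int) + A, n - 1)
    return bool((reach > 0).all())

print(is_strongly_connected(W))   # False: s_4 cannot reach the cycle s_1 s_2 s_3
W[3, 2] = 0.3                     # add an arc s_4 -> s_3 ...
print(is_strongly_connected(W))   # ... and the digraph becomes strongly connected
```

The closed walks of Definition 2.2 are likewise recorded by the diagonal entries of the powers of this matrix, a link made precise by Theorem 2.4 below.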
**Definition 2.3** (**Walk value and geometric mean value**).: Given a weighted digraph \(G=\left(V,A,W\right)\) and a walk \(\mathcal{W}_{s_{0}s_{n}}=s_{0},a_{1},s_{1},\ldots,a_{n},s_{n}\) in \(G\) of length \(n\), one can define its **value**\(v(\mathcal{W}_{s_{0}s_{n}})\) as the product \[v(\mathcal{W}_{s_{0}s_{n}})=w_{01}\cdots w_{n-1\,n}\] and its **geometric mean value** as \[\sqrt[n]{v(\mathcal{W}_{s_{0}s_{n}})}=\sqrt[n]{w_{01}\cdots w_{n-1\,n}}\] **Proposition 2.1**.: _Closed walk decomposition Every closed walk can be decomposed into a union of Hamiltonian cycles._ Proof.: Suppose the closed walk is of length \(m\) and contains at least one redundant vertex, otherwise it is Hamiltonian. The sequence between two redundant vertices is a closed walk of some length \(k<m\) on which the same argument holds recursively, dropping all those closed walks we end with a Hamiltonian cycle. **Definition 2.4** (**Dominant cycle**).: Let \(G=(V,A,W)\) be a weighted digraph and let denotes by \(\mathcal{C}_{j}^{n_{j}}\) a Hamiltonian cycle of length \(n_{j}\) starting from \(s_{j}\). We call **dominant cycle** the Hamiltonian cycle \(\mathcal{C}_{\star}^{n_{\star}}\) with maximum geometric mean value. **Theorem 2.2** (**Perron-Frobenius [13] ch.8**).: _Let us consider a non-negative matrix \(W\), with spectral radius \(\rho(W)\). Then:_ 1. \(\rho(W)\) _is an eigenvalue of_ \(W\)_, and there exists a non-negative, non-zero vector_ \(\mathbf{v}\) _such that:_ \[W\mathbf{v}=\rho(W)\mathbf{v}\] 2. \(\min_{1\leqslant i\leqslant n}\sum_{j=1}^{n}w_{ij}\leqslant\rho(W)\leqslant \max_{1\leqslant i\leqslant n}\sum_{j=1}^{n}w_{ij}\) _and_ \(\min_{1\leqslant j\leqslant n}\sum_{i=1}^{n}w_{ij}\leqslant\rho(W)\leqslant \max_{1\leqslant j\leqslant n}\sum_{i=1}^{n}w_{ij}\)_._ _Moreover, if \(W\) is irreducible (which corresponds to the situation of a strongly connected graph), then_ 1. _the eigenvalue_ \(\rho(W)>0\) _is an algebraically simple._ 2. _there is a unique positive right eigenvector_ \(\mathbf{v}\) _such that_ \(W\mathbf{v}=\rho(W)\mathbf{v}\) _and_ \(\sum_{i=1}^{n}\mathbf{v}_{i}=1\)_._ 3. _there is a unique positive left eigenvector_ \(\mathbf{w}\) _such that_ \(\mathbf{w}^{T}W=\rho(W)\mathbf{w}^{T}\) _and and_ \(\mathbf{w}\cdot\mathbf{v}=1\)_._ **Corollary 2.3** (**Perron-Frobenius [13] ch.8**).: _If \(\mathbf{A}\) is a non-negative irreducible matrix and \(\mathbf{B}\) is any principal square submatrix of \(\mathbf{A}\), then \(\rho(\mathbf{B})<\rho(\mathbf{A})\)._ **Theorem 2.4** (**[**13**] p.539 )**.: _Let \(\mathbf{A}\) be a non-negative matrix and Tr the trace operator. Then \(\rho(\mathbf{A})=\limsup_{m\rightarrow+\infty}\left(\text{Tr}(\mathbf{A}^{m}) \right)^{\frac{1}{m}}\). The limit superior is replaced with the limit if the matrix is irreducible and primitive (positive at some matrix power \(k\geqslant 1\))._ Spectral influence and cyclicality In the sequel, \(G=(V,A,W)\) denotes a weighted digraph with adjacency matrix \(W\). ### Spectral influence: a local approach **Notation.** * Given a subset \(S\subset V\), \(W(S)\) designates the adjacency matrix \(W\) resulting from the suppression of rows and columns corresponding to the elements of \(S\). * Given a matrix \(\mathbf{A}\), write \(\mathbf{A}\geqslant 0\) if \(\mathbf{A}_{ij}\geqslant 0\), and \(\mathbf{A}>0\) if \(\mathbf{A}_{ij}>0\), for all \(i,j\). 
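As a numerical illustration of the notation \(W(S)\) and of Theorem 2.4 (a sketch only, assuming Python with numpy; the \(3\times 3\) example matrix and the function names are ours), one may compute the spectral radius before and after deleting a vertex; these are exactly the two quantities combined in the definition that follows.

```python
import numpy as np

# Example weight matrix (ours): a 3-cycle s_1 -> s_2 -> s_3 -> s_1 with a loop on s_1.
W = np.array([
    [0.2, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
])

def spectral_radius(M: np.ndarray) -> float:
    """rho(M): the largest modulus among the eigenvalues of M."""
    return float(np.abs(np.linalg.eigvals(M)).max()) if M.size else 0.0

def deleted(W: np.ndarray, S) -> np.ndarray:
    """W(S): the matrix W with the rows and columns indexed by S removed."""
    keep = [i for i in range(W.shape[0]) if i not in set(S)]
    return W[np.ix_(keep, keep)]

print("rho(W)        =", spectral_radius(W))
print("rho(W({s_1})) =", spectral_radius(deleted(W, {0})))   # deleting s_1 destroys the cycle

# Theorem 2.4, numerically: Tr(W^m)^(1/m) approaches rho(W) as m grows
# (this W is irreducible and primitive, so the limit exists).
for m in (12, 60, 300):
    print(m, np.trace(np.linalg.matrix_power(W, m)) ** (1.0 / m))
```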
We introduce next the central concept of this work: **Definition 3.1** (**Spectral influence**).: Given a weighted graph \(G=(V,A,W)\), we define the **spectral influence** of a subset \(S\) of \(V\), as \[\mathcal{S}(S)=\frac{\rho(W)-\rho(W(S))}{\rho(W)}\] On has \(0\leqslant\mathcal{S}(S)\leqslant 1\). In the particular case of a strongly connected graph, the matrix \(W\) is irreducible, one can deduce from Perron-Frobenius corollary 2.3 that \(0<\mathcal{S}(S)\) since \(\rho(W(S))<\rho(W)\). We have the following fundamental result: **Theorem 3.1** (**Spectral radius, closed walks and dominant cycle**).: _The spectral radius of the matrix \(W\) is asymptotically a nondecreasing function of the graph closed walks value, in particular, it is an increasing function of the dominant cycle value._ Proof.: It follows directly from theorem 2.4 that \[\rho(W)=\limsup_{m\rightarrow+\infty}\left(\operatorname{Tr}\left(W^{m}\right) \right)^{\frac{1}{m}}=\limsup_{m\rightarrow+\infty}\left(\sum_{j=1}^{n} \left(W^{m}\right)_{jj}\right)^{\frac{1}{m}}\] Using closed walk decomposition 2.1, let designate by \(\mathcal{W}^{m}_{jj}\) an \(m\) closed walk starting from \(j\), \(\mathcal{C}^{n_{j}}_{j}\) a \(n_{j}\) Hamiltonian cycle starting from \(j\) and \(\mathcal{C}^{n_{*}}_{*}\) the dominant cycle, one has for some \(m\) sufficiently large \[\left(\sum_{j=1}^{n}\left(W^{m}\right)_{jj}\right)^{\frac{1}{m}} =\left(\sum_{j=1}^{n}\sum_{\mathcal{W}_{jj}^{m}}v\left(\mathcal{W} _{jj}^{m}\right)\right)^{\frac{1}{m}}\] \[=\left(\sum_{j=1}^{n}\sum_{\mathcal{W}_{jj}^{m}}\prod_{\mathcal{C }_{i}^{n_{i}}\subset\mathcal{W}_{jj}^{m}}v\left(\mathcal{C}_{i}^{n_{i}}\right) \right)^{\frac{1}{m}}\] \[=\left.\sqrt[n]{v\left(\mathcal{C}_{\star}^{n_{\star}}\right)} \left(\sum_{j=1}^{n}\sum_{\mathcal{W}_{jj}^{m}}\prod_{\mathcal{C}_{i}^{n_{i}} \subset\mathcal{W}_{jj}^{m}}\underbrace{\left(\frac{v\left(\mathcal{C}_{i}^{n_ {i}}\right)}{v\left(\mathcal{C}_{\star}^{n_{\star}}\right)^{\frac{n_{1}}{n_{ \star}}}\right)}}_{\leqslant 1}\right)^{\frac{1}{m}}\] \[=\mathcal{O}\left(\sqrt[n]{v\left(\mathcal{C}_{\star}^{n_{\star}} \right)}\right)\] Since \[\kappa(W,\star)=\left(\sum_{j=1}^{n}\sum_{\mathcal{W}_{jj}^{m}} \underset{\mathcal{C}_{i}^{n_{i}}\in\mathcal{W}_{jj}^{m}}{\prod}\left(\frac{v \left(\mathcal{C}_{i}^{n_{i}}\right)}{v\left(\mathcal{C}_{\star}^{n_{\star}} \right)^{\frac{n_{1}}{n_{\star}}}}\right)\right)^{\frac{1}{m}} \leqslant\left(n\sum_{\mathcal{W}_{jj}^{m}}\underset{\mathcal{C}_{ i}^{n_{i}}\in\mathcal{W}_{jj}^{m}}{\prod}\left(\frac{v\left(\mathcal{C}_{i}^{n_{i}} \right)}{v\left(\mathcal{C}_{\star}^{n_{\star}}\right)^{\frac{n_{1}}{n_{\star}} }}\right)\right)^{\frac{1}{m}}\] \[\leqslant\left(n\,n^{m-1}\right)^{\frac{1}{m}}\] \[=n\] where the upper bound is the number of closed walks in a complete graph. _Remark 3.1_.: 1. 
The first direct consequence is that the spectral influence is maximal for nodes whose deletion reduces the closed walks supremum value in \(G\) asymptotically the most, which is equivalent to the deletion of Hamiltonian cycles with the highest geometric mean value as a consequence of theorem 3.1: suppose the graph has \(l\) cycles decreasingly ordered by their geometric mean values \(\sqrt[n]{v\left(\mathcal{C}_{i_{1}}^{n_{1}}\right)}\geqslant\ldots\geqslant \sqrt[n]{v\left(\mathcal{C}_{i_{k}}^{n_{k}}\right)}\geqslant\ldots\geqslant \sqrt[n]{v\left(\mathcal{C}_{i_{l}}^{n_{l}}\right)}\), suppose the suppression of a vertex \(s\) induce the suppression of the first \(k-1\) Hamiltonian cycles in the order, the spectral influence is thus \[\mathcal{S}(s)=1-\mathcal{O}\left(\frac{\sqrt[n]{v\left(\mathcal{C}_{i_{k}}^{n_ {k}}\right)}}{\sqrt[n]{v\left(\mathcal{C}_{i_{1}}^{n_{1}}\right)}}\right)\] (1) 2. In the case of an **unweighted graphs**, the sum \(\sum_{j=1}^{n}\left(W^{m}\right)_{jj}\) is just the number of \(m\) closed walks and the spectral influence of a vertex \(s\) is a function of the number of closed walks passing through \(s\). 3. If the graph is **not strongly connected**, then the matrix \(W\) is reducible and it is similar to bloc triangular matrix \(\begin{pmatrix}W_{11}&W_{12}&\ldots&W_{1k}\\ 0&W_{22}&\ldots&W_{2k}\\ \vdots&\ddots&\ddots&\vdots\\ 0&\ldots&0&W_{kk}\end{pmatrix}\) ([11] page 63). The spectral radius is \(\rho(W)=\max\{\rho(W_{11}),\ldots,\rho(W_{nn})\}\) and the same reasoning holds per bloc, while the spectral influence is null for non dominant blocs. 4. Like other mathematical means, the geometric mean is sensible to extreme values. 5. One can enumerate the different configurations of spectral influence: denote by \(\mathcal{C}_{i_{1}}^{n_{i_{1}}}\) the dominant cycle and \(\mathcal{C}_{i_{s}}^{n_{i_{s}}}\) the dominant cycle after the suppression of a vertex \(s\), there are four situations 1. the vertex \(s\) does not belongs to the dominant cycle and 1. \(s\) does not belongs to a Hamiltonian cycle crossing the dominant cycle: \(\kappa(W(s),i_{1})=\kappa(W,i_{1})\) and \(\mathcal{S}(s)=0\). 2. \(s\) belongs to a Hamiltonian cycle crossing the dominant cycle: \(\kappa(W(s),i_{1})<\kappa(W,i_{1})\) and \(\mathcal{S}(s)>0\) (irreducibility corollary 2.3). 2. the vertex \(s\) belongs to the dominant cycle and 1. the vertex \(s\) does not belongs to a Hamiltonian cycle crossing \(\mathcal{C}_{i_{s}}^{n_{i_{s}}}\): \(\kappa(W(s),i_{2})=\kappa(W,i_{2})\) and \(\mathcal{S}(s)>0\). 2. the vertex \(s\) belongs to a Hamiltonian cycle crossing \(\mathcal{C}_{i_{s}}^{n_{i_{s}}}\): \(\kappa(W(s),i_{2})<\kappa(W,i_{2})\) and \(\mathcal{S}(s)\) is greater than situation (b).i. 6. The spectral influence is a relative local measure of interdependence and diffusion in networks, it measure the relative cyclicality involving the vertex in question. ### Spectral influence: a global approach We introduced above the concept of spectral influence as a measure of local influence, and it was demonstrated that this quantity is reliant on closed walks that involve the vertex under consideration, which in turn require the existence of Hamiltonian cycles (theorem 3.1). The total sum of the elementary influences will be as large as the number of vertices involved in Hamiltonian cycles, to achieve the maximum when all the vertices are contained in one dominant cycle (**redundancy effect**). 
Conversely, the total sum will decrease as the flow is split into small Hamiltonian cycles and loops. Given a subset \(S\subseteq V\) of cardinality \(k\), one can write

\[0\leqslant\sum_{s\in S}\left(\rho(W)-\rho(W(s))\right)\leqslant k\,\rho(W) \tag{2}\]

In the case of strongly connected graphs, the left-hand bound is never attained, by Corollary 2.3 (Perron-Frobenius). Conversely, the right-hand bound is strict except in the case where the graph is a Hamiltonian cycle (the suppression of any vertex then destroys every closed walk). The redundancy effect is the fact that a dominant cycle of length \(l\geqslant 2\) contains \(l\) vertices whose suppression reduces the spectral radius and increases the spectral influence; it has the following consequence:

**Corollary 3.2** (**The whole is not the sum of the parts**).: _Let \(G=(V,A,W)\) be a weighted graph and \(S\) be a subset of \(V\) of cardinality \(k\geqslant 2\). **The whole is not the sum of the parts** in the sense that the equality_ \[\sum_{s\in S}\left(\rho(W)-\rho(W(s))\right)=\rho(W)-\rho(W(S))\] _does not hold in general. In particular, \(\mathcal{S}\) is not a mathematical measure on \(V\)._

Proof.: Figure 1 gives a counter-example. Consequently, the spectral influence \(\mathcal{S}\) does not respect the countable additivity of mathematical measures.

**Definition 3.2** (**Spectral cyclicality**).: Given a weighted graph \(G=(V,A,W)\), we define the **spectral cyclicality** of \(G\) as \[\mathbf{S}=\sum_{s=1}^{n}\mathcal{S}(s)\]

The spectral cyclicality characterizes interdependence and diffusion in the graph. According to equation (2), \(0\leqslant\mathbf{S}\leqslant n\), and three extreme configurations could occur:

1. **The situation of perfect cyclicality**
* **Hamiltonian cycle**: when the connection structure is reduced to a Hamiltonian cycle, the elementary spectral influence is 1 (the destruction of any vertex disconnects the graph). The spectral cyclicality is \(n\).

2. **Situations of weak cyclicality**
* **Fair division**: each vertex shares uniformly the weight \(w_{i}\) with all the vertices. The spectral radius of the matrix equals the mean flow \(\dfrac{\sum_{i=1}^{n}w_{i}}{n}\) and the spectral influence is \[\mathcal{S}(s)=\dfrac{\dfrac{\sum_{i=1}^{n}w_{i}}{n}-\dfrac{\sum_{i\neq s}w_{i}}{n}}{\dfrac{\sum_{i=1}^{n}w_{i}}{n}}=\dfrac{w_{s}}{\sum_{i=1}^{n}w_{i}}\] The spectral cyclicality is 1.
* **Triangular structure**: each vertex \(s_{i}\) serves the next vertices \(s_{j}\), \(i\leqslant j\), until the last one, which is self-connected (loop). The spectrum of the adjacency matrix contains a single nonzero (positive) eigenvalue, the spectral influence equals one for the looped vertex and zero elsewhere, and the spectral cyclicality is 1.

3. **No cycles**
* **Chain**: each vertex \(s_{i}\) is connected to the next vertex \(s_{i+1}\), constituting a chain.
* **Loops**: every pole is looped and there is no diffusion. The spectral influence is zero except possibly for the pole carrying the dominant loop of value \(w_{i_{1}^{*}i_{1}^{*}}\); the spectral cyclicality is \(\frac{w_{i_{1}^{*}i_{1}^{*}}-w_{i_{2}^{*}i_{2}^{*}}}{w_{i_{1}^{*}i_{1}^{*}}}\), where \(w_{i_{2}^{*}i_{2}^{*}}\) is the second maximum loop value.

Following remark 3.1, one can deduce that, for a strongly connected graph, the spectral cyclicality grows with the length of the dominant cycle, and the maximum is reached when the graph is a Hamiltonian cycle. 
Conversely, the spectral cyclicality is reduced by the introduction of non-dominant cycles with high geometric mean values, disjoint from the dominant one, reducing the vertices spectral influence by reducing dominant cycles gap and redundancy effect. We close this section by plotting in figure 3 a bestiary of unweighted digraphs with the corresponding spectral influences and cyclicality. ## 4 Spectral influence and cyclicality clustering The spectral cyclicality is a measurement of cyclicality and diffusion on graphs. In the sequel, we define a clustering algorithm based on this measurement to regroup vertices with strong cyclicality. In order to implement our algorithm, we need to adopt the following convention about isolated vertices: Figure 3: The spectral cyclicality for different disposition of a three vertices unweighted connected digraph: **a)** perfect diffusion on a cycle, **b)** diffusion slowed down by a circuit, **c)** diffusion slowed down by a loop, **d)** slowest diffusion in a complete graph, **e)** partial diffusion, **f)** one way diffusion. **Assumption 4.1**.: If the digraph \(G=(V,A,W)\) is reduced to a unique vertex \(s\), then the spectral cyclicality is \(\mathbf{S}=1\). The idea behind the assumption 4.1 is to decide whether two vertices should be split into two isolated vertices or not, the spiting decision will be taken if the spectral cyclicality is not greater than 1. Let's analyze the implication of this assumption: Given a graph with two vertices and the corresponding non-negative weight matrix \(W=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\), the spectrum of the matrix \(W\) is \[\{\lambda_{1},\lambda_{2}\}=\left\{\frac{1}{2}\left(a+d+\sqrt{(a-d)^{2}+4bc} \right),\frac{1}{2}\left(a+d-\sqrt{(a-d)^{2}+4bc}\right)\right\}\] The spectral cyclicality is therefore \[\mathbf{S} =\frac{\frac{1}{2}\left(a+d+\sqrt{(a-d)^{2}+4bc}\right)-d}{\frac {1}{2}\left(a+d+\sqrt{(a-d)^{2}+4bc}\right)}+\frac{\frac{1}{2}\left(a+d+ \sqrt{(a-d)^{2}+4bc}\right)-a}{\frac{1}{2}\left(a+d+\sqrt{(a-d)^{2}+4bc} \right)}\] \[=\frac{2}{1+\frac{a+d}{\sqrt{(a-d)^{2}+4bc}}}\] \(\mathbf{S}\) reaches its maximum 2 in the case of a cycle (\(a=d=0\)). If either \(c\) or \(b\) is null, then \(\mathbf{S}<1\) and our algorithm split the vertices since there is no diffusion on the graph. In the general case, the splitting condition is \(\frac{a+d}{\sqrt{(a-d)^{2}+4bc}}\geqslant 1\) which is equivalent to \(ad\geqslant cd\), and this occurs when the product of loop values is greater than the cycle value. **Notation.** We will designate by * \([1,n]\) the set of integers from 1 to \(n\). * \(\tilde{W}(S)\) the resulting matrix obtained by deleting all rows and columns of \(W\), except for those that correspond to the elements of \(S\). * \(\overline{W}(S)\) the matrix resulting from \(W\) by setting \(w_{ij}=0\) if \(i\neq j\) and \(\{i,j\}\subset S\). ### Divisive cyclicality clustering We explain hereafter how the **divisive cyclicality clustering** algorithm works: given a weighted digraph \(G=(V,A,W)\), the algorithm eliminates the vertex whose removal increases the spectral cyclicality the most, if no increase is possible, the algorithm eliminates the vertices whose removal does not decrease spectral cyclicality. The same operation is applied to deleted vertices until all vertices are classified. 
### Agglomerative cyclicality clustering Given a digraph \(G=(V,A,W)\), the **agglomerative cyclicality clustering** algorithm groups vertices based on their spectral cyclicality, with the highest values being grouped first. If no increase in cyclicality is possible, the algorithm repeats the process with the remaining vertices until all vertices are classified. ``` Input: a weight matrix \(W\in M_{n}\) and a vertices set \(V\) Initialization: set \(S=\{\}\) While: \(\max\limits_{i\in V}\mathbf{S}(W(i))\geqslant\mathbf{S}(W)\) update \(W=W(i^{\star})\), \(S=S\cup\{i^{\star}\}\), where \(i^{\star}=\arg\max\limits_{i\in V}\mathbf{S}(W(i))\) Update: set \(V=S\). Output: a partition \(\{V_{1},\ldots,V_{k}\}\). ``` **Algorithm 1**Divisive cyclicality clustering ### Overlapping clustering Let's denote by \(S_{1}(W)\) the first partition obtained by applying the divisive (resp. agglomerative) cyclicality algorithm. Overlapping clustering can be achieved by preserving the vertices and deleting the partition intra-connections at each stage: ``` Input: a weight matrix \(W\in M_{n}\) and a vertices set \(V\) Initialization: set \(O=S_{1}(W)\) While: \(\mathbf{S}(\overline{W}(O))>1\) update \(W=\overline{W}(O)\), \(O=S_{1}(W)\) Output: an overlapping partition \(\{O_{1},\ldots,O_{k}\}\). ``` **Algorithm 2**Agglomerative cyclicality clustering ### Discussion The schemes of divisive and agglomerative clustering are opposite in nature. Divisive clustering starts with a single cluster that contains all the vertices, and then recursively splits it into smaller clusters, whereas agglomerative clustering starts with each vertex as a separate cluster and then merges them into larger clusters. Overall, both divisive and agglomerative cyclicality clustering share the same philosophy of grouping vertices with high cyclicality, even though they achieve this goal through different methods. Let's analyze how the algorithm practically works in the case of a divisive scheme: Following remark 3.1.5, the algorithm starts deleting vertices not strongly connected to the dominant cycle, then it starts deleting competitor cycles reducing the cyclicality. In the case of unweighted graphs, the algorithm chooses the longest cycle as the dominant one. It important to notice that the algorithm outcome may not be a perfect cycle, but a strong cyclicality component, depending on the structure of the network. When overlapping is allowed, the algorithm cuts off all the arcs between the element of a partition keeping the loops, and preventing the same cycles to appears in other partitions. One limitation of the cyclicality clustering algorithm is the exaequo situation where the algorithm should pick a vertex randomly, moreover, the clustering algorithm depends on the division/agglomeration path, i.e., vertices suppression/concatenation order, and some vertices could be eliminated/regrouped earlier producing a component not cyclically optimal. ## 5 Application to input-output analysis The application of spectral cyclicality analysis to input-output analysis can help to better understand interindustrial flows and connections between industrial poles. On the one hand, Some poles plays the role of **propulsive industries** driving other poles to increase their production and creating transformation cycles. On the other hand, such mechanism is synonym with interdependence, influence, shock diffusion and amplification effects. 
In opposition to input-output analysis based on Leontieff or Ghosh normalization, our technique does not require a direction of normalization (supply or demand) and it is applied directly to the absolute flow, moreover, it does not require any coefficients stability hypothesis ([14]). In the sequel, we analyze the Moroccan interindustrial flows of the OECD Input-Output Tables (IOTs) [1], from 1995 to 2018. The data traces flows of intermediate goods and services of 45 industrial poles in million US Dollars (see appendix 7.1). ### Moroccan interindustrial cyclicality analysis We report in figure 4 the evolution of spectral cyclicality of interindustrial exchanges, the curve reflects a clear upward trend in cyclicality. We perform a benchmark for a pool of 30 countries, each country is designated by its three letters ISO code2, we can notice that Morocco manifests a distinguished cyclical character: Footnote 2: International Organization for Standardization [https://www.iso.org/iso-3166-country-codes.html](https://www.iso.org/iso-3166-country-codes.html). ### Propulsive industries and key cyclicality components We perform next a spectral cyclicality analysis of the Moroccan interindustrial flows from 1995 to 2018. The next two tables report, for each industry, the spectral influence \(\mathcal{S}\) and the corresponding cyclical cluster in the case of nonoverlapping divisive (**DC**) and agglomerative (**AC**) clustering, the results illustrate the profound changes in the interindustrial structure. Figure 4: Evolution of Moroccan spectral cyclicality from 1995 to 2018. \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline **Industry** & \multicolumn{3}{|c|}{**1995**} & \multicolumn{3}{|c|}{**2000**} & \multicolumn{3}{|c|}{**2005**} \\ \hline & \(\mathcal{S}\) & **DC** & **AC** & \(\mathcal{S}\) & **DC** & **AC** & \(\mathcal{S}\) & **DC** & **AC** \\ \hline **D01T02** & 0.16774 & 1 & \(\iota\) & 0.301 & 1 & \(\iota\) & 0.33535 & 1 & \(\iota\) \\ \hline **D03** & 0.0001 & \(\iota\) & \(\iota\) & 0.00008 & 4 & 6 & 0.00016 & 3 & 5 \\ \hline **D05T06** & \(4*10^{-7}\) & 1 & 6 & \(4*10^{-7}\) & 1 & 4 & \(6*10^{-7}\) & \(\iota\) & 5 \\ \hline **D07T08** & 0.00025 & 1 & \(\iota\) & 0.00002 & 2 & 2 & \(6*10^{-6}\) & 3 & 4 \\ \hline **D09** & 0.00005 & 1 & 6 & 0.00001 & 1 & 4 & 0.00001 & 1 & \(\iota\) \\ \hline **D10T12** & 0.16811 & 1 & \(\iota\) & 0.30329 & 1 & \(\iota\) & 0.38024 & 1 & \(\iota\) \\ \hline **D13T15** & 0.00542 & \(\iota\) & \(\iota\) & 0.00179 & \(\iota\) & \(\iota\) & 0.00095 & \(\iota\) & \(\iota\) \\ \hline **D16** & 0.00016 & \(\iota\) & \(\iota\) & 0.0001 & \(\iota\) & \(\iota\) & 0.00007 & 3 & \(\iota\) \\ \hline **D17T18** & 0.0002 & 1 & \(\iota\) & 0.00015 & \(\iota\) & \(\iota\) & 0.00011 & \(\iota\) & \(\iota\) \\ \hline **D19** & 0.00138 & 1 & 1 & 0.00123 & 1 & 1 & 0.00136 & 1 & 1 \\ \hline **D20** & 0.00581 & 1 & \(\iota\) & 0.00315 & 1 & \(\iota\) & 0.00266 & 3 & \(\iota\) \\ \hline **D21** & 0.00062 & 1 & \(\iota\) & 0.00043 & 1 & \(\iota\) & 0.00037 & 3 & \(\iota\) \\ \hline **D22** & 0.00072 & \(\iota\) & \(\iota\) & 0.00033 & \(\iota\) & 6 & 0.00037 & 1 & 5 \\ \hline **D23** & 0.0004 & 2 & \(\iota\) & 0.00014 & \(\iota\) & \(\iota\) & 0.00011 & 3 & \(\iota\) \\ \hline **D24** & 0.00018 & 1 & 4 & 0.00007 & 3 & 4 & 0.00009 & 3 & 4 \\ \hline **D25** & 0.00038 & \(\iota\) & 4 & 0.00021 & 3 & 4 & 0.00015 & 3 & 4 \\ \hline **D26** & 0.00005 & 1 & \(\iota\) & 0.00001 & \(\iota\) & \(\iota\) & 0.00001 & 1 & \(\iota\) \\ \hline **D27** & 0.0004 & \(\iota\) & \(\iota\) & 0.00016 & 
| Industry | \(\mathcal{S}\) | DC | AC | \(\mathcal{S}\) | DC | AC | \(\mathcal{S}\) | DC | AC |
|---|---|---|---|---|---|---|---|---|---|
| … |  |  |  |  | \(\iota\) | \(\iota\) | 0.00012 | 3 | \(\iota\) |
| **D28** | 0.00007 | 1 | \(\iota\) | 0.00003 | 1 | \(\iota\) | 0.00003 | 1 | \(\iota\) |
| **D29** | 0.00005 | \(\iota\) | \(\iota\) | 0.00002 | \(\iota\) | \(\iota\) | 0.00002 | 3 | \(\iota\) |
| **D30** | \(2*10^{-6}\) | 1 | \(\iota\) | \(2*10^{-6}\) | \(\iota\) | \(\iota\) | \(1*10^{-6}\) | 3 | \(\iota\) |
| **D31T33** | 0.00033 | 1 | \(\iota\) | 0.00013 | 4 | \(\iota\) | 0.00008 | 3 | 5 |
| **D35** | 0.00077 | 1 | 1 | 0.00029 | 1 | \(\iota\) | 0.00002 | 1 | \(\iota\) |
| **D36T39** | 0.00001 | 1 | 5 | 0.00001 | 1 | 2 | 0.00001 | 1 | \(\iota\) |
| **D41T43** | 0.00115 | 2 | 2 | 0.00036 | 2 | 2 | 0.00028 | 2 | 2 |
| **D45T47** | 0.01764 | 1 | \(\iota\) | 0.01603 | 1 | 6 | 0.01614 | 1 | 5 |
| **D49** | 0.00051 | 1 | 2 | 0.0004 | 1 | 1 | 0.0005 | 1 | 1 |
| **D50** | 0.00003 | 1 | \(\iota\) | 0.00001 | 1 | 6 | 0.00002 | 1 | 2 |
| **D51** | 0.00005 | 1 | \(\iota\) | 0.00005 | 1 | \(\iota\) | 0.00006 | 1 | \(\iota\) |
| **D52** | 0.0001 | 1 | \(\iota\) | 0.00007 | 1 | \(\iota\) | 0.00009 | 1 | \(\iota\) |
| **D53** | 0.00002 | 1 | \(\iota\) | \(7*10^{-6}\) | 4 | 6 | \(8*10^{-6}\) | 1 | 5 |
| **D55T56** | 0.00279 | 1 | 3 | 0.00172 | 1 | 3 | 0.0013 | 1 | 3 |
| **D58T60** | 0.0001 | 1 | \(\iota\) | 0.00007 | 1 | \(\iota\) | 0.00006 | 1 | \(\iota\) |
| **D61** | 0.00015 | 1 | \(\iota\) | 0.00006 | 1 | 6 | 0.00006 | 1 | \(\iota\) |
| **D62T63** | 0.00001 | 1 | \(\iota\) | \(7*10^{-6}\) | 4 | \(\iota\) | \(5*10^{-6}\) | 3 | \(\iota\) |
| **D64T66** | 0.00019 | 1 | \(\iota\) | 0.00041 | 1 | \(\iota\) | 0.00039 | 1 | \(\iota\) |
| **D68** | 0.00015 | 1 | 2 | 0.00008 | 4 | 5 | 0.00006 | 2 | 2 |
| **D69T75** | 0.00011 | 1 | \(\iota\) | 0.0001 | \(\iota\) | \(\iota\) | 0.0001 | 3 | \(\iota\) |
| **D77T82** | 0.00005 | 1 | \(\iota\) | 0.00003 | 1 | \(\iota\) | 0.00003 | 1 | \(\iota\) |
| **D84** | 0.00038 | 1 | 3 | 0.00041 | 1 | 3 | 0.00022 | 1 | 3 |
| **D85** | 0.00023 | 1 | \(\iota\) | 0.00023 | 1 | \(\iota\) | 0.00012 | 1 | \(\iota\) |
| **D86T88** | 0.00005 | 1 | \(\iota\) | 0.00006 | 1 | \(\iota\) | 0.00003 | 1 | 2 |
| **D90T93** | \(2*10^{-6}\) | 1 | \(\iota\) | \(1*10^{-6}\) | 1 | 3 |  |  |  |

| Industry | 2010 \(\mathcal{S}\) | 2010 DC | 2010 AC | 2015 \(\mathcal{S}\) | 2015 DC | 2015 AC | 2018 \(\mathcal{S}\) | 2018 DC | 2018 AC |
|---|---|---|---|---|---|---|---|---|---|
| **D01T02** | 0.54341 | 1 | 4 | 0.54843 | 1 | 4 | 0.53322 | 1 | 4 |
| **D03** | 0.00008 | 2 | \(\iota\) | 0.00011 | \(\iota\) | \(\iota\) | 0.00007 | \(\iota\) | \(\iota\) |
| **D05T06** | \(2*10^{-7}\) | 2 | 5 | \(4*10^{-8}\) | 2 | 5 | \(1*10^{-8}\) | 4 | 2 |
| **D07T08** | 0.00019 | 2 | 1 | 0.00008 | 4 | 1 | 0.00008 | 5 | 1 |
| **D09** | \(7*10^{-6}\) | 1 | 4 | \(4*10^{-6}\) | 1 | 4 | \(3*10^{-6}\) | 1 | 4 |
| **D10T12** | 0.52652 | 1 | 4 | 0.58081 | 1 | 4 | 0.58233 | 1 | 4 |
| **D13T15** | 0.00026 | \(\iota\) | \(\iota\) | 0.00018 | \(\iota\) | \(\iota\) | 0.00015 | \(\iota\) | \(\iota\) |
| **D16** | 0.00005 | \(\iota\) | \(\iota\) | 0.00003 | \(\iota\) | \(\iota\) | 0.00003 | \(\iota\) | \(\iota\) |
| **D17T18** | 0.00016 | 1 | 4 | 0.00014 | 1 | 4 | 0.00013 | 1 | 4 |
| **D19** | 0.00016 | 1 | 2 | 0.00005 | 1 | 2 | \(7*10^{-7}\) | 4 | 2 |
| **D20** | 0.00298 | \(\iota\) | \(\iota\) | 0.00229 | 4 | \(\iota\) | 0.00293 | 5 | \(\iota\) |
| **D21** | 0.00032 | \(\iota\) | \(\iota\) | 0.00028 | \(\iota\) | \(\iota\) | 0.00022 | \(\iota\) | \(\iota\) |
| **D22** | 0.00028 | 2 | 5 | 0.00019 | 2 | 5 | 0.00018 | 3 | 5 |
| **D23** | 0.00013 | \(\iota\) | \(\iota\) | 0.00007 | \(\iota\) | \(\iota\) | 0.00005 | 2 | \(\iota\) |
| **D24** | 0.00004 | \(\iota\) | \(\iota\) | 0.00001 | \(\iota\) | \(\iota\) | 0.00001 | \(\iota\) | \(\iota\) |
| **D25** | 0.0001 | \(\iota\) | \(\iota\) | 0.00005 | \(\iota\) | \(\iota\) | 0.00006 | \(\iota\) | \(\iota\) |
| **D26** | \(5*10^{-6}\) | \(\iota\) | 4 | \(3*10^{-6}\) | \(\iota\) | \(\iota\) | \(3*10^{-6}\) | \(\iota\) | \(\iota\) |
| **D27** | 0.0001 | \(\iota\) | \(\iota\) | 0.00004 | \(\iota\) | \(\iota\) | 0.00004 | \(\iota\) | \(\iota\) |
| **D28** | 0.00002 | 1 | 4 | \(6*10^{-6}\) | 1 | 4 | \(8*10^{-6}\) | 1 | 4 |
| **D29** | 0.00003 | \(\iota\) | \(\iota\) | 0.00007 | \(\iota\) | \(\iota\) | 0.00009 | \(\iota\) | \(\iota\) |
| **D30** | \(2*10^{-6}\) | \(\iota\) | \(\iota\) | \(2*10^{-6}\) | \(\iota\) | \(\iota\) | \(1*10^{-6}\) | \(\iota\) | \(\iota\) |
| **D31T33** | 0.00004 | \(\iota\) | \(\iota\) | 0.00002 | \(\iota\) | \(\iota\) | 0.00002 | \(\iota\) | \(\iota\) |
| **D35** | 0.00014 | 1 | 4 | 0.00012 | \(\iota\) | \(\iota\) | 0.00015 | 2 | \(\iota\) |
| **D36T39** | \(3*10^{-6}\) | 1 | 4 | \(3*10^{-6}\) | 1 | 4 | \(4*10^{-6}\) | 1 | 4 |
| **D41T43** | 0.00036 | 1 | 1 | 0.0002 | 2 | 1 | 0.0002 | 2 | 1 |
| **D45T47** | 0.01347 | 2 | 5 | 0.01056 | 2 | 5 | 0.01057 | 3 | 5 |
| **D49** | 0.00035 | 2 | 2 | 0.00029 | \(\iota\) | 2 | 0.00033 | 3 | 5 |
| **D50** | 0.00003 | 1 | 4 | 0.00003 | \(\iota\) | \(\iota\) | 0.00005 | \(\iota\) | \(\iota\) |
| **D51** | 0.00003 | 2 | 5 | 0.00004 | 2 | 5 | 0.00004 | 1 | 4 |
| **D52** | 0.00008 | \(\iota\) | \(\iota\) | 0.00007 | \(\iota\) | \(\iota\) | 0.00007 | \(\iota\) | \(\iota\) |
| **D53** | \(4*10^{-6}\) | 2 | 5 | \(2*10^{-6}\) | 2 | 5 | \(1*10^{-6}\) | 3 | 5 |
| **D55T56** | 0.00059 | \(\iota\) | 3 | 0.0005 | \(\iota\) | 3 | 0.00067 | \(\iota\) | 3 |
| **D58T60** | 0.00003 | \(\iota\) | \(\iota\) | 0.00003 | \(\iota\) | \(\iota\) | 0.00003 | \(\iota\) | \(\iota\) |
| **D61** | 0.00004 | \(\iota\) | 5 | 0.00002 | \(\iota\) | 5 | 0.000002 | 3 | 5 |
| **D62T63** | \(2*10^{-6}\) | \(\iota\) | 6 | \(3*10^{-6}\) | 3 | \(\iota\) | \(3*10^{-6}\) | \(\iota\) | \(\iota\) |
| **D64T66** | 0.00034 | \(\iota\) | 6 | 0.00031 | 3 | \(\iota\) | 0.00035 | \(\iota\) | 6 |
| **D68** | 0.00002 | 2 | 6 | 0.00002 | 3 | 5 | 0.00002 | 2 | 6 |
| **D69T75** | 0.00007 | \(\iota\) | \(\iota\) | 0.00006 | \(\iota\) | \(\iota\) | 0.00007 | \(\iota\) | \(\iota\) |
| **D77T82** | 0.00001 | \(\iota\) | 6 | 0.00002 | 3 | \(\iota\) | 0.00002 | \(\iota\) | \(\iota\) |
| **D84** | 0.00029 | \(\iota\) | 3 | 0.00028 | \(\iota\) | 3 | 0.00031 | 2 | 3 |
| **D85** | 0.00013 | 1 |  |  |  |  |  |  |  |

According to our algorithm, the interindustrial Moroccan flows in 2018 could be regrouped and ordered from a cyclicality point of view in one of the following two ways: **Divisive clustering**: 1. **Component I**: 1. D19: Coke and refined petroleum products, 2. D05T06: Mining and quarrying, energy producing products. 2. **Component II**: 1. D23: Other non-metallic mineral products, 2. D41T43: Construction, 3. D35: Electricity, gas, steam and air conditioning supply, 4.
D84: Public administration and defense, compulsory social security, 5. D68: Real estate activities. 3. **Component III**: 1. D10T12: Food products, beverages and tobacco, 2. D01T02: Agriculture, hunting, forestry, 3. D17T18: Paper products and printing, 4. D85: Education, 5. D86T88: Human health and social work activities, 6. D51: Air transport, 5. D28: Machinery and equipment, nec, 2. D36T39: Water supply, sewage, waste management and remediation activities, 1. D09: Mining support service activities. 4. **Component IV**: 1. D45T47: Wholesale and retail trade, repair of motor vehicles, 2. D49: Land transport and transport via pipelines, 3. D22: Rubber and plastics products, 4. D61: Telecommunications, 5. D53: Postal and courier activities, 6. D94T96: Other service activities. 5. **Component V**: 1. D20: Chemical and chemical products, 2. D07T08: Mining and quarrying, non-energy producing products. **Agglomerative clustering**: 1. **Component I**: 1. D41T43: Construction, 2. D07T08: Mining and quarrying, non-energy producing products. 2. **Component II**: 1. D19: Coke and refined petroleum products, 2. D05T06: Mining and quarrying, energy producing products. 3. **Component III**: 1. D84: Public administration and defense, compulsory social security, 2. D55T56: Accommodation and food service activities. 4. **Component IV**: 1. D10T12: Food products, beverages and tobacco, 2. D01T02: Agriculture, hunting, forestry, 3. D17T18: Paper products and printing, 4. D85: Education, 5. D86T88: Human health and social work activities, 6. D51: Air transport, 7. D28: Machinery and equipment, nec, 8. D36T39: Water supply, sewerage, waste management and remediation activities, 1. D09: Mining support service activities. 5. **Component V**: 1. D45T47: Wholesale and retail trade, repair of motor vehicles, 2. D49: Land transport and transport via pipelines, 4. D61: Telecommunications, 5. D53: Postal and courier activities, 6. D94T96: Other service activities. 6. **Component VI**: 1. D64T66: Financial and insurance activities, 2. D68: Real estate activities. By allowing overlapping, we deduce a different clustering: **Overlapping divisive clustering**: 1. **Component I**: 1. D19: Coke and refined petroleum products, 2. D05T06: Mining and quarrying, energy producing products. 2. **Component II**: 1. D23: Other non-metallic mineral products, 2. D41T43: Construction, 3. D35: Electricity, gas, steam and air conditioning supply, 4. D84: Public administration and defense, compulsory social security, 5. D68: Real estate activities, 6. D36T39: Water supply, sewerage, waste management and remediation activities. 3. **Component III**: 1. D10T12: Food products, beverages and tobacco, 2. D01T02: Agriculture, hunting, forestry, 3. D17T18: Paper products and printing, 4. D85: Education, 5. D86T88: Human health and social work activities, 6. D51: Air transport, 7. D28: Machinery and equipment, nec, 8. D36T39: Water supply, sewerage, waste management and remediation activities, 1. D09: Mining support service activities. 4. **Component IV**: 1. D45T47: Wholesale and retail trade, repair of motor vehicles, 2. D49: Land transport and transport via pipelines, 3. D22: Rubber and plastics products, 4. D61: Telecommunications, 5. D53: Postal and courier activities, 6. D94T96: Other service activities, 7. D03: Fishing and aquaculture. 5. **Component V**: 1. D45T47: Wholesale and retail trade, repair of motor vehicles, 2. D84: Public administration and defense, compulsory social security, 3. 
D35: Electricity, gas, steam and air conditioning supply, 4. D51: Air transport, 5. D77T82: Administrative and support services, 6. D68: Real estate activities, 7. D31T33: Manufacturing nec, repair and installation of machinery and equipment, 8. D36T39: Water supply; sewerage, waste management and remediation activities, 1. D30: Other transport equipment, 2. D62T63: IT and other information services, 3. D19: Coke and refined petroleum products. **Overlapping agglomerative clustering:** 1. **Component I:** 1. D41T43: Construction, 2. D07T08: Mining and quarrying, non-energy producing products. 2. **Component II:** 1. D19: Coke and refined petroleum products, 2. D05T06: Mining and quarrying, energy producing products. 3. **Component III:** 1. D84: Public administration and defense, compulsory social security, 2. D5T56: Accommodation and food service activities. 4. **Component IV:** 1. D23: Other non-metallic mineral products, 2. D41T43: Construction, 3. D35: Electricity, gas, steam and air conditioning supply, 4. D07T08: Mining and quarrying, non-energy producing products, 5. D08: Real estate activities, 6. D07T36: Water supply; severage, waste management and remediation activities. 5. **Component V:** 1. D84: Public administration and defense, compulsory social security, 2. D41T43: Construction. 6. **Component VI:** 1. D45T47: Wholesale and retail trade; repair of motor vehicles, 2. D41T43: Construction, 3. D84: Public administration and defense, compulsory social security, 4. D22: Rubber and plastics products, 5. D55T56: Accommodation and food service activities, 7. D19: Coke and refined petroleum products, 8. D05T06: Mining and quarrying, energy producing products. 7. **Component VII:** 1. D10T12: Food products, beverages and tobacco, 2. D01T02: Agriculture, hunting, forestry, 3. D17T18: Paper products and printing, 4. D85: Education, 5. D41T43: Construction, 6. D86T88: Human health and social work activities, 7. D51: Air transport, 8. D28: Machinery and equipment, nec, 9. D86T39: Water supply, severage, waste management and remediation activities, 10. D09: Mining support service activities. 8. **Component VIII:** 1. D49: Land transport and transport via pipelines, 2. D41T43: Construction. 9. **Component IX:** 1. D20: Chemical and chemical products, 2. D07T08: Mining and quarrying, non-energy producing products. 3. D22: Rubber and plastics products, 4. D41T43: Construction, 5. D31T33: Manufacturing nec, repair and installation of machinery and equipment, 6. D55T56: Accommodation and food service activities, 7. D19: Coke and refined petroleum products. 10. **Component X:** 1. D35: Electricity, gas, steam and air conditioning supply, 2. D84: Public administration and defense, compulsory social security, 3. D94T96: Other service activities. 11. **Component XI:** 1. D49: Land transport and transport via pipelines, 2. D19: Coke and refined petroleum products, 3. D05T06: Mining and quarrying, energy producing products. 12. **Component XII:** 1. D45T47: Wholesale and retail trade; repair of motor vehicles, 2. D49: Land transport and transport via pipelines, 3. D61: Telecommunications, 4. D28: Machinery and equipment, nec, 5. D53: Postal and courier activities, 6. D94T96: Other service activities, 6. D90T93: Arts, entertainment and recreation. 13. **Component XIII:** 1. D64T66: Financial and insurance activities, 2. D68: Real estate activities, 3. D41T43: Construction, 4. D07T08: Mining and quarrying, non-energy producing products, 5. 
D36T39: Water supply; severage, waste management and remediation activities, 6. D19: Coke and refined petroleum products, 6. D05T06: Mining and quarrying, energy producing products. 14. **Component XIV:** 1. D84: Public administration and defense, compulsory social security, 2. D61: Telecommunications, 3. D68: Real estate activities, 4. D36T39: Water supply; severage, waste management and remediation activities. 15. **Component XV:** 1. D35: Electricity, gas, steam and air conditioning supply, 2. D05T06: Mining and quarrying, energy producing products. To visualize the principal (non atomic) cyclicality components, we produce a graph proportional representation of arcs (with respect to the weights) and vertices (with respect to spectral influence in the component): Figure 6: The five divisive clusters of 2018 Moroccan input-output table. Figure 8: The six agglomerative clusters of 2018 Moroccan input-output table. Figure 10: The five overlapping divisive clusters of 2018 Moroccan input-output table. Figure 12: The fifteen overlapping agglomerative clusters of 2018 Moroccan input-output table. Figure 14: The fifteen overlapping agglomerative clusters of 2018 Moroccan input-output table-bis. A direct comparison of the nonoverlapping and overlapping situations reveals the following: * **Divisive algorithm:** The two algorithm produce almost the same first four components (in the overlapping situation, component number two has a sixth pole D36T39: Water supply, sewage, waste management and remediation activities, and component number four has a seventh pole D03: Fishing and aquaculture). The difference is recorded for the component number five where it is a two-poles component of mining and quarrying for the nonoverlapping algorithm and an eleven poles component of public expenditures. * **Agglomerative algorithm:** The difference is much greater and additional clusters arise when employing overlapping clustering. The first three clusters are identical, the overlapping cluster number seven has an extra pole (D41T43: Construction) compared to nonoverlapping cluster number four. The nonoverlapping cluster number five correspond to the overlapping cluster number twelve and the nonoverlapping cluster number six is included in overlapping cluster number thirteen. Figure 16: The fifteen overlapping agglomerative clusters of 2018 Moroccan input-output table-ter. ### Discussion In 2018, Moroccan interindustrial flows structure is characterized by high cyclicality. Compared to the countries panel, Morocco along with Saudi Arabia, are the only countries with a spectral cyclicality greater than one, making them the countries with the highest interindustrial flows cyclicality. We can distinguish three high cyclicality disjoint key components in the Moroccan interindustrial structure, we find convenient to do an analogy with cyclical clusters and planet systems in terms of influence. One could describe those components according to the following themes: 1. **Mining and quarrying** represented in figure 6.(a), formed by a planet: D19-Coke and refined petroleum products; and a moon: D05T06-Mining and quarrying, energy producing products. 2. **Construction** represented in figure 6.(b), characterized as planetary system of decreasing size (influence): D23-Other non-metallic mineral products; D41T43-Construction; D35-Electricity, gas, steam and air conditioning supply; D84-Public administration and defense, compulsory social security; D68-Real estate activities. 3. 
**Agri-food and related activities** represented in figure 6.(c), formed by two planet-hosting stars: D10T12-Food products, beverages and tobacco; and D01T02-Agriculture, hunting, forestry; and seven planets: D17T18-Paper products and printing; D85-Education; D86T88-Human health and social work activities; D51-Air transport; D28-Machinery and equipment, nec; D36T39-Water supply, sewerage, waste management and remediation activities; D09-Mining support service activities. 4. **Logistics and transports** in figure 6.(d), with a central star: D45T47-Wholesale and retail trade, repair of motor vehicles; two planets: D49-Land transport and transport via pipelines; D22-Rubber and plastics products; and three moons: D61-Telecommunications; D53-Postal and courier activities; D94T96-Other service activities. If we study the intersecting clusters, we can identify additional interesting productive components. Among them, we can enumerate the homogeneous ones: 1. Energy products represented in figure 12.(f), represented by a sun: D45T47: Wholesale and retail trade, repair of motor vehicles, three planets: D41T43: Construction, D84: Public administration and defense, compulsory social security, D22: Rubber and plastics products, and three satellites: D55T56: Accommodation and food service activities, D19: Coke and refined petroleum products, D05T06: Mining and quarrying, energy producing products. 2. Manufacture of chemical products represented in figure 14.(c), represented by a sun: D20: Chemical and chemical products, two planets: D07T08: Mining and quarrying, non-energy producing products, D22: Rubber and plastics products, and four satellites: D41T43: Construction, D31T33: Manufacturing nec, repair and installation of machinery and equipment, D55T56: Accommodation and food service activities, D19: Coke and refined petroleum products. If we study the historical evolution from 1995 to 2018, one can conclude, according to figure 4, that Moroccan interindustrial flows structure is clearly tending towards a structure with strong cyclicality characterized with strong diffusion and interdependence. The Moroccan passes from one big cluster with weak cyclicality regrouping almost all sectors to three key components with strong cyclicality. One can give a detailed lecture: 1. The top "propulsive industries", according to the spectral influence, are in order and persistently the poles: D10T12-Food products, beverages and tobacco; D01T02-Agriculture, hunting, forestry; D45T47-Wholesale and retail trade, repair of motor vehicles; D20-Chemical and chemical products. While the weakest industries, in the sense of the spectral influence, are in an increasing order: D97T98-Activities of households as employers, undifferentiated goods and services-producing activities of households for own use; D05T06-Mining and quarrying, energy producing products; D30-Other transport equipment. 2. According to spectral influences evolution graphs (see the appendix 7.2) one can notice the structural mutation of the Moroccan economy through a process of concentration and complementary of the interindustrial flows. One can notice poles with increasing spectral influence: D01T02-Agriculture, hunting, forestry; D10T12-Food products, beverages and tobacco; D29-Motor vehicles, trailers and semi-trailers; D50-Water transport. 
These increasing influences absorb the influences of other poles such as: D13T15-Textiles, textile products, leather and footwear; D16-Wood and products of wood and cork; D21-Pharmaceuticals, medicinal chemical and botanical products; D22-Rubber and plastics products; D23-Other non-metallic mineral products; D24-Basic metals; D25-Fabricated metal products; D26-Computer, electronic and optical equipment; D27-Electrical equipment; D28-Machinery and equipment, nec; D31T33-Manufacturing nec; repair and installation of machinery and equipment; D35-Electricity, gas, steam and air conditioning supply; D41T43-Construction; D45T47-Wholesale and retail trade; repair of motor vehicles; D53-Postal and courier activities; D55T56-Accommodation and food service activities; D61-Telecommunications; D68-Real estate activities.

## 6 Conclusion

In summary, the new concept of spectral influence is a local measurement of the centrality of a vertex in a graph from a diffusion perspective, and this local measurement induces a global measurement characterizing the cyclicality of the whole graph. One can use this global measure in the definition of clustering algorithms to isolate high-cyclicality components. The application of this new technique to input-output analysis sheds new light on interindustrial organization by highlighting interindustrial transformation units involving many industries and by detecting propulsive industries, which are the driving force of the economy. In contrast to the classical Leontief/Ghosh analysis, which supposes a relative orientation of the flows (demand/supply) and a structural stability of the coefficients, the spectral cyclicality analysis works on the absolute flows and traces the bilateral influences at time \(t\). Cyclicality in a graph reflects many interesting features that could be very helpful in understanding social networks, such as interdependence (dominance, resilience, ...), diffusion (ideas, diseases, economic shocks, ...), and feedback effects and amplification (productivity, transformation, ...), and many potential uses of this new technique are conceivable.
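To make these notions concrete, the sketch below computes a graph-level cyclicality score as the spectral radius of an interindustrial flow matrix, together with a simple local influence proxy for each pole. It is only a minimal illustration: the precise definitions of spectral influence and spectral cyclicality used in this paper are not restated here, and the neighbourhood-based proxy, the toy flow matrix and all numbers are hypothetical assumptions.

```python
import numpy as np

def spectral_radius(M):
    """Largest modulus of the eigenvalues of a (possibly non-symmetric) matrix."""
    return float(max(abs(np.linalg.eigvals(M))))

def cyclicality(flows):
    """Graph-level score: spectral radius of the interindustrial flow matrix."""
    return spectral_radius(flows)

def influence_proxy(flows, i):
    """Illustrative local proxy for the influence of pole i: spectral radius of
    the flow submatrix induced by i together with its direct suppliers/customers."""
    partners = np.where((flows[i, :] > 0) | (flows[:, i] > 0))[0]
    idx = np.unique(np.concatenate(([i], partners)))
    return spectral_radius(flows[np.ix_(idx, idx)])

# Toy 4-pole flow matrix (hypothetical numbers, not OECD data)
flows = np.array([[0.0, 2.0, 0.1, 0.0],
                  [1.5, 0.0, 0.2, 0.0],
                  [0.0, 0.3, 0.0, 0.4],
                  [0.1, 0.0, 0.5, 0.0]])

print("spectral cyclicality:", round(cyclicality(flows), 4))
print("pole influences:", [round(influence_proxy(flows, i), 4) for i in range(4)])
```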
[MISSING_PAGE_POST] Appendix ### Industrial poles of the OECD harmonised national input-output tables **D01T02**: Agriculture, hunting, forestry **D07T08**: Mining and quarrying, non-energy producing products **D13T15**: Textiles, textile products, leather and footwear **D19**: Coke and refined petroleum products **D22**: Rubber and plastics products **D25**: Fabricated metal products **D28**: Machinery and equipment, nec **D31T33**: Manufacturing nec; repair and installation of machinery and equipment **D41T43**: Construction **D50**: Water transport **D53**: Postal and courier activities **D61**: Telecommunications **D68**: Real estate activities **D84**: Public administration and defense; compulsory social security **D90T93**: Arts, entertainment and recreation **D03**: Fishing and aquaculture **D09**: Mining support service activities **D10T12**: Food products, beverages and tobacco **D16**: Wood and products of wood and cork **D17T18**: Paper products and printing **D20**: Chemical and chemical products **D21**: Pharmaceuticals, medicinal chemical and botanical products **D23**: Other non-metallic mineral products **D26**: Computer, electronic and optical equipment **D29**: Motor vehicles, trailers and semi-trailers **D35**: Electricity, gas, steam and air conditioning supply **D41T43**: Construction **D45T47**: Wholesale and retail trade; repair of motor vehicles **D50**: Water transport **D51**: Air transport **D53**: Postal and courier activities **D61**: Telecommunications **D62T63**: IT and other information services **D68**: Real estate activities **D69T75**: Professional, scientific and technical activities **D85**: Education **D90T93**: Arts, entertainment and recreation **D03**: Fishing and aquaculture **D05T06**: Mining and quarrying, energy producing products **D10T12**: Food products, beverages and tobacco **D16**: Wood and products of wood and cork **D17T18**: Paper products and printing **D20**: Chemical and chemical products **D21**: Pharmaceuticals, medicinal chemical and botanical products **D24**: Basic metals **D27**: Electrical equipment **D30**: Other transport equipment **D36T39**: Water supply; sewage, waste management and remediation activities **D49**: Land transport and transport via pipelines **D52**: Warehousing and support activities for transportation **D58T60**: Publishing, audio-visual and broadcasting activities **D64T66**: Financial and insurance activities **D77T82**: Administrative and support services **D86T88**: Human health and social work activities **D97T98**: Activities of households as employers; undifferentiated goods- and services-producing activities of households for own use ### Spectral influence evolution by industrial poles Figure 18: Evolution of the spectral radius of Moroccan industrial poles. Figure 20: Evolution of the spectral radius of Moroccan industrial poles-bis.
2302.09283
Convex subgraphs and spanning trees of the square cycles
We classify connected spanning convex subgraphs of the square cycles. We then show that every spanning tree of $C_n^2$ is contained in a unique nontrivial connected spanning convex subgraph of $C_n^2$. As a result, we obtain a purely combinatorial derivation of the formula for the number of spanning trees of the square cycles.
Akihiro Munemasa, Yuuho Tanaka
2023-02-18T10:39:39Z
http://arxiv.org/abs/2302.09283v1
# Convex subgraphs and spanning trees of the square cycles ###### Abstract. We classify connected spanning convex subgraphs of the square cycles. We then show that every spanning tree of \(C_{n}^{2}\) is contained in a unique nontrivial connected spanning convex subgraph of \(C_{n}^{2}\). As a result, we obtain a purely combinatorial derivation of the formula for the number of spanning trees of the square cycles. Key words and phrases:spanning tree, square cycle, circulant graph, Fibonacci sequence 2010 Mathematics Subject Classification: 05C05,05C70,05C10 ## 1. Introduction It is well known that the number \(t(G)\) of spanning trees of a connected graph \(G\) can be computed using the matrix-tree theorem (see e.g., [4, Section 13.2]). More precisely, \(t(G)\) is the product of nonzero eigenvalues of the Laplacian of \(G\), divided by the number of vertices of \(G\). For families of graphs whose Laplacian eigenvalues can be computed, this method is very useful in computing \(t(G)\), except that the results sometimes need to be simplified since eigenvalues may not be rational integers. Extensive work has been done to simplify the formula for \(t(G)\) for circulant graphs (see [1, 2, 8]). For example, the derivation of the number \(t(C_{n}^{2})\) of spanning trees of the square cycle with \(n\) vertices using the matrix-tree theorem was done first by Baron et al. [3]. Kleitman and Golden [5] used a different approach to compute \(t(C_{n}^{2})\). Namely, they used topological properties of a planar embedding of \(C_{n}^{2}\) to derive a formula for \(t(C_{n}^{2})\) when \(n\) is even, and mentioned that a similar method can be used to derive the same formula for odd \(n\), without giving details. If \(n\) is even, \(C_{n}^{2}\) is isomorphic to the rose window graph \(R_{n/2}(1,1)\)[7]. The graph \(C_{n}^{2}\) is also denoted by \(C_{n}(1,2)\)[1] and \(C_{n}^{1,2}\)[8]. In this paper, we transform the topological argument given by Kleitman and Golden [5] to a purely combinatorial one, using the theory of graph homotopy [6]. This allows us to give a uniform proof of the formula for \(t(C_{n}^{2})\) independent of the parity of \(n\). The key idea in our proof is the fact that every spanning tree of \(C_{n}^{2}\) is contained in a unique nontrivial connected spanning convex subgraph. Although this fact appeared implicitly in [5] when \(n\) is even, the classification of connected convex subgraphs of \(C_{n}^{2}\) is new. The organization of this paper is as follows. In Section 2, we fix notation for the square cycles as circulant graphs, and give some properties of the Fibonacci sequence. We give a classification of connected spanning convex subgraphs of \(C_{n}^{2}\) in Section 3. In Section 4, we show that the set of the spanning trees of \(C_{n}^{2}\) coincides with the disjoint union of the set of the spanning trees of strip graphs with tails \(S_{n,k,j}\), defined in Section 2. As a consequence, we deduce a combinatorial proof of the formula for \(t(C_{n}^{2})\) which does not depend on the parity of \(n\). ## 2. Preliminaries **Definition 2.1**.: A graph that is connected and has no closed paths is called a _tree_. For a graph \(G\), we say that \(G^{\prime}\) satisfying \[E(G^{\prime})\subseteq E(G),V(G)=V(G^{\prime})\] is a _spanning subgraph_ of \(G\). If a spanning subgraph \(G^{\prime}\) in a connected graph \(G\) is a tree, then \(G^{\prime}\) is called a _spanning tree_ of the graph \(G\). **Definition 2.2**.: Let \(n\) be an integer with \(n\geq 5\). 
The _square cycle_\(C_{n}^{2}\) is defined by \(V(C_{n}^{2})=\mathbb{Z}_{n}=\mathbb{Z}/n\mathbb{Z}\), \(E(C_{n}^{2})=\{\{v_{i},v_{j}\}\mid v_{i},v_{j}\in V(C_{n}^{2}),\ i,j\in \mathbb{Z},\ i-j=1,2\}\), where \(v_{i}=i+n\mathbb{Z}\in\mathbb{Z}_{n}\). Let \(n\) be an integer with \(n\geq 5\). Then, \(E(C_{n}^{2})=\{e_{i}\mid i\in\mathbb{Z}\}\cup\{f_{i}\mid i\in\mathbb{Z}\}\), where we define _frame_\(e_{i}\) and _window_\(f_{i}\) as follows. \[e_{i}=\{v_{i},v_{i+1}\},\ f_{i}=\{v_{i},v_{i+2}\}\quad(i\in \mathbb{Z}).\] We denote by \(\mathcal{W}(n)\) and \(\mathcal{F}(n)\) the set of frames and windows, respectively as follows. \[\mathcal{W}(n) =\{f_{i}\mid 0\leq i\leq n-1\},\] \[\mathcal{F}(n) =\{e_{i}\mid 1\leq i\leq n\}.\] By a _triangle_ of \(C_{n}^{2}\) we mean a set \[T_{i}=\{e_{i},e_{i+1},f_{i}\}\quad(i\in\mathbb{Z}).\] Then, \[E(C_{n}^{2})=\bigcup_{i=0}^{n-1}T_{i}.\] **Definition 2.3**.: Given \(i\) (\(i\in\mathbb{Z}\)), if a subgraph \(G\) of \(C_{n}^{2}\) satisfies \(|T_{i}\cap E(G)|\leq 1\) or \(T_{i}\subseteq E(G)\), then \(G\) is said to be _convex_ with respect to the triangle \(T_{i}\). A subgraph \(G\) of \(C_{n}^{2}\) is said to be _convex_ if \(G\) is convex with respected to \(T_{i}\) for all \(i\) (\(i\in\mathbb{Z}\)). **Definition 2.4**.: The graph \(S_{k}\) defined by \(V(S_{k})=\{1,2,\dots,k\}\), \(E(S_{k})=\{\{i,j\}\mid i,j\in V(S_{k}),1\leq|i-j|\leq 2\}\) is called a _strip graph_. The sequence of numbers \(F_{n}\) defined by the recurrence relation \(F_{0}=0,F_{1}=1,F_{n+2}=F_{n+1}+F_{n}\) (\(n=0,1,2,\dots\)) is called the _Fibonacci sequence_. The following two lemmas are due to Kleitman and Golden [5]. **Lemma 2.5**.: _For \(n\geq 2\), \(t(S_{n})=F_{2n-2}\)._ **Lemma 2.6**.: _For \(n\geq 2\),_ \[F_{n}^{2}=\begin{cases}\sum_{k=0}^{(n-2)/2}F_{4k+2}&\text{if $n$ is even,}\\ 1+\sum_{k=1}^{(n-1)/2}F_{4k}&\text{if $n$ is odd.}\end{cases}\] The following substructures appeared implicitly in [5]. In fact, an escape route is the set of edges crossed by a path from the interior to the outside region, in the planar drawing of \(C_{n}^{2}\) (see [5, Fig. 4]). The removal of an escape route gives a strip graph with tails (see [5, Fig. 5]). **Definition 2.7**.: Let \(n\geq 5\). For integers \(j\) and \(k\) with \(0\leq k\leq\lceil\frac{n-2}{2}\rceil\), we define the graph \(S_{n,k,j}\) as follows: \[V(S_{n,k,j}) =V(C_{n}^{2}),\] \[E(S_{n,k,j}) =E(C_{n}^{2})\setminus ES(n,k,j),\quad(j,k\in\mathbb{Z},\ 0\leq k \leq\lceil\frac{n-2}{2}\rceil),\] where \[ES(n,k,j)=\{f_{j},f_{j+2k+1}\}\cup\{e_{j+1},\ldots,e_{j+2k+1}\}\quad(j,k\in \mathbb{Z},\;0\leq k\leq\lceil\frac{n-2}{2}\rceil).\] The graph \(S_{n,k,j}\) is called a _strip graph with tails_, and \(ES(n,k,j)\) is called the _escape route_. The graphs \(S_{n,k,j}\) are connected spanning convex subgraphs of \(C_{n}^{2}\). Clearly, \(C_{n}^{2}\) and \((\mathbb{Z}_{n},\mathcal{W}(n))\) for \(n\) odd are also connected spanning subgraphs of \(C_{n}^{2}\), and we call these subgraphs _trivial_ connected spanning subgraphs. For a graph \(G\), let \(T_{G}\) be the set of all spanning trees of \(G\). Then \(t(G)=|T_{G}|\). Since \(S_{n,k,j}\) can be obtained from the strip graph \(S_{n-2k}\) by attaching two tails of length \(k\), the following lemma holds. **Lemma 2.8**.: _For \(j,k\in\mathbb{Z},\;0\leq k\leq\lceil\frac{n-2}{2}\rceil\), \(t(S_{n,k,j})=t(S_{n-2k})\)._ ## 3. Spanning convex subgraphs In this section, we prove our first main result which gives a classification of connected spanning convex subgraphs of \(C_{n}^{2}\). 
**Lemma 3.1**.: _Let \(G\) be a connected spanning convex subgraph of \(C_{n}^{2}\). If \(k\) and \(p\) are integers with \(0\leq p<n\) and_ \[\{e_{k-1},f_{k},f_{k+2},\ldots,f_{k+2p-2},e_{k+2p}\}\subseteq E(G), \tag{1}\] _then \(\{e_{k},e_{k+1},\ldots,e_{k+2p-1}\}\subseteq E(G)\)._ Proof.: We prove the assertion by induction on \(p\). If \(p=0\), then it is trivial. Therefore, we may assume that \(p\geq 1\). Suppose that there exists an integer \(i\) with \(0\leq i\leq 2p-1\) such that \(e_{k+i}\in E(G)\). If \(i\) is even, then since \(G\) is convex with respected to \(T_{k+i}\), \(e_{k+i+1}\in E(G)\). Therefore, we can apply the induction to \(\{e_{k-1},f_{k},f_{k+2},\ldots,f_{k+i-2},e_{k+i}\}\) and \(\{e_{k+i+1},f_{k+i+2},f_{k+i+4},\ldots,f_{k+2p-2},e_{k+2p}\}\). Similarly, if \(i\) is odd, then we can apply the induction. It remains to derive a contradiction by assuming \[e_{k},e_{k+1},\ldots,e_{k+2p-1}\notin E(G). \tag{2}\] Since \(G\) is convex with respect to \(T_{k-1}\), \[f_{k-1}\notin E(G). \tag{3}\] Similarly, since \(G\) is convex with respect to \(T_{k+2p-1}\) \[f_{k+2p-1}\notin E(G). \tag{4}\] From (2), (3), and (4), we see that the set \(\{v_{k+1},v_{k+3},\ldots,v_{k+2p-1}\}\) is separated from its complement in the connected spanning subgraph \(G\). This is a contradiction. **Lemma 3.2**.: _Let \(G\) be a nontrivial connected spanning convex subgraph of \(C_{n}^{2}\). If \(E(G)\) contains no frame, then \(n\) is odd, and \(G=S_{n,\frac{n-1}{2},j}\) for some integer \(j\) with \(0\leq j\leq n-1\)._ Proof.: By the assumption, \(E(G)\) consists only of windows. Since \(G\) is connected, \(n\) is odd. Since \(G\) is nontrivial, \(|E(G)|\leq n-1.\) Since \(G\) is connected, \(|E(G)|\geq n-1\). Therefore, \(|E(G)|=n-1\). Then there exists \(j\) such that \(E(G)=\mathcal{W}(n)\setminus\{f_{j}\}=E(S_{n,\frac{n-1}{2},j})\). This proves \(G=S_{n,\frac{n-1}{2},j}\). **Lemma 3.3**.: _Let \(G\) be a nontrivial connected spanning convex subgraph of \(C_{n}^{2}\). If \(E(G)\) contains a frame, then \(G=S_{n,k,j}\) for some integers \(j,k\) with \(0\leq j\leq n-1\), \(0\leq k\leq\lfloor\frac{n-2}{2}\rfloor\)._ Proof.: If \(\mathcal{F}(n)\subset E(G)\), then it is easy to see that \(G=C_{n}^{2}\), contradicting the assumption that \(G\) is nontrivial. Since \(\mathcal{F}(n)\cap E(G)\neq\emptyset\), there exists \(i,l\) with \(0\leq i\leq n-1\), \(1\leq l\leq n-1\) satisfying \(\{e_{i},e_{i+1},\ldots,e_{i+l-1}\}\subseteq E(G)\) and \(e_{i-1},e_{i+l}\notin E(G)\). Without loss of generality, we may assume that \(i=0\). In this case, we have \[\{e_{0},e_{1},\ldots,e_{l-1}\} \subseteq E(G), \tag{6}\] \[e_{-1} \notin E(G),\] (7) \[e_{l} \notin E(G). \tag{5}\] Since \(G\) is convex with respected to \(T_{j}\) (\(0\leq j\leq l-2\)), (5) implies \[f_{0},f_{1},\ldots,f_{l-2}\in E(G). \tag{8}\] Since \(G\) is convex with respected to \(T_{-1}\), (5) and (6) imply \[f_{-1}\notin E(G). \tag{9}\] Since \(G\) is convex with respect to \(T_{l-1}\), (5) and (7) imply \[f_{l-1}\notin E(G). \tag{10}\] Let \(s\) and \(t\) be the largest non-negative integers such that \[f_{-2},f_{-4},\ldots,f_{-2s}\in E(G), \tag{11}\] and \[f_{l},f_{l+2},\ldots,f_{l+2t-2}\in E(G), \tag{12}\] respectively. Then, \(f_{-2s-2}\notin E(G)\) and \(f_{l+2t}\notin E(G)\). We show that \[e_{l},e_{l+1},\ldots,e_{l+2t}\notin E(G)\text{ and }t<\frac{n-l}{2}, \tag{13}\] \[e_{-1},e_{-2},\ldots,e_{-2s-1}\notin E(G)\text{ and }s<\frac{n-l}{2}. 
\tag{14}\] Assume that there exists an integer \(m\) with \(0\leq m\leq 2t\) and that \(e_{l+m}\in E(G)\). We may choose minimal such \(m\). By (7), we have \(m>0\). If \(m\) is odd, then by (12) and by the convexity of \(G\), \(T_{l+m-1}\subseteq E(G)\). Therefore, \(e_{l+m-1}\in E(G)\). This contradicts the minimality of \(m\). If \(m\) is even, then by (5) and (12), we have \(\{e_{l-1},f_{l},f_{l+2},\ldots,f_{l+m-1},e_{l+m}\}\subseteq E(G)\). Then, by Lemma 3.1, we have \(e_{l+m-1}\in E(G)\), again contradicting the minimality of \(m\). Therefore, (13) holds. Similarly, we can prove (14). Let \(K=\{v_{-2s},v_{-2s+2},\ldots,v_{0},v_{1},\ldots,v_{l},v_{l+2},\ldots,v_{l+2t}\}\). If \(K\neq\mathbb{Z}_{n}\), then by (9), (10), (13) and (14), \(G\) is disconnected. This is a contradiction. Therefore, \(K=\mathbb{Z}_{n}\), and in particular, \(s+l+1+t=|K|=n\). From (13) and (14), \(s=t=\frac{n-l-1}{2}\). Then, from (11), (12) and (13), we have \[\{f_{l+1},f_{l+3},\ldots,f_{n-2}\} \subseteq E(G),\] \[\{f_{l},f_{l+2},\ldots,f_{n-3}\} \subseteq E(G),\] \[e_{l},e_{l+1},\ldots,e_{n-1} \notin E(G),\] respectively. Together with (5), (8), (9) and (10), these imply \(E(G)=E(S_{n,\frac{n-l-1}{2},l-1})\). This proves \(G=S_{n,\frac{n-l-1}{2},l-1}\). **Theorem 3.4**.: _Let \(G\) be a nontrivial connected spanning convex subgraph of \(C_{n}^{2}\). Then there exists integers \(j,k\) with \(0\leq j\leq n-1\), \(0\leq k\leq\lceil\frac{n-2}{2}\rceil\) such that \(G=S_{n,k,j}\)._ Proof.: This is immediate from Lemmas 3.2 and 3.3. ## 4. Enumerating spanning trees of the square cycles In this section, we prove our second main result which states that every spanning tree of \(C_{n}^{2}\) is contained in a unique connected spanning convex subgraph. As a consequence, we obtain an alternative proof of the formula for the number of spanning trees of \(C_{n}^{2}\). Our method is a combinatorial formulation of the topological proof given in [5]. The tool we use is the theory of graph homotopy. We refer the reader to [6] for the precise definition of the homotopy group. Roughly speaking, the homotopy group \(\pi(G,v_{0})\) of the graph \(G\) with respect to a vertex \(v_{0}\) is the group formed by equivalence classes of circuits through \(v_{0}\). It contains the subgroup \(\pi(G,v_{0},3)\) which is "generated" by triangles. It is clear that \(\pi(G,v_{0})=\pi(G,v_{0},3)\) if \(G\) is a tree, strip graph, or strip graph with tails, while \(\pi(G,v_{0})\neq\pi(G,v_{0},3)\) if \(G\) is a cycle of length at least \(4\) or \(G=C_{n}^{2}\) with \(n\geq 7\). **Theorem 4.1**.: _Let \(n\) be an integer with \(n\geq 5\). For every is a spanning tree \(G\) of \(C_{n}^{2}\), there exists a unique nontrivial connected spanning convex subgraph \(H\) of \(C_{n}^{2}\) such that \(E(G)\subseteq E(H)\). More precisely,_ \[T_{C_{n}^{2}}=\bigcup_{k=0}^{\lceil\frac{n-2}{2}\rceil}\bigcup_{j=0}^{n-1}T_{ S_{n,k,j}}\quad\text{(disjoint).} \tag{15}\] Proof.: Since the assertion can be verified directly for \(n=5\) and \(6\), we assume \(n\geq 7\). According to Lewis [6], for a graph \(G\) we can define its homotopy group \(\pi(G,v_{0})\) and the normal subgroup \(\pi(G,v_{0},3)\) of \(\pi(G,v_{0})\) generated by the triangles. Clearly \(\pi(G,v_{0})\) is the trivial group for the spanning tree \(G\) of \(C_{n}^{2}\), so in particular \(\pi(G,v_{0})=\pi(G,v_{0},3)\) holds. 
For a spanning tree \(G\) of \(C_{n}^{2}\) which is not convex with respect to some triangle \(T_{i}\), \(\pi(G^{\prime},v_{0})=\pi(G^{\prime},v_{0},3)\) also holds for the graph \(G^{\prime}\) obtained from \(G\) by adding the unique missing edge of \(T_{i}\). This process can be iterated until we reach a convex subgraph containing \(G\). The resulting graph \(H\) is a connected spanning convex subgraph of \(C_{n}^{2}\), and hence it is one of the graphs classified in Theorem 3.4, or one of the trivial connected spanning convex subgraphs. Since \(\pi(H,v_{0})=\pi(H,v_{0},3)\) holds only for nontrivial connected spanning convex subgraphs \(H\), there exist \(j,k\) with \(0\leq j\leq n-1\), \(0\leq k\leq\lceil\frac{n-2}{2}\rceil\) such that \(E(G)\subseteq E(S_{n,k,j})\). It remains to show that the union in (15) is disjoint. Suppose \(E(G)\subseteq E(S_{n,k^{\prime},j^{\prime}})\) for some \(j^{\prime},k^{\prime}\) with \(0\leq k^{\prime}\leq\lceil\frac{n-2}{2}\rceil,\ 0\leq j^{\prime}\leq n-1\). Then the subgraph with edge set \(E(S_{n,k,j})\cap E(S_{n,k^{\prime},j^{\prime}})\) is a nontrivial connected spanning convex subgraph of \(C_{n}^{2}\), and hence coincides with \(S_{n,k^{\prime\prime},j^{\prime\prime}}\) for some \(j^{\prime\prime},k^{\prime\prime}\) with \(0\leq k^{\prime\prime}\leq\lceil\frac{n-2}{2}\rceil,\ 0\leq j^{\prime\prime}\leq n-1\). This implies \(E(S_{n,k^{\prime\prime},j^{\prime\prime}})\subseteq E(S_{n,k,j})\), which is possible only when \((j,k)=(j^{\prime\prime},k^{\prime\prime})\). Then we have \((j,k)=(j^{\prime},k^{\prime})\). Therefore, the union in (15) is disjoint. **Corollary 4.2** (Kleitman and Golden [5]).: \[t(C_{n}^{2})=nF_{n}^{2}.\] Proof.: \[t(C_{n}^{2})=\sum_{k=0}^{\lceil\frac{n-2}{2}\rceil}\sum_{j=0}^{n-1}t(S_{n,k,j})\quad\text{(by Theorem 4.1)}\] \[=n\sum_{k=0}^{\lceil\frac{n-2}{2}\rceil}t(S_{n-2k})\quad\text{(by Lemma 2.8)}\] \[=\begin{cases}n\sum_{k=0}^{(n-2)/2}t(S_{2k+2})&\text{if $n$ is even,}\\ n+n\sum_{k=1}^{(n-1)/2}t(S_{2k+1})&\text{if $n$ is odd}\end{cases}\] \[=\begin{cases}n\sum_{k=0}^{(n-2)/2}F_{4k+2}&\text{if $n$ is even,}\\ n(1+\sum_{k=1}^{(n-1)/2}F_{4k})&\text{if $n$ is odd}\end{cases}\quad\text{(by Lemma 2.5)}\] \[=nF_{n}^{2}\quad\text{(by Lemma 2.6)}.\]
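As a sanity check on Corollary 4.2, the count \(t(C_{n}^{2})\) can be computed independently with the matrix-tree theorem mentioned in the introduction and compared against \(nF_{n}^{2}\). The following is only a minimal numerical sketch (it assumes NumPy and uses floating-point determinants, adequate only for small \(n\)); it is not part of the combinatorial proof above.

```python
import numpy as np

def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def spanning_trees_square_cycle(n):
    # Laplacian of C_n^2: vertex i is adjacent to i +/- 1 and i +/- 2 (mod n)
    L = np.zeros((n, n))
    for i in range(n):
        for d in (1, 2, n - 1, n - 2):
            L[i, (i + d) % n] -= 1
            L[i, i] += 1
    # Matrix-tree theorem: t(G) equals any cofactor of the Laplacian
    return round(np.linalg.det(L[1:, 1:]))

for n in range(5, 13):
    assert spanning_trees_square_cycle(n) == n * fibonacci(n) ** 2
print("t(C_n^2) = n * F_n^2 checked for 5 <= n <= 12")
```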
2301.10282
Recent Results from the FASTSUM Collaboration
The FASTSUM Collaboration has developed a comprehensive research programme in thermal QCD using 2+1 flavour, anisotropic ensembles. In this talk, we summarise some of our recent results including thermal hadron spectrum calculations using our ``Generation 2L'' ensembles which have pion masses of 239(1) MeV. These include open charm mesons and charm baryons. We also summarise our work using the Backus Gilbert approach to determining the spectral function of the NRQCD bottomonium system. Finally, we review our determination of the interquark potential in the same system, but using our ``Generation 2'' ensembles which have heavier pion masses of 384(4) MeV.
Chris Allton, Gert Aarts, Ryan Bignell, Tim Burns, Sergio Chaves, Simon Hands, Benjamin Jäger, Seyong Kim, Maria Paola Lombardo, Ben Page, Sinéad M. Ryan, Jon-Ivar Skullerud, Thomas Spriggs
2023-01-24T19:57:31Z
http://arxiv.org/abs/2301.10282v1
# Recent Results from the FASTSUM Collaboration ###### Abstract: The FASTSUM Collaboration has developed a comprehensive research programme in thermal QCD using \(2+1\) flavour, anisotropic ensembles. In this talk, we summarise some of our recent results including thermal hadron spectrum calculations using our "Generation 2L" ensembles which have pion masses of 239(1) MeV. These include open charm mesons and charm baryons. We also summarise our work using the Backus Gilbert approach to determining the spectral function of the NRQCD bottomonium system. Finally, we review our determination of the interquark potential in the same system, but using our "Generation 2" ensembles which have heavier pion masses of 384(4) MeV. Introduction The thermal spectrum of QCD is of great interest for intrinsic reasons in order to understand how confinement is manifest in hadronic systems. It is also crucial to aid the analysis of heavy-ion collision experiments. Here we present an update of the Fastsum Collaboration's thermal hadronic spectrum research, focussing on open charm mesons, charm baryons and bottomonium. We also present an update of our studies of the interquark potential in bottomonium. We use 2+1 flavour dynamical simulations with anisotropic lattices where the temporal lattice spacing, \(a_{\tau}\), is smaller than the spatial one, \(a_{s}\)[1, 2]. Our anisotropy is designed to maximise information from thermal temporal correlators, noting that they are constrained in temporal extent, \(L_{\tau}\), since the temperature, \(T=1/L_{\tau}\). We use stout-linked, clover-improved Wilson fermions and Symanzik-improved gauge fields. The lattice ensembles used in this work are our Generation 2 and 2L ensembles which have parameters listed in Tables 1 and 2. These ensembles have a pion mass of 384(4) and 239(1) MeV respectively and span temperatures both below and above the pseudocritical temperature \(T_{\rm pc}\). ## 2 Charm hadron spectrum Here we summarise results from both open-charm mesons and charmed baryons using our Generation 2L ensembles. Unlike hiddened-charmed mesons at non-zero temperature, which have been extensively studied on the lattice [5], open-charmed mesons have received less attention [6, 7, 8]. In [4], we present results for the open charm meson spectrum for \(T\lesssim T_{pc}\). Because the states are confined, we proceed with conventional analysis techniques and assess up to which temperatures these are applicable. We extend these techniques to determine the variation of the masses as a function of temperature. The results are shown in Fig.1. These show a small temperature variation where both the pseudoscalar, \(D_{(s)}\), and vector, \(D_{(s)}^{*}\), mesons' masses decrease as the temperature appro \begin{table} \begin{tabular}{l|c c c c|c c c c c} \(N_{\tau}\) & 128 & 40 & 36 & 32 & 28 & 24 & 20 & 16 \\ \hline \(T\) [MeV] & 44 & 141 & 156 & 176 & 201 & 235 & 281 & 352 \\ \end{tabular} \end{table} Table 1: Parameters for the Fastsum Generation 2 ensembles used in this work. The lattice sizes are \(24^{3}\times N_{\tau}\), with lattice spacings \(a_{s}=0.1205(8)\) fm and \(a_{\tau}=35.1(2)\) am, and pion mass \(m_{\pi}=384(4)\) MeV. The vertical line indicates the position of \(T_{\rm pc}\approx 181\) MeV. Full details in [1, 2]. 
\begin{table} \begin{tabular}{l|c c c c|c c c c c c} \(N_{\tau}\) & 128 & 64 & 56 & 48 & 40 & 36 & 32 & 28 & 24 & 20 & 16 \\ \hline \(T\) [MeV] & 47 & 95 & 109 & 127 & 152 & 169 & 190 & 217 & 253 & 304 & 380 \\ \end{tabular} \end{table} Table 2: Parameters for the Fastsum Generation 2L ensembles used in this work. The lattice sizes are \(32^{3}\times N_{\tau}\), with lattice spacings \(a_{s}=0.1121(3)\) fm and \(a_{\tau}=32.46(7)\) am, and pion mass \(m_{\pi}=239(1)\) MeV [3]. The vertical line indicates the position of \(T_{\rm pc}\approx 167\) MeV. Full details in [2, 4]. However, note that this temperature shift is at the percent level. In contrast, the analogous thermal effects for the axial vector and scalar channel are very strong, see [4] for details. In another analysis [10, 11], we study the charm baryon spectrum paying particular attention to both parity states. We extract the masses in the confined phase and use a method based on a direct analysis of the correlation function to determine whether the parity states approach degeneracy for \(T\geq T_{pc}\). In Fig.2 we plot the masses for baryons with a variety of charm content as a function of \(T\) up to just beyond \(T_{pc}\) where the simple pole fits become unreliable. For \(T>T_{pc}\) we cannot assume that the charm baryon states are bound and so a conventional pole fitting ansatze cannot be applied. To gain information about the mass of the two parity states, Figure 1: Temperature dependence of the groundstate masses in the hadronic phase, for \(D\) and \(D^{*}\) (left) and \(D_{s}\) and \(D_{s}^{*}\) (right) mesons. The vertical band indicates \(T_{pc}\) and the horizontal stubs at \(T=0\) represent the PDG values [9]. Figure 2: Mass spectrum of of the \(J^{1/2+}\) baryons with positive (left) and negative parity (right) as a function of temperature. Dashed lines are zero-temperature experimental results [9] to guide the eye. The inner (outer) shaded regions represents the statistical (systematic) errors. See [10] for the corresponding plots for the \(J^{3/2+}\) states. we therefore define the ratio \(R\), \[R(n_{0})=\frac{\sum_{n=n_{0}}^{\frac{1}{4}N_{\tau}-1}\mathcal{R}(\tau_{n})/\sigma _{\mathcal{R}}^{2}(\tau_{n})}{\sum_{n=n_{0}}^{\frac{1}{4}N_{\tau}-1}1/\sigma_{ \mathcal{R}}^{2}(\tau_{n})},\hskip 28.452756pt\mbox{where}\hskip 14.226378pt \mathcal{R}(\tau)=\frac{G^{+}(\tau)-G^{-}(\tau)}{G^{+}(\tau)+G^{-}(\tau)}. \tag{1}\] Typically we use \(n_{0}=4\), and our results are not qualitatively sensitive to this choice. \(R\) will be unity in the limit of \(M^{+}\ll M^{-}\) and zero for the degenerate case. In Fig.3 we plot \(R\) for a number of channels. We can see an approach to degeneracy above \(T_{pc}\) which is most pronounced for baryons with the least charm content. By fitting the data to cubic splines, we can determine estimates of the transition temperatures from the inflection points and these are indicated as vertical lines in the figure. We note that the location of the inflection points coincide, within a few MeV, with the pseudocritical temperature, \(T_{\rm pc}=167(3)\) MeV, determined via the chiral condensate [4] and hence are a manifestion of chiral symmetry restoration in the charmed baryon sector. Further details about these points are elucidated in [10, 11]. ## 3 Bottomonium (NRQCD) spectrum Bottomonium states are important probes of the quark gluon plasma phase in heavy-ion collision experiments because they are created very early and do not reach chemical equilibrium. 
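As an aside to the parity analysis of the previous section, Eq. (1) is straightforward to evaluate once the positive- and negative-parity correlators and the errors \(\sigma_{\mathcal{R}}(\tau_{n})\) are available. The sketch below (assuming NumPy arrays; the synthetic correlators and errors are placeholders rather than our actual analysis code) illustrates the error-weighted average:

```python
import numpy as np

def parity_ratio(G_plus, G_minus, sigma_R, n0=4):
    """Error-weighted parity ratio R(n0) of Eq. (1).

    G_plus, G_minus : correlators G^+(tau_n), G^-(tau_n) for n = 0 .. N_tau - 1
    sigma_R         : statistical errors on R(tau_n)
    The sum runs over n0 <= n <= N_tau/4 - 1, as in Eq. (1).
    """
    R_tau = (G_plus - G_minus) / (G_plus + G_minus)
    window = slice(n0, len(G_plus) // 4)
    weights = 1.0 / sigma_R[window] ** 2
    return np.sum(R_tau[window] * weights) / np.sum(weights)

# Toy example with synthetic exponential correlators (masses in lattice units)
tau = np.arange(64)
G_plus, G_minus = np.exp(-0.5 * tau), np.exp(-0.8 * tau)
print(parity_ratio(G_plus, G_minus, sigma_R=np.full(64, 0.01)))
```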
The lattice approach to analysing bottomonium spectra invariably relies on extracting the spectral function at temperature \(T\), \(\rho(\omega,T)\), defined from the correlation function, \[G(\tau;T)=\int_{\omega_{\rm min}}^{\infty}\frac{d\omega}{2\pi}K(\tau,\omega) \rho(\omega;T), \tag{2}\] where the kernel for NRQCD quarks is defined as, \[K(\tau,\omega)=e^{-\omega\tau}. \tag{3}\] Figure 3: R value from Eq.(1) for \(J=\frac{1}{2}\) baryons with the lines from cubic splines. The transition temperature estimates obtained from the inflection point of the splines are shown as vertical lines. Note that since NRQCD introduces an additive energy shift, the lower limit of the integral in Eq.(2), \(\omega_{\rm min}\), is not necessarily zero. The spectral function gives complete information about the spectrum of a particular channel, including the widths of the states. The Fastsum Collaboration has studied the bottomonium spectrum using NRQCD bottom quarks in a number of publications, e.g. [12], using a variety of methods to extract the spectral function. In [13] we extend this work by using the Backus Gilbert [14] method to obtain the spectral function with our Generation 2L ensembles, and we report on this work here. We note that we can introduce two "hyper-parameters" in our analysis. The first is the "whitening" factor, \(\alpha\), in the Tikhonov-like method [15], which governs how much of the identity (white noise) is added to the kernel. To remove the \(\alpha\) dependency in the final result, the \(\alpha\to 0\) limit is taken. The second parameter is an energy shift, \(\Delta\). Since Eq.(2) is a Laplace transform, we can trivially shift \(\rho(\omega)\to\rho(\omega+\Delta)\) by multiplying \(G(\tau)\) by \(e^{\Delta\tau}\). Increasing the energy shift, \(\Delta\), moves the ground state feature closer to \(\omega_{\rm min}\) which is advantageous because that is where the Backus Gilbert method has the greatest resolution. Note however that the value of \(\Delta\) needs to be limited to ensure that no spectral feature is pushed into the \(\omega<\omega_{\rm min}\) region. Hence we remove the dependency on the \(\Delta\) hyper-parameter via this requirement. In Fig.4 we plot the \(\chi_{b1}\) spectral function obtained from local correlators with various \(\Delta\) values to illustrate that the ground state feature becomes better resolved as \(\Delta\) increases. Full results and predictions of masses and widths obtained using this method are in [13]. We can also perform an interesting statistical analysis of the correlation functions. As pointed out by Parisi and Lepage [16, 17], the statistical error of the hadronic correlation function, \(\langle{\cal O}(t){\cal O}(O)\rangle\) at large time is determined by the lightest states that can be composed from \({\cal O}^{2}\). Typically this will be the pseudoscalar state. We have analysed the statistical error in the bottomonium correlation functions by measuring their covariance matrices' singularity as the energy shift, \(e^{\Delta\tau}\), is applied. The value of \(\Delta\) corresponding to the most singular covariance matrix, \(\Delta_{\rm sing}\), is a prediction of (half) the ground state Figure 4: The \(\chi_{b1}\) spectral function obtained via the Backus Gilbert method for a variety of \(\Delta\) energy shifts at \(T=47\) MeV. mass in the \(\mathcal{O}^{2}\) channel. We used the condition number of the covariance matrix to determine \(\Delta_{\text{sing}}\). 
Following the Parisi and Lepage analysis, we expect to find \(\Delta_{\text{sing}}=M_{\eta_{b}}\) i.e. the mass of the pseudoscalar in the bottomonium sector. In Fig.5 we plot results for the channels considered (\(\eta_{b}\), \(\Upsilon\), \(h_{b}\) and \(\chi_{\text{0,b1,b2}}\)) as a function of \(\tau_{2}\) where the covariance matrices are analysed over the time interval \([0,\tau_{2}/a_{\tau}]\). Smeared operators at both the source and sink were used. Figure 5 shows a convergence, as \(\tau_{2}\) increases, to a mass value compatible with the pseudoscalar, \(\eta_{b}\), independent of the channel, as expected from an analysis following [16, 17]. Further details of this work are discussed in [13]. ## 4 Interquark potentials in Bottomonium For temperatures below \(T_{pc}\), the interquark potential is known to be confining, i.e. with a non-zero string tension, whereas above \(T_{pc}\) it is expected to flatten allowing unbound quark states. Lattice studies of the interquark potential have been obtained from Wilson loops, which correspond to infinitely heavy quarks [18, 19], the NRQCD bottomonium system [20], and from charmonium using the HAL QCD method [21, 22, 23]. In this work, we study the interquark potential in the bottomonium system also using the HAL QCD approach, see [24]. The bottom quarks were simulated using the NRQCD approximation and our Generation 2 ensembles were used, see Table 1. The HAL QCD method [25] uses Bethe Salpeter wavefunctions, \(\psi\left(t,\vec{r}\right)\), obtained from temporal correlators of non-local heavy quark-antiquark meson operators, \(\overline{Q}\left(\tau,\vec{x}\right)\Gamma Q\left(\tau,\vec{x}+\vec{r}\right)\). It represents Figure 5: The value of the energy shift, \(\Delta_{\text{sing}}\), (i.e. the predicted mass) which gives the most singular covariance matrix for a variety of bottomonium channels as a function of \(1/\tau_{2}\). The covariance matrices are defined over the time interval \([0,\tau_{2}/a_{\tau}]\) and therefore the best results are obtained as \(\tau_{2}\to\infty\). Experimental mass values are also shown as horizontal lines [9]. This indicates the method correctly predicts the \(\eta_{b}\) (i.e. the pseudoscalar) mass as \(\tau_{2}\) increases. the mesonic system with a Schrodinger equation, \[\left[\frac{p^{2}}{2\mu}+V(\vec{r})\right]\psi(\tau,\vec{r})=E\psi(\tau,\vec{r}), \tag{4}\] where \(\mu\) is the reduced quark mass in the centre of mass frame. The residual, non-physical \(\tau-\)dependency has to be carefully handled and this is discussed in [24]. Figure 6 shows the preliminary results for the interquark bottomonium potential for a variety of temperatures. In each pane, the _same_ time window was considered, thus nullifying any systematic effects from this fitting artefact. These plots show evidence of the expected flattening of the potential as the temperature increases. ## 5 Conclusions Recent results from Fastsum Collaboration's thermal spectrum research [4, 10, 11, 13] and interquark potentials [24] have been presented. ## 6 Acknowledgements This work is supported by STFC grant ST/T000813/1. SK is supported by the National Research Foundation of Korea under grant NRF-2021R1A2C1092701 and Grant NRF-2021K1A3A1A16096820, funded by the Korean government (MEST). BP has been supported by a Swansea University Research Excellence Scholarship (SURES). 
This work used the DiRAC Extreme Scaling service at the University of Edinburgh, operated by the Edinburgh Parallel Computing Centre and the DiRAC Data Intensive service operated by the University of Leicester IT Services on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BEIS capital funding via Figure 6: The interquark potential in (NRQCD) bottomonium via the HAL QCD procedure. In each pane, all the temperatures used the same time window: [13, 14] (left) and [17, 18] (right). This is to isolate thermal effects from \(\tau\) systematics. The expected flattening of the potential with temperature can be seen. STFC capital grants ST/R00238X/1, ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grants ST/R001006/1 and ST/R001014/1. DiRAC is part of the UK National e-Infrastructure. This work was performed using PRACE resources at Cineca (Italy), CEA (France) and Stuttgart (Germany) via grants 2015133079, 2018194714, 2019214714 and 2020214714. We acknowledge the support of the Swansea Academy for Advanced Computing, the Supercomputing Wales project, which is part-funded by the European Regional Development Fund (ERDF) via Welsh Government, and the University of Southern Denmark and ICHEC, Ireland for use of computing facilities. We are grateful to the Hadron Spectrum Collaboration for the use of their zero temperature ensemble in our Generation 2 work.
2303.14845
Multi-task Learning of Histology and Molecular Markers for Classifying Diffuse Glioma
Most recently, the pathology diagnosis of cancer has been shifting to integrating molecular markers with histology features. There is an urgent need for digital pathology methods that can effectively integrate molecular markers with histology, which could lead to more accurate diagnoses in real-world scenarios. This paper presents a first attempt to jointly predict molecular markers and histology features and model their interactions for classifying diffuse glioma based on whole slide images. Specifically, we propose a hierarchical multi-task multi-instance learning framework to jointly predict histology and molecular markers. Moreover, we propose a co-occurrence probability-based label-correlation graph network to model the co-occurrence of molecular markers. Lastly, we design an inter-omic interaction strategy with a dynamical confidence constraint loss to model the interactions of histology and molecular markers. Our experiments show that our method outperforms other state-of-the-art methods in classifying diffuse glioma, as well as the related histology and molecular markers, on a multi-institutional dataset.
Xiaofei Wang, Stephen Price, Chao Li
2023-03-26T23:00:00Z
http://arxiv.org/abs/2303.14845v3
# Multi-task Learning of Histology and Molecular Markers for Classifying Diffuse Glioma ###### Abstract Most recently, the pathology diagnosis of cancer is shifting to integrating molecular makers with histology features. It is a urgent need for digital pathology methods to effectively integrate molecular markers with histology, which could lead to more accurate diagnosis in the real world scenarios. This paper presents a first attempt to jointly predict molecular markers and histology features and model their interactions for classifying diffuse glioma bases on whole slide images. Specifically, we propose a hierarchical multi-task multi-instance learning framework to jointly predict histology and molecular markers. Moreover, we propose a co-occurrence probability-based label correction graph network to model the co-occurrence of molecular markers. Lastly, we design an inter-omic interaction strategy with the dynamical confidence constraint loss to model the interactions of histology and molecular markers. Our experiments show that our method outperforms other state-of-the-art methods in classifying diffuse glioma,as well as related histology and molecular markers on a multi-institutional dataset. Keywords:Diffuse Glioma Digital Pathology Multi-task learning Multi-label Classification. ## 1 Introduction Diffuse glioma is the most common and aggressive primary brain tumors in adults, accounting for more deaths than any other type [7]. Pathology diagnosis is the gold standard for diffuse glioma but is usually time-consuming and highly depends on the expertise of senior pathologists [13]. Hence, automatic algorithms based on histology whole slide images (WSIs) [15], namely digital pathology, promise to offer rapid diagnosis and aid precise treatment. Recently, deep learning has achieved success in diagnosing various tumors [2, 21]. Most methods are mainly predicting histology based on WSI, less concerning molecular markers. However, the paradigm of pathological diagnosis of glioma has shifted to molecular pathology, reflected by the 2021 WHO Classification of Tumors of the Central Nervous System [14]. The role of key molecular markers, i.e, isocitrate dehydrogenase (IDH) mutations, co-deletion of chromosome 1p/19q and homozygous deletion (HOMDEL) of cyclin-dependent kinase inhibitor 2A/B (CDKN), have been highlighted as major diagnostic markers for glioma, while histology features that are traditionally emphasized are now considered as reference, although still relevant in many cases. For instance, in the new pathology scheme, glioblastoma is increasingly diagnosed according to IDH mutations, while previously its diagnosis mostly relies on histology features, including necrosis and microvascular proliferation (NMP). 1. Footnote 1: Similar changes of the diagnostic protocol can also be found in endometrial cancer [9], renal neoplasia [18], thyroid carcinomas [19], etc. However, the primary approaches to assess molecular markers include gene sequencing and immuno-staining, which are time-consuming and expensive than histology assessment. As histology features are closely associated with molecular alterations, algorithm predicting molecular markers based on histology WSIs is feasible and have clinical significance. Moreover, under the new paradigm of integrating molecular markers with histological features into tumor classification, it is helpful to model the interaction of histology and molecular makers for a more accurate diagnosis. 
Therefore, there is an urgent need to develop novel digital pathology methods based on WSIs that jointly predict molecular markers and histology and model their interactions for the final tumor classification, which could be valuable for the clinically relevant diagnosis of diffuse glioma. This paper proposes a deep learning model (DeepMO-Glioma) for glioma classification based on WSIs, aiming to reflect the molecular pathology paradigm. Previous methods have been proposed to integrate histology and genomics for tumor diagnosis [10, 20, 3]. For instance, Chen _et al._[3] proposed a multimodal fusion strategy to integrate WSIs and genomics for survival prediction. Xing _et al._[20] devised a self-normalizing network to encode genomics. Nevertheless, most existing approaches to tumor classification only treat molecular markers as additional input and are incapable of simultaneously predicting the status of molecular markers, making them clinically less relevant under the current diagnosis scheme. To jointly predict histology and molecular markers following the clinical diagnostic pathway, we propose a novel hierarchical multi-task multi-instance learning (HMT-MIL) framework based on the vision transformer [4], with two partially weight-sharing parts to jointly predict molecular markers and histology. Moreover, multiple molecular markers are needed for classifying cancers, due to complex tumor biology. To reflect real-world clinical scenarios, we formulate predicting multiple molecular markers as a multi-label classification (MLC) task. Previous MLC methods have successfully modeled the correlation among labels [12, 22]. For example, Yazici _et al._[22] proposed an orderless recurrent method, while Li _et al._ designed a label attention transformer network with graph embedding. In the medical domain, Zhang _et al._[25] devised a dual-pool contrastive learning method for classifying fundus and X-ray images. Despite their success, when applied to predicting multiple molecular markers, most existing methods may ignore the co-occurrence of molecular markers, which have intrinsic associations [23]. Hence, we propose a co-occurrence probability-based, label-correlation graph (CPLC-Graph) network to model the co-occurrence of molecular markers, i.e., the intra-omic relationship. Lastly, we focus on modeling the interaction between molecular markers and histology. Specifically, we devise a novel inter-omic interaction strategy to model the interaction between the predictions of molecular markers and histology, e.g., IDH mutation and NMP, both of which are relevant in diagnosing glioblastoma. In particular, we design a dynamical confidence constraint (DCC) loss that constrains the model to focus on similar areas of WSIs for both tasks. To the best of our knowledge, this is the first attempt to classify diffuse gliomas via modeling the interaction of histology and molecular markers. Our main contributions are: (1) We propose a multi-task multi-instance learning framework to jointly predict molecular markers and histology and finally classify diffuse glioma, reflecting the new paradigm of pathology diagnosis. (2) We design a CPLC-Graph network to model the intra-omic relationship of multiple molecular markers. (3) We design a DCC learning strategy to model the inter-omic interaction between histology and molecular markers for glioma classification. ## 2 Preliminaries **Database:** We use the publicly available TCGA GBM-LGG dataset [6]. Following [15], we remove WSIs of low quality or lacking labels.
In total, we include 2,633 WSIs from 940 cases, randomly split into training (2,087 WSIs of 752 cases), validation (282 WSIs of 94 cases) and test (264 WSIs of 94 cases) sets. (Figure 1: Architecture of DeepMO-Glioma.) All WSIs are cropped into patches of size 224px \(\times\) 224px at 0.5 \(\mu\)m px\({}^{-1}\). **Training labels:** Original tables for genomic markers and histology of WSIs are obtained from the TCGA database [6]. According to the up-to-date WHO criteria [14], we generate the classification label for each case as grade 4 glioblastoma (defined as IDH wildtype), oligodendroglioma (defined as IDH mutant with 1p/19q co-deletion), grade 4 astrocytoma (defined as IDH mutant, 1p/19q non-co-deletion, with CDKN HOMDEL or NMP), or low-grade astrocytoma (other cases). ## 3 Methodology Figure 1 illustrates the proposed DeepMO-Glioma. As shown above, the up-to-date WHO criteria incorporate molecular markers and histology features. Therefore, our model is designed to jointly learn the tasks of predicting molecular markers and histology features in a unified framework. DeepMO-Glioma consists of four modules, i.e., stem, genomic marker prediction, histology prediction and cross-omics interaction. Given the cropped patches \(\{\mathbf{X}_{i}\}_{1}^{N}\) as the input, DeepMO-Glioma outputs 1) the status of molecular markers, including IDH mutation \(\hat{l}_{idh}\in\mathbb{R}^{2}\), 1p/19q co-deletion \(\hat{l}_{1p/19q}\in\mathbb{R}^{2}\) and CDKN HOMDEL \(\hat{l}_{cdkn}\in\mathbb{R}^{2}\), 2) the existence of NMP \(\hat{l}_{nmp}\in\mathbb{R}^{2}\), and 3) the final diagnosis of diffuse glioma \(\hat{l}_{glio}\in\mathbb{R}^{4}\). ### Hierarchical multi-task multi-instance learning To extract global information from the input \(\{\mathbf{X}_{i}\}_{1}^{N}\), we propose a hierarchical multi-task multi-instance learning (HMT-MIL) framework for both histology and molecular marker predictions. Different from methods using one [24] or several [3, 20] representative patches per slide, the HMT-MIL framework can extract information from N=2,500 patches per WSI by utilizing the MIL learning paradigm with embedded transformer blocks [4]. Note that for WSIs with patch number \(<\) N, we adopt a biological repeat strategy for dimension alignment. ### Co-occurrence probability-based, label-correlation graph In predicting molecular markers, i.e., IDH, 1p/19q and CDKN, existing MLC methods based on label correlation may ignore the co-occurrence of the labels. (Figure 2: Pipelines of the CPLC-Graph network (a) and DCC loss (b).) We propose a co-occurrence probability-based, label-correlation graph (CPLC-Graph) network and a label correlation (LC) loss for intra-omic modeling of the co-occurrence probability of the three markers. **1) CPLC-Graph network:** The CPLC-Graph (Figure 2) is defined as \(\mathcal{G}=(\mathbf{V},\mathbf{E})\), where \(\mathbf{V}\) indicates the nodes, while \(\mathbf{E}\) represents the edges. Given the intermediate features of the three molecular marker prediction subnets \(\mathbf{F}^{\text{in}}=[\mathbf{F}^{\text{in}}_{i}]_{i=1}^{3}\in\mathbb{R}^{3\times C}\) as input nodes, we construct a co-occurrence probability-based correlation matrix \(\mathbf{A}\in\mathbb{R}^{3\times 3}\) to reflect the relationships among the node features, with a weight matrix \(\mathbf{W}_{g}\in\mathbb{R}^{C\times C}\) to update the value of \(\mathbf{F}^{\text{in}}\).
Formally, the output nodes \(\mathbf{F}^{\text{mid}}\in\mathbb{R}^{3\times C}\) are computed by a single graph convolutional network layer as \[\mathbf{F}^{\text{mid}}=\delta(\mathbf{A}\mathbf{F}^{\text{in}}\mathbf{W}_{g}),\quad\text{where }\mathbf{A}=[A_{i}^{j}]_{i,j=1}^{3},\ A_{i}^{j}=\frac{1}{2}\big{(}p(\mathbf{F}^{\text{in}}_{i}|\mathbf{F}^{\text{in}}_{j})+p(\mathbf{F}^{\text{in}}_{j}|\mathbf{F}^{\text{in}}_{i})\big{)}. \tag{1}\] In (1), \(\delta(\cdot)\) is an activation function and \(p(\mathbf{F}^{\text{in}}_{i}|\mathbf{F}^{\text{in}}_{j})\) denotes the probability of the status of the \(i\)-th marker given the status of the \(j\)-th marker. In addition, a residual structure is utilized to generate the final output \(\mathbf{F}^{\text{out}}\) of the CPLC-Graph network, defined as \(\mathbf{F}^{\text{out}}=\alpha\mathbf{F}^{\text{mid}}+(1-\alpha)\mathbf{F}^{\text{in}}\), where \(\alpha\) is a graph balancing hyper-parameter. **2) LC loss:** In order to fully exploit the co-occurrence probability of different molecular markers, we further devise the LC loss, which constrains the similarity between any two output molecular marker features \(\mathbf{F}^{\text{out}}_{i}\) and \(\mathbf{F}^{\text{out}}_{j}\) to approach their corresponding co-occurrence probability \(A_{i}^{j}\). Formally, the LC loss is defined as \[\mathcal{L}_{\text{LC}}=\mathcal{MSE}(\mathbf{A},\mathbf{D}_{\text{cos}}),\quad\text{where }\mathbf{D}_{\text{cos}}=[D_{cos}^{i,j}]_{i,j=1}^{3},\ D_{cos}^{i,j}=\frac{(\mathbf{F}^{\text{out}}_{i})^{\top}\mathbf{F}^{\text{out}}_{j}}{\|\mathbf{F}^{\text{out}}_{i}\|\,\|\mathbf{F}^{\text{out}}_{j}\|}. \tag{2}\] In (2), \(\mathcal{MSE}\) denotes the mean squared error, while \(D_{cos}^{i,j}\) is the cosine similarity of the features \(\mathbf{F}^{\text{out}}_{i}\) and \(\mathbf{F}^{\text{out}}_{j}\). ### Dynamical confidence constraint We design a dynamical confidence constraint (DCC) strategy to model the interaction between molecular markers and histological features. Taking IDH and NMP as an example, the final outputs for IDH wildtype4 and NMP predictions can be defined as \(\hat{l}_{wt}=\sum_{n=1}^{N}\omega_{wt}^{n}f_{wt}^{n}\) and \(\hat{l}_{nmp}=\sum_{n=1}^{N}\omega_{nmp}^{n}f_{nmp}^{n}\), respectively. Note that \(f_{wt}^{n}\) and \(\omega_{wt}^{n}\) are the values of the extracted feature and the corresponding decision weight of the \(n\)-th patch, respectively. We then reorder \([\omega_{wt}^{n}]_{n=1}^{N}\) to \([\hat{\omega}_{wt}^{n}]_{n=1}^{N}\) based on their values. Similarly, we obtain \([\hat{\omega}_{nmp}^{n}]_{n=1}^{N}\) for the NMP confidence weights. Footnote 4: Note that IDH wildtype is incorporated in diagnosing glioblastoma in the current clinical paradigm, while previously the diagnosis of glioblastoma was purely based on NMP. Based on the ordered confidence weights, we constrain the prediction networks of histology and molecular markers to focus on the WSI areas important for both predictions, thus modeling inter-omic interactions. Specifically, we achieve the confidence constraint through a novel DCC loss focusing on the top \(K\) most important patches for both predictions.
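The DCC loss itself is formalized next; first, a minimal sketch of the CPLC-Graph layer in (1) and the LC loss in (2) may be helpful. This is an illustrative PyTorch-style sketch written for this note: the tensor names, the choice of ReLU for \(\delta(\cdot)\), and the assumption that the co-occurrence matrix is precomputed from training labels are ours, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPLCGraph(nn.Module):
    """Single GCN layer over the three marker features, Eq. (1), with residual mixing."""
    def __init__(self, feat_dim, cooccur_matrix, alpha=0.5):
        super().__init__()
        # A: 3x3 symmetrized co-occurrence probabilities, assumed precomputed from labels
        self.register_buffer("A", cooccur_matrix)
        self.Wg = nn.Linear(feat_dim, feat_dim, bias=False)  # weight matrix W_g
        self.alpha = alpha                                    # graph balancing weight

    def forward(self, F_in):                      # F_in: (3, C) marker features
        F_mid = torch.relu(self.A @ self.Wg(F_in))            # delta(A F_in W_g); ReLU is our choice
        return self.alpha * F_mid + (1 - self.alpha) * F_in   # residual output F_out

def lc_loss(F_out, A):
    """LC loss, Eq. (2): MSE between pairwise cosine similarities and co-occurrence probabilities."""
    F_norm = F.normalize(F_out, dim=1)            # row-normalize so D_cos = F_norm F_norm^T
    D_cos = F_norm @ F_norm.t()
    return F.mse_loss(D_cos, A)
```

As a usage sketch (values are placeholders), one would estimate the 3x3 matrix `A` from label statistics, pass the three subnet features through `CPLCGraph`, and add `lc_loss(F_out, A)` to the training objective.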
Formally, the DCC loss in the \(m\)-th training epoch is defined as: \[\mathcal{L}_{\text{DCC}}=\frac{1}{2K_{m}}\sum\nolimits_{k=1}^{K_{m}}\big{(}\mathcal{S}(\hat{\omega}_{wt}^{k},\hat{\omega}_{nmp})+\mathcal{S}(\hat{\omega}_{nmp}^{k},\hat{\omega}_{wt})\big{)}, \tag{3}\] where \(\mathcal{S}(\hat{\omega}_{wt}^{k},\hat{\omega}_{nmp})\) is the indicator function taking the value 1 when the \(k\)-th most important patch for IDH wildtype is in the set of the top \(K_{m}\) most important patches for NMP, and vice versa. In addition, to facilitate the learning process with the DCC loss, we adopt a curriculum-learning-based training strategy that dynamically focuses on hard-to-learn patches, regarded as the patches with higher decision importance weights, since patches with lower confidence weights, e.g., patches with fewer nuclei, are usually easier to learn in both tasks. Hence, \(K_{m}\) is further defined as \[K_{m}=K_{0}\beta^{\lfloor\frac{m}{m_{0}}\rfloor}. \tag{4}\] In (4), \(K_{0}\) and \(m_{0}\) are hyper-parameters to adjust \(\mathcal{L}_{\text{DCC}}\) in the training process. ## 4 Experiments & Results ### Implementation details The proposed DeepMO-Glioma is trained on the training set for 70 epochs, with a batch size of 8 and a learning rate of 0.003, using the Adam optimizer [11] with weight decay. Key hyper-parameters are in Table I of the supplementary material. All hyper-parameters are tuned to achieve the best performance over the validation set. All experiments are conducted on a computer with an Intel(R) Xeon(R) E5-2698 CPU @2.20GHz, 256GB RAM and 4 Nvidia Tesla V100 GPUs. Our method is implemented in PyTorch. ### Performance evaluation **1) Glioma classification.** We compare our model with five other state-of-the-art methods: CLAM [15], TransMIL [16], ResNet-18 [5], DenseNet-121 [8] and VGG-16 [17]. Note that CLAM [15] and TransMIL [16] are MIL frameworks, while the others are commonly used image classification methods, set as our baselines. The left panel of Table 1 shows that DeepMO-Glioma performs the best, achieving at least 6.1%, 13.1%, 3.1% and 11.0% improvement over the other models in accuracy, sensitivity, specificity and F\({}_{1}\)-score, respectively, indicating that our model could effectively integrate molecular markers and histology in classifying diffuse gliomas. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Method & \multicolumn{6}{c}{Diffuse glioma classification, \%} & \multicolumn{3}{c}{Ablation study (W/O), \%} \\ \cline{2-10} & Ours. & CLAM & TransMIL & ResNet* & DenseNet* & VGG-16* & Graph & LC loss & DCC \\ \hline Acc. & **77.3** & 71.2 & 68.2 & 59.1 & 62.9 & 60.6 & 65.5 & 71.2 & 68.2 \\ Sen. & **76.0** & 62.9 & 60.2 & 51.6 & 52.5 & 49.8 & 47.7 & 61.0 & 59.8 \\ Spec. & **86.6** & 82.9 & 79.9 & 71.4 & 83.5 & 74.5 & 83.0 & 82.3 & 84.7 \\ F\({}_{1}\)-score & **71.0** & 60.0 & 59.2 & 51.6 & 50.9 & 49.6 & 49.4 & 61.2 & 53.5 \\ \hline \hline \end{tabular} * In this paper, all these methods are slightly modified to adjust to the MIL setting. \end{table} Table 1: Performance of classifying glioma based on WHO 2021 criteria [1]. **2) Predictions of genomic markers and histology features.** From the left panel of Table 2, we observe that DeepMO-Glioma achieves AUCs of 92.0%, 88.1%, 77.2% and 94.5% for IDH mutation, 1p/19q co-deletion, CDKN HOMDEL and NMP prediction, respectively, considerably better than all the comparison models. Figure 3 (b) plots the ROC curves of all models, demonstrating the superior performance of our model over the other comparison models.
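For completeness, the DCC loss in (3) and the curriculum schedule in (4) can be summarized in a short sketch. The variable names, default values, and the use of `torch.topk` are illustrative assumptions for this note, not the authors' released code; in particular, Eq. (3) is stated with indicator functions, which we mirror here as a non-differentiable overlap count.

```python
import torch

def dcc_loss(w_wt, w_nmp, K_m):
    """Sketch of Eq. (3): overlap of the top-K_m decision weights of the
    IDH-wildtype and NMP branches (w_wt, w_nmp: tensors of shape (N,))."""
    top_wt = set(torch.topk(w_wt, K_m).indices.tolist())
    top_nmp = set(torch.topk(w_nmp, K_m).indices.tolist())
    overlap = len(top_wt & top_nmp)
    # Both sums in Eq. (3) count this intersection, so L_DCC = overlap / K_m.
    # How this enters the training objective (e.g. minimizing 1 - L_DCC to
    # encourage shared important patches) is a modeling choice not shown here.
    return overlap / float(K_m)

def curriculum_K(m, K_0=500, m_0=10, beta=0.5):
    """Sketch of Eq. (4): K_m = K_0 * beta ** floor(m / m_0); K_0, m_0, beta are placeholders."""
    return max(1, int(K_0 * beta ** (m // m_0)))
```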
**3) Network interpretability.** An additional visualization experiment is conducted based on patch decision scores to test the interpretability of our method. Due to the page limit, the results are presented in supplementary Figure 1. ### Results of ablation experiments **1) CPLC-Graph network.** The right panel of Table 1 shows that, by setting the graph balancing weight \(\alpha\) to 0 for the proposed CPLC-Graph network, the accuracy, sensitivity, specificity and F\({}_{1}\)-score decrease by 7.8%, 29.0%, 3.6% and 21.6%, respectively. Similar results are observed for the prediction tasks of molecular markers and histology (Table 2). (Figure 3: ROC curves of our model, comparison and ablation models for predicting IDH, 1p/19q, CDKN and NMP.) Also, the ROC curves after removing the CPLC-Graph network are shown in Figure 3. These results indicate the utility of the proposed CPLC-Graph network. **2) LC loss.** The right panel of Table 1 shows that the performance after removing the LC loss decreases in all metrics, with reductions of 6.1%, 15.0%, 4.3% and 9.8% in accuracy, sensitivity, specificity and F\({}_{1}\)-score, respectively. Similar results for the tasks of molecular marker and histology prediction are observed in the right panel of Table 2, with ROC curves in Figure 3, indicating the effectiveness of the proposed LC loss. **3) DCC loss.** From Table 1, we observe that the proposed DCC loss improves the performance in terms of accuracy by 9.1%. Similar results can be found for sensitivity, specificity and F\({}_{1}\)-score. From Table 2, we observe that the AUC decreases by 2.9%, 2.9%, 0.5% and 2.8% for the prediction of IDH, 1p/19q, CDKN and NMP, respectively, when the DCC loss is removed. A similar pattern is seen when comparing the ROC curves in Figure 3, suggesting the importance of the DCC loss for all the tasks. ## 5 Summary The paradigm of pathology diagnosis has shifted to integrating molecular markers with histology features. In this paper, we aim to classify diffuse gliomas under the up-to-date diagnostic criteria by jointly learning the tasks of molecular marker prediction and histology classification. Taking histology WSIs as input, our model incorporates a novel HMT-MIL framework to extract global information for predicting both molecular markers and histology. We also design a CPLC-Graph network and a DCC loss to model both intra-omic and inter-omic interactions. Our experiments demonstrate that our model achieves superior performance over other state-of-the-art methods, serving as a potentially useful tool for digital pathology based on WSIs in the era of molecular pathology. \begin{table} \begin{tabular}{|c|c|c c c c c c|c c c|} \hline & & Ours. & CLAM & TransMIL & ResNet & DenseNet & VGG-16 & No Graph & No LC loss & No DCC \\ \cline{2-11} \multirow{4}{*}{IDH} & Acc. & **86.4** & 81.4 & 83.7 & 67.8 & 72.0 & 70.5 & 80.7 & 84.1 & 84.1 \\ & Sen. & 80.5 & 93.8 & 82.3 & 68.1 & 70.8 & 70.8 & 82.3 & 82.3 & 88.5 \\ & Spec. & 90.7 & 72.2 & 84.8 & 67.5 & 72.8 & 70.2 & 79.5 & 85.4 & 80.8 \\ & AUC & **92.0** & 91.1 & 90.7 & 72.7 & 80.3 & 79.6 & 86.1 & 90.8 & 89.1 \\ \hline \multirow{4}{*}{1p/19q} & Acc. & 81.4 & 81.8 & 80.3 & 70.1 & 72.3 & 79.2 & 76.1 & **84.1** & 75.0 \\ & Sen. & 75.0 & 48.3 & 43.3 & 71.7 & 66.7 & 43.3 & 75.0 & 61.7 & 78.3 \\ & Spec. & 83.3 & 91.7 & 91.2 & 69.6 & 74.0 & 89.7 & 76.5 & 90.7 & 74.0 \\ & AUC & **88.1** & 82.0 & 82.9 & 76.8 & 77.1 & 75.5 & 83.0 & 86.7 & 85.2 \\ \hline \multirow{4}{*}{CDKN} & Acc. & **68.6** & 67.8 & 60.2 & 58.7 & 59.1 & 58.7 & 60.2 & 58.3 & 60.2 \\ & Sen. & 63.2 & 65.8 & 55.9 & 59.9 & 57.9 & 57.2 & 47.4 & 38.2 & 46.1 \\ & Spec. & 75.9 & 70.5 & 66.1 & 57.1 & 60.7 & 60.7 & 77.7 & 85.7 & 79.5 \\ & AUC & **77.2** & 77.0 & 65.5 & 62.8 & 62.9 & 59.9 & 72.6 & 76.7 & 76.7 \\ \hline \multirow{4}{*}{NMP} & Acc. & **87.5** & 83.7 & 85.6 & 68.6 & 69.3 & 76.9 & 82.2 & 84.8 & 83.0 \\ & Sen. & 85.7 & 81.4 & 81.4 & 62.9 & 76.4 & 74.3 & 81.4 & 89.3 & 77.9 \\ & Spec. & 89.5 & 86.3 & 90.3 & 75.0 & 61.3 & 79.8 & 83.1 & 79.8 & 88.7 \\ & AUC & **94.5** & 90.7 & 92.7 & 74.0 & 74.7 & 86.1 & 86.7 & 93.4 & 91.7 \\ \hline \end{tabular} \end{table} Table 2: Performance in predicting genomic markers (IDH, 1p/19q, CDKN), histology (NMP) and ablation studies.
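For reference, the rule used above to generate the four training labels from marker and histology status (Section 2, Training labels) can be written as a small helper. This is an illustrative sketch only; the function name and boolean encoding are our choices and not part of any released code.

```python
def who2021_label(idh_mutant, codel_1p19q, cdkn_homdel, nmp):
    """Map marker/histology status to the four classes used as training labels."""
    if not idh_mutant:
        return "glioblastoma (grade 4, IDH wildtype)"
    if codel_1p19q:
        return "oligodendroglioma (IDH mutant, 1p/19q co-deleted)"
    if cdkn_homdel or nmp:
        return "astrocytoma, grade 4 (IDH mutant, non-co-deleted)"
    return "low-grade astrocytoma"
```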
2305.01678
Adams spectral sequences for non-vector-bundle Thom spectra
When $R$ is one of the spectra $\mathit{ku}$, $\mathit{ko}$, $\mathit{tmf}$, $\mathit{MTSpin}^c$, $\mathit{MTSpin}$, or $\mathit{MTString}$, there is a standard approach to computing twisted $R$-homology groups of a space $X$ with the Adams spectral sequence, by using a change-of-rings isomorphism to simplify the $E_2$-page. This approach requires the assumption that the twist comes from a vector bundle, i.e. the twist map $X\to B\mathrm{GL}_1(R)$ factors through $B\mathrm{O}$. We show this assumption is unnecessary by working with Baker-Lazarev's Adams spectral sequence of $R$-modules and computing its $E_2$-page for a large class of twists of these spectra. We then work through two example computations motivated by anomaly cancellation for supergravity theories.
Arun Debray, Matthew Yu
2023-05-02T18:00:01Z
http://arxiv.org/abs/2305.01678v1
# Adams spectral sequences for non-vector-bundle Thom spectra ###### Abstract. When \(R\) is one of the spectra \(ku\), \(ko\), \(tmf\), \(MTSpin^{c}\), \(MTSpin\), or \(MTString\), there is a standard approach to computing twisted \(R\)-homology groups of a space \(X\) with the Adams spectral sequence, by using a change-of-rings isomorphism to simplify the \(E_{2}\)-page. This approach requires the assumption that the twist comes from a vector bundle, i.e. the twist map \(X\to B\mathrm{GL}_{1}(R)\) factors through \(B\mathrm{O}\). We show this assumption is unnecessary by working with Baker-Lazarev's Adams spectral sequence of \(R\)-modules and computing its \(E_{2}\)-page for a large class of twists of these spectra. We then work through two example computations motivated by anomaly cancellation for supergravity theories. It is a pleasure to thank Andrew Baker, Ivano Basile, Jonathan Beardsley, Bob Bruner, Matilda Delgado, Sanath Devalapurkar, Dan Isaksen, Theo Johnson-Freyd, Cameron Krulewski, Miguel Montero, Natalia Pacheco-Tallaj, Luuk Stehouwer, Mayuko Yamashita, and the winter 2023 electronic Computational Homotopy Theory reading seminar for helpful discussions related to this work. Part of this project was completed while AD visited the Perimeter Institute for Theoretical Physics; research at Perimeter is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation. ## 0. Introduction There is a standard formula for computing Steenrod squares in the cohomology of the Thom space or spectrum of a vector bundle \(V\to X\): if \(U\) is the Thom class, \[\mathrm{Sq}^{n}(Ux)=\sum_{i+j=n}Uw_{i}(V)\mathrm{Sq}^{j}(x). \tag{0.1}\] The ubiquity of the Steenrod algebra in computational questions in algebraic topology means this formula has been applied to questions in topology and geometry, and recently even in physics, where it is used to run the Atiyah-Hirzebruch and Adams spectral sequences computing groups of invertible field theories. It is possible to build Thom spectra using more general data than vector bundles, and recently these Thom spectra have appeared in questions motivated by anomaly cancellation in supergravity theories [10, 11]. Motivated by these applications (which we discuss more in SS3), our goal in this paper is to understand the analogue of (0.1) for non-vector-bundle twists of commonly studied generalized cohomology theories. We found that the most direct generalization of (0.1) is true; in a sense, for the theories we study, these more general Thom spectra behave just like vector bundle Thom spectra for the purpose of computing their homotopy groups with the Adams spectral sequence. ### Statement of results Now for a little more detail: our main theorem and the language needed to state it. We use Ando-Blumberg-Gepner-Hopkins-Rezk's approach to twisted generalized cohomology theories [1, 2], which generalizes the notion of a local system. Twists of \(\mathbb{Z}\)-valued cohomology on a pointed, connected space \(X\) are specified by _local systems_ with fiber \(\mathbb{Z}\), which are equivalent data to homomorphisms \(\pi_{1}(X)\to\mathrm{Aut}(\mathbb{Z})\), or, since \(\mathbb{Z}\) is discrete, to maps \(X\to B\mathrm{Aut}(\mathbb{Z})\).
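To make this concrete: \(\mathrm{Aut}(\mathbb{Z})\cong\mathbb{Z}/2\), so \(B\mathrm{Aut}(\mathbb{Z})\simeq K(\mathbb{Z}/2,1)\) and a twist of integral cohomology on \(X\) amounts to a class \(w\in H^{1}(X;\mathbb{Z}/2)\). For example, for \(X=\mathbb{RP}^{\infty}\) and \(w\) the generator, the associated local system is the sign representation \(\mathbb{Z}_{w}\), and the twisted cohomology groups are
\[
H^{n}(\mathbb{RP}^{\infty};\mathbb{Z}_{w})\cong\begin{cases}\mathbb{Z}/2,&n\text{ odd},\\ 0,&n\text{ even},\end{cases}
\]
in contrast with the untwisted groups, which are \(\mathbb{Z}\) in degree \(0\) and \(\mathbb{Z}/2\) in positive even degrees (a standard computation, e.g. via the group cohomology of \(\mathbb{Z}/2\) with sign coefficients).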
Ando-Blumberg-Gepner-Hopkins-Rezk generalize this to \(E_{\infty}\)-ring spectra.1,2 If \(R\) is an \(E_{\infty}\)-ring spectrum, Ando-Blumberg-Gepner-Hopkins-Rezk define a notion of local system of free rank-1 \(R\)-module spectra that is classified by maps to an object called \(B\mathrm{GL}_{1}(R)\), making \(B\mathrm{GL}_{1}(R)\) the classifying space for twists of \(R\)-homology. Given a twist \(f\colon X\to B\mathrm{GL}_{1}(R)\), they then define a _Thom spectrum_ \(Mf\), and the homotopy groups of \(Mf\) are the \(f\)-twisted \(R\)-homology groups of \(X\). This construction simultaneously generalizes twisted ordinary homology, twisted \(K\)-theory, and the vector bundle twists mentioned above. Footnote 1: An _\(E_{\infty}\)-ring spectrum_ is the avatar in stable homotopy theory of a generalized cohomology theory with a commutative ring structure. Examples include ordinary cohomology, real and complex \(K\)-theory, and many cobordism theories. We are interested in twisted \(R\)-homology for several \(E_{\infty}\)-ring spectra, so our first step is to give examples of twists. Most of these examples are known, but by using a theorem of Beardsley [1, Theorem 1], one can produce them in a unified way. **Theorem**.: 1. _(Equations (_1.20_) and (_1.27_)) There is a map_ \(K(\mathbb{Z}/2,1)\times K(\mathbb{Z},3)\to B\mathrm{GL}_{1}(\text{MTSpin}^{c})\)_, meaning spin\({}^{c}\) bordism can be twisted on a space_ \(X\) _by_ \(H^{1}(X;\mathbb{Z}/2)\times H^{3}(X;\mathbb{Z})\)_. The induced maps to_ \(B\mathrm{GL}_{1}(ku)\) _and_ \(B\mathrm{GL}_{1}(KU)\) _recover the usual notion of_ \(K\)_-theory twisted by_ \(H^{1}(X;\mathbb{Z}/2)\times H^{3}(X;\mathbb{Z})\)_._ 2. _(Equations (_1.33_) and (_1.41_)) There is a map_ \(K(\mathbb{Z}/2,1)\times K(\mathbb{Z}/2,2)\to B\mathrm{GL}_{1}(\mathit{MTSpin})\)_, meaning spin bordism can be twisted on a space_ \(X\) _by_ \(H^{1}(X;\mathbb{Z}/2)\times H^{2}(X;\mathbb{Z}/2)\)_. The induced maps to_ \(B\mathrm{GL}_{1}(ko)\) _and_ \(B\mathrm{GL}_{1}(KO)\) _recover the usual notion of KO-theory twisted by_ \(H^{1}(X;\mathbb{Z}/2)\times H^{2}(X;\mathbb{Z}/2)\)_._ 3. _(Equations (_1.53_) and (_1.59_)) Let_ \(\mathit{SK}(4)\) _be a classifying space for degree-_\(4\) _supercohomology classes. Then there is a map_ \(K(\mathbb{Z}/2,1)\times\mathit{SK}(4)\to B\mathrm{GL}_{1}(\mathit{MTString})\)_, meaning string bordism can be twisted on a space_ \(X\) _by_ \(H^{1}(X;\mathbb{Z}/2)\times\mathit{SH}^{4}(X)\)_. The induced map_ (0.2) \[K(\mathbb{Z},4)\longrightarrow\mathit{SK}(4)\longrightarrow B\mathrm{GL}_{1}(\mathit{MTString})\longrightarrow B\mathrm{GL}_{1}(\mathit{tmf})\] _recovers the Ando-Blumberg-Gepner twist of_ \(\mathit{tmf}\) _(and_ \(\mathit{Tmf}\) _and_ \(\mathit{TMF}\)_) by degree-_\(4\) _cohomology classes._ _Supercohomology_ refers to a generalized cohomology theory \(\mathit{SH}\) introduced by Freed [10, SS1] and Gu-Wen [11]: \(\pi_{-2}\mathit{SH}=\mathbb{Z}/2\) and \(\pi_{0}\mathit{SH}=\mathbb{Z}\), with the unique nontrivial \(k\)-invariant, and no other nonzero homotopy groups. We explicitly define \(\mathit{SK}(4)\) in (1.49). Though twists of \(\mathit{tmf}\) by degree-\(4\) cohomology classes are relatively well-studied, this supercohomology generalization appears only to have been suggested in the literature by various authors including [12, 13, 14], and it sees more of the homotopy type of \(B\mathrm{GL}_{1}(\mathit{tmf})\). It would be interesting to study instances of this twist.
We call the twists in the above theorem _fake vector bundle twists_: when the twist is given by a vector bundle \(V\), these cohomology classes appear as characteristic classes of \(V\), but these twists exist whether or not there is a vector bundle with the prescribed characteristic classes. If \(R\) is one of the spectra mentioned in the above theorem, the Thom spectrum \(Mf\) of a fake vector bundle twist \(f\colon X\to B\mathrm{GL}_{1}(R)\) is an \(R\)-module spectrum. This grants us access to Baker-Lazarev's variant of the Adams spectral sequence [1]. **Theorem** (Baker-Lazarev [1]).: _Let \(p\) be a prime number and \(R\) be an \(E_{\infty}\)-ring spectrum such that \(\pi_{0}(R)\) surjects onto \(\mathbb{Z}/p\), so that \(H\coloneqq H\mathbb{Z}/p\) acquires the structure of an \(R\)-algebra. For \(R\)-module spectra \(M\) and \(N\), let \(N^{*}_{R}M\coloneqq\pi_{-*}\mathrm{Map}_{R}(M,N)\). Then there is an Adams-type spectral sequence with signature_ \[E_{2}^{s,t}=\mathrm{Ext}_{H^{*}_{R}H}^{s,t}(H^{*}_{R}(M),\mathbb{Z}/p) \Longrightarrow\pi_{t-s}(M)^{\wedge}_{p}, \tag{0.3}\] _which converges for all \(M\) and all \(E_{\infty}\)-ring spectra \(R\) we consider in this paper._ What Baker-Lazarev prove is more general than what we state here: we stated only the generality we need. For \(H\mathbb{Z}\), \(ko\), and \(ku\) (\(p=2\)), and \(\mathit{tmf}\), (\(p=2\) and \(p=3\)) \(H^{*}_{R}H\) is known due to work of various authors: let \(\mathcal{A}(n)\) be the subalgebra of the mod \(2\) Steenrod algebra generated by \(\mathrm{Sq}^{1},\ldots,\mathrm{Sq}^{2^{n}}\). Then, at \(p=2\), 1. \(H^{*}_{H\mathbb{Z}}H\cong\mathcal{A}(0)\), 2. \(H^{*}_{ku}H\cong\mathcal{E}(1)\coloneqq\langle\mathrm{Sq}^{1},\mathrm{Sq}^{2} \mathrm{Sq}^{1}+\mathrm{Sq}^{1}\mathrm{Sq}^{2}\rangle\), 3. \(H^{*}_{ko}H\cong\mathcal{A}(1)\), and 4. \(H^{*}_{\mathit{tmf}}H\cong\mathcal{A}(2)\). See (2.3) and the surrounding text. For \(\mathit{tmf}\) at \(p=3\), see Equation (2.16). These algebras are small enough for computations to be tractable, so if we can compute the \(H^{*}_{R}H\)-module structure on \(H^{*}_{R}(Mf)\) for \(f\) a fake vector bundle twist, we can run the Adams spectral sequence and hope to compute \(\pi_{*}(Mf)\). This is the content of our main theorem, Equation (2.28). The first step is to understand \(H^{*}_{R}(Mf)\) as a vector space. In Equation (2.15), we establish a Thom isomorphism \[H^{*}_{R}(Mf)\stackrel{{\cong}}{{\longrightarrow}}H^{*}(X;\mathbb{Z} /2)\cdot U, \tag{0.4}\] where \(U\in H^{0}_{R}(Mf)\) is the Thom class. Using this, we can state our main theorem: **Theorem** (Equation (2.28)).: _Let \(X\) be a topological space._ 1. _Given_ \(a\in H^{1}(X;\mathbb{Z}/2)\) _and_ \(c\in H^{3}(X;\mathbb{Z})\)_, let_ \(f_{a,c}:X\to B\mathrm{GL}_{1}(ku)\) _be the corresponding fake vector bundle twist._ \(H^{*}_{ku}(M^{ku}f_{a,c})\) _is a_ \(\mathcal{E}(1)\)_-module with_ \(Q_{0}\)_-and_ \(Q_{1}\)_-actions by_ \[Q_{0}(Ux) \coloneqq Uax+UQ_{0}(x)\] \[Q_{1}(Ux) \coloneqq U(c\bmod 2+a^{3})x+UQ_{1}(x).\] 2. _Given_ \(a\in H^{1}(X;\mathbb{Z}/2)\) _and_ \(b\in H^{2}(X;\mathbb{Z}/2)\)_, let_ \(f_{a,b}\colon X\to B\mathrm{GL}_{1}(ko)\) _be the corresponding fake vector bundle twist._ \(H^{*}_{ko}(M^{ko}f_{a,b})\) _is an_ \(\mathcal{A}(1)\)_-module with_ \(\mathrm{Sq}^{1}\)_-and_ \(\mathrm{Sq}^{2}\)_-actions_ \[\mathrm{Sq}^{1}(Ux) \coloneqq U(ax+\mathrm{Sq}^{1}(x))\] \[\mathrm{Sq}^{2}(Ux) \coloneqq U(bx+a\mathrm{Sq}^{1}(x)+\mathrm{Sq}^{2}(x)).\] 3. 
_Given_ \(a\in H^{1}(X;\mathbb{Z}/2)\)_, and_ \(d\in SH^{4}(X)\)_, let_ \(f_{a,d}\colon X\to B\mathrm{GL}_{1}(\text{tmf})\) _be the corresponding fake vector bundle twist._ \(H^{*}_{\text{tmf}}(M^{\text{tmf}}f_{a,d})\) _is an_ \(\mathcal{A}(2)\)_-module with_ \(\mathrm{Sq}^{1}\)_-and_ \(\mathrm{Sq}^{2}\)_-action the same as (2) above, and_ \(\mathrm{Sq}^{4}\)_-action_ \[\mathrm{Sq}^{4}(Ux)=U(\delta x+(t(d)a+\mathrm{Sq}^{1}(t(d)))\mathrm{Sq}^{1}(x) +t(d)\mathrm{Sq}^{2}(x)+a\mathrm{Sq}^{3}(x)+\mathrm{Sq}^{4}(x)).\] _Furthermore,_ \(H^{*}_{\text{tmf}}(M^{\text{tmf}}f_{a,d};\mathbb{Z}/3)\) _is an_ \(\mathcal{A}^{\text{tmf}}\)_-module with_ \(\beta\) _and_ \(\mathcal{P}^{1}\) _actions_ \[\beta(Ux) \coloneqq U\beta(x)\] \[\mathcal{P}^{1}(Ux) \coloneqq U((d\bmod 3)x+\mathcal{P}^{1}(x)).\] With this, we have the input to the Baker-Lazarev spectral sequence in Equations (2.34) and (2.37), from which many computations open up. We give three examples of applications of our techniques. 1. In SS3.1, we use Equation (2.28) to compute low-dimensional \(G\)-bordism groups for \(G=\mathrm{Spin}\times_{\{\pm 1\}}\mathrm{SU}_{8}\). These are the twisted spin bordism groups for a twist over \(B(\mathrm{SU}_{8}/\{\pm 1\})\) which is not a vector bundle twist. In [10], we discussed an application of \(\Omega^{G}_{5}\) to an anomaly cancellation question in 4-dimensional \(\mathcal{N}=8\) supergravity; using Equation (2.37), we can give a much simpler calculation of \(\Omega^{G}_{5}\) than appears in [10, Theorem 4.26]. 2. In SS3.2, we study twisted string bordism groups for a non-vector bundle twist over \(B((E_{8}\times E_{8})\rtimes\mathbb{Z}/2)\), where \(\mathbb{Z}/2\) acts on \(E_{8}\times E_{8}\) by swapping the factors. These bordism groups have applications in the study of the \(E_{8}\times E_{8}\) heterotic string; see [11] for more information. Here, we work through the 3-primary calculation, simplifying a computation in [11]. 3. In SS3.3, we reprove a result of Devalapurkar [11, Remark 2.3.16] describing \(H\mathbb{Z}/2\) as a \(ku\)-module Thom spectrum; Devalapurkar's proof uses different methods. Our theorems proceed similarly for several different families of spectra. One naturally wonders if there are more families out there. Specifically, there is a spectrum for which many but not all of the ingredients of our proofs are present. _Question 0.5_ (Equation (1.17)).: Let \(\text{tmf}_{1}(3)\) denote the connective spectrum of topological modular forms with a level structure for the congruence subgroup \(\Gamma_{1}(3)\subset\mathrm{SL}_{2}(\mathbb{Z})\)[12]. Is there a tangential structure \(\xi\colon B\to B\mathrm{O}\) such that \(\mathit{MT}\xi\) is an \(E_{\infty}\)-ring spectrum with an \(E_{\infty}\)-ring map \(\mathit{MT}\xi\to\mathit{tmf}_{1}(3)\) which is an isomorphism on low-degree homotopy groups? If such a spectrum exists, then one could use our approach to run the Baker-Lazarev Adams spectral sequence to compute twisted \(\mathit{tmf}_{1}(3)\)-homology; the needed change-of-rings formula for \(\mathit{tmf}_{1}(3)\) is due to Mathew [14, Theorem 1.2]. We would be interested in learning if such a spectrum \(\mathit{MT}\xi\) exists. **Outline**.: SS1 is about twists and Thom spectra. First, in SS1.1, we review Ando-Blumberg-Gepner-Hopkins-Rezk's theory of Thom spectra [1, 2] and discuss some constructions and lemmas we need later in the paper. 
Then, in SS1.2, we construct fake vector bundle twists for the four families of ring spectra that we study in this paper: \(\mathit{MTSO}\) and \(H\mathbb{Z}\) in SS1.2.1; \(\mathit{MTSpin}^{c}\), \(\mathit{ku}\), and \(\mathit{KU}\) in SS1.2.2; \(\mathit{MTSpin}\), \(\mathit{ko}\), and \(\mathit{KO}\) in SS1.2.3; and \(\mathit{MTString}\), \(\mathit{tmf}\), \(\mathit{Tmf}\), and \(\mathit{TMF}\) in SS1.2.4. In SS2 we study the Adams spectral sequence for the Thom spectra of these twists. We begin in SS2.1 by reviewing how the change-of-rings story simplifies Adams computations for vector bundle Thom spectra. Then, in SS2.2, we introduce Baker-Lazarev's \(R\)-module Adams spectral sequence [1]. In SS2.3 we prove Equation (2.28) computing the input to the Baker-Lazarev Adams spectral sequence for the Thom spectra of our fake vector bundle twists. We conclude in SS3 with some applications and examples of computations using the main theorem: a twisted spin bordism example in SS3.1 and an application to U-duality anomaly cancellation; a twisted string bordism example in SS3.2 motivated by anomaly cancellation in heterotic string theory; and a twisted \(\mathit{ku}\)-homology example in SS3.3 exhibiting \(H\mathbb{Z}/2\) as the \(2\)-completion of a \(\mathit{ku}\)-module Thom spectrum. ## 1. Thom spectra and twists a la Ando-Blumberg-Gepner-Hopkins-Rezk ### The Ando-Blumberg-Gepner-Hopkins-Rezk approach to Thom spectra In this subsection we introduce Ando-Blumberg-Gepner-Hopkins-Rezk's theory of Thom spectra [1, 2] and recall the key facts we need for our theorems.3 In this paper, we only need to work with \(E_{\infty}\)-ring spectra, and we will state some theorems in only the generality we need, which is less general than what Ando-Blumberg-Gepner-Hopkins-Rezk prove. Footnote 3: Here and throughout the paper, we work with the symmetric monoidal \(\infty\)-category of spectra constructed by Lurie [14, §1.4], where by “\(\infty\)-category” we always mean quasicategory. In §2, we use work of Baker-Lazarev [1], who work with a different model of spectra, the \(\mathbb{S}\)-modules of Elmendorf-Kriz-Mandell-May [2, S 2.1]. The equivalence between the \(\infty\)-category presented by the model category of \(\mathbb{S}\)-modules and Lurie’s \(\infty\)-category of spectra follows from work of Mandell-May-Schwede-Shipley [16], Schwede [17], and Mandell-May [18]. Likewise, these papers show that commutative algebras in the category of \(\mathbb{S}\)-modules correspond to \(E_{\infty}\)-rings in the \(\infty\)-category of spectra. By an \(\infty\)_-group_ we mean a grouplike \(E_{1}\)-space, which is a homotopically invariant version of topological group. By an _abelian \(\infty\)-group_\(A\) we mean a grouplike \(E_{\infty}\)-space. **Definition 1.1** (May [15, SSIII.2]).: Let \(R\) be an \(E_{\infty}\)-ring spectrum. The _group of units_ of \(R\) is the abelian \(\infty\)-group \(\mathrm{GL}_{1}(R)\) defined to be the following pullback: (1.2) The pullback (1.2) takes place in the \(\infty\)-category of abelian \(\infty\)-groups. As the three legs of the pullback diagram (1.2) are functorial in \(R\), \(\mathrm{GL}_{1}(R)\) is also functorial in \(R\). Since \(\operatorname{GL}_{1}(R)\) is an \(\infty\)-group, it has a classifying space \(B\!\operatorname{GL}_{1}(R)\); we refer to a map \(X\to B\!\operatorname{GL}_{1}(R)\) as a _twist_ of \(R\) over \(X\). 
There is a sense in which \(B\!\operatorname{GL}_{1}(R)\) carries the universal local system of \(R\)_-lines_, or free \(R\)-module spectra of rank \(1\): see [1, Corollary 2.14]. **Example 1.3**.: If \(A\) is a commutative ring and \(R=HA\), then the equivalence of abelian \(\infty\)-groups \(\pi_{0}\colon\Omega^{\infty}HA\stackrel{{\simeq}}{{\to}}A\) induces an equivalence of abelian \(\infty\)-groups \(\operatorname{GL}_{1}(R)\simeq A^{\times}\). Let \(\mathcal{M}\!od_{R}\) denote the \(\infty\)-category of \(R\)-module spectra and \(\mathcal{L}\!\mathit{ine}_{R}\) denote the \(\infty\)-category of \(R\)-lines, and let \(\pi_{\leq\infty}(X)\) denote the fundamental \(\infty\)-groupoid of a space \(X\). The identification \(|\mathcal{L}\!\mathit{ine}_{R}|\stackrel{{\simeq}}{{\to}}B \!\operatorname{GL}_{1}(R)\)[1, Corollary 2.14] allows us to reformulate the inclusion \(\mathcal{L}\!\mathit{ine}_{R}\hookrightarrow\mathcal{M}\!od_{R}\) as a functor \(M\colon\pi_{\leq\infty}(B\!\operatorname{GL}_{1}(R))\to\mathcal{M}\!od_{R}\), which one can think of as sending a point in \(B\!\operatorname{GL}_{1}(R)\) to the \(R\)-line which is the fiber of the universal local system of \(R\)-lines on \(B\!\operatorname{GL}_{1}(R)\). In the rest of this paper, we will simply write \(X\) for \(\pi_{\leq\infty}(X)\), as we will never be in a situation where this causes ambiguity. **Definition 1.4** ([1, Definition 2.20]).: Let \(R\) be an \(E_{\infty}\)-ring spectrum and \(f\colon X\to B\!\operatorname{GL}_{1}(R)\) be a twist of \(R\). The _Thom spectrum_\(M^{R}f\) of the map \(f\) is the colimit of the \(X\)-shaped diagram \[X\stackrel{{ f}}{{\longrightarrow}}B\!\operatorname{GL}_{1}(R) \longrightarrow\mathcal{M}\!od_{R}\,. \tag{1.5}\] When \(R\) is clear from context, we will write \(Mf\) for \(M^{R}f\). By construction, \(Mf\) is an \(R\)-module spectrum. If the reader is familiar with the definition of a Thom spectrum associated to a virtual vector bundle, this definition is related but more general. **Example 1.6** (Thom spectra from vector bundles).: Let \(V\to X\) be a virtual stable vector bundle of rank zero; \(V\) is classified by a map \(f_{V}\colon X\to B\!\operatorname{O}\). There is a map of abelian \(\infty\)-groups \(J\colon B\!\operatorname{O}\to B\!\operatorname{GL}_{1}(\mathbb{S})\) called the \(J\)_-homomorphism_, where \(B\!\operatorname{O}\) has the abelian \(\infty\)-group structure induced by direct sum of (rank-zero virtual) vector bundles [16]. Theorems of Lewis [13, Chapter IX] and Ando-Blumberg-Gepner-Hopkins-Rezk [1, Corollary 3.24] together imply that the Thom spectrum \(X^{V}\) in the usual sense is naturally equivalent to the Thom spectrum \(M(J\circ f_{V})\) in the Ando-Blumberg-Gepner-Hopkins-Rezk sense. **Example 1.7** (Trivial twists).: Suppose that the map \(f\colon X\to B\!\operatorname{GL}_{1}(R)\) is null-homotopic. Then by definition, the colimit of (1.5) is \(R\wedge X_{+}\); more precisely, a null-homotopy of \(f\) induces an equivalence of \(R\)-module spectra \(Mf\simeq R\wedge X_{+}\). We will need the following fact a few times. **Lemma 1.8**.: _Let \(g\colon R_{1}\to R_{2}\) be a map of \(E_{\infty}\)-ring spectra and \(f\colon X\to B\!\operatorname{GL}_{1}(R_{1})\) be a twist. Then there is an equivalence of \(R_{2}\)-module spectra_ \[M^{R_{2}}(g\circ f)\stackrel{{\simeq}}{{\longrightarrow}}M^{R_{ 1}}f\wedge_{R_{1}}R_{2}. 
\tag{1.9}\] When \(R_{1}=\mathbb{S}\), Ando-Blumberg-Gepner-Hopkins-Rezk [1, SS1.2] mention that this lemma is a straightforward consequence of a different, equivalent definition of the Thom spectrum [1, Definition 3.13]. Proof.: We will show that the diagram (1.10) is (homotopy) commutative, where just as above we identify the spaces \(B\mathrm{GL}_{1}(R_{i})\) with their fundamental \(\infty\)-groupoids. Once we know this, the lemma is immediate from the colimit definition of \(M^{R_{2}}(g\circ f)\) in Equation (1.4): replace \(M^{R_{2}}\circ g\circ f\) with \((\text{--}\wedge_{R_{1}}R_{2})\circ M^{R_{1}}\circ f\). The key obstacle in establishing commutativity of (1.10) is that \(g\colon B\mathrm{GL}_{1}(R_{1})\to B\mathrm{GL}_{1}(R_{2})\) comes from maps of spectra via (1.2), but - \(\wedge_{R_{1}}R_{2}\) has a more module-theoretic flavor. The resolution, which is the same as in the proof of [1, Proposition 2.9], is that the three other pieces of the pullback (1.2) defining \(\mathrm{GL}_{1}\), namely \(\Omega^{\infty}\), \(\pi_{0}\), and \(\pi_{0}(\text{--})^{\times}\), have module-theoretic interpretations: there are homotopy equivalences of abelian \(\infty\)-groups \(\Omega^{\infty}R\xrightarrow{\simeq}\mathrm{End}_{R}(R)\), and likewise \(\pi_{0}(R)\xrightarrow{\simeq}\pi_{0}(\mathrm{End}_{R}(R))\) and \(\pi_{0}(R)^{\times}\xrightarrow{\simeq}\pi_{0}(\mathrm{End}_{R}(R))^{\times}\). And all of these identifications are compatible with the tensor product functor \(\mathcal{M}od_{R_{1}}\to\mathcal{M}od_{R_{2}}\), thus also likewise for their classifying spaces, establishing commutativity of (1.10). The usual Thom diagonal for a Thom space \(X^{V}\) gives \(H^{*}(X^{V};\mathbb{Z}/2)\) the structure of a module over \(H^{*}(X;\mathbb{Z}/2)\). One can generalize this for \(R\)-module Thom spectra as follows. **Definition 1.11** (Thom diagonal [1, SS3.3]).: Let \(R\) be an \(E_{\infty}\)-ring spectrum and \(f\colon X\to B\mathrm{GL}_{1}(R)\) be a twist. The _Thom diagonal_ for \(Mf\) is an \(R\)-module map \[Mf\xrightarrow{\Delta^{t}}Mf\wedge R\wedge X_{+} \tag{1.12}\] defined by applying the Thom spectrum functor to the maps \(f\colon X\to B\mathrm{GL}_{1}(R)\) and \((f,0)\colon X\times X\to B\mathrm{GL}_{1}(R)\): if \(\Delta\colon X\to X\times X\) is the diagonal map, then \(f=\Delta^{*}(f,0)\), so \(\Delta\) induces the desired map \(\Delta^{t}\) of \(R\)-module Thom spectra in (1.12). See Beardsley [1, SS4.3] for a nice coalgebraic interpretation of the Thom diagonal. ### Constructing non-vector-bundle twists Let \(X\) and \(Y\) be \(E_{\infty}\)-spaces and \(f_{1}\colon X\to Y\) and \(f_{2}\colon Y\to B\mathrm{GL}_{1}(\mathbb{S})\) be \(E_{\infty}\)-maps. Ando-Blumberg-Gepner [1, Theorem 1.7] show that the \(E_{\infty}\)-structure on \(f_{2}\circ f_{1}\) induces an \(E_{\infty}\)-ring structure on \(M(f_{2}\circ f_{1})\). **Lemma 1.13**.: _Let \(R\) be an \(E_{\infty}\)-ring spectrum. The data of an \(E_{\infty}\)-ring map \(M(f_{2}\circ f_{1})\to R\) induces a map \(T_{f_{1},f_{2}}\colon Y/X\to B\mathrm{GL}_{1}(R)\)._ An \(E_{\infty}\)-ring map of this kind is often called an \(M(f_{2}\circ f_{1})\)_-orientation_ of \(R\). Proof.: Ando-Blumberg-Gepner-Hopkins-Rezk [1, Theorem 3.19] show that the \(M(f_{2}\circ f_{1})\)-orientation of \(R\) is equivalent to a null-homotopy of the map \[X\xrightarrow{f_{1}}Y\xrightarrow{f_{2}}B\mathrm{GL}_{1}(\mathbb{S}) \xrightarrow{1_{R}}B\mathrm{GL}_{1}(R), \tag{1.14}\] where \(1_{R}\) denotes the unit map for \(R\). 
That is, we have a map \(g\colon Y\to B\mathrm{GL}_{1}(R)\) and a nullhomotopy of \(f_{1}\circ g\colon X\to Y\to B\mathrm{GL}_{1}(R)\), which is precisely the data needed to descend \(g\) to the cofiber of \(f_{1}\), giving us a map \(Y/X\to B\mathrm{GL}_{1}(R)\) as desired. A theorem of Beardsley [1] uses a special case of Equation (1.13) to obtain many commonly-studied twists of various cohomology theories. We will usually apply it for maps to \(B\mathrm{O}\) and implicitly compose with the \(E_{\infty}\)-map \(J\colon B\mathrm{O}\to B\mathrm{GL}_{1}(\mathbb{S})\), like in Equation (1.6). **Theorem 1.15** (Beardsley [1, Theorem 1]).: _For \(R=M(f_{2}\circ f_{1})\), there is a natural equivalence \(M^{R}T_{f_{1},f_{2}}\stackrel{{\simeq}}{{\to}}M^{\mathbb{S}}f_{2}\)._ In this paper we consider twisted \(R\)-(co)homology for several different ring spectra \(R\). These spectra are organized into several families: in each family there is a Thom spectrum \(Mf\), another ring spectrum \(R\), and a map of ring spectra \(Mf\to R\) which is an isomorphism on homotopy groups in low degrees. In the context of a specific family, we will refer to \(Mf\) as the _big sibling_ and \(R\) as the _little sibling_. The four families we consider in this paper are \((\mathit{MTSO},H\mathbb{Z})\), \((\mathit{MTSpin},\mathit{ko})\), \((\mathit{MTSpin}^{c},\mathit{ku})\), and \((\mathit{MTString},\mathit{tmf})\):4 Footnote 4: In the homotopy theory literature, it is common to refer to bordism spectra \(\mathit{MSO}\), \(\mathit{MSpin}\), etc., corresponding to the bordism groups of manifolds with orientations, resp. spin structures, on the stable normal bundle. In the mathematical physics literature, one sees \(\mathit{MTSO}\), \(\mathit{MTSpin}\), etc., corresponding to the same structures on the stable _tangent_ bundle. If \(\xi\colon B\to B\mathrm{O}\) is a tangential structure such that the map \(\xi\) is a map of abelian \(\infty\)-groups, as is the case for O, SO, \(\mathrm{Spin}^{c}\), Spin, and String, there is a canonical equivalence \(\mathit{M\xi}\stackrel{{\simeq}}{{\to}}\mathit{MT\xi}\). For other tangential structures, this is not necessarily true: in particular, \(\mathit{MPin}^{\pm}\simeq\mathit{MTPin}^{\mp}\). * The map \(\Omega^{\mathrm{SO}}_{0}\stackrel{{\cong}}{{\to}}\mathbb{Z}\) counting the number of points refines to a map of \(E_{\infty}\)-ring spectra \(\mathit{MTSO}\to H\mathbb{Z}\). Work of Thom [10, Theoreme IV.13] shows this map is an isomorphism on homotopy groups in degrees \(3\) and below. * The Atiyah-Bott-Shapiro map \(\mathit{MTSpin}^{c}\to\mathit{ku}\)[1] was shown to be a map of \(E_{\infty}\)-ring spectra by Joachim [1], and Anderson-Brown-Peterson [1] showed this map is an isomorphism on homotopy groups in degrees \(3\) and below. * Joachim [1] also showed the real Atiyah-Bott-Shapiro map \(\mathit{MTSpin}\to\mathit{ko}\)[1] is a map of \(E_{\infty}\)-ring spectra, and Milnor [11] showed this map is an isomorphism on homotopy groups in degrees \(7\) and below. * Ando-Hopkins-Rezk [1] produced a map of \(E_{\infty}\)-ring spectra \(\sigma\colon\mathit{MTString}\to\mathit{tmf}\), which Hill [11, Theorem 2.1] shows is an isomorphism on homotopy groups in degrees \(15\) and below. 
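As a quick illustration of what these splittings give in practice: 2-locally, (1.16b) implies \(\pi_{n}(\mathit{MTSpin}^{c})\cong\pi_{n}(ku)\oplus\pi_{n-4}(ku)\oplus\cdots\), so for \(n\leq 3\) only the first summand contributes, recovering at the prime 2 the statement that the Atiyah-Bott-Shapiro map is an isomorphism in this range, while in degree 4 the first two summands contribute
\[
\pi_{4}(\mathit{MTSpin}^{c})_{(2)}\cong\pi_{4}(ku)_{(2)}\oplus\pi_{0}(ku)_{(2)}\cong\mathbb{Z}_{(2)}^{2},
\]
consistent with the classical computation \(\Omega^{\mathrm{Spin}^{c}}_{4}\cong\mathbb{Z}^{2}\).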
For all of these cases but \(\mathit{MTString}\), one can \(2\)-locally decompose the big sibling into a sum of cyclic modules over the little sibling: Wall [11] produced a \(2\)-local equivalence \[\mathit{MTSO}_{(2)}\stackrel{{\simeq}}{{\longrightarrow}}H \mathbb{Z}_{(2)}\vee\Sigma^{4}H\mathbb{Z}_{(2)}\vee\Sigma^{5}H\mathbb{Z}/2\vee\cdots, \tag{1.16a}\] and Anderson-Brown-Peterson [1] produced \(2\)-local equivalences \[\mathit{MTSpin}_{(2)}\stackrel{{\simeq}}{{\longrightarrow}}ko _{(2)}\vee\Sigma^{8}ko_{(2)}\vee\Sigma^{10}(\mathit{ko}\wedge J)_{(2)}\vee\ldots \tag{1.16c}\] \[\mathit{MTSpin}^{c}_{(2)}\stackrel{{\simeq}}{{ \longrightarrow}}ku_{(2)}\vee\Sigma^{4}ku_{(2)}\vee\Sigma^{8}ku_{(2)}\vee \Sigma^{8}ku_{(2)}\vee\cdots, \tag{1.16b}\] where \(J\) is a certain spectrum such that \(\Sigma^{2}\mathit{ko}\wedge J\) is the Postnikov \(2\)-connected cover of \(\mathit{ko}\).5 It is not known whether \(\mathit{tmf}\) is a summand of \(\mathit{MTString}\) (see, e.g., [1, 1, 10, 11, 12]) so we do not know if there is a splitting like in the three other cases. Footnote 5: The spin\({}^{c}\) decomposition is implicit in [1]; see Bahri-Gilkey [1] for an explicit reference. _Remark 1.17_ (String bordism with level structures?).: Associated to congruence subgroups \(\Gamma\subset\mathrm{SL}_{2}(\mathbb{Z})\) there are "topological modular forms with level structure:" Hill-Lawson [13] construct \(E_{\infty}\)-ring spectra \(\mathit{TMF}(\Gamma)\), \(\mathit{Tmf}(\Gamma)\), and \(\mathit{tmf}(\Gamma)\) with maps between them like for vanilla \(\mathit{tmf}\). The case \(\Gamma=\Gamma_{1}(3)\) is especially interesting, as all but one of the several ingredients we need for the proof of our main theorem are known to be true for \(\mathit{tmf}(\Gamma_{1}(3))\) (usually written \(\mathit{tmf}_{1}(3)\)): by work of Mathew [16, Theorem 1.2], there is a change-of-rings theorem allowing one to simplify 2-primary Adams spectral sequence computations to an easier subalgebra (see SS2.1), but it is not yet known how to construct an \(E_{\infty}\)-ring Thom spectrum \(M\) with an orientation \(M\to\mathit{tmf}_{1}(3)\) that is an isomorphism on low-degree homotopy groups.6 The existence of such a spectrum \(M\) would lead to generalizations of our main theorems to twists of \(\mathit{tmf}_{1}(3)\)-homology,7 and we would be interested in learning whether this is possible. Footnote 6: See Wilson [11] for results on closely related questions. Footnote 7: A theorem of Meier [16, Theorem 1.4] suggests this may also apply to twists of \(\mathit{tmf}_{1}(n)\)-homology for other values of \(n\). #### 1.2.1. Twists of MTSO and \(H\mathbb{Z}\) We walk through the implications of Equations (1.13) and (1.15) in a relatively simple setting, addressing * what cohomology classes define twists of _MTSO_ and \(H\mathbb{Z}\) by way of Equation (1.13), * what the corresponding twisted bordism and cohomology groups are, and * what Equation (1.15) implies the Thom spectrum of the universal twist is. Letting \(f_{2}\colon X\to B\mathrm{O}\) be the identity and \(f_{1}\colon X\to Y\) be \(B\mathrm{SO}\to B\mathrm{O}\), we obtain twists of _MTSO_-oriented ring spectra, notably _MTSO_ and \(H\mathbb{Z}\), by maps to \(B\mathrm{SO}/B\mathrm{O}\simeq K(\mathbb{Z}/2,1)\). The map \(B\mathrm{O}\to B\mathrm{O}/B\mathrm{SO}\) admits a section defined by regarding a map to \(K(\mathbb{Z}/2,1)\) as a real line bundle, so these twists are given by real line bundles in the sense of Equation (1.6). 
Specifically, a class \(a\in H^{1}(X;\mathbb{Z}/2)\) defines a twist \(f_{a}\colon X\to B\mathrm{GL}_{1}(\textit{MTSO})\) by interpreting \(a\) as a map \(X\to B\mathrm{O}/B\mathrm{SO}\) and invoking Equation (1.13), and \(a\) defines a second twist \(g_{a}\) by choosing a real line bundle \(L_{a}\) with \(w_{1}(L_{a})=a\) (a contractible choice) and making the vector bundle twist as in Equation (1.6), but \(f_{a}\simeq g_{a}\) and so \(M^{\textit{MTSO}}f_{a}\simeq\textit{MTSO}\wedge X^{L_{a}-1}\). Thus in a sense this example is redundant, as the main theorems of this paper are long known for vector bundle twists, but we include this example because we found it a useful parallel to to other families we study. Let \(\Omega_{*}^{\mathrm{SO}}(X,a)\coloneqq\pi_{*}(M^{\textit{MTSO}}f_{a})\). Using the vector bundle interpretation of this twist, \(\Omega_{*}^{\mathrm{SO}}(X,a)\) has an interpretation as twisted oriented bordism groups, specifically the bordism groups of manifolds \(M\) with a map \(h\colon M\to X\) and an orientation on \(TM\oplus h^{*}L_{a}\). Alternatively, one could think of this as the bordism groups of manifolds \(M\) with a map \(h\colon M\to X\) and a trivialization of the class \(w_{1}(M)-h^{*}a\); this perspective will be useful in later examples of non-vector-bundle twists. Equation (1.15) then implies the Thom spectrum of \[K(\mathbb{Z}/2,1)\stackrel{{\sim}}{{\to}}B\mathrm{O}_{1} \stackrel{{\sigma}}{{\longrightarrow}}B\mathrm{O}\longrightarrow B \mathrm{GL}_{1}(\mathbb{S})\longrightarrow B\mathrm{GL}_{1}(\textit{MTSO}), \tag{1.18}\] is equivalent to _MTO_. Equation (1.8) implies the Thom spectrum of (1.18) is \(\textit{MTSO}\wedge(B\mathrm{O}_{1})^{\sigma-1}\), so we have reproved a theorem of Atiyah: \(\textit{MTSO}\wedge(B\mathrm{O}_{1})^{\sigma-1}\simeq\textit{MTO}\)[16, Proposition 4.1]. The twist of \(H\mathbb{Z}\) defined by \(a\) recovers the usual notion of integral cohomology twisted by a class in \(H^{1}(X;\mathbb{Z}/2)\). #### 1.2.2. Twists of MTSpin\({}^{c}\), \(ku\), and \(Ku\) Our next family of examples includes \(\mathrm{spin}^{c}\) bordism and complex \(K\)-theory. In Equation (1.20) we use Equation (1.13) to construct a map \(K(\mathbb{Z}/2,1)\times K(\mathbb{Z},3)\to B\mathrm{GL}_{1}(\textit{MTSpin}^{c})\), defining twists of \(\textit{MTSpin}^{c}\), \(ku\), and \(\textit{KU}\) by classes in \(H^{1}(\neg;\mathbb{Z}/2)\) and \(H^{3}(\neg;\mathbb{Z})\). These recover the usual twists of \(K\)-theory by these cohomology classes studied by [10, 11, 12, 13] (Equation (1.27)), and in Equation (1.25) we use work of Hebestreit-Joachim [11, Proposition 3.3.6] to describe the homotopy groups of the corresponding _MTSpin\({}^{c}\)_-module Thom spectra as bordism groups of manifolds with certain kinds of twisted spin\({}^{c}\) structures. The Atiyah-Bott-Shapiro orientation [1, 1] defines ring homomorphisms \(\mathit{Td}\colon\mathit{MTSpin}^{c}\to\mathit{ku}\to KU\), so by Equation (1.13) there are maps \[B\mathrm{O}/B\mathrm{Spin}^{c}\longrightarrow B\mathrm{GL}_{1}(\mathit{MTSpin} ^{c})\stackrel{{\mathit{Td}}}{{\longrightarrow}}B\mathrm{GL}_{1}( \mathit{ku})\longrightarrow B\mathrm{GL}_{1}(\mathit{KU}), \tag{1.19}\] i.e. twists of \(\mathit{MTSpin}^{c}\), \(\mathit{ku}\), and \(\mathit{KU}\) by maps to \(\mathrm{BO}/B\mathrm{Spin}^{c}\).8 Footnote 8: The map (1.19) is nowhere near a homotopy equivalence; for example, it misses the “higher twists” of \(\mathit{KU}\) studied in, e.g., [1, 13]. 
**Proposition 1.20**.: _The map \(K(\mathbb{Z}/2,1)\to B\mathrm{O}\) defined by the tautological line bundle induces a homotopy equivalence of spaces_ \[B\mathrm{O}/B\mathrm{Spin}^{c}\stackrel{{\simeq}}{{ \longrightarrow}}K(\mathbb{Z}/2,1)\times K(\mathbb{Z},3), \tag{1.21}\] _implying that \(\mathit{MTSpin}^{c}\), \(\mathit{ku}\), and \(\mathit{KU}\) can be twisted over a space \(X\) by classes \(a\in H^{1}(X;\mathbb{Z}/2)\) and \(c\in H^{3}(X;\mathbb{Z})\)._ Proof.: We want to apply the third isomorphism theorem to the sequence of maps of abelian \(\infty\)-groups \(B\mathrm{Spin}^{c}\to B\mathrm{SO}\to B\mathrm{O}\) to obtain a short exact sequence \[1\xrightarrow{}B\mathrm{SO}/B\mathrm{Spin}^{c}\xrightarrow{}B\mathrm{O}/B \mathrm{Spin}^{c}\xrightarrow{}B\mathrm{O}/B\mathrm{SO}\xrightarrow{}1. \tag{1.22}\] It is not immediate how to do this in the \(\infty\)-categorical setting, but we can do it. Instead of a short exact sequence, we obtain a cofiber sequence, and in a stable \(\infty\)-category, the third isomorphism theorem for cofiber sequences is a consequence of the octahedral axiom. The \(\infty\)-category of abelian \(\infty\)-groups is not stable, as it is equivalent to the \(\infty\)-category of connective spectra, but this \(\infty\)-category embeds in the stable \(\infty\)-category \(\mathcal{S}p\) of all spectra, allowing us to make use of stability in certain settings: specifically, cofiber sequences \(A\to B\to C\) of abelian \(\infty\)-groups for which the induced map \(\pi_{0}(B)\to\pi_{0}(C)\) is surjective; these cofiber diagrams map to cofiber diagrams in \(\mathcal{S}p\), so we may invoke the octahedral axiom in \(\mathcal{S}p\). All cofiber sequences of abelian \(\infty\)-groups we discuss in this paper satisfy this \(\pi_{0}\)-surjectivity property, so we will not discuss it further. In particular, we obtain the cofiber sequence (1.22). Throughout this paper, whenever we write a short exact sequence of abelian \(\infty\)-groups, we mean a cofiber sequence. A similar argument allows one to deduce that fiber and cofiber sequences coincide for abelian \(\infty\)-groups from the analogous fact for stable \(\infty\)-categories, assuming the same \(\pi_{0}\)-surjectivity hypothesis. Since \(B\mathrm{Spin}^{c}\) is the fiber of \(\beta w_{2}\colon B\mathrm{SO}\to K(\mathbb{Z},3)\), which is a map of abelian \(\infty\)-groups since \(\beta w_{2}\) satisfies the Whitney sum formula for oriented vector bundles, the cofiber \(B\mathrm{SO}/B\mathrm{Spin}^{c}\) is equivalent, as abelian \(\infty\)-groups, to \(K(\mathbb{Z},3)\). Here, \(\beta\colon H^{k}(\neg;\mathbb{Z}/2)\to H^{k+1}(\neg;\mathbb{Z})\) is the Bockstein. Likewise, \(B\mathrm{SO}\) is the fiber of \(w_{1}\colon B\mathrm{O}\to K(\mathbb{Z}/2,1)\), which is a map of abelian \(\infty\)-groups, so \(B\mathrm{O}/B\mathrm{SO}\simeq K(\mathbb{Z}/2,1)\). The quotient \(B\mathrm{O}\to B\mathrm{O}/B\mathrm{SO}\simeq K(\mathbb{Z}/2,1)\) admits a section given by the tautological real line bundle \(K(\mathbb{Z}/2,1)\simeq B\mathrm{O}_{1}\to B\mathrm{O}\); composing \(K(\mathbb{Z}/2,1)\to B\mathrm{O}\) with the quotient \(B\mathrm{O}\to B\mathrm{O}/B\mathrm{Spin}^{c}\) we obtain a section of (1.22). That section splits (1.22), which implies the proposition statement. 
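As a simple illustration, take \(X=S^{3}\): the proposition gives twists of \(\mathit{MTSpin}^{c}\), \(ku\), and \(\mathit{KU}\) over \(S^{3}\) indexed by
\[
H^{1}(S^{3};\mathbb{Z}/2)\times H^{3}(S^{3};\mathbb{Z})\cong 0\times\mathbb{Z},
\]
and none of the nonzero twists can come from a vector bundle, since for any vector bundle \(V\to S^{3}\) the class \(\beta(w_{2}(V))\) is annihilated by \(2\), while \(H^{3}(S^{3};\mathbb{Z})\) is torsion-free. (For \(\mathit{KU}\), these are the familiar twists of complex \(K\)-theory by a degree-3 class on \(S^{3}\).)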
**Definition 1.23**.: Given classes \(a\in H^{1}(X;\mathbb{Z}/2)\) and \(c\in H^{3}(X;\mathbb{Z})\), we call the twist \(f_{a,c}\colon X\to B\mathrm{GL}_{1}(\mathit{MTSpin}^{c})\) that Equation (1.20) associates to \(a\) and \(c\) the _fake vector bundle twist_ for \(a\) and \(c\), and likewise for the induced twists of \(\mathit{ku}\) and \(\mathit{KU}\). The twist \(f_{a,c}\) arises from a vector bundle twist if there is a vector bundle \(V\to X\) such that \(w_{1}(V)=a\) and \(\beta(w_{2}(V))=c\), but there are choices of \(X\), \(a\), and \(c\) for which no such vector bundle exists, e.g. if \(c\) is not \(2\)-torsion. Now that we have defined these twists, we get to the business of interpreting them. **Definition 1.24**.: Given \(X\), \(a\), and \(c\) as above, let \(\Omega_{*}^{\operatorname{Spin}^{c}}(X,a,c)\) denote the groups of bordism classes of manifolds \(M\) with a map \(f\colon M\to X\) and trivializations of \(w_{1}(M)-f^{*}(a)\) and \(\beta(w_{2}(M))-f^{*}(c)\). This notion of twisted \(\operatorname{spin}^{c}\) bordism, in the special case \(a=0\), was first studied by Douglas [11, SS5], and implicitly appears in Freed-Witten's work [12] on anomaly cancellation. **Lemma 1.25** (Hebestreit-Joachim [15, Corollary 3.3.8]).: _There is a natural isomorphism \(\pi_{*}(M^{\text{MTSpin}^{c}}f_{a,c})\xrightarrow{\cong}\Omega_{*}^{ \operatorname{Spin}^{c}}(X,a,c)\)._ _Remark 1.26_.: Hebestreit-Joachim [15] use a different framework for twists based on May-Sigurdsson's parametrized homotopy theory [13]; Ando-Blumberg-Gepner [1, Appendix B] prove a comparison theorem that allows us to pass between May-Sigurdsson's framework and Ando-Blumberg-Gepner-Hopkins-Rezk's. Additionally, Hebestreit-Joachim work with twisted spin bordism and _KO_-theory, but for the complex case the arguments are essentially the same. **Lemma 1.27**.: _With \(X\), \(a\), and \(c\) as above, the homotopy groups of \(M^{\text{KU}}f_{a,c}\) are naturally isomorphic to the twisted \(K\)-theory groups of [1, 13, 14, 1]._ This is because the methods in [1] are nearly the same as ours, allowing for a direct comparison.9 Footnote 9: Alternatively, one could use a uniqueness result of Antieau-Gepner-Gómez [1, Theorem 1.1] that \([K(\mathbb{Z},3),B\text{GL}_{1}(\text{KU})]\cong\mathbb{Z}\) to reduce to checking whether these notions of twisted \(K\)-theory agree on a single example. **Example 1.28**.: Equation (1.15) computes a few example of \(\text{MTSpin}^{c}\)-module Thom spectra for us. 1. Letting \(X=Y=B\text{O}/B\text{Spin}^{c}\) and \(f_{1}=\text{id}\), Equation (1.15) implies that the Thom spectrum of the universal twist \(B\text{O}/B\text{Spin}^{c}\to B\text{GL}_{1}(\text{MTSpin}^{c})\) is \(\text{MTO}\). From a bordism point of view, this is the fact that since \(a\) and \(c\) pull back from \(K(\mathbb{Z}/2,1)\times K(\mathbb{Z},3)\), they can be arbitrary classes, so the required trivializations of \(w_{1}(M)-f^{*}(a)\) and \(\beta(w_{2}(M))-f^{*}(c)\) are uniquely specified by \(a=w_{1}(M)\) and \(c=\beta(w_{2}(M))\), so this notion of twisted \(\operatorname{spin}^{c}\) structure is no structure at all. 2. Let \(Y\) be as in the previous example and let \(f_{1}\colon X\to Y\) be the map \(K(\mathbb{Z},3)\simeq B\text{SO}/B\text{Spin}^{c}\to B\text{O}/B\text{Spin}^{c}\). Equation (1.15) says the Thom spectrum of (1.29) \[K(\mathbb{Z},3)\longrightarrow B\text{O}/B\text{Spin}^{c}\longrightarrow B \text{GL}_{1}(\text{MTSpin}^{c})\] is equivalent to \(\text{MTSO}\). 
We stress that this twist by \(K(\mathbb{Z},3)\) does not come from a vector bundle because all vector bundle twists of \(\text{MTSpin}^{c}\) are torsion and of the form \(\beta(w_{2}(M))\), but the universal twist over \(K(\mathbb{Z},3)\) is not. **Lemma 1.30**.: _The equivalence of spaces \(B\text{O}/B\text{Spin}^{c}\simeq K(\mathbb{Z}/2,1)\times K(\mathbb{Z},3)\) from (1.21) is not an equivalence of \(\infty\)-groups._ Proof.: Suppose that this is an equivalence of \(\infty\)-groups. Then the inclusion \(K(\mathbb{Z}/2,1)\to K(\mathbb{Z}/2,1)\times K(\mathbb{Z},3)\to B\mathrm{O}/B \mathrm{Spin}^{c}\) is a map of \(\infty\)-groups, so the composition \[\varphi\colon K(\mathbb{Z}/2,1)\longrightarrow K(\mathbb{Z}/2,1)\times K( \mathbb{Z},3)\longrightarrow B\mathrm{O}/B\mathrm{Spin}^{c}\longrightarrow B \mathrm{GL}_{1}(\textit{MTSpin}^{c}) \tag{1.31}\] is a map of \(\infty\)-groups. By work of Ando-Blumberg-Gepner [1, Theorem 1.7], this implies the Thom spectrum \(M\varphi\) is an \(E_{1}\)-ring spectrum. We will explicitly identify \(M\varphi\) and show this is not the case. We saw above that the map \(K(\mathbb{Z}/2,1)\to B\mathrm{O}/B\mathrm{Spin}^{c}\) factors through the map \(K(\mathbb{Z}/2,1)\to B\mathrm{O}\) defined by the tautological line bundle \(\sigma\to B\mathrm{O}_{1}\simeq K(\mathbb{Z}/2,1)\), meaning that the twist (1.31) is the vector bundle twist of \(\textit{MTSpin}^{c}\) for the tautological line bundle \(\sigma\to B\mathrm{O}_{1}\). Applying Equation (1.8) with \(R_{1}=\mathbb{S}\) and \(R_{2}=\textit{MTSpin}^{c}\), we conclude \(M\varphi\simeq\textit{MTSpin}^{c}\wedge(B\mathrm{O}_{1})^{\sigma-1}\). Bahri-Gilkey [1, 1] identify this spectrum with \(\textit{MTPin}^{c}\), which is known to not be an \(E_{1}\)-ring spectrum: for example, a \(E_{1}\)-ring structure induces a graded ring structure on homotopy groups, making \(\pi_{k}(\textit{MTPin}^{c})\) into a \(\pi_{0}(\textit{MTPin}^{c})\)-module for all \(k\), but \(\pi_{0}\textit{MTPin}^{c}\cong\mathbb{Z}/2\) and \(\pi_{2}(\textit{MTPin}^{c})\cong\mathbb{Z}/4\)[1, Theorem 2]. #### 1.2.3. Twists of \(\textit{MTSpin}\), \(ko\), and \(\mathit{KO}\) The real analogue of SS1.2.2 is very similar; we summarize the story here, highlighting the differences. Again there is an Atiyah-Bott-Shapiro ring spectrum map \(\textit{MTSpin}\stackrel{{\widehat{A}}}{{\to}}ko\to\mathit{KO}\)[1, 1, 1], allowing us to use Equation (1.13) to produce a sequence of maps \[B\mathrm{O}/B\mathrm{Spin}\longrightarrow B\mathrm{GL}_{1}(\textit{MTSpin}) \stackrel{{\widehat{A}}}{{\longrightarrow}}B\mathrm{GL}_{1}(ko) \longrightarrow B\mathrm{GL}_{1}(\mathit{KO}). \tag{1.32}\] Freed-Hopkins [1, SS10] use the \(\infty\)-group \(B\mathrm{O}/B\mathrm{Spin}\) to study vector bundle twists of spin bordism; they call it \(\mathbf{P}\). 
**Proposition 1.33**.: _The map \(K(\mathbb{Z}/2,1)\to B\mathrm{O}\) defined by the tautological line bundle induces a homotopy equivalence of spaces_ \[B\mathrm{O}/B\mathrm{Spin}\stackrel{{\simeq}}{{\longrightarrow} }K(\mathbb{Z}/2,1)\times K(\mathbb{Z}/2,2), \tag{1.34}\] _implying \(\textit{MTSpin}\), \(ko\), and \(\mathit{KO}\) can be twisted over a space \(X\) by classes \(a\in H^{1}(X;\mathbb{Z}/2)\) and \(b\in H^{2}(X;\mathbb{Z}/2)\)._ The proof is nearly the same as the proof of Equation (1.20): fit \(B\mathrm{O}/B\mathrm{Spin}\) into a split cofiber sequence with \(B\mathrm{SO}/B\mathrm{Spin}\simeq K(\mathbb{Z}/2,2)\) (because \(B\mathrm{Spin}\to B\mathrm{O}\) is the fiber of \(w_{2}\colon B\mathrm{SO}\to K(\mathbb{Z}/2,2)\)) and \(B\mathrm{O}/B\mathrm{SO}\simeq K(\mathbb{Z}/2,1)\). **Definition 1.35**.: We call the twist \(f_{a,b}\colon X\to B\mathrm{GL}_{1}(\textit{MTSpin})\) associated to \(a\) and \(b\) the _fake vector bundle twist_ for \(a\) and \(b\), and likewise for the induced twists of \(ko\) and \(\mathit{KO}\). _Remark 1.36_.: The space of homotopy self-equivalences of \(K(\mathbb{Z}/2,1)\times K(\mathbb{Z}/2,2)\) is not connected: for example, if \(a\) denotes the tautological class in \(H^{1}(K(\mathbb{Z}/2,1);\mathbb{Z}/2)\) and \(b\) is the tautological class in \(H^{2}(K(\mathbb{Z}/2,2);\mathbb{Z}/2)\), the homotopy class of maps \(\Phi\colon K(\mathbb{Z}/2,1)\times K(\mathbb{Z}/2,2)\to K(\mathbb{Z}/2,1) \times K(\mathbb{Z}/2,2)\) defined by the classes \((a,a^{2}+b)\) is not the identity and is invertible. The choice of identification we made in (1.34) matters: if one uses a different identification, one obtains a different notion of fake vector bundle twist and a different formula in Equation (2.20) to make Equation (2.28) true. **Lemma 1.37**.: (1.34) _is not an equivalence of \(\infty\)-groups._ One can prove this lemma in the same way as Equation (1.30), by pulling back along the section \(K(\mathbb{Z}/2,1)\to B\mathrm{O}/B\mathrm{Spin}\) and observing that the Thom spectrum \(\mathit{MTSpin}\wedge(\mathrm{BO}_{1})^{\sigma-1}\) is not a ring spectrum in much the same way:10 using the equivalence \(\mathit{MTSpin}\wedge(\mathrm{BO}_{1})^{\sigma-1}\simeq\mathit{MTPin}^{-}\)[11, SS7] and the groups \(\pi_{0}(\mathit{MTPin}^{-})\cong\mathbb{Z}/2\) and \(\pi_{2}(\mathit{MTPin}^{-})\cong\mathbb{Z}/8\)[1, 1] to show \(\mathit{MTPin}^{-}\) is not a ring spectrum. There is also another nice proof, which we give below. Footnote 10: This is the first place where the choice of identification (1.34) has explicit consequences, as promised in Equation (1.36): if we compose with the identification of \(K(\mathbb{Z}/2,1)\times K(\mathbb{Z}/2,2)\) given by the classes\((a,a^{2}+b)\) described in that remark, we would instead obtain \(\mathit{MTSpin}\wedge(\mathrm{BO}_{1})^{3\sigma-3}\). This is not a ring spectrum either, as it can be identified with \(\mathit{MTPin}^{+}\)[11, §8], and \(\pi_{0}(\mathit{MTPin}^{+})\cong\mathbb{Z}/2\) and \(\pi_{4}(\mathit{MTPin}^{+})\cong\mathbb{Z}/16\)[10]. Proof.: If \(X\) is a space and \(Y\) is an \(\infty\)-group, the set \([X,Y]\) has a natural group structure. Therefore it suffices to find a space such that \([X,\mathrm{BO}/B\mathrm{Spin}]\) and \([X,K(\mathbb{Z}/2,1)\times K(\mathbb{Z}/2,2)]\) are non-isomorphic groups. 
To calculate the addition in \([\!-,\mathrm{BO}/B\mathrm{Spin}]\), we use the fact that if two maps \(f,g\colon X\to\mathrm{O}/B\mathrm{Spin}\) factor through \(B\mathrm{O}\), meaning they are represented by rank-zero virtual vector bundles \(V_{f},V_{g}\to X\), then \(f+g\) is the image of \(V_{f}\oplus V_{g}\) under \(B\mathrm{O}\to B\mathrm{O}/B\mathrm{Spin}\). This implies that for classes in the image of that quotient map, if we use (1.34) to identify two classes \(\phi_{1},\phi_{2}\in[X,B\mathrm{O}/B\mathrm{Spin}]\) with pairs \(\phi_{i}=(a_{i}\in H^{1}(X;\mathbb{Z}/2),b_{i}\in H^{2}(X;\mathbb{Z}/2))\), then addition follows the Whitney sum formula: \[(a_{1},b_{1})\oplus(a_{2},b_{2})=(a_{1}+a_{2},b_{1}+b_{2}+a_{1}a_{2}). \tag{1.38}\] This is different from the componentwise addition on \(K(\mathbb{Z}/2,1)\times K(\mathbb{Z}/2,2)\): for example, \([B\mathbb{Z}/2,K(\mathbb{Z}/2,1)\times K(\mathbb{Z}/2,2)]\cong\mathbb{Z}/2 \oplus\mathbb{Z}/2\), but the map \([B\mathbb{Z}/2,B\mathrm{O}]\to[B\mathbb{Z}/2,B\mathrm{O}/B\mathrm{Spin}]\) is surjective, so using (1.38), one can show that \([B\mathbb{Z}/2,B\mathrm{O}/B\mathrm{Spin}]\cong\mathbb{Z}/4\). **Definition 1.39**.: Given \(X\), \(a\), and \(b\) as above, let \(\Omega_{*}^{\mathrm{Spin}}(X,a,b)\) denote the groups of bordism classes of manifolds \(M\) with a map \(f\colon M\to X\) and trivializations of \(w_{1}(M)-f^{*}(a)\) and \(w_{2}(M)-f^{*}(b)\). B.L. Wang [10, Definition 8.2] first studied these twists of spin bordism in the case \(a=0\). **Lemma 1.40** (Hebestreit-Joachim [15, Corollary 3.3.8]).: _There is a natural isomorphism \(\pi_{*}(\mathit{M}^{\mathit{MTSpin}}f_{a,b})\stackrel{{\cong}}{{ \rightarrow}}\Omega_{*}^{\mathrm{Spin}}(X,a,b)\)._ **Lemma 1.41**.: _With \(X\), \(a\), and \(b\) as above, the homotopy groups of \(\mathit{M}^{\mathit{KO}}f_{a,b}\) are naturally isomorphic to the twisted \(\mathit{KO}\)-theory groups of [10, 15]._ One can show this by appealing to Antieau-Gepner-Gomez' calculation [1, Theorem 1.1] that there is a unique nontrivial twist of \(\mathit{KO}\) over \(K(\mathbb{Z}/2,2)\). **Example 1.42**.: Equation (1.15) implies the Thom spectrum of the universal twist of \(\mathit{MTSpin}\) over \(B\mathrm{O}/B\mathrm{Spin}\) is \(\mathit{MTO}\), and of the universal twist over \(K(\mathbb{Z}/2,2)\simeq B\mathrm{SO}/B\mathrm{Spin}\) is \(\mathit{MTSO}\). The latter equivalence was first shown by Beardsley [1, SS3]. #### 1.2.4. Twists of MTString, \(\mathit{tmf}\), \(\mathit{Tmf}\), and \(\mathit{TMF}\) The final family we consider in this paper is string bordism and topological modular forms. The story has a similar shape: we obtain twists by \(B\mathrm{O}/B\mathrm{String}\), and we simplify \(B\mathrm{O}/B\mathrm{String}\) to define fake vector bundle twists. However, in Equation (1.46) we learn that \(B\mathrm{O}/B\mathrm{String}\) is not homotopy equivalent to a product of Eilenberg-Mac Lane spaces. For this reason, the fake vector bundle twist uses a generalized cohomology theory called _supercohomology_ and denoted \(\mathit{SH}\) (Equation (1.48)); we finish this subsubsection by studying cohomology classes associated to a degree-\(4\) supercohomology class, which we will need in the proof of Equation (2.28). If \(V\to X\) is a spin vector bundle, it has a characteristic class \(\lambda(V)\in H^{4}(X;\mathbb{Z})\) such that \(2\lambda(V)=p_{1}(V)\); a _string structure_ on \(V\) is a trivialization of \(\lambda\). 
It is not hard to check that \(\lambda\) is additive in direct sums, so defines a map of abelian \(\infty\)-groups \(\lambda\colon B\mathrm{Spin}\to K(\mathbb{Z},4)\). The fiber of this map is an \(\infty\)-group \(B\mathrm{String}\), which is the classifying space for string structures. Unlike for \(K\)-theory, there are three different kinds of topological modular forms: a connective spectrum _tmf_, a periodic spectrum _TMF_, and a third spectrum _Tmf_ which is neither connective nor periodic. All three are \(E_{\infty}\)-ring spectra, and there are ring spectrum maps _tmf_\(\to\)_Tmf_\(\to\)_TMF_. Ando-Hopkins-Rezk [1] constructed a ring spectrum map \(\sigma\colon\)_MTString_\(\to\)_tmf_, so Equation (1.13) gives us twists of _tmf_, _Tmf_, and _TMF_ from \(B\mathrm{O}/B\mathrm{String}\): \[B\mathrm{O}/B\mathrm{String}\to B\mathrm{GL}_{1}(\textit{ MTString})\stackrel{{\sigma}}{{\to}}B\mathrm{GL}_{1}(\textit{tmf})\to B \mathrm{GL}_{1}(\textit{Tmf})\to B\mathrm{GL}_{1}(\textit{TMF}). \tag{1.43}\] Like in SS1.2.2 and SS1.2.3, the section \(B\mathrm{O}/B\mathrm{SO}\to B\mathrm{O}\) defines a homotopy equivalence of spaces \[B\mathrm{O}/B\mathrm{String}\stackrel{{\simeq}}{{\longrightarrow} }K(\mathbb{Z}/2,1)\times B\mathrm{SO}/B\mathrm{String}, \tag{1.44}\] and there is a central extension of abelian \(\infty\)-groups (1.45) but now something new happens. **Proposition 1.46**.: _(1.45) is not split._ Proof.: A splitting of (1.45) defines a section \(s\colon B\mathrm{SO}/B\mathrm{String}\to B\mathrm{Spin}/B\mathrm{String}\), meaning \(s\circ\iota=\mathrm{id}\). Therefore the map \(\lambda\colon B\mathrm{Spin}\to B\mathrm{Spin}/B\mathrm{String}\stackrel{{ \simeq}}{{\to}}K(\mathbb{Z},4)\) factors through \(B\mathrm{SO}\): (1.47) We let \(\mu\) denote the extension of \(\lambda\) to \(B\mathrm{SO}\). Brown [1, Theorem 1.5] shows that \(H^{4}(B\mathrm{SO};\mathbb{Z})\cong\mathbb{Z}\) with generator \(p_{1}\), so for any class \(x\in H^{4}(B\mathrm{SO};\mathbb{Z})\), the pullback of \(x\) to \(B\mathrm{Spin}\) is some integer multiple of \(p_{1}\). But the pullback of \(\mu\) is \(\lambda\), which is not an integer multiple of \(p_{1}\), so we have found a contradiction. We want an analogue of the fake vector bundle twists from SS1.2.2 and SS1.2.3 for _MTString_, _tmf_, _Tmf_, and _TMF_, but since we just saw that \(B\mathrm{SO}/B\mathrm{String}\) is not a product of Eilenberg-Mac Lane spaces, we have to figure out what exactly it is. The answer turns out to be the analogue of an Eilenberg-Mac Lane space for a relatively simple generalized cohomology theory. Postnikov theory implies that if \(E\) is a spectrum with only two nonzero homotopy groups \(\pi_{m}(E)=A\) and \(\pi_{n}(E)=B\) (assume \(m<n\) without loss of generality), then \(E\) is classified by the data of \(m\), \(n\), \(A\), \(B\), and the _\(k\)-invariant_\(k_{E}\in[\Sigma^{m}HA,\Sigma^{n+1}HB]\), a stable cohomology operation. **Definition 1.48** (Freed [10, SS1], Gu-Wen [11]).: Let \(\mathit{SH}\) be the spectrum with \(\pi_{-2}(\mathit{SH})=\mathbb{Z}/2\), \(\pi_{0}(\mathit{SH})=\mathbb{Z}\), and the \(k\)-invariant \(\mathit{k}_{\mathit{SH}}=\beta\circ\mathrm{Sq}^{2}\colon H^{*}(-;\mathbb{Z}/2) \to H^{*+3}(-;\mathbb{Z})\). The generalized cohomology theory defined by \(\mathit{SH}\) is called _(restricted) supercohomology_.11 Footnote 11: The adjective “restricted” is to contrast this theory with “extended” supercohomology of Kapustin-Thorngren [12] and Wang-Gu [11]. See [11, §5.3, 5.4]. 
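To unwind Equation (1.48) a little: the Postnikov truncation of \(\mathit{SH}\) gives a cofiber sequence of spectra \(H\mathbb{Z}\to\mathit{SH}\to\Sigma^{-2}H\mathbb{Z}/2\), whose connecting map \(\Sigma^{-2}H\mathbb{Z}/2\to\Sigma H\mathbb{Z}\) is the \(k\)-invariant \(\beta\circ\mathrm{Sq}^{2}\). For any space \(X\), this induces a long exact sequence
\[\dotsb\longrightarrow H^{n}(X;\mathbb{Z})\longrightarrow\mathit{SH}^{n}(X)\longrightarrow H^{n-2}(X;\mathbb{Z}/2)\xrightarrow{\ \beta\mathrm{Sq}^{2}\ }H^{n+1}(X;\mathbb{Z})\longrightarrow\dotsb,\]
so, for example, a class in \(\mathit{SH}^{4}(X)\) determines a class in \(H^{2}(X;\mathbb{Z}/2)\) annihilated by \(\beta\mathrm{Sq}^{2}\), and is determined by that class up to the image of \(H^{4}(X;\mathbb{Z})\). This degree-\(2\) class (the "Gu-Wen layer") will reappear below.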
Just as the Eilenberg-Mac Lane spectrum \(\mathit{H}\mathbb{Z}\) is assembled from Eilenberg-Mac Lane spaces \(K(\mathbb{Z},n)\) and there is a natural isomorphism \(H^{n}(X,\mathbb{Z})\stackrel{{\cong}}{{\to}}[X,K(\mathbb{Z},n)]\), if one defines \(\mathit{SK}(n)\) to be the abelian \(\infty\)-group which is the extension (1.49) classified by \(\beta(\mathrm{Sq}^{2}(T))\in H^{n+1}(K(\mathbb{Z}/2,n-2);\mathbb{Z})\), where \(T\in H^{n-2}(K(\mathbb{Z}/2,n-2);\mathbb{Z}/2)\) is the tautological class and \(\beta\) is the integral Bockstein, then the spaces \(\mathit{SK}(n)\) assemble into a model for the spectrum \(\mathit{SH}\) and there is a natural isomorphism \(\mathit{SH}^{n}(X)\stackrel{{\cong}}{{\to}}[X,\mathit{SK}(n)]\). **Proposition 1.50**.: _There is an equivalence of abelian \(\infty\)-groups \(\mathit{BSO}/\mathit{BString}\stackrel{{\cong}}{{\to}}\mathit{SK}(4)\). Moreover, the space of such equivalences is connected. Therefore there is a natural isomorphism of abelian groups \([X,\mathit{BSO}/\mathit{BString}]\cong\mathit{SH}^{4}(X)\)._ The point of the last sentence in Equation (1.50) is that in our proof, we do not specify an isomorphism, so a priori there could be ambiguity like in Equation (1.36). But since the space of such identifications is connected, there is a unique identification in the homotopy category, which suffices for the calculations we make in this paper. Proof of Equation (1.50).: We are trying to identify the extension (1.45) of abelian \(\infty\)-groups to relate it to \(\mathit{SH}\). Because \(\mathit{BSO}/\mathit{BString}\) is an abelian \(\infty\)-group, this extension, a priori classified by \(H^{5}(K(\mathbb{Z}/2,2),\mathbb{Z})\), actually is classified by the stabilization \([\Sigma^{2}H\mathbb{Z}/2,\Sigma^{5}H\mathbb{Z}]\): this extension is equivalent data to a fiber sequence of connective spectra, so we get to use stable Postnikov theory. Our first step is to understand \([\Sigma^{2}H\mathbb{Z}/2,\Sigma^{5}H\mathbb{Z}]\). **Lemma 1.51**.: _For all \(k\in\mathbb{Z}\), \([H\mathbb{Z}/2,\Sigma^{k}H\mathbb{Z}]\cong[H\mathbb{Z},\Sigma^{k-1}H\mathbb{Z }/2]\)._ Proof.: This follows by using the universal coefficient theorem to relate both groups to homology groups: the short exact sequences in the universal coefficient theorem simplify to identify the two groups in the lemma statement with \(H_{k-1}(H\mathbb{Z};\mathbb{Z}/2)\), resp. \(H_{k-1}(H\mathbb{Z}/2;\mathbb{Z})\) (the latter because the homology of \(H\mathbb{Z}/2\) is torsion). Both of these groups are isomorphic to \(\pi_{k-1}(H\mathbb{Z}\wedge H\mathbb{Z}/2)\), so the lemma follows. **Corollary 1.52**.: \([\Sigma^{2}H\mathbb{Z}/2,\Sigma^{4}H\mathbb{Z}]=0\) _and \([\Sigma^{2}H\mathbb{Z}/2,\Sigma^{5}H\mathbb{Z}]\cong\mathbb{Z}/2\)._ Proof.: By Equation (1.51), we need to compute \([H\mathbb{Z},\Sigma^{i}H\mathbb{Z}/2]=H^{i}(H\mathbb{Z};\mathbb{Z}/2)\) for \(i=1,2\). Let \(\mathcal{A}\) denote the mod 2 Steenrod algebra; then \(H^{*}(H\mathbb{Z};\mathbb{Z}/2)\cong\mathcal{A}\otimes_{\mathcal{A}(0)}\mathbb{ Z}/2\)[12, SS9]. This vanishes in degree 1 and is isomorphic to \(\mathbb{Z}/2\) in degree 2. Equation (1.46) implies (1.45) is classified by a nonzero element of \([\Sigma^{2}H\mathbb{Z}/2,\Sigma^{5}H\mathbb{Z}]\). And by definition, \(\mathit{SK}(4)\) is an extension of \(K(\mathbb{Z}/2,2)\) by \(K(\mathbb{Z},4)\) classified by \(\beta\circ\mathrm{Sq}^{2}\), which is a nonzero element of \([\Sigma^{2}H\mathbb{Z}/2,\Sigma^{5}H\mathbb{Z}]\). 
Since this group is isomorphic to \(\mathbb{Z}/2\) by Equation (1.52), these two nonzero elements must coincide, so there is an equivalence of abelian \(\infty\)-groups \(\mathit{BSO}/\mathit{BString}\simeq\mathit{SK}(4)\). There is a homotopy type of such equivalences, and \(\pi_{0}\) of that homotopy type is a torsor over \([\Sigma^{2}H\mathbb{Z}/2,\Sigma^{4}H\mathbb{Z}]\), which vanishes by Equation (1.52), so the space of identifications is connected. **Corollary 1.53**.: _The map \(K(\mathbb{Z}/2,1)\to B\mathrm{O}\) defined by the tautological line bundle induces a homotopy equivalence of spaces_ \[B\mathrm{O}/B\mathrm{String}\stackrel{{\simeq}}{{\longrightarrow}}K (\mathbb{Z}/2,1)\times SK(4), \tag{1.54}\] _implying that MTString, \(\mathit{tmf}\), and TMF can be twisted over a space \(X\) by classes \(a\in H^{1}(X;\mathbb{Z}/2)\) and \(d\in\mathit{SH}^{4}(X)\)._ **Definition 1.55**.: We call the twists associated to \(a\) and \(d\) in Equation (1.53) the _fake vector bundle twists_ for _MTString_, \(\mathit{tmf}\), \(\mathit{Tmf}\)_, and \(\mathit{TMF}\)_. _Remark 1.56_.: Another consequence of Equation (1.50), applied to the proof strategy of Equation (1.46), is that, even though \(\lambda\in H^{4}(B\mathrm{Spin};\mathbb{Z})\) does not pull back from \(B\mathrm{SO}\), its image in \(\mathit{SH}^{4}(B\mathrm{Spin})\)_does_ pull back from a class \(\lambda\in\mathit{SH}^{4}(B\mathrm{SO})\). This is a theorem of Freed [11, Proposition 1.9(i)], with additional proofs given by Jenquin [14, Proposition 4.6] and Johnson-Freyd and Treumann [13, SS1.4]. The map \(K(\mathbb{Z},4)\simeq B\mathrm{Spin}/B\mathrm{String}\to B\mathrm{SO}/B\mathrm{ String}\) means degree-4 ordinary cohomology classes also define degree-4 twists of string bordism and topological modular forms. Twists of this sort have already been studied, so we compare our twists to the literature. **Definition 1.57**.: Given \(X\), \(a\), and \(d\) as in Equation (1.53), let \(\Omega_{*}^{\mathrm{String}}(X,a,d)\) denote the groups of bordism classes of manifolds \(M\) equipped with maps \(f\colon M\to X\) and trivializations of \(w_{1}(M)-f^{*}(a)\in H^{1}(M;\mathbb{Z}/2)\) and \(\lambda(M)-f^{*}(d)\in\mathit{SH}^{4}(M)\). A priori we only defined \(\lambda\) as a characteristic class of oriented vector bundles; for an unoriented vector bundle \(V\), \(\lambda(V)\) is be defined to be \(\lambda(V\oplus\mathrm{Det}(V))\), as the latter bundle is oriented. Equation (1.57) first appears in work of B.L. Wang [12, Definition 8.4] in the special case when \(a=0\) and \(d\) comes from ordinary cohomology. **Lemma 1.58**.: _There is a natural isomorphism \(\pi_{*}(M^{\text{MTString}}f_{a,d})\stackrel{{\cong}}{{\to}} \Omega_{*}^{\mathrm{String}}(X,a,d)\)._ This follows from work of Hebestreit-Joachim [11], much like Equations (1.25) and (1.40). Though they do not discuss the _MTString_ case explicitly, their proof can be adapted to our setting. See [11, Remark 2.2.3]. We can also compare with preexisting twists of \(\mathit{tmf}\). 
**Lemma 1.59**.: _The fake vector bundle twist defined by \(K(\mathbb{Z},4)\to\mathit{SK}(4)\to B\mathrm{GL}_{1}(\mathit{tmf})\) is homotopy equivalent to the twist \(K(\mathbb{Z},4)\to B\mathrm{GL}_{1}(\mathit{tmf})\) constructed by Ando-Blumberg-Gepner [1, Proposition 8.2]._ Proof sketch.: This equivalence is not obvious, because Ando-Blumberg-Gepner construct their twist in a different way: beginning with a map \(\phi\colon\Sigma_{+}^{\infty}K(\mathbb{Z},3)\to\mathit{tmf}\) and using the adjunction [1, (1.4), (1.7)] between \(\Sigma_{+}^{\infty}\) and \(\mathrm{GL}_{1}\). However, their argument builds \(\phi\) out of the map \(\lambda\colon B\mathrm{Spin}\to B\mathrm{Spin}/B\mathrm{String}\simeq K( \mathbb{Z},4)\), allowing one to pass our construction through their argument and conclude that our twist, as a class in \([K(\mathbb{Z},4),B\mathrm{GL}_{1}(\mathit{tmf})]\), coincides with Ando-Blumberg-Gepner's. Though these twists by degree-4 cohomology are relatively well-studied, there are not so many examples of lower-degree twists of string bordism or topological modular forms in the literature. See Freed-Hopkins-Teleman [13, SS2], Johnson-Freyd [12, SS2.3], and Tachikawa-Yamashita [12, SS4] for some examples. **Example 1.60**.: Just as in Equations (1.28) and (1.42), Equation (1.15) calculates some _MTString_-module Thom spectra for us: over \(B\mathrm{O}/B\mathrm{String}\) we get _MTO_; over \(B\mathrm{SO}/B\mathrm{String}\) we get _MTSO_, and over \(K(\mathbb{Z},4)\) we get _MTSpin_. The last example is due to Beardsley [1, SS3]. _Remark 1.61_.: Like in Equations (1.30) and (1.37), (1.44) is not an equivalence of \(\infty\)-groups. The same two proofs are available to us: pulling back to \(K(\mathbb{Z}/2,1)\) and showing we do not obtain an \(E_{1}\)-ring spectrum, and comparing the group structures on \([\mathbb{RP}^{\infty},B\mathrm{O}/B\mathrm{String}]\) and \([\mathbb{RP}^{\infty},K(\mathbb{Z}/2,1)\times B\mathrm{SO}/B\mathrm{String}]\). For the second proof, one observes that \([\mathbb{RP}^{\infty},B\mathrm{O}/B\mathrm{String}]\cong\mathbb{Z}/8\) but \([\mathbb{RP}^{\infty},K(\mathbb{Z}/2,1)\times B\mathrm{SO}/B\mathrm{String}]\) has at least four elements of order \(4\), then concludes. For the first proof, we obtain _MTString_\(\wedge(B\mathrm{O}_{1})^{\sigma-1}\) like before; to our knowledge, this notion of bordism has not been studied.12 However, since this is a vector bundle Thom spectrum, the change-of-rings trick shows that in topological degrees \(15\) and below, the \(E_{2}\)-page of the Adams spectral sequence computing \(\Omega_{*}^{\mathrm{String}}((B\mathrm{O}_{1})^{\sigma-1})_{2}^{\wedge}\) is isomorphic to \(\mathrm{Ext}_{\mathcal{A}(2)}^{s,t}(H^{*}((B\mathbb{Z}/2)^{\sigma-1};\mathbb{Z} /2),\mathbb{Z}/2)\) (see SS2.1 for notation and an explanation). Davis-Mahowald [13, Table 3.2] have computed these Ext groups, and from their computation it directly follows using the Adams spectral sequence that \(\pi_{0}\cong\mathbb{Z}/2\) and \(\pi_{3}\cong\mathbb{Z}/8\), so just like for _MTPin\({}^{c}\)_ and _MTPin\({}^{-}\)_, _MTString_\(\wedge(B\mathrm{O}_{1})^{\sigma-1}\) does not admit an \(E_{1}\)-ring spectrum structure. Footnote 12: By analogy with SO and O and Spin and Pin\({}^{-}\), one could call this ring\({}^{-}\) bordism. We hope there is a better name for this spectrum. In the proof of Equation (2.28) we will need to understand the mod \(2\) cohomology classes naturally associated to a degree-\(4\) supercohomology class \(d\). 
The quotient \(t\colon\mathit{SH}\to\Sigma^{-2}H\mathbb{Z}/2\) gives us a degree-\(2\) class \(t(d)\), sometimes called the _Gu-Wen layer_ of \(d\). To proceed further, we study the Serre spectral sequence associated to the fibration \(K(\mathbb{Z},4)\to\mathit{SK}(4)\to K(\mathbb{Z}/2,2)\). Let \(\overline{\delta}\in H^{4}(K(\mathbb{Z},4);\mathbb{Z}/2)\) be the mod \(2\) reduction of the tautological class; this defines a class in \(E_{2}^{0,4}\) of our Serre spectral sequence, which we also call \(\overline{\delta}\). **Lemma 1.62**.: _The class \(\overline{\delta}\in E_{2}^{0,4}\) survives to the \(E_{\infty}\)-page._ Proof.: The only possible differential that could be nonzero on \(\overline{\delta}\) is the transgressing \(d_{5}\), which pulls back from the transgressing \(d_{5}\) on \(\overline{\delta}\) in the Serre spectral sequence for the universal fibration with fiber \(K(\mathbb{Z},4)\), namely \(K(\mathbb{Z},4)\to E(K(\mathbb{Z},4))\to B(K(\mathbb{Z},4))\simeq K(\mathbb{ Z},5)\). In the universal fibration, \(d_{5}(\overline{\delta})\) is the mod \(2\) tautological class \(\epsilon\in H^{5}(K(\mathbb{Z},5);\mathbb{Z}/2)\), so in the fibration with total space \(\mathit{SK}(4)\), \(d_{5}(\overline{\delta})\) is the pullback of \(\epsilon\) by the classifying map \(\beta\circ\mathrm{Sq}^{2}\colon K(\mathbb{Z}/2,2)\to K(\mathbb{Z},5)\). Thus \(\epsilon\mapsto(\beta\mathrm{Sq}^{2}(B))\) mod \(2=\mathrm{Sq}^{1}\mathrm{Sq}^{2}(B)\), where \(B\in H^{2}(K(\mathbb{Z}/2,2);\mathbb{Z}/2)\) is the tautological class, but \(\mathrm{Sq}^{1}\mathrm{Sq}^{2}(B)=\mathrm{Sq}^{3}(B)=0\), as \(B\) has degree \(2\). Thus \(d_{5}(\overline{\delta})=0\). _Remark 1.63_.: This is an unstable phenomenon: for \(n>2\), a similar argument shows the transgressing differential on the mod \(2\) tautological class of \(K(\mathbb{Z},n)\) is nonzero, so no analogue of \(\overline{\delta}\) exists in the cohomology of \(\mathit{SK}(n)\). We want to lift \(\overline{\delta}\in E_{\infty}^{0,4}\) to an element \(\delta\) of \(H^{4}(\mathit{SK}(4);\mathbb{Z}/2)\). If \(B\) is the tautological class of \(K(\mathbb{Z}/2,2)\), then there is an ambiguity between \(\delta\) and \(\delta+B^{2}\). To resolve this ambiguity, pull back across the map \(\lambda\colon B\mathrm{SO}\to\mathit{SK}(4)\). By comparing the Serre spectral sequences for the fibrations \(K(\mathbb{Z},4)\to\mathit{SK}(4)\to K(\mathbb{Z}/2,2)\) and \(B\mathrm{Spin}\to B\mathrm{SO}\to K(\mathbb{Z}/2,2)\), one learns that \(\lambda^{*}(\delta)\) is either \(w_{4}\) or \(w_{4}+w_{2}^{2}\). Choosing the former allows us to uniquely define \(\delta\). **Corollary 1.64**.: _There is a unique class \(\delta\in H^{4}(\mathit{SK}(4);\mathbb{Z}/2)\) such that \(\lambda^{*}(\delta)=w_{4}\)._ Phrased differently, associated to every \(d\in\mathit{SH}^{4}(X)\) is a class \(\delta\in H^{4}(X;\mathbb{Z}/2)\), such that if there is an oriented vector bundle \(V\to X\) with \(d=\lambda(V)\), then \(\delta=w_{4}(V)\). The same line of reasoning also shows that \(\lambda^{*}(t(d))=w_{2}\). ## 2. Computing the input to Baker-Lazarev's Adams spectral sequence ### Review: the change-of-rings theorem for vector bundle Thom spectra We begin by reviewing how the story goes for vector bundle Thom spectra, where we can take advantage of a general change-of-rings theorem. This is a standard technique dating back to work of Anderson-Brown-Peterson [1] and Giambalvo [14, 15, 16]; see Beaudry-Campbell [1, SS4.5] for a nice introduction. 
**Lemma 2.1** (Change of rings).: _Let \(\mathcal{B}\) be a graded Hopf algebra and \(\mathcal{C}\subset\mathcal{B}\) be a graded Hopf subalgebra. If \(M\) is a graded \(\mathcal{C}\)-module and \(N\) is a graded \(\mathcal{B}\)-module, then there is a natural isomorphism_ \[\operatorname{Ext}_{\mathcal{B}}^{s,t}(\mathcal{B}\otimes_{\mathcal{C}}M,N) \stackrel{{\cong}}{{\longrightarrow}}\operatorname{Ext}_{ \mathcal{C}}^{s,t}(M,N) \tag{2.2}\] For the little siblings we consider, we have the following isomorphisms of \(\mathcal{A}\)-modules: \[H^{*}(H\mathbb{Z};\mathbb{Z}/2) \cong\mathcal{A}\otimes_{\mathcal{A}(0)}\mathbb{Z}/2 \tag{2.3b}\] \[H^{*}(ku;\mathbb{Z}/2) \cong\mathcal{A}\otimes_{\mathcal{E}(1)}\mathbb{Z}/2\] (2.3c) \[H^{*}(ko;\mathbb{Z}/2) \cong\mathcal{A}\otimes_{\mathcal{A}(1)}\mathbb{Z}/2\] (2.3d) \[H^{*}(\mathit{tmf};\mathbb{Z}/2) \cong\mathcal{A}\otimes_{\mathcal{A}(2)}\mathbb{Z}/2. \tag{2.3a}\] Here \(\mathcal{A}(n)\) is the subalgebra of \(\mathcal{A}\) generated by \(\{\operatorname{Sq}^{1},\operatorname{Sq}^{2},\operatorname{Sq}^{4},\ldots, \operatorname{Sq}^{2^{n}}\}\) and \(\mathcal{E}(1)=\langle Q_{0},Q_{1}\rangle\), where \(Q_{0}=\operatorname{Sq}^{1}\) and \(Q_{1}=\operatorname{Sq}^{1}\operatorname{Sq}^{2}+\operatorname{Sq}^{2} \operatorname{Sq}^{1}\). The isomorphisms in (2.3) were proven by Wall [19, SS9] (\(H\mathbb{Z}\)), Adams [1] (\(ku\)), Stong [18] (\(ko\)), and Hopkins-Mahowald [16] (\(\mathit{tmf}\)). To use Equation (2.1), we need to make \(\mathcal{A}(0)\), \(\mathcal{A}(1)\), \(\mathcal{A}(2)\), and \(\mathcal{E}(1)\) into Hopf subalgebras of \(\mathcal{A}\). This is equivalent to specifying how these algebras interplay with the cup product, which the Cartan formula answers. For the Steenrod squares, this is standard; we also have \(Q_{i}(ab)=aQ_{i}(b)+Q_{i}(a)b\) for \(i=0,1\). Equation (2.1), paired with (2.3), greatly simplifies many computations: for any spectrum which splits as \(X=R\wedge Y\) where \(R\) is one of \(H\mathbb{Z}\), \(ku\), \(ko\), or \(\mathit{tmf}\), the \(E_{2}\)-page of the Adams spectral sequence computing the 2-completed homotopy groups of \(X\) (or the \(R\)-homology of \(Y\)) is identified with \(\operatorname{Ext}\) groups over \(\mathcal{A}(0)\), \(\mathcal{E}(1)\), \(\mathcal{A}(1)\), or \(\mathcal{A}(2)\). These are much smaller than the entire 2-primary Steenrod algebra, so the \(\operatorname{Ext}\) groups are easier to calculate; thus one often hears the slogan that \(\mathit{ko}\)-, \(ku\)-, and \(\mathit{tmf}\)-homology groups are relatively easy to compute with the Adams spectral sequence,13 and by (1.16) and (2.3), those computations also compute \(\operatorname{spin}^{c}\), \(\operatorname{spin}\), and string bordism (the latter in dimensions 15 and below). See Douglas-Henriques-Hill [16] for a nice related computation of vector bundle twists of string bordism. Footnote 13: The Adams spectral sequence computing \(H\mathbb{Z}\)-homology is essentially a repackaging of the Bockstein spectral sequence; see May-Milgram [16]. _Remark 2.4_.: Another way to phrase this is that, though (2.3) is about the little siblings only, combining it with (1.16) allows us to write down change-of-rings results for the Adams spectral sequences of the big siblings. 
Specifically, there is an \(\mathcal{A}(0)\)-module \(W_{1}\), an \(\mathcal{E}(1)\)-module \(W_{2}\), and an \(\mathcal{A}(1)\)-module \(W_{3}\) such that
\[H^{*}(\mathit{MTSO};\mathbb{Z}/2)\cong\mathcal{A}\otimes_{\mathcal{A}(0)}W_{1} \tag{2.5a}\]
\[H^{*}(\mathit{MTSpin}^{c};\mathbb{Z}/2)\cong\mathcal{A}\otimes_{\mathcal{E}(1)}W_{2} \tag{2.5b}\]
\[H^{*}(\mathit{MTSpin};\mathbb{Z}/2)\cong\mathcal{A}\otimes_{\mathcal{A}(1)}W_{3}, \tag{2.5c}\]
so that the \(E_{2}\)-pages of the Adams spectral sequences computing the 2-completions of \(\Omega_{*}^{\mathrm{SO}}\), \(\Omega_{*}^{\mathrm{Spin}^{c}}\), and \(\Omega_{*}^{\mathrm{Spin}}\) are the Ext groups of \(W_{1}\), \(W_{2}\), and \(W_{3}\) over \(\mathcal{A}(0)\), \(\mathcal{E}(1)\), and \(\mathcal{A}(1)\), respectively. Explicitly, these modules begin in low degrees with (compare (1.16))
\[W_{1}\cong\mathbb{Z}/2\oplus\Sigma^{4}\mathbb{Z}/2\oplus\Sigma^{5}\mathcal{A}(0)\oplus\Sigma^{8}\mathbb{Z}/2\oplus\Sigma^{8}\mathbb{Z}/2\oplus\cdots \tag{2.6a}\]
\[W_{2}\cong\mathbb{Z}/2\oplus\Sigma^{4}\mathbb{Z}/2\oplus\Sigma^{8}\mathbb{Z}/2\oplus\Sigma^{8}\mathbb{Z}/2\oplus\Sigma^{10}\mathcal{E}(1)\oplus\cdots \tag{2.6b}\]
\[W_{3}\cong\mathbb{Z}/2\oplus\Sigma^{8}\mathbb{Z}/2\oplus\Sigma^{10}\mathcal{A}(1)/\mathrm{Sq}^{3}\oplus\cdots \tag{2.6c}\]
Often, though, what one wants is twisted. For vector bundle twists in the sense of Equation (1.6), this is not a problem: if \(f\colon X\to B\mathrm{GL}_{1}(R)\) is a vector bundle twist specified by a rank-\(r\) virtual vector bundle \(V\to X\), or strictly speaking by the rank-0 virtual vector bundle \(V-r\coloneqq V-\underline{\mathbb{R}}^{r}\), where \(\underline{\mathbb{R}}^{r}\) denotes the trivial rank-\(r\) bundle, then \(f\) factors through \(B\mathrm{GL}_{1}(\mathbb{S})\), so Equation (1.8) provides a natural homotopy equivalence14 Footnote 14: For a different, less abstract proof of this splitting, see [11, §10] or [13, §10.4].
\[Mf\xrightarrow{\simeq}R\wedge X^{V-r}. \tag{2.7}\]
Thus, for the ring spectra \(R\) we discussed above, one can also use the change-of-rings isomorphism to simplify the computation of twisted \(R\)-homology for vector bundle twists: for \(ko\), the \(E_{2}\)-page is
\[E_{2}^{s,t}=\mathrm{Ext}_{\mathcal{A}(1)}^{s,t}(H^{*}(X^{V-r};\mathbb{Z}/2),\mathbb{Z}/2)\Rightarrow\mathit{ko}_{t-s}(X)_{2}^{\wedge}, \tag{2.8}\]
and the other choices of \(R\) are analogous. The \(\mathcal{A}\)-action (and hence also the \(\mathcal{A}(n)\)- and \(\mathcal{E}(1)\)-actions) on \(H^{*}(X^{V-r};\mathbb{Z}/2)\) is easy to compute: the Thom isomorphism tells us the cohomology as a vector space, and the Stiefel-Whitney classes of \(V\) twist the Steenrod squares as described in [1, Remark 3.3.5]. This is a powerful generalization: many bordism spectra of interest arise as twists in this way, including \(\mathrm{pin}^{\pm}\) bordism and all of the bordism spectra studied in [1, 1, 10, 10, 11].

### Baker-Lazarev's \(R\)-module Adams spectral sequence

For \(R\) an \(E_{\infty}\)-ring spectrum,15 Baker-Lazarev [1] develop an \(R\)-module spectrum generalization of the Adams spectral sequence which reduces to the usual Adams spectral sequence when \(R=\mathbb{S}\). Footnote 15: Baker-Lazarev work with commutative algebras in Elmendorf-Kriz-Mandell-May’s \(\mathbb{S}\)-modules; as we discussed in Footnote 3, we may equivalently work with \(E_{\infty}\)-ring spectra.
**Definition 2.9**.: For \(R\)-modules \(H\) and \(M\), the \(R\)_-module \(H\)-homology_ of \(M\) is \[H_{*}^{R}(M)\coloneqq\pi_{*}(H\wedge_{R}M), \tag{2.10a}\] and the \(R\)_-module \(H\)-cohomology_ of \(M\) is \[H_{R}^{*}(M)\coloneqq\pi_{-*}\mathrm{Map}_{R}(M,H). \tag{2.10b}\] For the purposes of this paper, \(R\) will be one of the little siblings. For each such \(R\), there is a canonical isomorphism \(\pi_{0}(R)\stackrel{{\cong}}{{\to}}\mathbb{Z}\), which lifts to identify the Postnikov quotient \(\tau_{\leq 0}R\stackrel{{\cong}}{{\to}}H\mathbb{Z}\); as \(\tau_{\leq 0}R\) is an \(R\)-module spectrum via the quotient map \(R\to\tau_{\leq 0}R\), this data provides a canonical \(R\)-algebra structure on \(H\mathbb{Z}\). Composing with the mod \(n\) reduction map \(H\mathbb{Z}\to H\mathbb{Z}/n\), we also obtain canonical \(E_{\infty}\)\(R\)-algebra structures on \(H\mathbb{Z}/n\) for all \(n\). For \(n=2\) we have the following isomorphisms of "algebras of \(R\)-module cohomology operations:" **Theorem 2.11**.: _Let \(R\) be one of the little siblings and \(H=H\mathbb{Z}/2\) with the \(R\)-algebra structure defined above. Then there are isomorphisms_ \[R =H\mathbb{Z}\,,\quad H^{*}_{R}H\cong\mathcal{A}(0) \tag{2.12b}\] \[R =ko\,,\quad H^{*}_{R}H\cong\mathcal{A}(1)\] (2.12c) \[R =ku\,,\quad H^{*}_{R}H\cong\mathcal{E}(1)\] (2.12d) \[R =tmf\,,\quad H^{*}_{R}H\cong\mathcal{A}(2), \tag{2.12a}\] _and dualizing gives the corresponding algebras of homology operations, e.g. \(H^{H\mathbb{Z}}_{*}H\cong\mathcal{A}(0)_{*}\)._ This theorem was proven in pieces: the part for \(H\mathbb{Z}\) is standard; for \(ko\) and \(ku\) this is due to Baker [1, Theorem 5.1]; and for \(tmf\) it is due to Henriques [1]. We now present the spectral sequence; let \(M\) and \(N\) be \(R\)-modules, and let \(H\) be a commutative \(R\)-ring spectrum. **Theorem 2.13** (Baker-Lazarev [1]).: _Let \(M\) and \(N\) be \(R\)-modules and \(H\) be an \(E_{\infty}\)\(R\)-algebra, and suppose that \(H^{R}_{*}H\) is a flat \(\pi_{*}(H)\)-module. Then there is a spectral sequence of Adams type, natural in \(M\), \(N\), \(H\), and \(R\), with \(E_{2}\)-page_ \[E_{2}^{s,t}=\operatorname{Ext}_{H^{*}_{R}H}^{s,t}(H^{*}_{R}M,H^{*}_{R}N), \tag{2.14}\] _and if \(N\) is connective and \(M\) is a cellular \(R\)-module spectrum with finitely many cells in each degree,16 then this spectral sequence converges to the homotopy groups of the (\(R\)-module) \(H\)-nilpotent completion of \(N^{R}_{*}M\)._ Footnote 16: This condition on \(M\) is the analogue in \(\mathcal{M}od_{R}\) of the notion of a CW spectrum with finitely many cells in each degree. If \(M\) is the \(R\)-module Thom spectrum associated to a map \(f\colon X\to B\mathrm{GL}_{1}(R)\), which is the only case we consider in this paper, then this condition on \(M\) is met if \(X\) is a CW complex with finitely many cells in each dimension. Without the flatness assumption, one only has a description of the \(E_{1}\)-page, and it is more complicated.17 For example, this issue occurs when \(R=\mathbb{S}\) and \(H=ku\), \(ko\), or \(tmf\); see [12, 13, 14, 15, 16, 17, 18]. However, if \(p\) is a prime number, \(\pi_{*}(H\mathbb{Z}/p)\cong\mathbb{Z}/p\) is a field, so the flatness assumption is satisfied for all \(R\); as this is the only case we consider in this paper, we say no more about the flatness assumption in Equation (2.13). 
Footnote 17: In applications, this may be less bad than it seems: for example, McNamara-Reece [14, §6.2] interpret the \(E_{1}\)-page of the classical Adams spectral sequence in the context of quantum gravity. The notion of the _\(H\)-nilpotent completion_ of a spectrum is due to Bousfield [15, SS5]. When \(H=H\mathbb{Z}/p\), \(p\) prime, this is the usual \(p\)-completion [14, Example 1.16]. Thus if the homotopy groups of \(N\wedge_{R}M\) are finitely generated abelian groups, this as usual detects free and \(p^{k}\)-torsion summands, but not torsion for other primes. When \(R=\mathbb{S}\), Equation (2.13) reduces to the classical \(H\)-based Adams spectral sequence, with its standard convergence results. We will apply Equation (2.13) when \(R\) is one of the little siblings, \(H=H\mathbb{Z}/p\) for \(p\) prime, and \(N=R\): there is a canonical homotopy equivalence \(R\wedge_{R}M\simeq M\), so in this setting Baker-Lazarev's spectral sequence takes as input \(\operatorname{Ext}_{H^{*}_{R}H}(H^{*}_{R}(M),\mathbb{Z}/2)\), and converges to the \(p\)-completed homotopy groups of \(M\). For Thom spectra \(H^{*}_{R}\) is easy. **Lemma 2.15** (\(R\)-module cohomology Thom isomorphism).: _For any \(E_{\infty}\)-ring spectrum \(R\) such that \(H\coloneqq H\mathbb{Z}/2\) is an \(R\)-algebra and \(H^{*}_{R}H\) is a Poincare duality algebra, then for any map \(f\colon X\to B\mathrm{GL}_{1}(R)\), there is an isomorphism \(H^{*}(X;\mathbb{Z}/2)\to H^{*}_{R}(Mf)\)._ This means that \(H^{*}_{R}(Mf)\) is a free \(H^{*}(X;\mathbb{Z}/2)\)-module on a class \(U\in H^{0}_{R}(Mf)\), which is the Thom class in this setting. The Poincare duality algebra condition is met for all of the little siblings \(R\) we consider in this paper, thanks to the explicit identifications above. Proof.: Apply Equation (1.8) with \(R_{1}=R\) and \(R_{2}=H\mathbb{Z}/2\) to learn that \(Mf\wedge_{R}H\mathbb{Z}/2\), the object whose homotopy groups are \(H^{R}_{*}(Mf)\), is the Thom spectrum of a twist \(f^{\prime}\colon X\to B\mathrm{GL}_{1}(H\mathbb{Z}/2)\). By Equation (1.3), \(B\mathrm{GL}_{1}(H\mathbb{Z}/2)\) is contractible, so \(f^{\prime}\) is null-homotopic, so by Equation (1.7), \(Mf\wedge_{R}H\mathbb{Z}/2\simeq X_{+}\wedge H\mathbb{Z}/2\). Taking homotopy groups, we learn the analogue of the theorem statement for \(H^{R}_{*}(Mf)\). To pass from homology to cohomology, we need some version of the universal coefficient theorem. This would follow if \(H\mathbb{Z}/2\) were _shifted Spanier-Whitehead self-dual_ in \(\mathcal{M}\mathit{od}_{R}\), i.e. if we were given data of a homotopy equivalence of \(R\)-module spectra \(\mathrm{Map}_{R}(H\mathbb{Z}/2,R)\stackrel{{\sim}}{{\to}}\Sigma^{ k}H\mathbb{Z}/2\) for some \(k\): this correspondence would allow one to identify \(R\)-module \(H\mathbb{Z}/2\)-homology and \(R\)-module \(H\mathbb{Z}/2\)-cohomology after a shift by \(k\). Baker [1, Proposition 3.2] shows that whenever \(H^{*}_{R}H\) is a Poincare self-duality algebra, then \(H\mathbb{Z}/2\) is shifted Spanier-Whitehead self-dual, and Poincare self-duality holds for \(\mathcal{A}(0)\), \(\mathcal{A}(1)\), \(\mathcal{E}(1)\), and \(\mathcal{A}(2)\), so we can conclude. For most of our applications we will take \(H=H\mathbb{Z}/2\). **Example 2.16** (_tmf_ at the prime 3).: We will also work with an interesting odd-primary example, where \(H=H\mathbb{Z}/3\) and \(R=\textit{tmf}\). 
Let \(\mathcal{A}_{3}\coloneqq H^{*}H\), which is the mod 3 Steenrod algebra, and let \(\mathcal{A}^{\textit{tmf}}\coloneqq H^{*}_{\textit{tmf}}H\); Henriques and Hill, using the work of Behrens [1] and unpublished work of Hopkins-Mahowald, showed that \[\mathcal{A}^{\textit{tmf}}\cong\mathbb{Z}/3\langle\beta,\mathcal{P}^{1} \rangle/(\beta^{2},\beta(\mathcal{P}^{1})^{2}\beta-(\beta\mathcal{P}^{1})^{2 }-(\mathcal{P}^{1}\beta)^{2},(\mathcal{P}^{1})^{3}). \tag{2.17}\] Curiously, Rezk showed that \(H^{*}(\textit{tmf};\mathbb{Z}/3)\) is not isomorphic to \(\mathcal{A}_{3}\otimes_{\mathcal{A}^{\textit{tmf}}}\mathbb{Z}/3\): see [1, SS2]. The map \(\phi\colon H^{*}_{\textit{tmf}}H\to H^{*}H\) sends \(\beta\) to the Bockstein of \(0\to\mathbb{Z}/3\to\mathbb{Z}/9\to\mathbb{Z}/3\to 0\) and \(\mathcal{P}^{1}\) to the first Steenrod power. However, unlike in the previous examples we studied, \(\phi\) is not injective! The relation \(\beta(\mathcal{P}^{1})^{2}+\mathcal{P}^{1}\beta\mathcal{P}^{1}+(\mathcal{P}^{ 1})^{2}\beta=0\) is present in \(\mathcal{A}_{3}\) but not in \(\mathcal{A}^{\textit{tmf}}\) (see, e.g., [1, Corollary 13.7]). Baker-Lazarev's Equation (2.13) implies that for any _tmf_-module spectrum \(M\), \(H^{*}_{\textit{tmf}}(M)\) carries a natural \(\mathcal{A}^{\textit{tmf}}\)-module action, and there is an Adams spectral sequence \[E_{2}^{s,t}=\mathrm{Ext}_{\mathcal{A}^{\textit{tmf}}}^{s,t}(H^{*}_{\textit{tmf }}(M),\mathbb{Z}/3)\Longrightarrow\pi_{t-s}(M)_{3}^{\wedge}. \tag{2.18}\] In general, we will let \(H^{*}_{\textit{tmf}}(M)\) refer to the mod 2 _tmf_-module cohomology and denote the mod 3 _tmf_-module cohomology by \(H^{*}_{\textit{tmf}}(M;\mathbb{Z}/3)\). Because \((\mathbb{Z}/3)^{\times}\) is nontrivial, \(B\mathrm{GL}_{1}(H\mathbb{Z}/3)\) is not contractible, so the proof of Equation (2.15) does not directly generalize to this setting; however, as \(B\mathrm{GL}_{1}(H\mathbb{Z}/3)\cong B(\mathbb{Z}/3)^{\times}\) (see Equation (1.3)), for any twist \(f\colon X\to B\mathrm{GL}_{1}(\textit{tmf})\) factoring through a simply connected space, the induced twist of \(H\mathbb{Z}/3\) is trivial and the argument goes through to show \(H^{*}_{\textit{tmf}}(M^{\textit{tmf}}f;\mathbb{Z}/3)\cong H^{*}(X;\mathbb{Z}/3)\). As \(\mathit{SK}(4)\) is simply connected, this includes the fake vector bundle twists of _tmf_ whose components in \(H^{1}(\neg;\mathbb{Z}/2)\) vanish. Like for the mod 2 subalgebras of the Steenrod algebra that we discussed, we will want to know how \(\mathcal{A}^{\textit{tmf}}\) acts on products. The map \(\mathcal{A}^{\textit{tmf}}\to\mathcal{A}_{3}\) is a map of Hopf algebras [1, SS13.1], allowing us to use the Cartan formula and multiplicativity of the Bockstein in \(\mathcal{A}_{3}\) to conclude that in \(\mathcal{A}^{\mathit{tmf}}\), \[\mathcal{P}^{1}(ab) =\mathcal{P}^{1}(a)b+a\mathcal{P}^{1}(b)\,, \tag{2.19b}\] \[\beta(ab) =\beta(a)b+(-1)^{|a|}a\beta(b). \tag{2.19a}\] When \(R\) is one of the little siblings, Equation (2.11) implies that for any \(R\)-module spectrum \(M\), Baker-Lazarev's spectral sequence calculates \(pi_{*}(M)_{2}^{\wedge}\) as the Ext of _something_ over an algebra much smaller than \(\mathcal{A}\) -- one of \(\mathcal{A}(0)\), \(\mathcal{E}(1)\), \(\mathcal{A}(1)\), or \(\mathcal{A}(2)\). Thus the change-of-rings approach to computing \(\pi_{*}(R\wedge Y)_{2}^{\wedge}\) that we described in SS2.1 generalizes to other \(R\)-modules \(M\), in particular when \(M\) is an \(R\)-module Thom spectrum -- we just have to figure out \(H_{R}^{*}(M)\). 
This will be the main result of the next section.

### Proof of the main theorem

At this point, we know from the previous section that even for non-vector-bundle Thom spectra \(M^{R}f\) over \(R=H\mathbb{Z}\), \(ku\), \(ko\), and \(\mathit{tmf}\), we can work over \(\mathcal{A}(0)\), \(\mathcal{E}(1)\), \(\mathcal{A}(1)\), and \(\mathcal{A}(2)\) to compute the \(E_{2}\)-page of Baker-Lazarev's Adams spectral sequence, implying that a change-of-rings formula for these Thom spectra exists. Our next step is to determine the \(\mathcal{A}(0)\)-, \(\mathcal{E}(1)\)-, \(\mathcal{A}(1)\)-, and \(\mathcal{A}(2)\)-modules \(H_{R}^{*}(M^{R}f)\). We describe the actions of the generators of these algebras below in Equation (2.20); however, it is not yet clear that they satisfy the Adem relations, so we describe these modules over freer algebras, then later in the proof of Equation (2.28) we show they are compatible with the Adem relations, hence are in fact \(H_{R}^{*}H\)-modules.

**Definition 2.20**.: Let \(X\) be a space.
1. Given \(a\in H^{1}(X;\mathbb{Z}/2)\), let \(M_{H\mathbb{Z}}(a,X)\) be the \(\mathbb{Z}/2[s_{1}]\)-module which is a free \(H^{*}(X;\mathbb{Z}/2)\)-module on a single generator \(U\), and with \(s_{1}\)-action (2.21) \[s_{1}(Ux)\coloneqq U(ax+\mathrm{Sq}^{1}(x)).\]
2. Given \(a\in H^{1}(X;\mathbb{Z}/2)\) and \(c\in H^{3}(X;\mathbb{Z})\), let \(M_{ku}(a,c,X)\) be the \(\mathbb{Z}/2\langle q_{0},q_{1}\rangle\)-module which is a free \(H^{*}(X;\mathbb{Z}/2)\)-module on a single generator \(U\), and with \(q_{0}\)- and \(q_{1}\)-actions given by (2.22) \[q_{0}(Ux)\coloneqq U(ax+Q_{0}(x))\] (2.23) \[q_{1}(Ux)\coloneqq U((c\bmod 2+a^{3})x+Q_{1}(x)).\]
3. Given \(a\in H^{1}(X;\mathbb{Z}/2)\) and \(b\in H^{2}(X;\mathbb{Z}/2)\), let \(M_{ko}(a,b,X)\) be the \(\mathbb{Z}/2\langle s_{1},s_{2}\rangle\)-module which is a free \(H^{*}(X;\mathbb{Z}/2)\)-module on a single generator \(U\), and with \(s_{1}\)- and \(s_{2}\)-actions (2.24) \[s_{1}(Ux)\coloneqq U(ax+\mathrm{Sq}^{1}(x)),\qquad s_{2}(Ux)\coloneqq U(bx+a\,\mathrm{Sq}^{1}(x)+\mathrm{Sq}^{2}(x)).\]
4. Given \(a\in H^{1}(X;\mathbb{Z}/2)\) and \(d\in SH^{4}(X)\), let \(M_{\mathit{tmf}}(a,d,X)\) be the \(\mathbb{Z}/2\langle s_{1},s_{2},s_{4}\rangle\)-module which is a free \(H^{*}(X;\mathbb{Z}/2)\)-module on a single generator \(U\), with \(s_{1}\)- and \(s_{2}\)-actions given by (2.24) with \(b=t(d)\), and \(s_{4}\)-action given by (2.25) \[s_{4}(Ux)\coloneqq U\big(\delta x+(t(d)a+\mathrm{Sq}^{1}(t(d)))\mathrm{Sq}^{1}(x)+t(d)\mathrm{Sq}^{2}(x)+a\,\mathrm{Sq}^{3}(x)+\mathrm{Sq}^{4}(x)\big).\]
5. Given \(d\in SH^{4}(X)\), let \(M^{\prime}_{\mathit{tmf}}(d,X)\) be the \(\mathbb{Z}/3\langle\beta,p^{1}\rangle/(\beta^{2})\)-module which is a free \(H^{*}(X;\mathbb{Z}/3)\)-module on a single generator \(U\), with \(\beta\)- and \(p^{1}\)-actions specified by (2.26) \[\beta(Ux)\coloneqq U\beta(x)\] (2.27) \[p^{1}(Ux)\coloneqq U((d\bmod 3)x+\mathcal{P}^{1}(x)).\]

The mod 3 reduction of the supercohomology class \(d\) is defined as usual as the image of \(d\) after passing to the mod 3 Moore spectrum \(\mathbb{S}/3\): \[[X,\Sigma^{4}\mathit{SH}]\longrightarrow[X,\Sigma^{4}\mathit{SH}\wedge\mathbb{S}/3]\stackrel{{\cong}}{{\longrightarrow}}[X,\Sigma^{4}H\mathbb{Z}/3],\] because \(H\mathbb{Z}/2\wedge\mathbb{S}/3\simeq 0\). Thus \(d\) mod 3 is well-defined as a class in \(H^{4}(X;\mathbb{Z}/3)\).

**Lemma 2.27**.: _Keep the notation from Equation (2.20)._ 1.
_The action of_ \(s_{1}\) _on_ \(M_{H\mathbb{Z}}(a,X)\) _squares to_ \(0\)_, so the_ \(\mathbb{Z}/2[s_{1}]\)_-module structure on_ \(M_{H\mathbb{Z}}(a,X)\) _refines to an_ \(\mathcal{A}(0)\)_-module structure with_ \(\mathrm{Sq}^{1}(x)\coloneqq s_{1}(x)\)_._
2. _The actions of_ \(q_{0}\) _and_ \(q_{1}\) _on_ \(M_{ku}(a,c,X)\) _both square to_ \(0\)_, so the_ \(\mathbb{Z}/2\langle q_{0},q_{1}\rangle\)_-module structure on_ \(M_{ku}(a,c,X)\) _refines to an_ \(\mathcal{E}(1)\)_-module structure, where for_ \(i=0,1\)_,_ \(Q_{i}(x)\coloneqq q_{i}(x)\)_._
3. _The actions of_ \(s_{1}\) _and_ \(s_{2}\) _on_ \(M_{ko}(a,b,X)\)_, and of_ \(s_{1}\)_,_ \(s_{2}\)_, and_ \(s_{4}\) _on_ \(M_{\mathit{tmf}}(a,d,X)\)_, satisfy the Adem relations with_ \(s_{i}\) _in place of_ \(\mathrm{Sq}^{i}\)_, hence refine to an_ \(\mathcal{A}(1)\)_-module structure on_ \(M_{ko}(a,b,X)\) _and an_ \(\mathcal{A}(2)\)_-module structure on_ \(M_{\mathit{tmf}}(a,d,X)\)_._
4. _The actions of_ \(\beta\) _and_ \(p^{1}\) _on_ \(M^{\prime}_{\mathit{tmf}}(d,X)\) _satisfy the relations in (_2.17_), hence refine the_ \(\mathbb{Z}/3\langle\beta,p^{1}\rangle/(\beta^{2})\)_-module structure on_ \(M^{\prime}_{\mathit{tmf}}(d,X)\) _to an_ \(\mathcal{A}^{tmf}\)_-module structure, where the Bockstein acts as_ \(\beta\) _and_ \(\mathcal{P}^{1}\) _acts as_ \(p^{1}\)_._

Rather than prove this directly, we will obtain it as a corollary of Equation (2.28); a sample direct verification appears after the theorem. This theorem says that the modules defined in Equation (2.20) are \(H^{*}_{R}\) of the Thom spectra for the corresponding twists.

**Theorem 2.28**.: _Let \(X\) be a topological space._
1. _Given_ \(a\in H^{1}(X;\mathbb{Z}/2)\)_, let_ \(f_{a}\colon X\to B\mathrm{GL}_{1}(H\mathbb{Z})\) _be the corresponding fake vector bundle twist. Then there is an isomorphism of_ \(\mathcal{A}(0)\)_-modules_ (2.29) \[H^{*}_{H\mathbb{Z}}(M^{H\mathbb{Z}}f_{a})\stackrel{{\cong}}{{\longrightarrow}}M_{H\mathbb{Z}}(a,X).\]
2. _Given_ \(a\in H^{1}(X;\mathbb{Z}/2)\) _and_ \(c\in H^{3}(X;\mathbb{Z})\)_, let_ \(f_{a,c}\colon X\to B\mathrm{GL}_{1}(ku)\) _be the corresponding fake vector bundle twist. Then there is an isomorphism of_ \(\mathcal{E}(1)\)_-modules_ (2.30) \[H^{*}_{ku}(M^{ku}f_{a,c})\stackrel{{\cong}}{{\longrightarrow}}M_{ku}(a,c,X).\]
3. _Given_ \(a\in H^{1}(X;\mathbb{Z}/2)\) _and_ \(b\in H^{2}(X;\mathbb{Z}/2)\)_, let_ \(f_{a,b}\colon X\to B\mathrm{GL}_{1}(ko)\) _be the corresponding fake vector bundle twist. Then there is an isomorphism of_ \(\mathcal{A}(1)\)_-modules_ (2.31) \[H^{*}_{ko}(M^{ko}f_{a,b})\stackrel{{\cong}}{{\longrightarrow}}M_{ko}(a,b,X).\]
4. _Given_ \(a\in H^{1}(X;\mathbb{Z}/2)\) _and_ \(d\in SH^{4}(X)\)_, let_ \(f_{a,d}\colon X\to B\mathrm{GL}_{1}(\text{tmf})\) _be the corresponding fake vector bundle twist. Then there is an isomorphism of_ \(\mathcal{A}(2)\)_-modules_ (2.32) \[H^{*}_{tmf}(M^{tmf}f_{a,d})\stackrel{{\cong}}{{\longrightarrow}}M_{tmf}(a,d,X),\] _and an isomorphism of_ \(\mathcal{A}^{tmf}\)_-modules_ (2.33) \[H^{*}_{tmf}(M^{tmf}f_{0,d};\mathbb{Z}/3)\stackrel{{\cong}}{{\longrightarrow}}M^{\prime}_{tmf}(d,X).\]

In the last isomorphism, we turn off degree-1 twists so that we have a Thom isomorphism for mod 3 cohomology.
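As a sample direct verification of Equation (2.27), here is the check that \(s_{1}\) squares to zero on \(M_{H\mathbb{Z}}(a,X)\), i.e. part (1). For any \(x\in H^{*}(X;\mathbb{Z}/2)\),
\[s_{1}(s_{1}(Ux))=s_{1}\big(U(ax+\mathrm{Sq}^{1}(x))\big)=U\big(a(ax+\mathrm{Sq}^{1}(x))+\mathrm{Sq}^{1}(ax+\mathrm{Sq}^{1}(x))\big)=U\big(a^{2}x+a\,\mathrm{Sq}^{1}(x)+a^{2}x+a\,\mathrm{Sq}^{1}(x)\big)=0,\]
using the Cartan formula, \(\mathrm{Sq}^{1}(a)=a^{2}\), and \(\mathrm{Sq}^{1}\mathrm{Sq}^{1}=0\). The checks for the remaining parts are analogous but longer, and in any case follow from Equation (2.28).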
**Corollary 2.34**.: _Keep the notation from Equation (2.28)._ **Twisted \(\mathbb{Z}\)-homology:**: _The_ \(E_{2}\)_-page of Baker-Lazarev's Adams spectral sequence computing_ \[\pi_{*}(M^{H\mathbb{Z}}f_{a})_{2}^{\wedge}\] _is isomorphic as_ \[\operatorname{Ext}_{\mathcal{A}(0)}(\mathbb{Z}/2)\] _-modules to_ \[\operatorname{Ext}_{\mathcal{A}(0)}(M_{H\mathbb{Z}}(a,X),\mathbb{Z}/2)\] _._ **Twisted \(ku\)-homology:**: _The \(E_{2}\)-page of Baker-Lazarev's Adams spectral sequence computing \(\pi_{*}(M^{ku}f_{a,c})^{\wedge}_{2}\) is isomorphic as \(\operatorname{Ext}_{\mathcal{E}(1)}(\mathbb{Z}/2)\)-modules to \(\operatorname{Ext}^{s,t}_{\mathcal{E}(1)}(M_{ku}(a,c,X),\mathbb{Z}/2)\)._ **Twisted \(ko\)-homology:**: _The_ \(E_{2}\)_-page of Baker-Lazarev's Adams spectral sequence computing_ \(\pi_{*}(M^{ko}f_{a,b})^{\wedge}_{2}\) _is isomorphic as_ \(\operatorname{Ext}_{\mathcal{A}(1)}(\mathbb{Z}/2)\)_-modules to_ \(\operatorname{Ext}^{s,t}_{\mathcal{A}(1)}(M_{ko}(a,b,X),\mathbb{Z}/2)\)_._ **Twisted \(\mathit{tmf}\)-homology:**: 1. _The_ \(E_{2}\)_-page of Baker-Lazarev's Adams spectral sequence computing_ \(\pi_{*}(M^{\mathit{tmf}}f_{a,d})^{\wedge}_{2}\) _is isomorphic as_ \(\operatorname{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\)_-modules to_ \(\operatorname{Ext}^{s,t}_{\mathcal{A}(2)}(M_{\mathit{tmf}}(a,d,X),\mathbb{Z}/2)\)_._ 2. _The_ \(E_{2}\)_-page of Baker-Lazarev's Adams spectral sequence computing_ \(\pi_{*}(M^{\mathit{tmf}}f_{0,d})^{\wedge}_{3}\) _is isomorphic as_ \(\operatorname{Ext}_{\mathcal{A}^{\mathit{tmf}}}(\mathbb{Z}/3)\)_-modules to_ \(\operatorname{Ext}^{s,t}_{\mathcal{A}^{\mathit{tmf}}}(M^{\prime}_{\mathit{tmf }}(d,X),\mathbb{Z}/3)\)_._ _Remark 2.35_.: In SS1.2.1 we saw that the twists of \(H\mathbb{Z}\) discussed above are all vector bundle twists, so that the \(H\mathbb{Z}\) part of Equation (2.34) follows from the standard change-of-rings argument; the same is true for the twists of _MTSO_ appearing below in Equation (2.37). In both cases, the other calculations are new. _Remark 2.36_.: The analogue of Equation (2.34) is true for a few standard variants of the Adams spectral sequence. For example, one could switch the order of \(H^{*}_{R}(Mf)\) and \(\mathbb{Z}/2\) in \(\operatorname{Ext}_{H^{*}_{R}H}\) and obtain the \(E_{2}\)-page of Baker-Lazarev's Adams spectral sequence computing twisted \(R\)-cohomology. One could also work out a version of Equation (2.34) in terms of \(R\)-module \(H\)-homology with its \(H^{R}_{*}H\)-comodule structure. Recall the modules \(W_{1}\), \(W_{2}\), and \(W_{3}\) from Equation (2.4). 
**Corollary 2.37**.: _Keep the notation from Equation (2.28)._

**Twisted oriented bordism:**: _The_ \(E_{2}\)_-page of Baker-Lazarev's Adams spectral sequence computing_ \((\Omega^{\operatorname{SO}}_{*}(X,a))^{\wedge}_{2}\) _is isomorphic as_ \(\operatorname{Ext}_{\mathcal{A}(0)}(\mathbb{Z}/2)\)_-modules to_ \(\operatorname{Ext}^{s,t}_{\mathcal{A}(0)}(M_{H\mathbb{Z}}(a,X)\otimes W_{1},\mathbb{Z}/2)\)_._

**Twisted spin\({}^{c}\) bordism:**: _The_ \(E_{2}\)_-page of Baker-Lazarev's Adams spectral sequence computing_ \((\Omega^{\operatorname{Spin}^{c}}_{*}(X,a,c))^{\wedge}_{2}\) _is isomorphic as_ \(\operatorname{Ext}_{\mathcal{E}(1)}(\mathbb{Z}/2)\)_-modules to_ \(\operatorname{Ext}^{s,t}_{\mathcal{E}(1)}(M_{ku}(a,c,X)\otimes W_{2},\mathbb{Z}/2)\)_._

**Twisted spin bordism:**: _The_ \(E_{2}\)_-page of Baker-Lazarev's Adams spectral sequence computing_ \((\Omega^{\operatorname{Spin}}_{*}(X,a,b))^{\wedge}_{2}\) _is isomorphic as_ \(\operatorname{Ext}_{\mathcal{A}(1)}(\mathbb{Z}/2)\)_-modules to_ \(\operatorname{Ext}^{s,t}_{\mathcal{A}(1)}(M_{ko}(a,b,X)\otimes W_{3},\mathbb{Z}/2)\)_._ _For_ \(t-s<8\)_, this is isomorphic to_ \(\operatorname{Ext}^{s,t}_{\mathcal{A}(1)}(M_{ko}(a,b,X),\mathbb{Z}/2)\)_._

**Twisted string bordism:**: _For_ \(t-s<16\)_, the_ \(E_{2}\)_-pages of Baker-Lazarev's Adams spectral sequences computing_ \((\Omega^{\operatorname{String}}_{*}(X,a,d))^{\wedge}_{2}\)_, resp._ \((\Omega^{\operatorname{String}}_{*}(X,0,d))^{\wedge}_{3}\)_, are isomorphic to_ \(\operatorname{Ext}^{s,t}_{\mathcal{A}(2)}(M_{\mathit{tmf}}(a,d,X),\mathbb{Z}/2)\)_, resp._ \(\operatorname{Ext}_{\mathcal{A}^{\mathit{tmf}}}(M^{\prime}_{\mathit{tmf}}(d,X),\mathbb{Z}/3)\)_, as modules over_ \(\operatorname{Ext}_{\mathcal{A}(2)}(\mathbb{Z}/2)\)_, resp._ \(\operatorname{Ext}_{\mathcal{A}^{\mathit{tmf}}}(\mathbb{Z}/3)\)_._

Proof of Equation (2.28).: All five parts of the theorem have similar proofs, so we walk through the full proof in two cases -- \(R=ku\), whose proof carries through for \(H\mathbb{Z}\), \(ko\), and \(\mathit{tmf}\) at \(p=3\) with minor changes; and \(R=\mathit{tmf}\) at \(p=2\), where the presence of supercohomology means the proof is slightly different.

Now we specialize to \(R=ku\) and a fake vector bundle twist \(f_{a,c}\colon X\to B\mathrm{GL}_{1}(ku)\). To begin, use Equation (2.15) to learn that \(H^{*}_{ku}(M^{ku}f_{a,c})\cong H^{*}(X;\mathbb{Z}/2)\) as \(\mathbb{Z}/2\)-vector spaces. (In the more familiar case where the twist is given by a vector bundle, this is the Thom isomorphism.) Next, the Thom diagonal (Equation (1.11)) and the Cartan formula provide a formula for \(Q_{i}(Ux)\), \(i=0,1\), in terms of \(Q_{i}(U)\) and \(Q_{i}(x)\). In particular, this formula implies that if we can show \(Q_{0}(U)=Ua\) and \(Q_{1}(U)=U(c\bmod 2+a^{3})\), then the \(\mathcal{E}(1)\)-module action defined on \(M_{ku}(a,c,X)\) in Equation (2.20) is identified with \(H^{*}_{ku}(Mf_{a,c})\). By the naturality of cohomology operations, it suffices to compute \(Q_{0}(U)\) and \(Q_{1}(U)\) for the universal twist over \(B\mathrm{O}/B\mathrm{Spin}^{c}\). Equation (1.15) then allows us to infer what the cohomology operations on the Thom class have to be in order to recover the correct \(\mathcal{A}\)-module structure on the Thom spectrum after applying the universal twist.
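To spell out this reduction: \(Q_{0}\) and \(Q_{1}\) act as derivations, so the Thom diagonal gives
\[Q_{i}(Ux)=Q_{i}(U)\,x+U\,Q_{i}(x),\qquad i=0,1,\]
and hence the entire \(\mathcal{E}(1)\)-module structure on \(H^{*}_{ku}(M^{ku}f_{a,c})\) is determined by the two classes \(Q_{0}(U)\) and \(Q_{1}(U)\); once these are shown to be \(Ua\) and \(U(c\bmod 2+a^{3})\), comparing with (2.22) and (2.23) yields the isomorphism (2.30).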
Let \(f\colon B\mathrm{O}/B\mathrm{Spin}^{c}\to B\mathrm{GL}_{1}(\mathit{MTSpin}^{c})\) be the universal fake vector bundle twist, \(M^{\mathit{MTSpin}^{c}}f\) be its associated Thom spectrum, and \(M^{ku}f\) be the \(ku\)-module Thom spectrum obtained by composing \(f\) with the map \(B\mathrm{GL}_{1}(\mathit{MTSpin}^{c})\to B\mathrm{GL}_{1}(ku)\) induced by the Atiyah-Bott-Shapiro map. The Atiyah-Bott-Shapiro map is \(3\)-connected, so the map \(M^{\mathit{MTSpin}^{c}}f\to M^{ku}f\) is also \(3\)-connected. Thus, for example, \(\pi_{0}(M^{ku}f)\cong\pi_{0}(M^{\mathit{MTSpin}^{c}}f)\); by Equation (1.28) \(M^{\mathit{MTSpin}^{c}}f\simeq\mathit{MTO}\), so \(\pi_{0}(M^{ku}f)\cong\Omega_{0}^{\mathrm{O}}\cong\mathbb{Z}/2\). This and similar ideas will determine \(Q_{0}(U)\) and \(Q_{1}(U)\) for us: in particular we will find \(\mathrm{Sq}^{1}(U)=Ua\) and \(Q_{1}(U)=U(c\bmod 2+a^{3})\) because this is the unique choice that is compatible with the known homotopy groups of the Thom spectra of the universal twists from SS1.2.2: \(\mathit{MTO}\) over \(K(\mathbb{Z}/2,1)\times K(\mathbb{Z},3)\), \(\mathit{MTSO}\) over \(K(\mathbb{Z},3)\), and \(\mathit{MTPin}^{c}\) over \(K(\mathbb{Z}/2,1)\). We first consider \(Q_{0}\): \(Q_{0}(U)\) is either \(0\) or \(Ua\). For either of the two options for \(Q_{0}(U)\), one can explicitly write the \(\mathcal{E}(1)\)-module structure on \(H^{*}_{ku}(M^{ku}f)\) in low degrees. Then, using Baker-Lazarev's Adams spectral sequence, one finds that if \(Q_{0}(U)=0\), \(\pi_{0}(M^{ku}f)^{\wedge}_{2}\cong\pi_{0}(M^{\mathit{MTSpin}^{c}}f)\) has at least \(4\) elements, but since \(Mf\simeq\mathit{MTO}\), we know this group is \(\Omega_{0}^{\mathrm{O}}\cong\mathbb{Z}/2\). Thus \(Q_{0}(U)=Ua\). There are three options for \(Q_{1}(U)\): \(0\), \(Uc\bmod 2\), and \(U(c\bmod 2+a^{3})\). In order to verify the \(Q_{1}\) action, we pull back to \(K(\mathbb{Z},3)\) and \(K(\mathbb{Z}/2,1)\) separately, and then argue in a similar way. * For \(f\colon K(\mathbb{Z},3)\to B\mathrm{GL}_{1}(\mathit{MTSpin}^{c})\), \(M^{\mathit{MTSpin}^{c}}f\simeq\mathit{MTSO}\), which is incompatible with \(Q_{1}(U)=0\); the argument is similar to that for \(Q_{0}\). * For \(f\colon K(\mathbb{Z}/2,1)\to B\mathrm{GL}_{1}(\mathit{MTSpin}^{c})\), \(M^{\mathit{MTSpin}^{c}}f\simeq\mathit{MTPin}^{c}\). In \(H^{*}_{ku}(M^{ku}f)\), \(Q_{1}(U)\neq 0\), which one can show by pulling back further along (2.38) \[M^{ku}f\wedge H\mathbb{Z}/2\longrightarrow M^{ku}\wedge_{ku}H\mathbb{Z}/2.\] Thus \(Q_{1}(U)=U(c\bmod 2+a^{3})\). Using the fact that \(\mathcal{E}(1)=\langle Q_{0},Q_{1}\rangle\) and applying the Cartan formula recovers the actions in (2.22). Because the fake vector bundle twist for \(\mathit{tmf}\) uses supercohomology, its part of the proof is different enough that we go into the details. The reduction to the computation of \(\mathrm{Sq}^{1}(U)\), \(\mathrm{Sq}^{2}(U)\), and \(\mathrm{Sq}^{4}(U)\) in the case of the universal twist proceeds in the same way as for \(ku\). In SS1.2.4 we computed \(H^{*}(B\mathrm{SO}/B\mathrm{String};\mathbb{Z}/2)\) in low degrees; this and the Kunneth formula imply that in the mod \(2\) cohomology of \(K(\mathbb{Z}/2,1)\times B\mathrm{SO}/B\mathrm{String}\), \(H^{1}\) is spanned by \(a\), \(H^{2}\) is spanned by \(\{a^{2},t(d)\}\), and \(H^{4}\) is spanned by \(\{a^{4},a^{2}t(d),a\mathrm{Sq}^{1}t(d),\delta,t(d)^{2}\}\). 
Therefore there are \(\lambda_{1},\ldots,\lambda_{8}\in\mathbb{Z}/2\) such that \[\mathrm{Sq}^{1}(U) =U\lambda_{1}a \tag{2.39a}\] \[\mathrm{Sq}^{2}(U) =U(\lambda_{2}a^{2}+\lambda_{3}t(d)) \tag{2.39b}\] \[\mathrm{Sq}^{4}(U) =U(\lambda_{4}a^{4}+\lambda_{5}a^{2}t(d)+\lambda_{6}a\mathrm{Sq}^{1}t(d)+\lambda_{7}\delta+\lambda_{8}t(d)^{2}). \tag{2.39c}\] We finish the proof by indicating how to find \(\lambda_{1}\) through \(\lambda_{8}\). To find \(\lambda_{7}\), consider the twist pulled back to \(f\colon K(\mathbb{Z},4)\simeq B\mathrm{Spin}/B\mathrm{String}\to B\mathrm{O}/B\mathrm{String}\). Like in the proof for twists of \(ku\), the action of \(\mathrm{Sq}^{4}\) on the Thom class can be detected on either \(M^{\mathit{MTString}}f\) or \(M^{\mathit{tmf}}f\); as we discussed in Equation (1.60), \(M^{\mathit{MTString}}f\simeq\mathit{MTSpin}\), so \(\pi_{3}(M^{\mathit{MTString}}f)\cong\Omega_{3}^{\mathrm{Spin}}=0\), and since the map \(M^{\mathit{MTString}}f\to M^{\mathit{tmf}}f\) is sufficiently connected, \(\pi_{3}(M^{\mathit{tmf}}f)=0\) as well. In \(H^{*}_{\mathit{tmf}}(M^{\mathit{tmf}}f)\), the only options for \(\mathrm{Sq}^{4}(U)\) are \(0\) or \(U\) times the tautological class. One can run the Baker-Lazarev Adams spectral sequence for these two options and see that only the latter choice is compatible with \(\pi_{3}(M^{\mathit{tmf}}f)=0\).18 Thus \(\lambda_{7}=1\). Footnote 18: To do so, it will be helpful to know \(\mathrm{Ext}_{\mathcal{A}(2)}(C\nu,\mathbb{Z}/2)\), where \(C\nu\) is the \(\mathcal{A}(2)\)-module with two \(\mathbb{Z}/2\) summands in degrees \(0\) and \(4\), joined by a \(\mathrm{Sq}^{4}\). These \(\mathrm{Ext}\) groups have been computed by Bruner-Rognes [1, Corollary 4.16, Figure 4.3]. For the other coefficients, we pull back to vector bundle twists for various vector bundles \(V\to X\), where we know \(\mathrm{Sq}^{k}(U)=Uw_{k}(V)\), \(a\mapsto w_{1}(V)\), \(t(d)\mapsto w_{2}(V)\), and \(\delta\mapsto w_{4}(V)\). Choosing vector bundles with auspicious values of \(w_{1}\), \(w_{2}\), and \(w_{4}\) quickly determines the remaining coefficients. * Pulling back the twist to \(K(\mathbb{Z}/2,1)\simeq B\mathrm{O}_{1}\) gives the Thom spectrum \(\mathit{tmf}\wedge(B\mathrm{O}_{1})^{\sigma-1}\), where \(\sigma\to B\mathrm{O}_{1}\) is the tautological line bundle. As \(w_{1}(\sigma)\neq 0\) but \(w_{2}(\sigma)=0\) and \(w_{4}(\sigma)=0\), we can plug these Stiefel-Whitney classes into (2.39) (with \(w_{1}(V)\) in place of \(a\), \(w_{2}(V)\) in place of \(t(d)\), and \(w_{4}(V)\) in place of \(\delta\) as usual) to conclude \(\lambda_{1}=1\), \(\lambda_{2}=0\), and \(\lambda_{4}=0\). * Let \(V\coloneqq\mathcal{O}(1)\oplus\mathcal{O}(2)\to\mathbb{CP}^{2}\). If \(\alpha\in H^{2}(\mathbb{CP}^{2};\mathbb{Z}/2)\cong\mathbb{Z}/2\) is the unique nonzero element, then \(w_{1}(V)=0\), \(w_{2}(V)=\alpha\), and \(w_{4}(V)=0\). Plugging this into (2.39b), we find \(\mathrm{Sq}^{2}(U)=U\alpha=U\lambda_{3}\alpha\), so \(\lambda_{3}=1\). And plugging \(w_{1}(V)\), \(w_{2}(V)\), and \(w_{4}(V)\) into (2.39c), we obtain \(\mathrm{Sq}^{4}(U)=0=U\lambda_{8}\alpha^{2}\), so \(\lambda_{8}=0\). * Let \(x\), resp. \(y\) be the nonzero classes in \(H^{1}(\mathbb{RP}^{2}\times\mathbb{RP}^{2};\mathbb{Z}/2)\) pulled back from the first, resp. second copy of \(\mathbb{RP}^{2}\), and let \(\sigma_{x},\sigma_{y}\to\mathbb{RP}^{2}\times\mathbb{RP}^{2}\) be the real line bundles satisfying \(w_{1}(\sigma_{x})=x\) and \(w_{1}(\sigma_{y})=y\). 
Now let \(V\coloneqq\sigma_{x}\oplus\sigma_{y}^{\oplus 3}\); then \(w_{1}(V)=x+y\), \(w_{2}(V)=xy+y^{2}\), and \(w_{4}(V)=0\). Plugging into (2.39c), we have \(\mathrm{Sq}^{4}(U)=0=U\lambda_{5}x^{2}y^{2}\), so \(\lambda_{5}=0\). * Repeat the preceding example, but with \(\mathbb{RP}^{1}\times\mathbb{RP}^{3}\) in place of \(\mathbb{RP}^{2}\times\mathbb{RP}^{2}\); this time, \(w_{1}(V)=x+y\), \(w_{2}(V)=xy+y^{2}\), and \(w_{4}(V)=xy^{3}\). Plugging into (2.39c), we have \(\mathrm{Sq}^{4}(U)=Uxy^{3}=U(1+\lambda_{6})xy^{3}\), so \(\lambda_{6}=0\). ## 3. Applications In this section, we give examples in which we use Equations (2.34) and (2.37) to make computations of twisted (co)homology groups. ### U-duality and related twists of spin bordism Let \(G\) be a topological group and \[\begin{CD}1@>{}>{}>\{\pm 1\}@>{}>{}>\widetilde{G}@>{}>{}>G@>{}>{}>1\end{CD} \tag{3.1a}\] be a central extension classified by \(\beta\in H^{2}(BG;\{\pm 1\})\). Then the central extension \[\begin{CD}1@>{}>{}>\{\pm 1\}@>{}>{}>\mathrm{Spin}\times_{\{\pm 1\}}\widetilde{G}@>{p}>{}>\mathrm{SO}\times G@>{}>{}>1\end{CD} \tag{3.1b}\] is classified by \(w_{2}+\beta\in H^{2}(B(\mathrm{SO}\times G);\mathbb{Z}/2)\). One can prove this is the extension by pulling back along \(\mathrm{SO}\to\mathrm{SO}\times G\) and \(G\to\mathrm{SO}\times G\) and observing that both pulled-back extensions are non-split. Therefore given an oriented vector bundle \(E\to X\) and a principal \(G\)-bundle \(P\to X\), i.e. the data of an \(\mathrm{SO}\times G\) structure on \(E\), a lift of this data to a \(\mathrm{Spin}\times_{\{\pm 1\}}\widetilde{G}\)-structure is a trivialization of \(w_{2}(E)+f_{P}^{*}(\beta)\), where \(f_{P}\colon X\to BG\) is the classifying map of \(P\to X\). That is, if \(\xi\) denotes the composition \[\xi\colon B(\mathrm{Spin}\times_{\{\pm 1\}}\widetilde{G})\xrightarrow{Bp}B\mathrm{SO}\times BG\to B\mathrm{SO}\to B\mathrm{O}, \tag{3.2}\] then a \(\xi\)-structure on \(E\) is equivalent to a \((BG,\beta)\)-twisted spin structure, meaning that by Equation (1.40) the Thom spectrum \(\mathit{MT}\xi\) is canonically equivalent to the \(\mathit{MTSpin}\)-module Thom spectrum \(Mf_{0,\beta}\) associated to the fake vector bundle twist \(f_{0,\beta}\colon BG\to B\mathrm{GL}_{1}(\textit{MTSpin})\). This Thom spectrum may or may not split as \(\textit{MTSpin}\wedge X\) for a spectrum \(X\): a sufficient condition is the existence of a vector bundle \(V\to BG\) such that \(w_{2}(V)=\beta\), as we discussed in §2.1, but as we will see soon, there are choices of \((G,\beta)\), even when \(G\) is a compact, connected Lie group, for which no such \(V\) exists. For these \(G\) and \(\beta\), Equation (2.28) significantly simplifies the calculation of \(\xi\)-bordism. As an example, consider \(G=\mathrm{SU}_{8}/\{\pm 1\}\) and \(\beta\) the nonzero element of \(H^{2}(BG;\mathbb{Z}/2)\cong\mathrm{Hom}(\pi_{1}(G),\mathbb{Z}/2)\cong\mathbb{Z}/2\), corresponding to the central extension \[\begin{CD}1@>{}>{}>\{\pm 1\}@>{}>{}>\mathrm{SU}_{8}@>{}>{}>\mathrm{SU}_{8}/\{\pm 1\}@>{}>{}>1.\end{CD} \tag{3.3}\] In [1], we studied \(\Omega^{\mathrm{Spin}\times_{\{\pm 1\}}\mathrm{SU}_{8}}_{*}\) as part of an argument that the \(E_{7(7)}(\mathbb{R})\) U-duality symmetry of four-dimensional \(\mathcal{N}=8\) supergravity is anomaly-free. Speyer [10] shows that all representations of \(G\) are spin, so \(\beta\neq w_{2}(V)\) for any vector bundle \(V\to BG\) induced from a representation of \(G\), and this can be upgraded to show \(Mf_{0,\beta}\not\simeq\textit{MTSpin}\wedge X\) for any spectrum \(X\) (see [1, 2]). 
This precludes the standard shearing/change-of-rings argument for computing \(\mathrm{Spin}\times_{\{\pm 1\}}\mathrm{SU}_{8}\) bordism, and indeed in [1, §4.3] we had to give a more complicated workaround. However, thanks to Equation (2.28), we can now argue over \(\mathcal{A}(1)\). We need as input the low-degree cohomology of \(B(\mathrm{SU}_{8}/\{\pm 1\})\). **Proposition 3.4** ([1, Theorem 4.4]).: \(H^{*}(B(\mathrm{SU}_{8}/\{\pm 1\});\mathbb{Z}/2)\cong\mathbb{Z}/2[\beta,b,c,d,e,\ldots]/(\dots)\) _with \(|\beta|=2\), \(|b|=3\), \(|c|=4\), \(|d|=5\), and \(|e|=6\); there are no other generators below degree \(7\) and no relations below degree \(7\). The Steenrod squares are_ \[\mathrm{Sq}(\beta) =\beta+b+\beta^{2}\] \[\mathrm{Sq}(b) =b+d+b^{2}\] \[\mathrm{Sq}(c) =c+e+\mathrm{Sq}^{3}(c)+c^{2}\] \[\mathrm{Sq}(d) =d+b^{2}+\mathrm{Sq}^{3}(d)+\mathrm{Sq}^{4}(d)+d^{2}. \tag{3.5}\] Now we can calculate \(H^{*}_{ko}(M^{ko}f_{0,\beta})\); by Equation (2.37), the map \(M^{\textit{MTSpin}}f_{0,\beta}\to M^{ko}f_{0,\beta}\) is an isomorphism on homotopy groups in degrees we care about, so \(M^{ko}f_{0,\beta}\) suffices. Equation (2.28) tells us \(\mathrm{Sq}^{1}(U)=0\) and \(\mathrm{Sq}^{2}(U)=U\beta\); to make more computations, we use the Cartan formula together with the Steenrod squares recorded in Equation (3.4). Proceeding with the relations in (3.5) yields \[\mathrm{Sq}^{1}(U\beta) =U\mathrm{Sq}^{1}(\beta)+\mathrm{Sq}^{1}(U)\beta=Ub \tag{3.6a}\] \[\mathrm{Sq}^{2}(U\beta) =U\mathrm{Sq}^{2}(\beta)+\mathrm{Sq}^{1}(U)\mathrm{Sq}^{1}(\beta)+\mathrm{Sq}^{2}(U)\beta=U(2\beta^{2})=0\] \[\mathrm{Sq}^{1}(Ub) =U\mathrm{Sq}^{1}(b)+\mathrm{Sq}^{1}(U)b=0 \tag{3.6b}\] \[\mathrm{Sq}^{2}(Ub) =U\mathrm{Sq}^{2}(b)+\mathrm{Sq}^{1}(U)\mathrm{Sq}^{1}(b)+\mathrm{Sq}^{2}(U)b=U(d+b\beta)\] \[\mathrm{Sq}^{1}(U(d+b\beta)) =U\mathrm{Sq}^{1}(d+b\beta)+\mathrm{Sq}^{1}(U)(d+b\beta)=U(2b^{2})=0 \tag{3.6c}\] \[\mathrm{Sq}^{2}(U(d+b\beta)) =U\mathrm{Sq}^{2}(d+b\beta)+\mathrm{Sq}^{1}(U)\mathrm{Sq}^{1}(d+b\beta)+\mathrm{Sq}^{2}(U)(d+b\beta)=0.\] See the red piece of Figure 1, left, for a picture of this data. This calculation implies the vector space generated by \(\{U,U\beta,Ub,U(d+b\beta)\}\) is an \(\mathcal{A}(1)\)-submodule of \(H^{*}_{ko}(M^{ko}f_{0,\beta})\); specifically, it is isomorphic to the "seagull" \(\mathcal{A}(1)\)-module \(M_{0}\coloneqq\mathcal{A}(1)\otimes_{\mathcal{A}(0)}\mathbb{Z}/2\). This is an \(\mathcal{A}(1)\)-module whose \(\mathcal{A}(1)\)-action does not compatibly extend to an \(\mathcal{A}\)-action. Continuing to compute \(\mathrm{Sq}^{1}\)- and \(\mathrm{Sq}^{2}\)-actions as in (3.6), we learn that there is an isomorphism of \(\mathcal{A}(1)\)-modules \[H^{*}_{ko}(M^{ko}f_{0,\beta})\cong M_{0}\oplus\Sigma^{4}M_{0}\oplus\Sigma^{4}M_{1}\oplus\mathcal{A}(1)\oplus P, \tag{3.7}\] where \(P\) is concentrated in degrees \(6\) and above (so we can and will ignore it), and \(M_{1}\) is an \(\mathcal{A}(1)\)-module which is isomorphic to either \(M_{0}\) or \(C\eta\coloneqq\mathcal{A}(1)\otimes_{\mathcal{E}(1)}\mathbb{Z}/2\). We draw the decomposition (3.7) in Figure 1, left. The change-of-rings isomorphism (Equation (2.1)) and Koszul duality [1, Remark 4.5.4] allow us to compute \(\mathrm{Ext}_{\mathcal{A}(1)}(M_{0})\cong\mathbb{Z}/2[h_{0}]\) and \(\mathrm{Ext}_{\mathcal{A}(1)}(C\eta)\cong\mathbb{Z}/2[h_{0},v_{1}]\) with \(h_{0}\) in bidegree \((t-s,s)=(0,1)\) and \(v_{1}\) in bidegree \((t-s,s)=(2,1)\) [1, Examples 4.5.5 and 4.5.6]. 
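Written out, the change-of-rings step for these two modules reads (this is just the standard isomorphism restated for convenience, with the bidegrees quoted above):

\[\operatorname{Ext}^{s,t}_{\mathcal{A}(1)}\bigl(\mathcal{A}(1)\otimes_{\mathcal{A}(0)}\mathbb{Z}/2,\ \mathbb{Z}/2\bigr)\cong\operatorname{Ext}^{s,t}_{\mathcal{A}(0)}(\mathbb{Z}/2,\mathbb{Z}/2)\cong\mathbb{Z}/2[h_{0}],\qquad\operatorname{Ext}^{s,t}_{\mathcal{A}(1)}\bigl(\mathcal{A}(1)\otimes_{\mathcal{E}(1)}\mathbb{Z}/2,\ \mathbb{Z}/2\bigr)\cong\operatorname{Ext}^{s,t}_{\mathcal{E}(1)}(\mathbb{Z}/2,\mathbb{Z}/2)\cong\mathbb{Z}/2[h_{0},v_{1}].\]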
Therefore we can draw the \(E_{2}\)-page of the Adams spectral sequence computing the twisted _ko_-homology associated to the fake vector bundle twist \(f_{0,\beta}\colon B(\mathrm{SU}_{8}/\{\pm 1\})\to B\mathrm{GL}_{1}(ko)\) in Figure 1, right, and by Equation (1.40) this also computes the corresponding twisted spin bordism groups, which we saw above are \(\Omega_{*}^{\mathrm{Spin}\times_{\{\pm 1\}}\mathrm{SU}_{8}}\). This spectral sequence collapses on the \(E_{2}\)-page in degrees \(5\) and below, using \(h_{0}\)-linearity of differentials, so we have made the following computation. **Theorem 3.8** ([4, Theorem 4.26]).: (3.9) \[\begin{split}\Omega_{0}^{\operatorname{Spin}\times_{\{\pm 1\}}\operatorname{SU}_{8}}&\cong\mathbb{Z}\\ \Omega_{1}^{\operatorname{Spin}\times_{\{\pm 1\}}\operatorname{SU}_{8}}&\cong 0\\ \Omega_{2}^{\operatorname{Spin}\times_{\{\pm 1\}}\operatorname{SU}_{8}}&\cong 0\\ \Omega_{3}^{\operatorname{Spin}\times_{\{\pm 1\}}\operatorname{SU}_{8}}&\cong 0\\ \Omega_{4}^{\operatorname{Spin}\times_{\{\pm 1\}}\operatorname{SU}_{8}}&\cong\mathbb{Z}^{2}\\ \Omega_{5}^{\operatorname{Spin}\times_{\{\pm 1\}}\operatorname{SU}_{8}}&\cong\mathbb{Z}/2.\end{split}\] There are a few other choices of compact Lie groups \(G\) and classes \(\beta\in H^{2}(BG;\mathbb{Z}/2)\) such that \(\beta\) is not equal to \(w_{2}\) of any representation, including * \(\operatorname{SU}_{4n}/\{\pm 1\}\) for \(n>1\), where \(\beta\) corresponds to the double cover \(\operatorname{SU}_{4n}\to\operatorname{SU}_{4n}/\{\pm 1\}\)[10], * \(\operatorname{PSO}_{8n}\), where \(\beta\) corresponds to the double cover \(\operatorname{SO}_{8n}\to\operatorname{PSO}_{8n}\)[11], * \(\operatorname{PSp}_{n}\) and the double cover \(\operatorname{Sp}_{n}\to\operatorname{PSp}_{n}\) for \(n>1\), and * \(E_{7}/\{\pm 1\}\) and the double cover \(E_{7}\to E_{7}/\{\pm 1\}\). For the last two items, the proof is analogous to [4, Footnote 6] for \(\operatorname{SU}_{8}/\{\pm 1\}\): compute the low-degree mod \(2\) cohomology of \(BG\) and use this to show that if \(\beta\) is \(w_{2}\) of a representation \(V\), the \(\mathcal{A}\)-action on the cohomology of the corresponding Thom spectrum violates the Adem relations. For all of these choices of \(G\) and \(\beta\), one can define (at a physics level of rigor) unitary quantum field theories with fermions and a background \(\widetilde{G}\) symmetry, such that \(-1\in\widetilde{G}\) acts by \(-1\) on fermions and by \(1\) on bosons. Then, as described in [13, 14], these theories can be defined on manifolds with differential \(\operatorname{Spin}_{n}\times_{\{\pm 1\}}\widetilde{G}\) structures, so by work of Freed-Hopkins [11], the anomaly field theories of these QFTs are classified using the bordism groups \(\Omega_{*}^{\operatorname{Spin}\times_{\{\pm 1\}}\widetilde{G}}\), and computations such as Equation (3.8) are greatly simplified using Equation (2.28). _Remark 3.10_.: Though we focused on invertible field theories in this section, there are other applications of twisted spin bordism groups. For example, Kreck's modified surgery [10] uses twisted spin bordism to classify closed, smooth \(4\)-manifolds whose universal covers are spin up to stable diffeomorphism: given such a manifold \(M\), one shows that \(w_{1}(M)\) and \(w_{2}(M)\) pull back from \(B\pi_{1}(M)\), then considers twisted spin bordism for the fake vector bundle twist over \(B\pi_{1}(M)\) given by \(w_{1}(M)\) and \(w_{2}(M)\). 
Often one computes these bordism groups with Teichner's _James spectral sequence_[11, §II], a version of the Atiyah-Hirzebruch spectral sequence for spin bordism that can handle non-vector-bundle twists. However, extension questions in this spectral sequence can be difficult, and it is helpful to have the Adams spectral sequence to resolve them (see [10] for an example for a vector bundle twist). Therefore Equation (2.37) could be a useful tool for studying stable diffeomorphism classes of \(4\)-manifolds, since not all of the relevant twists come from vector bundles. ### Twists of string bordism A story very similar to that of §3.1 takes place one level up in the Whitehead tower for \(B\mathrm{O}\). Many supergravity theories require spacetime manifolds \(M\) to satisfy a _Green-Schwarz condition_ specified by a Lie group \(G\) and a class \(c\in H^{4}(BG;\mathbb{Z})\), which Sati-Schreiber-Stasheff [12] characterize as data of a spin structure on \(M\), a principal \(G\)-bundle \(P\to M\) and a trivialization of \(\lambda(M)-c(P)\), i.e. the data of a \((BG,c)\)-twisted string structure on \(M\) (see also [11, 12, 12]). In many example theories of interest, this twist does not come from a vector bundle, including the \(E_{8}\times E_{8}\) heterotic string and the CHL string [12, Lemma 2.2]. The corresponding twisted string bordism groups are used to study anomalies and defects for these theories; anomalies were touched on in §3.1, and the use of bordism groups to learn about defects is through the McNamara-Vafa cobordism conjecture [13]. Equations (2.13) and (2.28) allow us to use the Adams spectral sequence at \(p=2\) and \(p=3\) to calculate these twisted string bordism groups in dimensions \(15\) and below, which suffices for applications to superstring theory. (Calculations at primes greater than \(3\) are easier and can be taken care of with other methods.) We will show an example computation, relevant for the \(E_{8}\times E_{8}\) heterotic string at \(p=3\); for applications of Equation (2.28) part (4) to twisted string bordism at \(p=2\), see [12, §2.2, §2.4.1] and [BDDM], and for more \(p=3\) calculations, see [BDDM]. Because \(E_{8}\) is a connected, simply connected, simple Lie group, there is an isomorphism \(c\colon H^{4}(BE_{8};\mathbb{Z})\xrightarrow{\cong}\mathbb{Z}\) uniquely specified by making the Chern-Weil class of the Killing form positive; let \(c\) be the preimage of \(1\) under this isomorphism. Bott-Samelson [10, Theorems IV, V(e)] showed that, interpreted as a map \(BE_{8}\to K(\mathbb{Z},4)\), \(c\) is \(15\)-connected. For \(i=1,2\), let \(c_{i}\in H^{4}(BE_{8}\times BE_{8};\mathbb{Z})\) be the copy of \(c\) coming from the \(i^{\text{th}}\) copy of \(E_{8}\). Let \(\mathbb{Z}/2\) act on \(E_{8}\times E_{8}\) by switching the two factors; then in the Serre spectral sequence for the fibration of classifying spaces induced by the short exact sequence \[\begin{CD}1@>{}>{}>E_{8}\times E_{8}@>{}>{}>(E_{8}\times E_{8})\rtimes\mathbb{Z}/2@>{}>{}>\mathbb{Z}/2@>{}>{}>1,\end{CD} \tag{3.11}\] the class \(c_{1}+c_{2}\in E_{2}^{0,4}=H^{4}(BE_{8}\times BE_{8};\mathbb{Z})\) survives to the \(E_{\infty}\)-page and lifts uniquely to define a class \(c_{1}+c_{2}\in H^{4}(B((E_{8}\times E_{8})\rtimes\mathbb{Z}/2);\mathbb{Z})\). The Green-Schwarz condition for the \(E_{8}\times E_{8}\) heterotic string asks for an \((E_{8}\times E_{8})\rtimes\mathbb{Z}/2\)-bundle \(P\to M\) and a trivialization of \(\lambda(M)-(c_{1}+c_{2})(P)\), so we want to compute \(\Omega_{*}^{\text{String}}(B((E_{8}\times E_{8})\rtimes\mathbb{Z}/2),c_{1}+c_{2})\). 
Equation (2.37) allows us to use the change-of-rings theorem to simplify the Adams spectral sequence at \(p=2,3\) for this computation; we will give the \(3\)-primary computation here and point the interested reader to [12, SS2.2] for the longer \(2\)-primary computation. **Theorem 3.12** ([12, Theorem 2.65]).: _The \((B((E_{8}\times E_{8})\rtimes\mathbb{Z}/2),c_{1}+c_{2})\)-twisted string bordism groups lack \(3\)-primary torsion in degrees \(11\) and below._ Just like for \(\operatorname{Spin}\times_{\{\pm 1\}}\operatorname{SU}_{8}\) bordism and [10] in SS3.1, the computation in [12] does not take advantage of the change-of-rings theorem, works over the entire Steenrod algebra, and is significantly harder than our proof here. Proof.: Recall the notation \(\mathcal{A}^{\text{tmf}}\), \(\beta\), and \(\mathcal{P}^{1}\) from Equation (2.16). By Equation (1.58), the Thom spectrum for \((B((E_{8}\times E_{8})\rtimes\mathbb{Z}/2),c_{1}+c_{2})\)-twisted string bordism is identified with the _MTString_-module Thom spectrum \(M^{\textit{MTString}}f_{0,c_{1}+c_{2}}\), where \(f_{0,c_{1}+c_{2}}\) is the fake vector bundle twist defined by the image of the class \(c_{1}+c_{2}\in H^{4}(B(E_{8}\times E_{8})\rtimes\mathbb{Z}/2)\) in supercohomology. Let \(M^{\textit{tmf}}f_{0,c_{1}+c_{2}}\) be the _tmf_-module Thom spectrum induced by the Ando-Hopkins-Rezk map \(\sigma\colon\textit{MTString}\to\textit{tmf}\). As a consequence of Equation (2.37), the \((B((E_{8}\times E_{8})\rtimes\mathbb{Z}/2),c_{1}+c_{2})\)-twisted string bordism groups are isomorphic to \(\pi_{*}(M^{\textit{tmf}}f_{0,c_{1}+c_{2}})\) in degrees \(15\) and below, and Equation (2.28) describes the \(\mathcal{A}^{\textit{tmf}}\)-module structure on \(H^{*}_{\textit{tmf}}(M^{\textit{tmf}}f_{0,c_{1}+c_{2}};\mathbb{Z}/3)\) (and hence the input to the Adams spectral sequence) in terms of the \(\mathcal{A}^{\textit{tmf}}\)-module structure on \(H^{*}(B(E_{8}\times E_{8})\rtimes\mathbb{Z}/2;\mathbb{Z}/3)\). **Lemma 3.13**.: _Let \(x\coloneqq(c_{1}+c_{2})\bmod 3\) and \(y\coloneqq c_{1}c_{2}\bmod 3\). Then \(H^{*}(B(E_{8}\times E_{8})\rtimes\mathbb{Z}/2;\mathbb{Z}/3)\cong\mathbb{Z}/3[ x,\mathcal{P}^{1}(x),\beta\mathcal{P}^{1}(x),y,\ldots]/(\ldots)\); there are no other generators below degree \(12\), nor any relations below degree \(12\)._ The actions of \(\mathcal{P}^{1}\) and \(\beta\) are as specified via the names of the generators. Proof.: Because \(H^{*}(B\mathbb{Z}/2;\mathbb{Z}/3)\) vanishes in positive degrees, the Serre spectral sequence for (3.11) collapses at \(E_{2}\) to yield an isomorphism to the ring of invariants \[H^{*}(B(E_{8}\times E_{8})\rtimes\mathbb{Z}/2;\mathbb{Z}/3)\xrightarrow{\cong} (H^{*}(BE_{8}\times BE_{8};\mathbb{Z}/3))^{\mathbb{Z}/2}. \tag{3.14}\] The lemma thus follows once we know \(H^{*}(BE_{8};\mathbb{Z}/3)\cong\mathbb{Z}/3[c\bmod 3,\mathcal{P}^{1}(c\bmod 3), \beta\mathcal{P}^{1}(c\bmod 3),\ldots]/(\dots)\), where we have given all generators and relations in degrees \(11\) and below. Because \(c\colon BE_{8}\to K(\mathbb{Z},4)\) is \(15\)-connected [11, Theorems IV, V(e)], we may replace \(BE_{8}\) with \(K(\mathbb{Z},4)\), and the mod \(3\) cohomology of \(K(\mathbb{Z},4)\) was computed by Cartan [10] and Serre [11]; see Hill [15, Corollary 2.9] for an explicit description. To compute \(H^{*}_{t\!m\!f}(M^{t\!m\!f}f_{0,c_{1}+c_{2}})\), we also need to know \(\mathcal{P}^{1}(U)\) and \(\beta(U)\). The latter vanishes for degree reasons; the former is \(\mathcal{P}^{1}(U)=Ux\) by Equation (2.28). 
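As a small worked instance of the Cartan-formula step that follows (this is only the derivation property of \(\beta\) and \(\mathcal{P}^{1}\) applied through the Thom diagonal, together with \(\beta(U)=0\) and \(\mathcal{P}^{1}(U)=Ux\) from above): for a class \(w\in H^{*}(B((E_{8}\times E_{8})\rtimes\mathbb{Z}/2);\mathbb{Z}/3)\),

\[\beta(Uw)=U\beta(w),\qquad\mathcal{P}^{1}(Uw)=\mathcal{P}^{1}(U)\,w+U\,\mathcal{P}^{1}(w)=U\bigl(xw+\mathcal{P}^{1}(w)\bigr),\]

so for example \(\mathcal{P}^{1}(Ux)=U(x^{2}+\mathcal{P}^{1}(x))\), and \(\beta(Ux)=0\) because \(x\) is the reduction of an integral class.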
Then as usual we compute on all classes in degrees \(11\) and below using the Cartan formula. **Corollary 3.15**.: _Let \(N_{1}\coloneqq\mathcal{A}^{t\!m\!f}/(\beta,(\mathcal{P}^{1})^{2},\beta\mathcal{P}^{1}\beta)\) and \(N_{2}\coloneqq\mathcal{A}^{t\!m\!f}/(\beta,\beta\mathcal{P}^{1},\mathcal{P}^{1}\beta(\mathcal{P}^{1})^{2})\). Then there is a map of \(\mathcal{A}^{t\!m\!f}\)-modules_ \[H^{*}_{t\!m\!f}(M^{t\!m\!f}f_{0,c_{1}+c_{2}})\longrightarrow N_{2}\oplus\Sigma^{8}N_{1}\oplus\Sigma^{8}N_{1} \tag{3.16}\] _which is an isomorphism in degrees \(11\) and below._ We draw the decomposition (3.16) in Figure 6, left. The next step is to compute the Ext groups of \(N_{1}\) and \(N_{2}\) over \(\mathcal{A}^{t\!m\!f}\). To do so, we will repeatedly use the fact that a short exact sequence of \(\mathcal{A}^{t\!m\!f}\)-modules induces a long exact sequence in Ext; see [1, §4.6] for more information on this technique, including how to depict the long exact sequence in an Adams chart along with some examples. Let \(C\nu\) denote the \(\mathcal{A}^{t\!m\!f}\)-module consisting of two \(\mathbb{Z}/3\) summands in degrees \(0\) and \(4\) linked by a nontrivial \(\mathcal{P}^{1}\)-action. Then there are three short exact sequences (3.17a), (3.17b), and (3.17c), relating suspensions of \(\mathbb{Z}/3\), \(C\nu\), \(N_{1}\), and \(N_{2}\); they are depicted at the tops of Figures 2, 3, and 5, respectively. The action of \(h_{0}\) on the \(E_{\infty}\)-page of this Adams spectral sequence lifts to multiplying by \(3\) on the twisted _tmf_-homology groups that the spectral sequence converges to. In the long exact sequence in \(\operatorname{Ext}\) corresponding to (3.17a), let \(x\in\operatorname{Ext}^{0,0}\) be either generator of \(\operatorname{Ext}_{\mathcal{A}^{\text{tmf}}}(\mathbb{Z}/3)\) and \(y\in\operatorname{Ext}^{0,4}\) be either generator of \(\operatorname{Ext}_{\mathcal{A}^{\text{tmf}}}(\Sigma^{4}\mathbb{Z}/3)\), both as modules over \(\operatorname{Ext}_{\mathcal{A}^{\text{tmf}}}(\mathbb{Z}/3)\). In both cases, there are exactly two generators and they differ by a sign. **Lemma 3.20**.: _In the long exact sequence in \(\operatorname{Ext}\) associated to (3.17a), \(\partial(y)=\pm\alpha x\), \(\partial(\beta y)=\pm\alpha\beta x\), and the boundary map vanishes on all other elements in degrees \(14\) and below (except for \(-c\) where \(c\) was a class already listed)._ We draw this in Figure 2, bottom left. Proof.: Apart from on \(\pm y\) and \(\pm\beta y\), the boundary map vanishes for degree reasons; since \(\partial\) commutes with the action of \(\operatorname{Ext}_{\mathcal{A}^{\text{tmf}}}(\mathbb{Z}/3)\), once we show \(\partial(y)=\pm\alpha x\), \(\partial(\beta y)=\pm\alpha\beta x\) follows. Since \(\operatorname{Ext}^{1,4}(\mathbb{Z}/3)\cong\mathbb{Z}/3\), if we show \(\partial(y)\neq 0\) the only options for \(\partial y\) are \(\pm\alpha x\). Since \(y\) and \(-y\) are the only nonzero elements in \(\operatorname{Ext}^{0,4}\) of both \(\mathbb{Z}/3\) and \(\Sigma^{4}\mathbb{Z}/3\), \(\partial(y)\neq 0\) if and only if \(\operatorname{Ext}_{\mathcal{A}^{\text{tmf}}}^{0,4}(C\nu)=0\). And this \(\operatorname{Ext}\) group is \(\operatorname{Hom}_{\mathcal{A}^{\text{tmf}}}(C\nu,\Sigma^{4}\mathbb{Z}/3)=0\). _Remark 3.21_.: In \(\operatorname{Ext}_{\mathcal{A}^{\text{tmf}}}(C\nu)\), \(\alpha(\alpha y)=\beta x\),19 but this is not detected by the long exact sequence in \(\operatorname{Ext}\). This action is denoted with a dashed gray line in Figure 2, bottom right. 
We do not need this hidden \(\alpha\)-action, so we will not prove it;20 one way to check it is to compute \(\operatorname{Ext}_{\mathcal{A}_{3}}(C\nu)\) using the software developed by Bruner [1] or by Chatham-Chua [11], obtain the hidden \(\alpha\)-action in \(\operatorname{Ext}_{\mathcal{A}_{3}}(C\nu)\), and chase it across the map of \(\operatorname{Ext}\) groups induced by \(\mathcal{A}^{\text{tmf}}\to\mathcal{A}_{3}\). Footnote 19: This does not contradict the relation \(\alpha^{2}=0\) from Equation (3.18): since \(y\) was killed in the long exact sequence computing \(\operatorname{Ext}(C\nu)\), the class \(\alpha y\in\operatorname{Ext}(C\nu)\) is not \(\alpha\) times anything, so \(\alpha(\alpha y)\) need not vanish. Footnote 20: We do use this \(\alpha\)-action in the proof of Equation (3.24), but only to determine \(\operatorname{Ext}\) groups that will be in too high of a degree to matter in the final computation, so that part of the proof can be left out. Thus we obtain \(\operatorname{Ext}(C\nu)\) in Figure 2, bottom right. Figure 2. Top: the short exact sequence (3.17a) of \(\mathcal{A}^{\text{tmf}}\)-modules. Lower left: the induced long exact sequence in \(\operatorname{Ext}\); we compute the pictured boundary maps in Equation (3.20). Lower right: \(\operatorname{Ext}_{\mathcal{A}^{\text{tmf}}}(C\nu)\) as computed by the long exact sequence. The dashed line is a nonzero \(\alpha\)-action not visible to this computation; see Equation (3.21). Now we turn to (3.17b) and its long exact sequence in \(\operatorname{Ext}\), depicted in Figure 3. We keep the notation for elements of \(\operatorname{Ext}(C\nu)\) from above, so elements are specified by products of classes in \(\operatorname{Ext}(\mathbb{Z}/3)\) with \(x\) or \(y\). In the long exact sequence induced by (3.17b), let \(z\in\operatorname{Ext}^{0,5}\) be a generator of \(\operatorname{Ext}(\Sigma^{5}\mathbb{Z}/3)\) as a module over \(\operatorname{Ext}(\mathbb{Z}/3)\) (again, there is exactly one other generator, which is \(-z\)). **Lemma 3.22**.: _In the long exact sequence in \(\operatorname{Ext}\) associated to (3.17b), \(\partial(h_{0}^{i}z)=\pm h_{0}^{i}y\), \(\partial(h_{0}^{i}c_{4}z)=\pm h_{0}^{i}c_{4}y\), and the boundary map vanishes on all other elements in degrees \(14\) and below (except for \(-c\) where \(c\) was a class already listed)._ We draw this in Figure 3, bottom left. Proof.: The proof is essentially the same as for Equation (3.20): all boundary maps other than the ones in the theorem statement vanish for degree reasons; then, \(\operatorname{Ext}(\mathbb{Z}/3)\)-linearity of boundary maps reduces the theorem statement to the computation of \(\partial(z)\), which must be \(\pm h_{0}y\) because \(\operatorname{Ext}^{0,5}_{\mathcal{A}^{\text{tmf}}}(N_{1})=\operatorname{Hom}_{\mathcal{A}^{\text{tmf}}}(N_{1},\Sigma^{5}\mathbb{Z}/3)=0\). _Remark 3.23_.: Like in Equation (3.21), the long exact sequence does not fully specify the \(\operatorname{Ext}(\mathbb{Z}/3)\)-action on \(\operatorname{Ext}(N_{1})\). One can show that \(h_{0}\cdot\alpha z=\pm c_{4}x\), but this is missed by our long exact sequence calculation. We do not need this relation in our proof of Equation (3.12), so we do not prove it; one way to see \(h_{0}\cdot\alpha z=\pm c_{4}x\) would be to deduce it from the analogous \(h_{0}\)-action in \(\operatorname{Ext}(N_{2})\) via the long exact sequence in \(\operatorname{Ext}\) induced from (3.17c). 
To see the corresponding \(h_{0}\)-action in \(\operatorname{Ext}(N_{2})\), let \(N_{3}\) be a nonsplit \(\mathcal{A}^{\text{tmf}}\)-module extension of \(C\nu\) by \(\Sigma^{8}\mathbb{Z}/3\); this characterizes \(N_{3}\) up to isomorphism. Then there is a short exact sequence \(\Sigma^{9}\mathbb{Z}/3\to N_{2}\to N_{3}\), and the \(h_{0}\)-action we want to detect is visible to the corresponding long exact sequence in \(\operatorname{Ext}\). Thus we have \(\operatorname{Ext}(N_{1})\) in Figure 3, bottom right. The last long exact sequence we have to run is the one induced by (3.17c). We keep the notation for elements of \(\operatorname{Ext}(N_{1})\) from above -- classes in \(\operatorname{Ext}(\mathbb{Z}/3)\) times \(x\), \(y\), or \(z\). We let \(w\) denote a generator of \(\operatorname{Ext}(\mathbb{Z}/3)\) as an \(\operatorname{Ext}(\mathbb{Z}/3)\)-module; like before, the two generators are \(w\) and \(-w\). Figure 3. Top: the short exact sequence (3.17b) of \(\mathcal{A}^{\text{tmf}}\)-modules. Lower left: the induced long exact sequence in \(\operatorname{Ext}\). We compute the pictured boundary maps in Equation (3.22). Lower right: \(\operatorname{Ext}_{\mathcal{A}^{\text{tmf}}}(N_{1})\) as computed by the long exact sequence. The gray line joining \(\alpha z\) and \(c_{4}x\) indicates a nonzero \(h_{0}\)-action not visible to this computation; see Equation (3.23). **Lemma 3.24**.: _In the long exact sequence in Ext associated to (3.17c), the boundary map takes the values \(\partial(x)=\pm\alpha w\), \(\partial(\alpha y)=\pm\beta w\), and \(\partial(\beta x)=\pm\alpha\beta w\), and vanishes on all other classes in degrees \(14\) and below (except for \(-c\) where \(c\) was a class already listed)._ We draw this in Figure 5, bottom left. Proof.: As in Equations (3.20) and (3.22), apart from \(\partial(\pm x)\), \(\partial(\pm\alpha y)\), and \(\partial(\pm\beta x)\), the boundary map vanishes for degree reasons, and we infer \(\partial(x)=\pm\alpha w\) because this is the only way for \(\operatorname{Ext}^{0,4}(N_{2})=\operatorname{Hom}(N_{2},\Sigma^{4}\mathbb{Z}/3)\) to vanish. And since \(\alpha(\alpha y)=\beta x\), as we discussed in Equation (3.21), it remains only to prove \(\partial(\alpha y)=\pm\beta w\); then \(\partial(\beta x)=\pm\alpha\beta w\) follows from \(\operatorname{Ext}(\mathbb{Z}/3)\)-linearity; and since \(\operatorname{Ext}^{2,12}_{\mathcal{A}^{\text{tmf}}}(\mathbb{Z}/3)\) is one-dimensional, to show \(\partial(\alpha y)=\pm\beta w\) it suffices to show \(\partial(\alpha y)\) is nonzero. To compute \(\partial(\alpha y)\), we use the characterization of \(\operatorname{Ext}^{1,t}_{\mathcal{A}^{\text{tmf}}}(M,N)\) as a set of equivalence classes of \(\mathcal{A}^{\text{tmf}}\)-module extensions \(0\to\Sigma^{t}N\to L\to M\to 0\). We will represent \(\alpha y\) as an explicit extension of \(\Sigma^{4}N_{1}\) by \(\Sigma^{12}\mathbb{Z}/3\) and then show this extension cannot be the pullback of an extension of \(N_{2}\) by \(\Sigma^{12}\mathbb{Z}/3\), which implies \(\partial(\alpha y)\neq 0\) by exactness. Up to isomorphism, there is only one non-split extension of \(\Sigma^{4}N_{1}\) by \(\Sigma^{12}\mathbb{Z}/3\), with \(\alpha y\) and \(-\alpha y\) distinguished by a sign in the extension maps; we draw this extension in Figure 4, left. 
In Figure 4, right, we illustrate what goes wrong if we try to obtain this extension as the pullback of an extension of \(N_{2}\): the relation \((\mathcal{P}^{1})^{3}=0\) in \(\mathcal{A}^{\text{tmf}}\) is violated. Thus \(\partial(\alpha y)\neq 0\). Now that we know the Ext groups of all \(\mathcal{A}^{\text{tmf}}\)-modules appearing in (3.16), we can draw the \(E_{2}\)-page of the Adams spectral sequence computing \(\pi_{*}(M^{\text{tmf}}f_{0,c_{1}+c_{2}})^{\wedge}_{3}\) in Figure 6, right. For degree reasons, this spectral sequence collapses at \(E_{2}\) in degrees \(t-s\leq 11\); since \(h_{0}\)-actions lift to multiplication by \(3\), there is no \(3\)-torsion in this range, and we conclude. _Remark 3.25_.: Other examples of twisted string structures appear in the math and physics literature; see Dierigl-Oehlmann-Schimannek [11, §3.4] for another \(3\)-primary example. _Remark 3.26_.: Just as in Equation (3.10), Kreck's modified surgery gives a classification of some closed, smooth \(8\)-manifolds up to stable diffeomorphism in terms of twisted string bordism. There is work applying this in examples corresponding to vector bundle twists [13, 14, 15, 16, 17]; it would be interesting to apply the _tmf_-module Adams spectral sequence to classes of manifolds where the twist is not given by a vector bundle. Figure 4. Left: an extension of \(\mathcal{A}^{\mathit{tmf}}\)-modules representing the class \(\alpha y\in\operatorname{Ext}^{1,12}_{\mathcal{A}^{\mathit{tmf}}}(\Sigma^{4}N_{1})\). Right: if we try to form an analogous extension of \(N_{2}\), we are obstructed by the fact that \((\mathcal{P}^{1})^{3}=0\) in \(\mathcal{A}^{\mathit{tmf}}\). This is part of the proof of Equation (3.24). Figure 5. Top: the short exact sequence (3.17c) of \(\mathcal{A}^{\mathit{tmf}}\)-modules. Lower left: the induced long exact sequence in Ext. We compute the boundary maps in Equation (3.24). Lower right: \(\mathrm{Ext}_{\mathcal{A}^{\mathit{tmf}}}(N_{2})\) as computed by the long exact sequence. Figure 6. Left: the \(\mathcal{A}^{\mathit{tmf}}\)-module structure on \(H^{*}_{\mathit{tmf}}(M)\) in low degrees; the pictured submodule contains all elements in degrees \(11\) and below. Right: the \(E_{2}\)-page of the Adams spectral sequence computing \(\pi_{*}(M)^{\wedge}_{3}\), which as we discuss in the proof of Equation (3.12) is isomorphic to the \(3\)-completion of the twisted string bordism groups relevant for \(E_{8}\times E_{8}\) heterotic string theory. ### \(H\mathbb{Z}/2\) as a \(ku\)-module Thom spectrum Devalapurkar uses methods from chromatic homotopy theory to prove the following result. We will reprove it using the tools in this paper. **Theorem 3.27** (Devalapurkar [13, Remark 2.3.16]).: _There is a map \(f\colon\mathrm{U}_{2}\to B\mathrm{GL}_{1}(ku)\) and a \(2\)-local equivalence \(Mf\simeq H\mathbb{Z}/2\)._ Proof.: Borel [13, Théorèmes 8.2 et 8.3] proved that \(H^{*}(\mathrm{U}_{2};\mathbb{Z})\cong\mathbb{Z}[b_{1},b_{3}]/(b_{1}^{2},b_{3}^{2})\) and \(H^{*}(\mathrm{U}_{2};\mathbb{Z}/2)\cong\mathbb{Z}/2[\overline{b}_{1},\overline{b}_{3}]/(\overline{b}_{1}^{2},\overline{b}_{3}^{2})\), where \(\overline{b}_{i}=b_{i}\bmod 2\), \(|b_{1}|=|\overline{b}_{1}|=1\) and \(|b_{3}|=|\overline{b}_{3}|=3\); for degree reasons, \(\mathrm{Sq}^{1}\) and \(\mathrm{Sq}^{2}\) act trivially on the mod \(2\) cohomology ring. Let \(f\colon\mathrm{U}_{2}\to B\mathrm{GL}_{1}(ku)\) be the fake vector bundle twist given by \((\overline{b}_{1},b_{3})\) (see §1.2.2 for the definition of this class of twists). Equation (2.28) shows that \(H^{*}_{ku}(Mf)\) is isomorphic to \(H^{*}(\mathrm{U}_{2};\mathbb{Z}/2)\) as \(\mathbb{Z}/2\)-vector spaces, and that the \(\mathcal{E}(1)\)-action is twisted by \(Q_{0}(U)=U\overline{b}_{1}\) and \(Q_{1}(U)=U\overline{b}_{3}\). This and the Cartan rule imply \(H^{*}_{ku}(Mf)\cong\mathcal{E}(1)\) as \(\mathcal{E}(1)\)-modules, so \(\mathrm{Ext}_{\mathcal{E}(1)}(H^{*}_{ku}(Mf),\mathbb{Z}/2)\) consists of a single \(\mathbb{Z}/2\) in bidegree \((0,0)\) and vanishes elsewhere. Thus the \(ku\)-module Adams spectral sequence immediately collapses, and we learn \(\pi_{0}(Mf)_{2}^{\wedge}\cong\mathbb{Z}/2\) and all other homotopy groups vanish. This property characterizes \(H\mathbb{Z}/2\) up to \(2\)-local equivalence (e.g. it implies \(H^{0}(Mf;\mathbb{Z}/2)\cong\mathbb{Z}/2\), giving a map \(Mf\to H\mathbb{Z}/2\) which is an isomorphism on \(2\)-completed homotopy groups, allowing us to conclude by Whitehead). _Remark 3.28_.: Devalapurkar also shows that the equivalence in Equation (3.27) can be upgraded to an equivalence of \(E_{1}\)-ring spectra; this is not accessible to our methods. _Remark 3.29_.: The statement and proof of Equation (3.27) can be upgraded to show that the Thom spectrum of the analogous map \(f_{n}\colon\mathrm{U}_{n}\to B\mathrm{GL}_{1}(ku)\) is \(2\)-locally equivalent to a wedge sum of shifts of copies of \(H\mathbb{Z}/2\) for all \(n>1\).
2305.16304
Candidate Set Re-ranking for Composed Image Retrieval with Dual Multi-modal Encoder
Composed image retrieval aims to find an image that best matches a given multi-modal user query consisting of a reference image and text pair. Existing methods commonly pre-compute image embeddings over the entire corpus and compare these to a reference image embedding modified by the query text at test time. Such a pipeline is very efficient at test time since fast vector distances can be used to evaluate candidates, but modifying the reference image embedding guided only by a short textual description can be difficult, especially independent of potential candidates. An alternative approach is to allow interactions between the query and every possible candidate, i.e., reference-text-candidate triplets, and pick the best from the entire set. Though this approach is more discriminative, for large-scale datasets the computational cost is prohibitive since pre-computation of candidate embeddings is no longer possible. We propose to combine the merits of both schemes using a two-stage model. Our first stage adopts the conventional vector distancing metric and performs a fast pruning among candidates. Meanwhile, our second stage employs a dual-encoder architecture, which effectively attends to the input triplet of reference-text-candidate and re-ranks the candidates. Both stages utilize a vision-and-language pre-trained network, which has proven beneficial for various downstream tasks. Our method consistently outperforms state-of-the-art approaches on standard benchmarks for the task. Our implementation is available at https://github.com/Cuberick-Orion/Candidate-Reranking-CIR.
Zheyuan Liu, Weixuan Sun, Damien Teney, Stephen Gould
2023-05-25T17:56:24Z
http://arxiv.org/abs/2305.16304v3
# Candidate Set Re-ranking for Composed Image Retrieval with Dual Multi-modal Encoder ###### Abstract Composed image retrieval aims to find an image that best matches a given multi-modal user query consisting of a reference image and text pair. Existing methods commonly pre-compute image embeddings over the entire corpus and compare these to a reference image embedding modified by the query text at test time. Such a pipeline is very efficient at test time since fast vector distances can be used to evaluate candidates, but modifying the reference image embedding guided only by a short textual description can be difficult, especially independent of potential candidates. An alternative approach is to allow interactions between the query and every possible candidate, i.e., reference-text-candidate triplets, and pick the best from the entire set. Though this approach is more discriminative, for large-scale datasets the computational cost is prohibitive since pre-computation of candidate embeddings is no longer possible. We propose to combine the merits of both schemes using a two-stage model. Our first stage adopts the conventional vector distancing metric and performs a fast pruning among candidates. Meanwhile, our second stage employs a dual-encoder architecture, which effectively attends to the input triplet of reference-text-candidate and re-ranks the candidates. Both stages utilize a vision-and-language pre-trained network, which has proven beneficial for various downstream tasks. Our method consistently outperforms state-of-the-art approaches on standard benchmarks for the task. ## 1 Introduction The task of composed image retrieval aims at finding a candidate image from a large corpus that best matches a user query, which is comprised of a reference image and a modification sentence describing certain changes. Compared to conventional image retrieval setups such as text-based [24] or content-based [37] retrieval, the incorporation of both the visual and textual modalities enables users to more expressively convey the desired concepts, which is useful for both specialized domains such as fashion recommendations [13; 41] and the more general case of searching over open-domain images [8; 9; 27]. Existing work [3; 5; 11; 27; 39] on composed image retrieval mostly adopts the paradigm of separately embedding the input visual and textual modalities, followed by a model that acts as an image feature modifier conditioned on the text. The modified image feature is finally compared against features of all candidate images through a vector distance (e.g., cosine similarity) before yielding the most similar one as the prediction (see Figure 1 (left)). The main benefit of such a pipeline is the inference cost. Let us assume that a corpus \(\mathcal{D}\) contains \(M\) candidate images. For a query pair of reference image \(I_{\text{R}}\) and modification text \(t\), to select the best matching candidate, the model shall exhaustively assess triplets \(\langle(I_{\mathrm{R}},t),I_{\mathrm{C}}\rangle\) for every image \(I_{\mathrm{C}}\in\mathcal{D}\). Existing pipelines individually pre-embed all candidate images to compare with the joint-embedded of \((I_{\mathrm{R}},t)\) computed on the fly, where the comparison can be done very quickly. We point out that the above pipeline presents a trade-off between the inference cost and the ability to exercise explicit text-image interactions for each candidate. 
In essence, candidate images are only presented to the model indirectly through the loss function, resulting in the model having to estimate the modified visual features from text inputs in its forward path. Here, we propose an alternative solution that exhaustively classifies triplets with query-specific candidate features, which achieves appreciable performance gain while still maintaining a reasonable inference cost. We observe that for composed image retrieval, easy and hard negatives can be distinctly separated. As the nature of this task dictates that the ground truth candidate be visually similar to the reference, otherwise it would be trivial to study a modification [8; 27]. We can further deduce that a group of hard negatives exist, which is likely to benefit from fine-grained multi-modal reasoning. This observation motivates a two-stage method, where we first filter all candidates to reduce their quantity. Since the goal at this stage is oriented toward removing easy negatives, a low-cost vector distance-based pipeline would suffice. We then re-rank the remaining candidates with explicit text-image matching on each possible triplet. Granted, such a process is more computationally intense but is empirically beneficial for reasoning among hard candidates. With the pre-filtering in place, we are able to limit the overall inference time within an acceptable range. The main focus of this paper is on the second stage. Note that our two-stage pipeline relates to the inference scheme of image-text retrieval [26] in recent Vision-and-Language Pretrained (VLP) networks [21; 22]. Specifically, Li et al. [21] propose to first compute feature similarities for all image-text pairs, then re-rank the top-\(k\) candidates through a joint image-text encoder via the Image-Text Matching (ITM) scores, which greatly speeds up the inference compared to previous VLP networks that require computing ITM scores for _all_ image-text pairs [6; 25]. Here, we arrive at a similar two-stage scheme but for the task of composed image retrieval. We also note that our method, although sharing a similar philosophy and is based on VLP networks, is not a direct replica of what is done in the image-text retrieval tasks discussed above. With the unique input triplets of \(\langle(I_{\mathrm{R}},t),I_{\mathrm{C}}\rangle\), novel model architectures are required for efficient interactions among the three features of two modalities. In summary, our contribution is a two-stage method that combines the efficiency of the existing pipeline and the ability to assess fine-grained query-candidate interactions through explicit pairing. We base our design on VLP models while developing task-specific architectures that encourage interactions among input entities. Our approach significantly outperforms existing methods on datasets of multiple domains. ## 2 Related Work The task of image retrieval traditionally accepts input in the form of either an image [37] or text [24; 44]. The aim is to retrieve an image whose content is the most similar to the input one, or respectively, best matches the textual description. Vo et al. [39] propose composed image retrieval, which takes as input both modalities, using an image as a reference while text as a modifier. Current approaches address this task by designing models that serve as a reference image modifier conditioned on text, essentially composing the input modalities into one joint representation, which is compared with features of candidates through, e.g., cosine similarity. 
Among them, TIRG [39] uses a gating mechanism along with a residual connection that aims at finding relevant changes and preserving necessary information within the reference respectively. The outputs of the two paths are summed together to produce the final representation. Anwaar et al. [2] follow a similar design but pre-encode the inputs separately and project them into a common embedding space for manipulation. Hosseinzadeh and Wang [15] propose to adopt regional features as in VQA [1] instead of CNN features. Likewise, Wen et al. [40] develop global and local composition networks to better fuse the modalities. VAL [5] introduces a transformer network to jointly encode the input modalities, where the hierarchical design encourages multi-layer matching. MAAF [11] adopts the transformer network differently by pre-processing the input into sequences of tokens to be concatenated and jointly attended. Yang et al. [42] designs a joint prediction module on top of VAL that highlights the correspondences between reference and candidate images. Notably, the module is only used in training as it is intractable to apply it to every possible pair of reference and candidate images during inference. CIRPLANT [27] proposes to use a pre-trained vision-and-language (VLP) transformer to modify the visual content, alongside CLIP4CIR [3; 4], BLIP4CIR [28] and CASE [20]. DCNet [17] introduces the Composition and Correction networks, with the latter accepting a reference image with a candidate target image and assesses their relevancy. This, on first look, suggests an exhaustive reference-candidate pairing. Though, inference cost limits the interaction of a pair of reference and candidate images to simple operations -- i.e., element-wise product and subtraction with a single-layer MLP. ARTEMIS [9] is the first to introduce a model that scores each triplet of query and candidate image, which separates it apart from an image modifier-based pipeline. However, inference cost still confines such scoring to cosine similarities between individually pre-encoded modalities. In contrast to existing approaches, our method is in two stages. We do not seek to modify image features in an end-to-end manner. Instead, we pre-filter the candidates and focus more on re-ranking the hard negatives. The re-ranking step is formatted as a scoring task based on contrastive learning, which is natural for VLP networks trained with similar objectives. We note that the concept of a two-stage scheme is not new for conventional image-text or document retrieval. Indeed, re-ranking a selected list of candidate images via e.g., k-nearest neighbors [35] or query expansion techniques [7] has been widely studied. More recent and related work on VLP models [21; 22; 23] propose to first score the similarities between image and text features, then re-rank the top-\(k\) pairs via a multi-modal classifier. This aligns nicely with the two pre-training objectives, namely, Image-Text Contrastive and Image-Text Matching. To the best of our knowledge, we are the first to apply such a two-stage scheme to composed image retrieval. We contribute by designing an architecture that reasons over the triplet of \(\langle(I_{\text{R}},t),I_{\text{C}}\rangle\), which differs from the conventional retrieval tasks discussed above. ## 3 Two-stage Composed Image Retrieval Composed image retrieval can be defined as follows. Let \(I_{\text{R}}\) be some reference image and \(t\) be a piece of text describing a change to the image. 
Then given a query consisting of the pair \(q=(I_{\text{R}},t)\), the aim of composed image retrieval is to find the best match, i.e., the target image \(I_{\text{T}}\), among all candidates in a large image corpus \(\mathcal{D}\). In this work, we propose a two-stage model where we first filter the large corpus to obtain a smaller set of candidate images relevant to the query (see Section 3.1), and then re-rank to obtain an ordered list of target images (see Section 3.2). For both steps, we base our designs on the pre-trained vision-and-language (VLP) network BLIP [22], though other VLP models might be used. BLIP consists of an image encoder and a text encoder. The image encoder is a vision transformer [12] that accepts as input a raw image and produces the spatial image features by slicing the image in patches and flattening them in a sequence. A global image feature is also represented as a prepended special [CLS] token. The text encoder can operate in three modes. When configured as a uni-modal encoder, it takes in a sequence of tokenized words from a text sequence and outputs the sequential features with a [CLS] token summarizing the whole text, as in BERT [10]. Optionally, the text encoder can be configured as a multi-modal encoder, where a Cross-Attention (CA) layer is inserted after each Self-Attention (SA) layer. As shown in Figure 2, the CA layer accepts the sequential output of the image encoder and performs image-text attention. The output of which is passed into the Feed-Forward (FF) layer and is the same length as the input text sequence. The transformer-based text encoder accepts inputs of varied lengths while sharing the same token dimension \(d\) as the output of the image encoder. In this paper, we denote the features of an arbitrary image (resp. input text) as \(\mathbf{v}\) (resp. \(\mathbf{w}\)) and its length as \(L_{\mathbf{v}}\) (resp. \(L_{\mathbf{w}}\)). We note that a decoder mode is also available in BLIP for generative tasks (e.g., image captioning [1]), though it is not used in this work. ### Candidate Filtering The first stage of our approach aims to filter out the majority of candidates leaving only a few of the more difficult candidates for further analysis in the second step. Shown in Figure 1 (left), we adopt the BLIP text encoder in its multi-modal mode such that it jointly embeds a given query \(q=(I_{\text{R}},t)\) into a sequential output, which we denote as \(z_{t}\in\mathbb{R}^{L_{\mathbf{w}}\times d}\). We extract the feature of the [CLS] token in \(z_{t}\) as a single \(d\)-dimensional vector and compare it to pre-computed [CLS] embeddings of all candidate images \(I_{\text{T}}^{\prime}\in\mathcal{D}\) via cosine similarity. Note that the pre-computed candidate embeddings are independent of the query text, a weakness that we address in our second stage model. Since BLIP by default projects the [CLS] token features into \(d=256\) for both image and text, the comparison can be efficiently done through a cosine similarity. After training the candidate filtering model, we select the top-\(K\) candidates (for each query) for re-ranking in the second stage. Here we choose \(K\) to be sufficiently large so that the ground-truth target is within the selected images for most queries. We term the percentage value of queries with ground truth within the top-\(K\) as _ground truth coverage_ in the following sections. Empirically, we find that setting \(K\) to \(50\) or \(100\) gives a good trade-off between recall and inference cost in the second stage. 
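To make the filtering stage concrete, the following is a minimal PyTorch-style sketch of its inference path (the encoder call signature, `proj`, and the variable names are illustrative placeholders rather than the released implementation; candidate [CLS] embeddings are assumed to be pre-computed and L2-normalized):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def filter_candidates(blip_text_encoder, proj, ref_img_feats, text_tokens,
                      candidate_cls_embeds, k=100):
    """Stage-1 candidate filtering (sketch).

    ref_img_feats:        [L_v, d_model] spatial features of the reference image I_R
                          from the frozen ViT image encoder.
    text_tokens:          tokenized modification text t.
    candidate_cls_embeds: [M, 256] pre-computed, L2-normalized [CLS] embeddings
                          of all M candidate images.
    Returns z_t (the sequential multi-modal feature, reused by stage 2) and the
    indices of the top-k candidates kept for re-ranking.
    """
    # Multi-modal mode: the text self-attends and cross-attends to the reference image
    # (hypothetical call signature for the BLIP text encoder).
    z_t = blip_text_encoder(text_tokens, cross_inputs=ref_img_feats)  # [L_w, d_model]
    q = F.normalize(proj(z_t[0]), dim=-1)        # [CLS] token projected to 256-d, unit norm
    sims = candidate_cls_embeds @ q              # cosine similarities over the corpus, [M]
    topk = sims.topk(k).indices                  # candidates passed to stage 2
    return z_t, topk
```

Because the candidates enter only through pre-computed vectors, this step scales to the full corpus; the richer, query-specific interaction is deferred to the re-ranking stage.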
Details on the ablation of \(K\) across datasets are discussed in Section B.1. We note that concurrent to our work, CASE [20] adopts a similar approach as our Candidate Filtering model, in that it uses BLIP for reference image-text fusion. We point out that both our filtering model and CASE use BLIP in one of its originally proposed configurations without architectural changes; hence, this design is unsurprising and could be viewed as a natural progression for the task. Meanwhile, our second-stage re-ranking sets us apart from this concurrent work. ### Candidate Re-ranking The second stage re-ranks the filtered set of candidates. Since this set is much smaller than the entire corpus we can afford a richer, more expensive approach. To this end, we introduce a novel dual-encoder design for this subtask inspired by the BLIP architecture proposed for NLVR [36]. Figure 1: Overall training pipeline. **Left:** Candidate Filtering model, which takes as input the tokenized text and cross-attends it with the reference image. The output is the sequential feature \(z_{t}\), where we extract the [CLS] token as the summarized representation of the query \(q=(I_{\text{R}},t)\) to compare its similarity with features of \(I^{\prime}_{\text{T}}\). **Right:** Candidate Re-ranking model with dual-encoder architecture. Stacked elements signify that we exhaustively pair up each candidate \(I^{\prime}_{\text{T}}\) among the selected top-\(K\) with the query \(q\) for assessment. Note that the two encoders take in different inputs for cross-attention. The output [CLS] tokens are concatenated and passed for producing a logit. Figure 2: Details of the transformer layer in our dual-encoder architecture. Here, we take the first layer as an example. **SA**: Self-Attention layer, **CA**: Cross-Attention layer, **FF**: Feed-Forward layer. \(\oplus\): element-wise addition for residual connections. Dashed fillings on FF suggest weight-sharing. As shown in Figure 1 (right), our two encoders run in parallel as two branches to serve separate purposes, one to encode \(I_{\text{R}}\) with \(I_{\text{T}}^{\prime}\) and the other to encode \(t\) with \(I_{\text{T}}^{\prime}\). Internally, they exchange information via dedicated merging layers. Encoder-\(z_{t}\), as the name suggests, accepts as input \(z_{t}\in\mathbb{R}^{L_{\mathbf{w}}\times d}\) from the previous stage. Since we do not further finetune the Candidate Filtering model in the second stage, \(z_{t}\) can be precomputed for each query of \(q=(I_{\text{R}},t)\). Meanwhile, Encoder-\(t\) takes as input the tokenized \(t\), which is then embedded into a sequential feature of size \(\mathbb{R}^{L_{\mathbf{w}}\times d}\). Here, note that for a given query, the lengths of \(z_{t}\) and the embedded \(t\) are always identical, as the output of a text encoder (i.e., the Candidate Filtering model) shall retain the dimension of the input coming through the SA layers (see Figure 2). This characteristic makes merging the outputs of the two encoders within each transformer layer effortless. We use a default 12-layer transformer for each encoder. Within each transformer layer, both encoders cross-attend the inputs introduced above with the sequential feature of an arbitrary \(I_{\text{T}}^{\prime}\). The intuition is to allow \(I_{\text{T}}^{\prime}\) to separately attend to the two elements in each query \(q\) for relevancy, namely \(t\) and \(I_{\text{R}}\). For Encoder-\(t\), the two entities entering the CA layer are self-explanatory. 
However, for Encoder-\(z_{t}\), we opt for using \(z_{t}\) as a surrogate of \(I_{\text{R}}\). The main reason is the GPU memory limit, as it is intractable to perform image-image cross attention with the default \(L_{\text{\boldmath$\mathbf{v}$}}=577\) during training. Although spatial pooling can be used to reduce the length of the input \(I_{\text{R}}\) sequence, we empirically find it inferior, potentially due to the loss of information in pooling. Details are discussed in Section 4.3. On the other hand, \(z_{t}\) can be viewed as an embedding that contains sufficient \(I_{\text{R}}\) information and is readily available, as it has been pre-computed in the previous stage from the query pair \(q\). A bonus of using \(z_{t}\) is that we can easily merge the cross-attended features, since it shares the same dimensionality as \(t\) at all times. Empirically, we confirm that our design choices yield a better result. Figure 2 depicts the transformer block of the re-ranking model. As illustrated, we merge the outputs of the CA layers from the two encoders in each transformer layer. Specifically, given the outputs of the encoders after the CA layers, the merging is performed as an average pooling in the first six layers, and a concatenation followed by a simple MLP in the last six layers. The merged feature is then passed into a residual connection, followed by the FF layers. Regarding weight-sharing across layers in each encoder, we opt for having separate weights of SA and CA layers within each encoder, while sharing the weights of FF layers to account for the different inputs passing through the SA and CA layers. We point out that due to the residual connections (Figure 2), the outputs of the two encoders after the final transformer block are different in values, even though the FF layers are of the same weights. We formulate the re-ranking as a scoring task--among the set of candidate images score the true target higher than all other negative images. For each sequential output from either encoder, we extract the [CLS] token at the front as the summarized feature. We then concatenate the two [CLS] outputs from two encoders and use a two-layer MLP as the scorer head, which resembles the BLIP Image-Text Matching task setup. ### Training Pipeline Candidate Filtering.Our filtering model follows the contrastive learning pipeline [32] with a batch-based classification loss [39] commonly adopted in previous work [20; 28]. Specifically, in training, given a batch size of \(B\), the features of the \(i\)-th query \((I_{\text{R}}^{i},t^{i})\) with its ground-truth target \(I_{\text{T}}^{i}\), we formulate the loss as: \[\mathcal{L}_{\text{Filtering}}=-\frac{1}{B}\sum_{i=1}^{B}\log\left[ \frac{\exp\Bigl{[}\lambda\cdot\kappa\left(f_{\theta}(I_{\text{R}}^{i},t^{i}),I_ {\text{T}}^{i}\right)\Bigr{]}}{\sum_{j=1}^{B}\exp\Bigl{[}\lambda \cdot\kappa\left(f_{\theta}(I_{\text{R}}^{i},t^{i}),I_{\text{T}}^{j}\right) \Bigr{]}}\right], \tag{1}\] where \(f_{\theta}\) is the Candidate Filtering model parameterized by \(\theta\), \(\lambda\) is a learnable temperature parameter following [32], and \(\kappa(\cdot,\cdot)\) is the similarity kernel as cosine similarity. In inference, the model ranks all candidate images \(I_{\text{T}}^{\prime}\) for each query via the same similarity kernel \(\kappa(\cdot,\cdot)\). We then pick the top-\(K\) lists for each query for the second-stage re-ranking. Candidate Re-ranking.The re-ranking model is trained with a similar contrastive loss as discussed above. 
Specifically, for each \(\langle(I_{\text{R}}^{i},t^{i}),I_{\text{T}}^{i}\rangle\) triplet, we extract the predicted logit and contrast it against all other \(\langle(I_{\mathbf{R}}^{i},t^{i}),I_{\mathbf{T}}^{j}\rangle\) with \(i\neq j\), essentially creating \((B-1)\) negatives for each positive triplet. The loss is formulated as: \[\mathcal{L}_{\text{Re-ranking}}=-\frac{1}{B}\sum_{i=1}^{B}\log\left[ \frac{\exp\!\left[f_{\gamma}(I_{\mathbf{R}}^{i},t^{i},I_{\mathbf{T}}^{i}) \right]}{\sum_{j=1}^{B}\exp\!\left[f_{\gamma}(I_{\mathbf{R}}^{i},t^{i},I_{ \mathbf{T}}^{j})\right]}\right], \tag{2}\] where \(f_{\gamma}\) is the Candidate Re-ranking model parameterized by \(\gamma\). Note that in training, we randomly sample negatives within the same batch to form triplets. Therefore, the choice of \(K\) does not affect the training process. We empirically find this yielding better performance than training only on the top-\(K\) negatives, with the benefit of not relying on a filtered candidate list for training. Incidentally, it is also more efficient, as we do not need to independently load negatives for each query. During inference, the model only considers, for each query, the selected top-\(K\) candidates and ranks them by the predicted logits. ## 4 Experiments ### Experimental Setup Datasets.Following previous work, we consider two datasets in different domains. Fashion-IQ is a dataset of fashion products in three categories, namely _Dress_, _Shirt_, and _Toptee_, which form over 30k triplets with 77k images. The annotations are collected from human annotators and are overall concise. CIRR [27] is proposed to specifically study the fine-grained visiolinguistic cues and implicit human agreements. It contains 36k pairs of queries with human-generated annotations, where images often contain rich object interactions [36]1. Footnote 1: Both datasets are publicly released under the MIT License, which allows distributions and academic usages. Evaluation Metrics.We follow previous work to report our results in Recall@\(K\), that is the percentage of queries whose true target is ranked to be among the top-\(K\) candidates. For Fashion-IQ, we assess the performance with Recall@10 and 50 on each category [41]. Such choices of \(K\) values account for the possible false negatives in the candidates. On CIRR, we report Recall@1, 5, 10, and 50. We additionally record Recall\({}_{\text{subset}}\)@\(K\)[27], where the candidates are limited to a pre-defined set of five with high similarities. The set of five candidates contains no false negatives, making this metric more suitable to study fine-grained reasoning ability. For Fashion-IQ, we report results on the validation split, as the ground truths of the test split remain nonpublic. For CIRR, we report our main results on the test split obtained from the evaluation server2. Footnote 2: [https://cirr.cecs.anu.edu.au/test_process/](https://cirr.cecs.anu.edu.au/test_process/) Implementation Details.We adopt the standard image pre-processing and model configurations of BLIP encoders [22]. Except for image padding, which we follow Baldrati et al. [3] with a padding ratio of 1.25. We initialize the image and text encoders with the BLIP w/ ViT-B pre-trained weights. In both stages, we freeze the ViT image encoder and only finetune the text encoders due to the GPU memory limits. We follow BLIP downstream task settings and optimize with AdamW [29] with a cosine learning rate schedule. Training details for each dataset in the two stages are listed in Section A. 
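For concreteness, both training objectives reduce to a standard in-batch cross-entropy in which the matching target plays the role of the positive class. The sketch below is a minimal PyTorch-style illustration of Eq. (1) and Eq. (2); the tensor names, the temperature initialization, and the stand-in pairwise-logit matrix are assumptions for exposition rather than our exact implementation.

```python
import torch
import torch.nn.functional as F

def filtering_loss(q_feat, t_feat, log_temp):
    """Eq. (1): in-batch classification over cosine similarities."""
    q = F.normalize(q_feat, dim=-1)          # kappa(., .) is cosine similarity
    t = F.normalize(t_feat, dim=-1)
    logits = log_temp.exp() * (q @ t.T)      # lambda acts as a learnable temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)   # -log softmax of the matching target

def reranking_loss(pair_logits):
    """Eq. (2): the true target vs. (B-1) in-batch negatives for each query."""
    labels = torch.arange(pair_logits.size(0), device=pair_logits.device)
    return F.cross_entropy(pair_logits, labels)

# toy usage with random features / logits
B, d = 8, 256
q_feat, t_feat = torch.randn(B, d), torch.randn(B, d)
log_temp = torch.nn.Parameter(torch.tensor(2.659))  # roughly log(1/0.07); illustrative init
loss_filter = filtering_loss(q_feat, t_feat, log_temp)
loss_rerank = reranking_loss(torch.randn(B, B))      # stand-in for f_gamma logits over all query-candidate pairs
```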
All experiments are conducted on a single NVIDIA A100 80G with PyTorch while enabling automatic mixed precision. We base our implementation on the BLIP codebase3. Footnote 3: [https://github.com/salesforce/BLIP](https://github.com/salesforce/BLIP) ### Performance Comparison with State-of-the-Art Fashion-IQ.Table 1 compares the performance on Fashion-IQ. We note that our re-ranking model (row 20) outperforms all existing methods consistently across three categories. Impressively, the performance increase is notable when compared to CASE (row 18), a method that also uses BLIP encoders. This suggests that our two-stage design, particularly the explicit query-specific candidate re-ranking, is beneficial to the task. Regarding our first stage filtering model (row 19), we achieve a performance slightly behind CASE. As discussed in Section 3, we share a similar BLIP-based architecture and training pipeline as CASE. Upon examining the ablation studies by Levy et al. [20], we conjecture that the lower performance is mainly because we adopt a different loss and do not finetune the ViT image encoder alongside due to hardware limits. We note that nevertheless, our re-ranked performance surpasses all existing methods by a large margin. Cirr.Table 2 compares the performance on CIRR. Overall, we observe a similar trend in performance increase as in Fashion-IQ. This includes the performance comparison between our filtering \begin{table} \begin{tabular}{l l c c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{**Dress**} & \multicolumn{2}{c}{**Shirt**} & \multicolumn{2}{c}{**Toptee**} & \multicolumn{2}{c}{**Average**} & \multicolumn{1}{c}{**Avg.**} \\ \cline{3-10} & **Methods** & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 & **Metric** \\ \hline [MISSING_PAGE_POST] ASE [20] & 47.77 & 69.36 & 48.48 & 70.23 & 50.18 & 72.24 & 48.79 & 70.68 & 59.74 \\ \hline **19** & Ours **F** & 43.78 & 67.38 & 45.04 & 67.47 & 49.62 & 72.62 & 46.15 & 69.15 & 57.65 \\ **20** & Ours **R\({}_{100}\)** & **48.14** & **71.34** & **50.15** & **71.25** & **55.23** & **76.80** & **51.17** & **73.13** & **62.15** \\ \hline \hline \end{tabular} \end{table} Table 1: Fashion-IQ, validation split. We report Average Metric _(Recall\({}_{\text{avg}}\)@10+Recall\({}_{\text{avg}}\)@50)/2_ as in [41]. Rows 1-2 are cited from [41]. \(\dagger\): Methods trained with additional data in a multi-task setup. **F** (shaded) denotes Candidate Filtering model, **R\({}_{K}\)** denotes Candidate Re-ranking model with results obtained on the top-\(K\) filtered results from **F**. For Fashion-IQ we use top-100, which has a ground truth coverage of 77.24%, 75.86% and 81.18% for dress, shirt and toptee categories respectively. Best numbers (resp. second-best) are in **black** (resp. underlined). 
\begin{table} \begin{tabular}{l l c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{**Recall@\(K\)**} & \multicolumn{4}{c}{**Recall\({}_{\text{subset}}\)@\(K\)**} & **Avg.** \\ \cline{3-10} & **Methods** & \(K=1\) & \(K=5\) & \(K=10\) & \(K=50\) & \(K=1\) & \(K=2\) & \(K=3\) & **Metric** \\ \hline **1** & TIRG [39] & 14.61 & 48.37 & 64.08 & 90.03 & 22.67 & 44.97 & 65.14 & 35.52 \\ **2** & TIRG+LastConv [39] & 11.04 & 35.68 & 51.27 & 83.29 & 23.82 & 45.65 & 64.55 & 29.75 \\ **3** & MAAF [11] & 10.31 & 33.03 & 48.30 & 80.06 & 21.05 & 41.81 & 61.60 & 27.04 \\ **4** & MAAF+BERT [11] & 10.12 & 33.10 & 48.01 & 80.57 & 22.04 & 42.41 & 62.14 & 27.57 \\ **5** & MAAF-TF [11] & 9.90 & 32.86 & 48.83 & 80.27 & 21.17 & 42.04 & 60.91 & 27.02 \\ **6** & MAAF-RF [11] & 10.22 & 33.32 & 48.68 & 81.84 & 21.41 & 42.17 & 61.60 & 27.37 \\ 7 & CIRPLANT [27] & 15.18 & 43.36 & 60.48 & 87.64 & 33.81 & 56.99 & 75.40 & 38.59 \\ **8** & CIRPLANT w/OSCAR [27] & 19.55 & 52.55 & 68.39 & 29.38 & 39.20 & 63.03 & 79.49 & 45.88 \\ **9** & ARTEMIS [9] & 16.96 & 46.10 & 61.31 & 87.73 & 39.99 & 62.20 & 75.67 & 43.05 \\ **10** & CLIP4ICH [4] & 38.53 & 69.98 & 81.86 & 95.93 & 68.19 & 85.64 & 94.17 & 69.09 \\ **11** & BLIP4ICH [28] & 40.15 & 73.08 & 83.88 & 96.27 & 72.10 & 88.27 & 95.93 & 72.59 \\ **12** & CASE [20] & 48.00 & 79.11 & 87.25 & **97.57** & 75.88 & 90.58 & 96.00 & 77.50 model (row 14) and CASE [20] (row 12), as discussed above. We notice that our re-ranked results (row 15) outperform all previous methods, including models that are based on BLIP and pre-trained on additional data of large scales (row 13). This demonstrates that our design more effectively harnesses the information within the input entities than existing work. ### Ablation Study In Table 3, we test several variants of the re-ranking model to verify our design choices. We report performance on the Fashion-IQ validation split for all experiments. Further ablations studies are included in Section B. We begin with assessing the necessity of our dual-encoder setup, as shown in Table 3 row 1. We validate the need for the Encoder-\(z_{t}\), as otherwise, the performance drops significantly. Building on the above, we further verify our use of \(z_{t}\) as a surrogate of \(I_{\text{R}}\). As discussed in Section 3, Encoder-\(z_{t}\) is designed to allow for interactions between the reference and candidate images. However, GPU memory consumption prohibits direct image-image cross-attention, unless certain spatial pooling is applied to the reference image. In rows 2-3, we show that it is less desired to replace \(z_{t}\) with such pooled features, which, in turn, corroborates that \(z_{t}\) is a better alternative for the network. We additionally show two variants related to our design choices. Row 4 replaces the first six merging layers from the average pooling to MLP, while row 5 removes the weight-sharing of the FF layers. We note a consistent performance decrease in both cases. ### Inference Time One obvious limitation of our method is the inference time of the re-ranking model, as it requires exhaustive pairing of the query and top-\(K\) candidates. Several factors contribute to the case, including the size of the validation/test split of the dataset, choice of \(K\), as well as the general length of the input text, which affects the efficiency of the attention layers within the transformer. 
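To make the cost structure explicit, the following is a minimal sketch of the two-stage inference and the Recall@\(K\) computation: the filtering stage is a single similarity matrix over the whole corpus, whereas the re-ranking stage requires one dual-encoder forward pass per (query, candidate) pair, i.e., \(K\) passes per query. The function names, shapes, and the `rerank_scores_fn` callable are illustrative placeholders, not our released interface.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def retrieve(q_feat, corpus_feat, rerank_scores_fn, k=50):
    """Stage 1: cosine-similarity filtering; Stage 2: exhaustive re-ranking of the top-K."""
    sims = F.normalize(q_feat, dim=-1) @ F.normalize(corpus_feat, dim=-1).T   # (Q, N)
    topk = sims.topk(k, dim=-1).indices                                       # (Q, K) candidate ids
    ranked = []
    for q_idx, cand_idx in enumerate(topk):          # K re-ranker passes per query
        logits = rerank_scores_fn(q_idx, cand_idx)   # (K,) pairwise logits for this query
        ranked.append(cand_idx[logits.argsort(descending=True)])
    return torch.stack(ranked)                       # (Q, K) candidate ids, best first

def recall_at_k(ranked, targets, k):
    """Fraction of queries whose true target is among the top-k re-ranked candidates."""
    hits = (ranked[:, :k] == targets[:, None]).any(dim=1)
    return hits.float().mean().item()
```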
We observe that compared to a traditional vector distancing-based method, e.g., our filtering model, the inference time of the re-ranking step is increased by approximately 8 times on Fashion-IQ and 35 times on CIRR. Qualitatively, it takes around 9 minutes (resp. 7 minutes) for inference on the validation split of Fashion-IQ (resp. CIRR). We note that our focus of this work is on achieving higher performance through model architectural design and better use of input information, and is not optimized towards e.g., industrial applications. Meanwhile, the additional cost results in a significant increase in performance -- around 5% in average Recall compared to the filtering results (Table 1 and Table 2). ### Qualitative Results We present several retrieved results on CIRR in Figure 3, where we show the pipeline of filtering followed by re-ranking. For **(a)** and **(b)**, we note that explicit text-candidate pairing can be more beneficial in cases where new elements are added (i.e., "trees", "two people" and "cat"), as the re-ranking model can readily identify concepts within each candidate, and assess its correlations \begin{table} \begin{tabular}{l l c c c c c c c c c} \hline \hline & & \multicolumn{2}{c}{**Dress**} & \multicolumn{2}{c}{**Shirt**} & \multicolumn{2}{c}{**Toptee**} & \multicolumn{2}{c}{**Average**} & \multicolumn{1}{c}{**Avg.**} \\ \cline{3-11} & **Methods** & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 & R@10 & R@50 & **Metric** \\ \hline \hline & Ours \(\mathbf{R}_{100}\) & **48.14** & **71.34** & **50.15** & **71.25** & **55.23** & **76.80** & **51.17** & **73.13** & **62.15** \\ \hline **1** & w/o. \(z_{t}\) & 37.48 & 66.83 & 39.16 & 65.75 & 46.25 & 72.62 & 40.96 & 68.40 & 54.68 \\ \hline **2** & w: Ref\({}_{\text{CLS}}\) & 46.55 & 71.24 & 47.84 & 70.07 & 54.36 & 75.83 & 49.59 & 72.38 & 60.98 \\ **3** & w: Ref\({}_{\text{CLS}+\text{spatial}\text{\& c\times 6}}\) & 48.04 & 71.10 & 48.04 & 70.31 & 54.82 & 76.39 & 50.30 & 72.60 & 61.45 \\ \hline **4** & Full-MLP merge & 47.00 & 70.80 & 47.79 & 69.63 & 54.61 & 76.54 & 49.80 & 72.32 & 61.06 \\ **5** & Dual Feed-Forward & 44.67 & 69.71 & 46.37 & 69.97 & 52.83 & 76.39 & 46.96 & 72.02 & 59.99 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation studies on Fashion-IQ, validation split. Shaded is our Candidate Re-ranking model as in Table 1 row 20. Row 1 ablates the utility of the dual-encoder design. Rows 2-3 examine the difference between using \(z_{t}\) and the reference image features in Encoder-\(z_{t}\) in Figure 1. In row 3, we choose \(\text{Ref}_{\text{CLS}+\text{spatial}\text{\& c\times 6}}\) to showcase the performance under minimum information loss with pooling, as the hardware cannot accommodate a longer sequence of image input. Rows 4-5 test architectural designs. See further ablations in Section B.2. Best results are in **black**. between the text. We specifically point to **(b)**, where with initial filtering, only one candidate among the top-6 contains a cat as the text describes. After re-ranking, three candidates with cat are brought forward, with the true target ranked the first. In **(c)**, we show that our re-ranking model is also effective at recognizing global visual concepts such as scenes (i.e., library). Finally, we list a failure case of the re-ranking in **(d)**, where we observe that our re-ranking model fails to associate the concept of "with" with having two entities simultaneously in the image. As a result, the top-3 re-ranked candidates each only pictures a single puppy running. 
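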
We additionally point out that the filtering module is effective at removing easy negatives. As shown in Figure 3, on each sample, the top-ranked candidates are already picturing similar objects or scenes as the reference. For more qualitative examples please see Section C. ## 5 Conclusion We propose a two-stage method for composed image retrieval, which trades off the inference cost with a model that exhaustively pairs up queries with each candidate image. Our filtering module follows prior work and uses vector distances to quickly generate a small candidate set. We then design a dual-encoder architecture to assess the remaining candidates against the query for their relevancy. Both stages of our method are designed based on the existing vision-and-language pre-trained model. We experimentally show that our approach consistently outperforms existing methods on two popular benchmarks, Fashion-IQ and CIRR. Figure 3: Qualitative examples on CIRR. For each sample, we showcase the query (left) with the filtered top-6 candidates (F), followed by the re-ranked top-6 results (R). True targets are in green frames. We demonstrate three cases where re-ranking brings the true target forward **(a-c)**, and one failure case **(d)**. Note that the aspect ratio of certain candidate images are not preserved due to the page width limitation. Additional examples can be seen in the supplementary material. ## 6 Discussions Social Impacts.Our method is a generic retrieval model that accepts user input and locates an already existing image. Therefore, it bears low potential negative social impacts. However, we point out that it is possible for a model trained on open-domain content sourced from, e.g., the Internet to exhibit certain biases. We note that such biases, as in other similar vision-and-language tasks, can be mitigated with a careful filtering of data, but nevertheless, still remains as an active area of research. Limitations.The main limitation of our method is the inference speed, which is discussed in Section 4.4. Additionally, as mentioned above, our method might have inherent potential biases from BLIP, which is pre-trained on web-sourced image-text pairs [34].
2303.15583
Measuring Categorical Perception in Color-Coded Scatterplots
Scatterplots commonly use color to encode categorical data. However, as datasets increase in size and complexity, the efficacy of these channels may vary. Designers lack insight into how robust different design choices are to variations in category numbers. This paper presents a crowdsourced experiment measuring how the number of categories and choice of color encodings used in multiclass scatterplots influences the viewers' abilities to analyze data across classes. Participants estimated relative means in a series of scatterplots with 2 to 10 categories encoded using ten color palettes drawn from popular design tools. Our results show that the number of categories and color discriminability within a color palette notably impact people's perception of categorical data in scatterplots and that the judgments become harder as the number of categories grows. We examine existing palette design heuristics in light of our results to help designers make robust color choices informed by the parameters of their data.
Chin Tseng, Ghulam Jilani Quadri, Zeyu Wang, Danielle Albers Szafir
2023-03-27T20:26:46Z
http://arxiv.org/abs/2303.15583v1
# Measuring Categorical Perception in Color-Coded Scatterplots ###### Abstract. Scatterplots commonly use color to encode categorical data. However, as datasets increase in size and complexity, the efficacy of these channels may vary. Designers lack insight into how robust different design choices are to variations in category numbers. This paper presents a crowdsourced experiment measuring how the number of categories and choice of color encodings used in multi-class scatterplots influences the viewers' abilities to analyze data across classes. Participants estimated relative means in a series of scatterplots with 2 to 10 categories encoded using ten color palettes drawn from popular design tools. Our results show that the number of categories and color discriminability within a color palette notably impact people's perception of categorical data in scatterplots and that the judgments become harder as the number of categories grows. We examine existing palette design heuristics in light of our results to help designers make robust color choices informed by the parameters of their data. Keywords: scatterplot, category, colors
Our results show that the number of categories and choice of color palette significantly impact people's abilities to estimate category means. We deconstructed our results with respect to common parameters of color encodings to find potential cues for robust palette design and find preliminary evidence that subitizing may impact categorical estimates (Kumar et al., 2017; Zhang et al., 2018; Zhang et al., 2019; Zhang et al., 2019). **Contribution:** The primary contribution of our paper is evaluating mean estimation in multiclass scatterplots with varying color palettes. Our results characterize the effect of the number of categories and color palette on perceptions of multiclass scatterplots. Our findings challenge current guidelines on multiclass scatterplot design (Kumar et al., 2017), and we present an exploratory analysis of key factors for effective color palette design. ## 2. Related Work Visual encodings in multiclass scatterplots significantly affect people's ability to interpret categorical data correctly. However, we still do not understand the perceptual impact of encoding choices across varying numbers of categories. We briefly review the topics of graphical perception in scatterplots, color palette design, and tasks in scatterplots to ground our work. ### Graphical Perception in Scatterplots Understanding categorical perception is a fundamental task in both cognitive science (Kumar et al., 2017) and visualization (Zhang et al., 2019). Past work has introduced a range of techniques for eliciting patterns in categorical data, such as Flexible Linked Axes (Zhou and Hansen, 2017), Parallel Sets (Kovesi et al., 2018), and Matchmaker (Machman et al., 2018). However, these techniques leverage specialized approaches with high learning costs, making them difficult for lay audiences to work with. Scatterplots, alternatively, are more familiar to many audiences and commonly encode categorical data (Kumar et al., 2019). Consequently, understanding how to best design scatterplots for categorical datasets is essential for effective data communication. Graphical perception studies investigate how effectively people can estimate different properties from visualized data (see Quadri and Rosen (Quadri and Rosen, 2018) for a survey). 
Scatterplots are commonly used in graphical perception experiments as they are sufficiently complex to reflect real-world challenges and simultaneously sufficiently simple to control (Kumar et al., 2017; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019). Existing studies have analyzed how scatterplots can support a variety of perceptual tasks across a range of channels. For example, Kim and Heer use scatterplots as a means to assess how different visual channels support various tasks (Kumar et al., 2017). Hong et al. (Hong et al., 2019) found that varying point size and lightness can lead to perceptual bias in mean judgments in scatterplots. Scatterplot studies commonly investigate how design influences people's abilities to estimate aggregate statistics, such as correlation (Kumar et al., 2017; Zhang et al., 2019; Zhang et al., 2019), clustering (Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019), and means (Kumar et al., 2017; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019). Other studies model the influence of different channels on scatterplot design, such as opacity (Kumar et al., 2017), color (Zhang et al., 2019), and shape (Kumar et al., 2017). Most graphical perception studies focus on statistical relationships within a single category of scatterplots. However, studies of multiclass scatterplots often characterize people's abilities to separate classes by measuring just-noticeable differences in categorical encodings (Kumar et al., 2017; Zhang et al., 2019). Alternatively, Gleicher et al. (Gleicher et al., 2017) studied how different categorical encodings influenced people's abilities to compare the means of different classes with varying numbers of points and differences in means, colors, and shapes. They found that scatterplots can effectively reveal interclass differences and that the design of a scatterplot influenced people's abilities to compare classes, with color being the strongest categorical cue. However, in contrast to other work on categorical visualization (Kumar et al., 2017), they found that increasing the number of classes from two to three did not decrease performance. We build on these observations to explore how robust people's estimates are in scatterplots with between 2 and 10 classes with varying hardness levels, color palettes, and numbers of points, (see Section 3) to more deeply understand factors involved in effective multiclass scatterplot design. ### Color Palette Design Gleicher et al.'s findings about the effectiveness of color in multiclass scatterplots echo existing design guidance and results from other studies of categorical data encodings (Kumar et al., 2017; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019). Choosing a proper categorical color palette1 for visualizing categorical data is a crucial task (Zhang et al., 2019; Zhang et al., 2019). Designers employ a combination of color models and heuristics to generate palettes (see Zhou & Hansen (Zhou and Hansen, 2019), Kovesi (Kovesi et al., 2018), Bujack et al. (Bujack et al., 2019), and Nardini et al. (Nardini et al., 2019) for surveys). 
A range of studies has explicitly examined color perception for continuous data, such as characterizing limitations of rainbow colormaps (Kumar et al., 2017; Zhang et al., 2019; Zhang et al., 2019), comparing the task-based effectiveness of continuous colormap designs (Kumar et al., 2017; Zhang et al., 2019; Zhang et al., 2019), modeling color discrimination (Zhang et al., 2019), examining color semantics (Kumar et al., 2017), quantifying the impact of size and shape on encoding perception (Kumar et al., 2017; Zhang et al., 2019) and examining perceptual biases (Zhou and Hansen, 2019). However, significantly fewer studies have characterized color use for categorical data encoding. Footnote 1: We define a color _palette_ as a set of colors specifically designed for categorical data. Several principles and metrics of effective color palette design have been proposed (Kumar et al., 2017; Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019). Past work recommends that color palettes optimize the mapping between data semantics and color semantics (Zhang et al., 2019; Zhang et al., 2019; Zhang et al., 2019); select colors that emphasize color harmonies (Zhang et al., 2019; Zhang et al., 2019), affect (Kumar et al., 2017), or pair preference (Zhang et al., 2019); and maximize perceptual and categorical separability between colors (Zhang et al., 2019) (see Silva et al. (Silva et al., 2019) for a survey). Designers can use predefined metrics to describe aesthetic (e.g., pair preference (Zhang et al., 2019)), perceptual (e.g., CIEDE 2000 (Zhang et al., 2019)), and categorical (e.g., color name difference or uniqueness (Zhang et al., 2019)) attributes of color to implement these guidelines and constrain effective palette design. While these metrics underlie many palette design guidelines, implementing these guidelines effectively takes significant expertise. Several methods for creating effective color palettes have been introduced. For example, Healey (Healey, 2011) considers linear separability, color difference, and color categorization to design discriminable color palettes. Harrower and Brewer (Harrower and Brewer, 2011) introduced ColorBrewer for providing designer-crafted distinguishable color palettes for cartography. Gramazio et al. (Gramazio et al., 2017) developed Colorgorical, which can generate categorical palettes by optimizing several perceptual and aesthetic metrics. Recent efforts have also explored how palettes might be extracted from images (Kumar et al., 2017) or colors from a given palette optimally assigned to a visualization (Kumar et al., 2017; Zhang et al., 2019; Zhang et al., 2019). Tools such as Colorgorical (Gramazio et al., 2017) and ColorBrewer (Harrower and Brewer, 2011) enable people to generate or choose from a range of palette designs (see Zhou & Hansen (Zhou and Hansen, 2019) for a survey). In this study, we compare preconstructed palettes from a range of sources, including ColorBrewer (Harrower and Brewer, 2011), Tableau (Zhang et al., 2019), D3 (Zhang et al., 2019), Stata Graphics (Kumar et al., 2017), and Carto (Carto, 2017) (see Figure 1 for the details of our selected color palettes). Following the model for comparing the effectiveness of continuous color ramps in Liu & Heer (2017), we leverage these palettes to understand how effectively common best-practice color palettes encode data over a range of data parameters. ## 3. 
Methodology We analyzed how the number of categories, number of points, and color palettes used to distinguish various categories impact people's abilities to reason with multiclass scatterplots. We performed a crowdsourced study measuring how well people were able to compare category means over varying category numbers and color palette designs. This study allowed us to characterize the effect of category number in multiclass scatterplots as well as how robust different color palette designs are across varying numbers of categories. We hypothesized that: **H1: Performance will decrease as the number of categories increases.** As visual information becomes more complex, perception and cognition degrades (Zhou et al., 2017; Zhang et al., 2017). Haroz & Whitney (2018) found that these findings generalized to categorical visualizations: increasing the number of categories degrades visual search performance. However, Gleicher et al.'s findings contradicted this observation, instead finding no performance difference between two or three category visualizations (Haroz & Whitney, 2018). We expect that for larger numbers of categories, this robustness will likely falter, even with designer-crafted palettes. Existing heuristics recommend that visualizations should not use more than seven colors for reliable data interpretation (Zhou et al., 2017). This guidance suggests that we should see drastic performance reductions for seven or more categories. **H2: The choices of the color palette will affect people's abilities to effectively compare means.** Perceptual studies demonstrate that color is a strong cue in both visualization (Zhou et al., 2017) and categorical perception (Zhou et al., 2017). Past work has shown that, even in unitless data, the choice of color palettes can affect visualization interpretation (Zhou et al., 2017; Heer et al., 2018). We likewise anticipate that color palette design may differently support varying numbers of categories: some palettes may more robustly distinguish a range of classes than others, especially as the complexity of the palette increases with larger numbers of colors. The anonymized data, results, and infrastructure for our study can be found on OSF.2 Footnote 2: [https://osf.io/w=sdc/view_only=03db060f9e4e42f29f453ed3013c3405](https://osf.io/w=sdc/view_only=03db060f9e4e42f29f453ed3013c3405) ### Task Scatterplots have been studied across a range of tasks (see Sarikaya & Gleicher (2017) for a survey). We employed a relative mean judgment task as applied in previous studies (Haroz & Whitney, 2018; Heer et al., 2018; Zhang et al., 2017). As in Gleicher et al. (2018), we asked participants to estimate the category with the highest average y-value. We used this task as it required participants to first find data points of different categories and then estimate statistical values over all points in that category. This task is sensitive to both overinclusion (i.e., including points that are not in a given class) and underinclusion (i.e., failing to include points in a given category), meaning that confusion between points of different categories should be reflected in participants' responses. It also represents a basic statistical quantity that most lay participants are able to compute. ### Stimuli Generation Participants estimated means for a series of scatterplots. We generated each scatterplot as a 400x400 pixel graph using D3 (Brock et al., 2017). Each scatterplot was rendered to white background and two orthogonal black axes with 13 unlabeled ticks. 
For every point, we rendered a filled circle mark with a three-pixel radius. We selected three-pixel points based on internal piloting to ensure that points were distinguishable between classes while also minimizing the need to address overdraw and reflecting design parameters commonly seen in real-world visualizations. As shown in Figure 1, we selected 10 qualitative color palettes: ColorBrewer/Paired (Kolmogorov et al., 2017), ColorBrewer/Set3 (Kolmogorov et al., 2017), D3/Category10 (Brock et al., 2017), Tableau/Tab10 (Brock et al., 2017), Paul Tol/Muted (Turney et al., 2017), SFSO/Parties (Zhou et al., 2017), Stata/S1 and Stata/S2 (Zhou et al., 2017), Carto/Bold (Kolmogorov et al., 2017), and Carto/Pastel (Kolmogorov et al., 2017). These color palettes were chosen from popular visualization tools that provide at least 10 categorical colors in a single palette. If there were more than 10 colors in a certain palette, we used the first 10 as the palette's colors. In each scatterplot, colors were randomly selected from the target palette and mapped to corresponding categories. While some tools prescribe a fixed order to the selection of colors from a palette, this is not a universal design practice. Randomization helps avoid potential bias from differences beyond color selection, as not all palettes may have been intentionally ordered, but future work should investigate differences in the ordered application of palettes. We tuned our dataset parameters in a series of three extensive pilot studies, measuring performance for varying numbers of categories, points, and hardness levels (see Appendix for details). As in Gleicher et al. (2018), we controlled task hardness using the distance between classes. The hardness level is denoted by \(\Delta\) and is calculated as the distance between y-means of classes in multiclass scatterplots. To generate positional data with the given mean and covariance, we used a function from Numpy (Zhou et al., 2017) that randomly samples from a multivariate normal distribution. We denoted our data points as \(\{x,y\in\mathbb{R}\,|\,0<x,y<10\}\). First, we randomly sample the mean \(\mu_{1}\) in the range (Brock et al., 2017; Turney et al., 2017) for the category that possesses the highest mean, then set the mean \(\mu_{2}=\mu_{1}-\Delta\) as the second highest mean based on y-values. To prevent subsequent means from drifting too far apart and artificially simplifying the task, we constrained the mean \(\mu_{i}\) of the rest of the categories to \(\Delta<\mu_{1}-\mu_{i}<1.5\Delta\). Finally, we determined the covariance for each category that has y-mean \(\mu_{i}\) with \(\mathrm{cov}(\lambda_{i},\lambda_{i})\) where \(\lambda_{i}=\mathrm{random}(1,\min(\mu_{i},10-\mu_{i}))\). We used this variance to tune the datasets such that selecting the category with the highest point did not reliably produce the correct answer, with variance tuned in piloting. Each scatterplot contained between 10 and 20 points per category. To prevent overlapping points, we applied jittering methods, which add random noise to any data points that would otherwise overlap each other. We generated 450 datasets in total (see the sketch below for the sampling procedure). ### Procedure Our experiment consisted of three phases: (1) informed consent and color-blindness screening, (2) task description and tutorial, and (3) formal study. At the beginning of the study, participants were provided with informed consent in accordance with our IRB protocol. They were then asked to complete an Ishihara test for color-blindness screening (Krishna et al., 2018).
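The sampling procedure from Section 3.2 can be summarized in a short NumPy sketch. The \(\mu_{1}\) bounds, the x-axis means, and the clipping to the plotting domain are illustrative assumptions (the exact values were tuned in piloting), \(\mathrm{cov}(\lambda_{i},\lambda_{i})\) is read here as an isotropic diagonal covariance, and the jittering of overlapping points is omitted.

```python
import numpy as np

def sample_scatterplot(n_categories, delta, n_points, rng=None):
    rng = rng or np.random.default_rng()
    mu1 = rng.uniform(4.0, 7.0)                # assumed bounds for the highest y-mean
    y_means = [mu1, mu1 - delta]               # second-highest mean is mu1 - delta
    for _ in range(n_categories - 2):          # remaining means obey delta < mu1 - mu_i < 1.5 * delta
        y_means.append(mu1 - rng.uniform(delta, 1.5 * delta))
    categories = []
    for mu_y in y_means:
        lam_hi = max(1.0, min(mu_y, 10.0 - mu_y))   # lambda_i = random(1, min(mu_i, 10 - mu_i)); guarded near the axis limits
        lam = rng.uniform(1.0, lam_hi)
        mean = [rng.uniform(3.0, 7.0), mu_y]        # x-mean is an assumption (not specified above)
        cov = np.diag([lam, lam])                   # isotropic diagonal covariance per category
        pts = rng.multivariate_normal(mean, cov, size=n_points)
        categories.append(np.clip(pts, 0.01, 9.99)) # keep 0 < x, y < 10
    return categories  # list of (n_points, 2) arrays, one per category

# e.g., a 6-category plot at intermediate hardness with 15 points per category
classes = sample_scatterplot(n_categories=6, delta=2.2, n_points=15)
```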
After completing the screening successfully, participants were led to a description page for the mean judgment task. They were required to successfully complete an easy tutorial question to minimize possible ambiguities in their understanding of the task. During our formal study, each participant completed our target task (_Identify the class with the highest average y-value_) for 45 stimuli presented sequentially using a single color palette (42 formal trials and three engagement checks). We used stratified random sampling to balance number of categories and difficulty levels that each participant saw. To ensure participants saw a range of category numbers, we grouped category numbers into three classes: Figure 1. The 10 color palettes used in our experiment. Figure 2. Three engagement checks with D3 color palettes. Participants were required to pass two out of these three tasks to be considered as an approved response. All engagement checks were placed in random order with other formal trials. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Factor & Description & Domain & Group \& Sampling \\ \hline Category number & The number of categories in a scatterplot. & N: [2, 10] & Small: \(\{2,\,3,\,4\}\) & Uniform \\ & The number of categories & N: [2, 10] & Medium: \(\{5,\,6,\,7\}\) & random \\ & in a scatterplot. & Large: \(\{8,9,10\}\) & Randomly assigned one \\ \hline Color palette & The palette used for color encodings. & 10 palettes & at the beginning of the study. \\ \hline Hardness level \(\Delta\) & The distance between two highest y-means. & \(\{\Delta:\text{R}\mid 1.5<\Delta\leq 3.0\}\) & Easy: \(\{2.5<\Delta\leq 3.0\}\) & Uniform \\ & the distance between two highest y-means. & Intermediate: \(\{2.0<\Delta\leq 2.5\}\) & Hard: \(\{1.5<\Delta\leq 2.0\}\) \\ \hline Point number & The number of points in each category. Every category shares the same number in each scatterplot. & N:\(\{10,\,20\}\) & Uniform random. \\ \hline \end{tabular} \end{table} Table 1. The experiment parameters. We refined the factors and domain range in three pilot studies. Category number and color palettes are our independent variables, and hardness level and point number are the control variables. The experiments were built from the combination of these four factors. Figure 4. Instances with varying numbers of categories employed in our study. Their numbers of categories are 3, 6, and 9, respectively from left to right, with the same hardness level (intermediate). Figure 3. Instances with varying hardness level (\(\lambda\)) values employed in our study. The difficulty level of instances varies from easy to hard from left to right, with four categories. small, medium, and large, which corresponded to 2-4, 5-7, and 8-10 categories, as shown in Figure 4. We also grouped stimuli into three difficulty levels: easy, intermediate, and hard, as shown in Figure 3. Each person saw 14 stimuli from each category and difficulty group, with combinations of category and difficulty assigned at random. We randomly placed three engagement checks within 42 formal trials to assess if participants were inattentive during the test. These engagement checks presented three classes with large differences in their means (c.f., Figure 2). We randomly ordered the sequence of the formal questions and the engagement checks to avoid learning or fatigue effects. ### Participants We recruited 95 participants from the US and Canada with at least a 95% approval rating on Amazon Mechanical Turk (MTurk). 
We excluded four participants who failed more than one engagement check. We analyzed data from the remaining 91 participants (46 male, 45 female; 24-65 years of age). All participants reported normal or corrected to normal vision. Our experiment took about 15 minutes on average, and each of the participants was compensated $3.00 for their time. ### Analysis We measured performance as both accuracy and time spent on task. We analyzed the resulting data using a 10 (color palette) x 9 (number of categories) mixed-factors ANCOVA, with the number of points, interparticipant variation, trial order, and hardness levels as random covariates. During our post-hoc analysis, we employed the Tukey's honestly significant difference test (Tukey's HSD) with \(\alpha=0.05\) and Bonferroni correction. ## 4. Results We discuss significant results and statistical analysis based on the independent factors considered in this paper (see Appendix) using both traditional inferential measures and 95% bootstrapped confidence intervals (\(\pm\) 95% CI) for fair statistical communication (Kolmogorov, 1959). Table 2 summarizes our ANCOVA results. Additional results, charts, and details of the analysis can be found on Appendix. ### Number of Categories Our results support **H1**: we found that performance decreased as the number of categories increased. Our analysis reveals a significant effect of category number on judgment performance (\(F(8,82)=7.6511,p<.0001\)): people were both less accurate and slower with higher numbers of categories. Figure 5 (a) shows that accuracy rate decreases based on the number of categories from 96.4% to 86.6%, with an overall descending trend as the number of categories increases. Figure 5 (b) presents the average spent time broken down by category number, suggesting that participants were slower for scatterplots with more categories. We also found anomalies in the accuracy rate for between five and six categories (Figure 5 (a)). While we initially assumed this anomaly to be noise, the pattern was repeated across almost all palettes. This category number correlates with past findings of _subitizing_--the ability to instantly recognize how many objects are present without counting--in categorical data from Haroz & Whitney (Haroz, 2017). While we do not confirm this hypothesis in this study, our results do raise questions about the role of subitizing or a related mechanism in categorical reasoning with visualizations. ### Color Palettes Our results also support **H2**: color palettes significantly affect accuracy (\(F(9,81)=8.4689,p<.0001\), see Table 2). We found a significant interaction effect between color palettes and the number of categories for both time and accuracy. In other words, as the number of categories increases, the accuracy ranks between color palettes might be different. Different palettes are more or less robust to increasing the number of categories. This finding indicates that there is no best palette for multiclass scatterplots. Instead, our results provide guidance for designers to select effective palettes based on the number of categories in their data. Figure 6 shows the accuracy rate and category number per color palette. These charts reveal that: 1. _SFSO Parties_ and _ColorBrewer Set3_ achieved the highest average accuracy rate in all data, whereas _PaulTol Muted_ and _Stata S1_ exhibited the worst overall performance (an 11.3% accuracy difference on average between _SFSO Parties_ and _Stata S1_), 2. 
lower performing palettes tend to be less robust to increasing the number of categories, and 3. most palettes show an overall descending trend as the number of categories increases, though some palettes remained relatively robust (e.g., _Stata S2_, _D3 Cat10_). ### Exploratory Analysis To better analyze the impact of specific color palettes, we performed a Tukey's HSD with Bonferroni correction to identify significant performance differences between palettes. The test revealed three _classes_ of color palettes with comparable performance, shown in Table 3. Figure 7 illustrates the combined accuracy rate of the three classes, in which Class A refers to the best performance, Class B is slightly lower, and Class C is the worst overall. All three classes of palettes showed a steady downward trend that is consistent with **H1**. We use the clusters created by these performance classes to \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Source & DF1 & DF2 & F-value & p-value \\ \hline category number & 8 & 82 & 7.6511 & \(<.0001\) \\ \hline hardness level & 1 & 89 & 31.2187 & \(<.0001\) \\ \hline point number & 10 & 80 & 1.6502 & 0.086 \\ \hline color palette & 9 & 81 & 8.4689 & \(<.0001\) \\ \hline category number * hardness level & 7 & 93 & 1.7448 & 0.0832 \\ \hline hardness level * color palette & 8 & 92 & 1.1141 & 0.3486 \\ \hline color palette * category number & 71 & 19 & 1.6921 & \(.0003\) \\ \hline \end{tabular} \end{table} Table 2. ANCOVA results. Significant effects are indicated by bold text and the corresponding rows are highlighted in green. scaffold an exploratory analysis of potential metrics associated with the observed performance differences. We analyzed these classes using eight color metrics associated with palette design to explore the relationship between performance and common design parameters: perceptual distance [68], name difference [22], and name uniqueness [22] as employed by Colorgorical [15] and the magnitude and variances of different dimensions in CIELCh [88] (\(L^{*}\), \(C^{*}\), and \(h^{*}\)). The computations for those metrics can be found in our supplemental materials. Since we randomly sampled colors in a palette for plots with less than 10 categories (see Section 3.2), for each target color palette, we compute those metrics based on the actual colors used in each individual stimulus sampled from the target palette to explore the distribution of these features with respect to performance. We conducted an ANOVA using these nine measures to assess the impact of each parameter on accuracy (Table 4). We found significant effects (\(p<0.01\)) of \(L^{*}\) variance, \(L^{*}\) magnitude, and Figure 5. Our primary results with respect to the numbers of categories, hardness level, and numbers of points. Graphs on the left show changes in accuracy, whereas those on the right show response times Both accuracy and time do not systematically vary with the number of points. However, as the number of categories grows or the hardness level increases, the overall accuracy rate drops, and the time spent escalates. In order to show the trend clearly, we used a scale from 50–100% (chance at our smallest number of categories to perfect performance) on the y-axis for accuracy. Error bars represent 95% confidence intervals. Figure 6. The accuracy rates based on the number of categories separated per color palette, sorted by average accuracy over all categories (dash lines) sorted from most to least accurate. Color palettes are shown along with corresponding charts. 
See Section 4.2 for detailed analysis and Table 5 in the Appendix for the count of scatterplots per palette. all-pairs perceptual distance (Srivastava et al., 2017) and marginal effects (\(p<0.10\)) of \(h*\) variance and \(C*\) variance. To assess the direction of the effects, we performed an OLS linear regression of each metric and average accuracy. Since the value of \(\beta\)-ratio (in \(Y=\beta X+\epsilon\)) of regression differs from the data range of source metrics, we show its directionality in the right-most column in Table 4, where a plus sign refers to an increasing trend and minus sign means decreasing trend. We found that larger \(L^{*}\) magnitude (lighter colors) and larger perceptual distance both lead to better judgment accuracy, whereas palettes with less lightness variance had better performance. Our findings suggest that palettes leveraging lighter colors, larger perceptual distances, and lower luminance differences led to better performance whereas factors like hue variance or name uniqueness that are conventionally associated with palette design may be less important when considered in isolation. The use of lightness in palettes has a tenuous history: some advocate for minimizing lightness variation to avoid biasing attention (Srivastava et al., 2017; Wang et al., 2018), whereas other guidelines suggest that designers leverage higher contrasts introduced by lightness variations to take advantage of our sensitivity to lightness variations (Srivastava et al., 2017). Our results provide further support for privileging more isoluminant palettes to avoid directing too much attention to given classes (Beng et al., 2018). However, future work should more systematically explore this hypothesis. ## 5. Discussion We measured the impact of the number of categories and choice of color palettes on people's perceptions of multiclass scatterplots. We find that people are less accurate at assessing category means as the number of categories increases. However, certain palettes may be more robust to variations in category number. Our results provide new perspectives on prior findings and offer both actionable design guidance and opportunities for future research. ### Reflections on Prior Findings Our study indicates that increasing numbers of categories would lead to descending performance in relative mean estimation (see \begin{table} \begin{tabular}{|c|c|c|c|} \hline Source Metrics & F-value & p-value & \(\beta\)-ratio \\ \hline \(L*\) variance & 9.88 & **.0005** & — \\ \hline \(L^{*}\) magnitude & 7.51 & **.001** & \(+\) \\ \hline \(h*\) variance & 2.75 & 0.07 & — \\ \hline \(C*\) variance & 16.53 & 0.08 & — \\ \hline \(C*\) magnitude & 8.26 & 0.13 & — \\ \hline Perceptual Distance & 15.20 & **.001** & \(+\) \\ \hline Name Difference & 7.45 & 0.24 & \(+\) \\ \hline Name Uniqueness & 10.31 & 0.33 & \(+\) \\ \hline \end{tabular} \end{table} Table 4. Results of significance analysis from color metrics to judgment accuracy. The right-most column shows plus or minus of the \(\beta\)-ratio in the OLS linear regression where plus means incremental trend, and minus means decremental trend. Significant impacts (\(p<0.01\)) are in bold style and green color. Figure 7. The average accuracy rate with different numbers of categories per performance class of color palettes. Charts represent Class A to C from left to right. Section 4.1). However, this finding is inconsistent with the proposed guidelines of Gleicher et al. 
(Gleicher et al., 2018), which found little impact of adding additional distractor classes. We anticipate that this contradiction is a result of the number of classes evaluated. For smaller numbers of classes, performance tended to be more robust across palettes. However, as the number of classes increased, performance started to degrade. This finding likely stems from a correlation between performance and color discriminability indicated in our exploratory analysis. As the number of colors increases, it becomes more difficult to ensure colors are spaced apart, especially if maximizing for metrics like pair preference (Srivastava et al., 2017) or otherwise minimizing the number of large hue variations to preserve harmonies (Srivastava et al., 2017). Certain palettes may differently balance this aesthetic and performance trade-off. However, our results indicate that this trade-off is sensitive to the number of categories present in the data. Contrary to past heuristics, we found that performance remained relatively high even for more than seven categories. We hypothesize that people may simply be better at this task than they expect: while the task may feel significantly more difficult as the number of categories increases, our visual system may be more robust than expected in working with complex categorical data. Feedback from pilot participants indicated that the tasks felt challenging as the number of categories exceeded four, but these participants, like those in the formal study, still performed well at such seemingly difficult tasks. While the tested palettes reflect best practices, our findings challenge existing heuristics around the scalability and utility of color in visualization. A more thorough and rigorous empirical examination of the robustness of categorical perception in visualization generally would benefit a wide range of applications. Our results demonstrate a high overall accuracy (more than 85%) compared to Gleicher et al. (Gleicher et al., 2018) (\(\sim\) 75%) even though we tested a larger number of categories. Gleicher et al. chose to generate uniformly sparsely distributed classes. However, such distributions might not be widely applicable; in real-world use cases, people more commonly build insight from densely distributed classes (Gleicher et al., 2018). As shown in Figure 4, we generated more randomly and densely distributed classes to privilege ecological validity. Consequently, our stimuli are more likely to perform similarly to what people usually see in their daily life, but the clustering structure may have affected task accuracy by, for example, making it slightly easier to group points within categories. While we tuned our stimulus difficulty in piloting and our results were consistent across different performance thresholds, providing evidence of their generalizability, these differences raise important future questions as to the impact of different data distributions on categorical palette design. Part of the difference in results between our findings and Gleicher et al. may stem from differences in perceptual mechanisms present when processing different numbers of categories. Our study reveals a significant "dip" in accuracy when the number of categories increased to five or six (see Section 4.1). These bumps correlate with a key number of objects for subitizing (Srivastava et al., 2017): below roughly six objects, we can instantly and precisely detect the quantity of objects present, whereas we have to actively count larger numbers. 
While subitizing tends to focus on individual objects rather than collections of objects, the dip in accuracy directly correlates with this subitizing threshold and echoes similar findings in past work in categorical visualization (Gleicher et al., 2018). Our study is not designed to probe subitizing or other specific perceptual mechanisms that may explain these results. However, this correlation offers opportunities for further understanding the relationship between categorical perception, subitizing, cluster detection, and other related perceptual phenomena in visualization. We also found a significant overall difference between color palettes. These differences echo the findings of Liu & Heer (Liu and Heer, 2017) for continuous colormaps: even if a palette satisfies the basic constraints of good palette design (e.g., discriminable colors), it may not perform optimally. Like Liu & Heer, we also find that characterizing the source of these performance differences is challenging: palette effectiveness arises from a complex combination of factors. Future work should seek to further deconstruct these factors to derive more robust design guidelines. ### Design Guidelines for Multiclass Scatterplots The data and design of multiclass scatterplots significantly influence our abilities to reason across classes. Compared to Gleicher et al.'s guidelines (Gleicher et al., 2018), our results emphasize the influences of category number and color palette, which are the two essential elements in visualizing categorical data. Additionally, in contrast of some existing guidelines for color palettes (Gleicher et al., 2018; Gleicher et al., 2018), our results indicate maximizing luminance variation may hinder analysis. While designers can use our results to directly choose the optimal palette from our tested set of palettes given the number of categories in their data, our results also provide preliminary guidance for palette selection more broadly: **Simplifying category structure may improve performance.** Our study suggests that people can reason across multiple classes encoded using color. However, as shown in Section 4.1, designers should be aware that performance tends to degrade as the number of categories increases: people are slower and less accurate, especially when working with six or more categories. We recommend designers consider how the number of categories influences performance on key tasks and consider collapsing relevant categories hierarchically if necessary. As a caveat, people were relatively good at completing this task, even with larger numbers of categories than conventional heuristics recommend. Our results indicate that people can reliably distinguish colors in large palettes even though informal pilot participants indicated that the task felt quite difficult for higher numbers of categories. This contrast between perceived and objective performance suggests that even well-established design heuristics can benefit from experimental validation and refinement. **When designing new palettes, consider fewer lightness differences, larger perceptual distances, and lighter colors.** As shown in Section 4.2, our results reveal that color palettes significantly impact the accuracy of human judgment. Our exploratory analysis confirms the benefits of maximizing the pairwise difference between colors and provides further evidence of minimizing lightness variation. However, we also find that palettes using lighter colors tend to also enhance accuracy. 
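To make this guidance operational, the short sketch below (ours, not part of the study; it assumes the palette colors are already expressed as CIELAB \((L^{*},a^{*},b^{*})\) triples and uses plain Euclidean distance in place of any particular perceptual-distance model) scores a candidate palette on the three properties highlighted above:

```python
from itertools import combinations
from statistics import mean, pvariance

def palette_scores(lab_colors):
    """Score a palette on the three properties above: minimum pairwise
    color distance (larger is better), L* variance (smaller is better),
    and mean L* (lighter palettes tended to perform better)."""
    def delta_e(c1, c2):                      # plain Euclidean distance in CIELAB
        return sum((x - y) ** 2 for x, y in zip(c1, c2)) ** 0.5
    lightness = [L for L, _, _ in lab_colors]
    return {
        "min_pairwise_distance": min(delta_e(c1, c2)
                                     for c1, c2 in combinations(lab_colors, 2)),
        "lightness_variance": pvariance(lightness),
        "mean_lightness": mean(lightness),
    }

# Hypothetical three-color palette given as CIELAB (L*, a*, b*) triples:
print(palette_scores([(70, 40, 20), (65, -50, 30), (75, 10, -60)]))
```

Comparing such scores across candidate palettes, and across the number of categories actually present in the data, is one lightweight way to apply these recommendations; the apparent advantage of lighter colors in particular is discussed next.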
We anticipate that this bias may be in part due to the use of a white background enhancing contrast within categories while minimizing undesirable "loud" colors that have too high of a luminance contrast with the background. However, the tested palettes are all handcrafted to select harmonious and aesthetically pleasing colors. Future work should investigate these results on other background colors. We also found little evidence of the benefits of hue or color name variation when considered in isolation. This points to the need for the systematic interrogation of designer practices to improve existing heuristics for palette design (Krishnan et al., 2017). **Choose your palettes to fit your data.** When the number of categories changes, the performance rank of different color palettes may also change. Different palettes are differently robust to changes in category number. We recommend designers select color palettes based on the parameters of their specific data. For example, a designer might use _SFSO Parties_ or _ColorBrewer Set3_ for multiclass scatterplot with less than seven categories and _D3 Cat10_ for larger numbers of categories (see Figure 6). ### Limitations and Future Work We studied the impact of the number of categories and color palettes on multiclass scatterplots. However, scatterplots offer a wide variety of design choices for representing categorical data that may provide different trade-offs in perception (Krishnan et al., 2017). Future work should explore the robustness of different channels to varying numbers of categories. Further, scatterplots often encode larger numbers of variables, such as multiple categorical dimensions or combining categorical and continuous dimensions (Krishnan et al., 2017). Future work should investigate the interplay between different design factors in higher dimensional multiclass scatterplots. Both our study and Gleicher et.al.'s work (Gleicher et al., 2019) focused on comparing y values. However, scatterplots are two-dimensional visualizations. Future work should consider the impact of palettes on crossdimensional tasks. We evaluated 10 pre-defined qualitative color palettes on qualitative data. We employed a random color sampling strategy from selected palettes for data with less than ten categories to simplify the stimulus generation to avoid potential bias from sources outside of color selection. Future work should extend our results to consider sequential strategies in comparing preconstructed palettes. Additionally, categorical data can also be encoded using other types of palettes, such as sequential and diverging encodings (Krishnan et al., 2017), whose robustness to varying numbers of samples is not well understood. Considering additional properties of color selection, such as accessible palettes for people with color vision deficiencies (Krishnan et al., 2017; Krishnan et al., 2017), is also important future work. We sampled from predefined palettes at a fixed mark size. Varying mark size can influence mark discriminability (Krishnan et al., 2017). As mark size was held constant for all palettes and all palettes had large distances between all color pairs, we do not anticipate that this choice biased our results. However, future work should explore a larger range of mark sizes and mark types. It should also seek to more systematically evaluate the robustness of our exploratory results. 
Such variation is challenging due to a large number of potential perceptual factors; however, our results may provide preliminary support for identifying the most promising factors. We elected to use Mechanical Turk to reflect the range of viewing conditions and participants common to web-based visualizations and to recruit larger numbers of participants. However, variations in viewing conditions can influence color perception. While past studies of color perception in visualization validate the predictive ability of crowdsourced studies for color perception studies in HCI (Krishnan et al., 2017; Krishnan et al., 2017), the variability introduced by the range of viewing conditions on MTurk limits the generalizability of our results and our ability to make precise claims about fine-grained mechanistic perceptual phenomena. However, given the large differences between colors in our palettes, we anticipate the effect of viewing variation to be relatively minimal (Krishnan et al., 2017) and we followed best practices in our experimental design to minimize the impact of viewing variation. Future work seeking to quantify more precise causal mechanisms underlying our findings may wish to replicate our study under more constrained conditions. Additionally, data-centric statistical factors that may be related to the performance of multiclass scatterplots are not considered in our study. For example, we did not explore the impact of correlation or strength of clusters. Extending our experiments to consider a wider range of data properties as well as statistical tasks could help us further understand categorical data visualization for complex datasets and usage scenarios and offer broader guidance for categorical visualization generally. ## 6. Conclusion We measure how different color palettes impact people's ability to distinguish classes and assess mean values on multiclass scatterplots. Our results suggest that both the number of categories and the discriminability of color palettes heavily impact people's abilities to use multiclass scatterplots. We found that increasing the number of categories decreases how well people can distinguish different classes. Furthermore, we found preliminary evidence that even using designer-crafted palettes, a more discriminable color palette (such as _SFSO Parties_, which achieves 95% average accuracy) can perform nearly 12% better than a less discriminable one (such as _Stata S1_ with only 83% average accuracy). Based on the experimental results, we critically reflect on past findings and derive a set of design guidelines for palette selection in multiclass scatterplots. We believe that our findings have the potential to support a variety of other visualization types and low-level tasks that combine continuous and categorical data. We hope our work will inform future studies to construct more general guidelines for the understanding of categorical perception in information visualization. ###### Acknowledgements. This work was supported by NSF IIS #2046725 and by NSF CNS #2127309 to the Computing Research Association for the CIFellows Project.
2301.03598
Stream-K: Work-centric Parallel Decomposition for Dense Matrix-Matrix Multiplication on the GPU
We introduce Stream-K, a work-centric parallelization of matrix multiplication (GEMM) and related computations in dense linear algebra. Whereas contemporary decompositions are primarily tile-based, our method operates by partitioning an even share of the aggregate inner loop iterations among physical processing elements. This provides a near-perfect utilization of computing resources, regardless of how efficiently the output tiling for any given problem quantizes across the underlying processing elements. On GPU processors, our Stream-K parallelization of GEMM produces a peak speedup of up to 14$\times$ and 6.7$\times$, and an average performance response that is both higher and more consistent across 32,824 GEMM problem geometries than state-of-the-art math libraries such as CUTLASS and cuBLAS. Furthermore, we achieve this performance from a single tile size configuration per floating-point precision, whereas today's math libraries employ complex kernel-selection heuristics to select from a large ensemble of kernel variants.
Muhammad Osama, Duane Merrill, Cris Cecka, Michael Garland, John D. Owens
2023-01-09T18:22:11Z
http://arxiv.org/abs/2301.03598v1
# Stream-K: Work-centric Parallel Decomposition for Dense Matrix-Matrix Multiplication on the GPU ###### Abstract. We introduce _Stream-K_, a work-centric parallelization of matrix multiplication (GEMM) and related computations in dense linear algebra. Whereas contemporary decompositions are primarily tile-based, our method operates by partitioning an even share of the aggregate inner loop iterations among physical processing elements. This provides a near-perfect utilization of computing resources, regardless of how efficiently the output tiling for any given problem quantizes across the underlying processing elements. On GPU processors, our _Stream-K_ parallelization of GEMM produces a peak speedup of up to \(14\times\) and \(6.7\times\), and an average performance response that is both higher and more consistent across 32,824 GEMM problem geometries than state-of-the-art math libraries such as CUTLASS and cuBLAS. Furthermore, we achieve this performance from a _single_ tile size configuration per floating-point precision, whereas today's math libraries employ complex kernel-selection heuristics to select from a large ensemble of kernel variants. Matrix-Multiplication, GPU, Load-Balancing + Footnote †: Distribution Statement “A” (Approved for Public Release, Distribution Unlimited). + Footnote †: Distribution Statement “A” (Approved for Public Release, Distribution Unlimited). + Footnote †: Distribution Statement “A” (Approved for Public Release, Distribution Unlimited). ## 1. Introduction General matrix-matrix product (GEMM), convolution, and other similar computations constitute the dominant workloads in many deep learning and scientific computing applications. High-performance processors such as GPUs, for example, are designed to achieve nearly 100% of their theoretical peak math throughput when computing GEMM. Doing so, however, requires a work decomposition that perfectly occupies the underlying physical cores. As we show, attaining such high levels of processor utilization across a broad landscape of problems shapes and sizes can be challenging. Classically, GEMM implementations block their computation using a _data-parallel_ tiling of the output matrix, assigning the independent production of output tiles among concurrent threads (or thread groups) (Bartos et al., 2015; Goyal et al., 2016; Goyal et al., 2016). The work per output tile is regular, and tile production tends to dispatch across idle physical cores in "waves". The overall workload is well-balanced and processor utilization is highest when there are many waves, i.e., the number of output tiles greatly oversubscribes the number of cores. However, such oversubscription has shrunk considerably as processors have grown in size. An increased core count will require fewer waves to produce a given tile count. Bigger cores will compel larger matrix blocking factors, leading to fewer waves of larger tiles. In general, execution schedules with fewer waves are much more likely to suffer from _quantization inefficiency_, i.e., the processor underutilization that occurs when the number of output tiles is not an even multiple of the number of processor cores. When the last wave is partially full, the unused cores must wait for the remaining threads to execute millions (if not billions) of multiply-accumulate (MAC) instructions before they are able to execute any dependent work. Figure 0(a) illustrates such a scenario on a hypothetical GPU with four streaming multiprocessor cores (SMs). 
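The arithmetic behind this utilization ceiling is simple enough to sketch directly. The following few lines (our own illustration in Python rather than CUDA C++, with hypothetical names) reproduce the numbers in the example discussed next:

```python
import math

def dp_utilization_ceiling(m, n, blk_m, blk_n, num_sms):
    """Upper bound on SM utilization for a data-parallel schedule:
    output tiles dispatch in waves of num_sms, so a partially full
    last wave leaves cores idle for an entire tile's worth of work."""
    tiles = math.ceil(m / blk_m) * math.ceil(n / blk_n)
    waves = math.ceil(tiles / num_sms)
    return tiles / (waves * num_sms)

# 384x384x128 GEMM on a hypothetical four-SM GPU:
print(dp_utilization_ceiling(384, 384, 128, 128, 4))  # 9 tiles over 3 waves -> 0.75
print(dp_utilization_ceiling(384, 384, 128, 64, 4))   # 18 tiles over 5 waves -> 0.90
```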
If we block a \(384\times 384\times 128\) GEMM computation into nine \(128\times 128\) output tiles, a _data-parallel_ decomposition cannot achieve more than 75% of the processor's rated throughput. This theoretical utilization ceiling can be improved to 90% by halving the tile size as shown in Figure 0(b). However, the finer-grained blocking factor will be less cache and scratchpad efficient, and may preclude any practical performance improvement. Quantization inefficiency is a concern for increasingly wide processors such as GPUs, where ALUs-per-core and cores-per-processor both currently number in the hundreds. Consequently, many common GEMM-like workloads now exhibit a final, partially full wave that comprises a significant fraction of the total computation time. The current remedy employed by GPU-based math and deep learning libraries is to deploy an ensemble of tiling configurations. When the ideal blocking factor does not quantize well, the library chooses among tiling alternatives with smaller concurrent work volumes, such as those illustrated in Figure 0(b) and Figure 1(a). Tile-based ensembles, however, present performance and logistical challenges for math libraries seeking to deliver the best-achievable performance across diverse problem sizes and shapes. Distributable code size can be problematic for large ensembles. For example, NVIDIA's cuBLAS library (NVIDIA, 2018) is hundreds of megabytes, often providing more than twenty pre-compiled kernel specializations per architecture for a given API entry point. Large ensembles also require sophisticated selection heuristics. In our evaluation, we show these heuristics can struggle to consistently identify the optimal configuration for arbitrary problems. Unlike these tile-based methods, our _Stream-K_ decomposition always distributes an even share (within one) of the aggregate multiply-accumulate loop iterations required by the GEMM computation across SMs. Because the instruction workload of a single MAC-loop iteration is far smaller than that of an entire output tile, any variance in core workload is practically negligible. _Stream-K_ uses the ideal blocking factor regardless of problem shape, has communication overheads that scale with processor width (rather than output tiles), and compiles to a single kernel. We use an enormous corpus of 32,824 GEMM shapes and sizes to evaluate _Stream-K_, which we implemented within NVIDIA's CUTlass library (CUTlass, 2018). In comparison with CUT-LASS's _data-parallel_ implementation of the same blocking factor, _Stream-K_ provides a substantially higher performance Figure 1. _Data-parallel_ execution schedules for \(384\times 384\times 128\) GEMM across a hypothetical four-SM GPU. Figure 2. Tile-splitting execution schedules for \(384\times 384\times 128\) GEMM across a hypothetical four-SM GPU. response across our landscape of GEMM problems, demonstrating up to 14\(\times\) speedup on NVIDIA A100 GPUs. To highlight the practical challenges of ensemble-based solutions, we also evaluate NVIDIA's cuBLAS library as well as an oracle-driven ensemble of _data-parallel_ CUTLASS tilings. Relative to both ensembles, we show that our single-kernel _Stream-K_ achieves both (1) higher average performance, and (2) higher performance consistency. Versus cuBLAS, _Stream-K_ demonstrates up to 6.7\(\times\) speedup and virtually no instances of slowdown for compute-bound problems. ## 2. 
Background General Matrix Multiplication (GEMM) is defined as the product \(\mathbf{C}=\alpha\mathbf{AB}+\beta\mathbf{C}\) where \(\alpha\) and \(\beta\) are scalar values and \(\mathbf{A},\mathbf{B}\), and \(\mathbf{C}\) are matrices. (For simplicity, we assume \(\alpha=1\), \(\beta=0\) throughout this paper.) We refer to the _shape_ of a given GEMM problem by the volumetric extents of its computation. For example, a \(m\times n\times k\) GEMM consumes \(m\times k\) and \(k\times n\) input matrices \(\mathbf{A}\) and \(\mathbf{B}\), respectively, performs \(m\times n\times k\) multiply-accumulate operations, and produces an \(m\times n\) output matrix \(\mathbf{C}\). GEMM is a performance-critical subroutine in many large-scale engineering and scientific applications. It plays an important role in matrix factorization methods such as LU, QR, and Cholesky decomposition. High-performance modeling and simulation applications in engineering, climate simulation, cosmology, quantum chemistry, and other scientific domains rely on these factorization methods. Matrix multiplication is also the fundamental building block of modern deep learning (DL) methods. The training of deep neural networks (DNNs) is often performed on massive datasets across large distributed systems (Hinton et al., 2015). Many DL training and inference operations are cast as matrix multiplications. For example, image recognition and computer vision models rely on convolution, which can be implemented directly as the product of filter and image datasets (Beng et al., 2015). Transformer architectures, which have come to dominate natural language processing and other applications, are almost entirely limited by the performance of large matrix products. Early work on GPU matrix-matrix multiplication from Larsen and McAllister framed the computation as a multi-texture multiplication and blending operation (K BLK_N, and BLK_K, while the outermost three iterate across them. If the cache can capture one block from each of the three matrices, the resulting data reuse among those elements will significantly reduce the number of last-level memory accesses (Kumar et al., 2017). ``` 1:\(\triangleright\) tile-processing outer loops 2:formm \(\leftarrow\) 0 to m step BLK_M do 3:fornm \(\leftarrow\) 0 to n step BLK_N do 4:\(\triangleright\) zero-initialize output tile 5:formm \(\leftarrow\) mm to (mm + BLK_M) do 6:fornm \(\leftarrow\) nn to (nn + BLK_N) do 7:C[mmmm,nn] \(\leftarrow\) 0 8:end for 9:end for 10:\(\triangleright\) perform the MAC iterations for this tile 11:forkk \(\leftarrow\) 0 to k step BLK_K do 12:\(\triangleright\) MAC iteration (fully unrolled) 13:formm \(\leftarrow\) mm to (mm + BLK_M) do 14:fornn \(\leftarrow\) nn to (nn + BLK_N) do 15:forkk \(\leftarrow\) kk to (kk + BLK_K) do 16:C[mmmm,nn] \(\leftarrow\)[mmmm,nn] + 17:(A[mmm,kk] \(\times\) B[kkk,nn]) 18:end for 19:end for 20:end for 21:end for 22:end for 23:end for ``` **Algorithm 1** Sequential cache-blocked GEMM. ### Data-parallel As shown in Algorithm 2, the _data-parallel_ GPU formulation of GEMM is decomposed across a grid of parallel thread blocks, or _cooperative thread arrays_ (CTAs)1. The grid is sized such that each CTA produces its own (BLK_M \(\times\) BLK_N) output tile. Footnote 1: Blocks of GPU threads are coscheduled in CTAs, which virtualize the hardware’s streaming multiprocessor cores (SMs). For exposition, the MacLoop() subroutine of Algorithm 3 encapsulates the multiply-accumulate workloads that compute the values of the CTA's output tile. 
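For readers who prefer runnable code to pseudocode, the following host-side Python sketch (ours; the real CUTLASS and cuBLAS kernels are CUDA C++ with warp- and thread-level blocking and software-pipelined data movement) mirrors the structure of Algorithms 2 and 3: one "CTA" per output tile, each invoking a MacLoop-style helper over the tile's full range of MAC-loop iterations. The sequential loop over tiles stands in for the parallel grid launch.

```python
import numpy as np

def mac_loop(A, B, tile_idx, iter_begin, iter_end, blk_m, blk_n, blk_k):
    """Accumulate MAC-loop iterations [iter_begin, iter_end) of one output
    tile, mirroring the MacLoop(tile_idx, iter_begin, iter_end) signature."""
    tiles_m = A.shape[0] // blk_m
    mm = blk_m * (tile_idx % tiles_m)       # output-tile row offset
    nn = blk_n * (tile_idx // tiles_m)      # output-tile column offset
    accum = np.zeros((blk_m, blk_n))
    for it in range(iter_begin, iter_end):
        kk = it * blk_k
        accum += A[mm:mm+blk_m, kk:kk+blk_k] @ B[kk:kk+blk_k, nn:nn+blk_n]
    return mm, nn, accum

def gemm_data_parallel(A, B, blk_m=2, blk_n=2, blk_k=2):
    """Data-parallel decomposition: one 'CTA' per output tile, each
    producing its tile alone over the full accumulation range."""
    m, k = A.shape
    n = B.shape[1]
    C = np.zeros((m, n))
    iters_per_tile = k // blk_k
    for tile_idx in range((m // blk_m) * (n // blk_n)):   # stands in for the grid launch
        mm, nn, accum = mac_loop(A, B, tile_idx, 0, iters_per_tile, blk_m, blk_n, blk_k)
        C[mm:mm+blk_m, nn:nn+blk_n] = accum
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.ones((4, 4))
assert np.allclose(gemm_data_parallel(A, B), A @ B)
```

The CUTLASS MacLoop() described by Algorithm 3 plays the same role on the device.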
It performs a sequence of _MAC-loop_ iterations in the accumulation domain, e.g., the \(k\)-axis for GEMM. Each _MAC-loop_ iteration comprises a per-thread volume of (BLK_M \(\times\) BLK_N \(\times\) BLK_K) / CTA_THREADS MAC operations. As the computation proceeds, fragments of the input matrices are staged through the SM's shared memory for local reuse among individual threads. Although this particular presentation of MacLoop() deploys one thread per output tile element, the sophisticated implementations in CUTlass (CUTlass, 2017) and cuBLAS (CUTlass, 2017) will: (1) fully unroll the per-thread MAC-loop iteration; (2) implement additional blocking at the warp and/or thread levels; and (3) orchestrate a software pipeline of shared memory data movement across MAC-loop iterations. Unfortunately, this classic _data-parallel_ decomposition is liable to suffer from quantization inefficiency on modern GPUs, as illustrated in Figure 1. Although an ensemble of diverse blocking factors may uncover opportunities for greater processor utilization, it is unlikely to facilitate perfect quantizations for arbitrary problem sizes. Furthermore, smaller blocking factors have two drawbacks: (1) fewer instructions per MAC-loop iteration for covering the latencies of global and shared memory transfers in pipelined implementations; and (2) a higher proportion of memory operations relative to MAC instructions, which may prevent them from being computation-bound. ``` 1:sshared_accum[BLK_M,BLK_N] 2:ires_per_tile \(\leftarrow\) [\(\times\)BLK_K] 3:\(\triangleright\) instantiate the CTA per output tile 4:fork CTA[x] [ in [m/BLK_M] \(\times\) [n/BLK_N] ] do 5:\(\triangleright\) perform the MAC iterations for this tile 6:accum \(\leftarrow\) MacLoop(x, 0,iters_per_tile) 7:\(\triangleright\) store accumulators to output tile 8:StoreFile(C, x, accum) 9:join ``` **Algorithm 2**_Data-parallel_ GPU GEMM. ``` 1:procedureMacLoop(tile_idx, iter_begin, iter_end) 2:sshared_accum[BLK_M,BLK_N] 3:sshared_frag_a[BLK_M,BLK_K] 4:shared_frag_b[BLK_K_BLK_N] 5:\(\triangleright\) determine output tile coordinates 6:mm = BLK_M \(\times\) (tile_idx / [m/BLK_M]) 7:nn \(\leftarrow\) BLK_N \(\times\) (tile_idx \(\times\) [m/BLK_M]) 8:\(\triangleright\) zero-initialize local accumulator storage 9:accum \(\leftarrow\) 0 10:\(\triangleright\) perform the specified range of MAC tiers for this tile 11:fortez \(\leftarrow\) 1reg_begin to iter_end do 12:kk \(\leftarrow\) iter \(\times\) BLK_K_N 13:\(\triangleright\) copy global matrix fragments to local storage 14:frag_a \(\leftarrow\) LoadFragment(A, mm, kk) 15:frag_b \(\leftarrow\) LoadFragment(B, kk, nn) 16:fork THREAD[mm,nn] in [BLK_M, BLK_N] do 17:\(\triangleright\) MAC iteration per thread (fully unrolled) 18:forkk \(\leftarrow\) 0 to BLK_K do 19:accum[mm,nn] \(\leftarrow\)accum[mmmm,nn] + 20:(frag_a[mm,kk] \(\times\) frag_b[kkk,nn]) 21:end for 22:join 23:end for 24:return accum 25:end procedure ``` **Algorithm 3** CTA-wide MacLoop() subroutine for performing a sequence of MAC-loop iterations. ### Fixed-split Alternatively, the granularity of work assigned to each CTA can be reduced via parallelization across the accumulation dimension. For a given output tile, the associativity of addition allows the iteration domain to be split among multiple concurrent CTAs, followed by a dependent "fixup" step to reduce the partial sums computed by each CTA. We highlight this _fixed-split_ approach in Algorithm 4, where each output tile is cooperatively produced by \(s\) CTAs. 
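A corresponding sketch of the fixed-split idea (again ours, in the same illustrative Python style, not the library's implementation) splits each tile's MAC-loop iterations among \(s\) cooperating "CTAs" and folds the fixup reduction into a simple accumulation:

```python
import numpy as np

def gemm_fixed_split(A, B, s, blk_m=2, blk_n=2, blk_k=2):
    """Fixed-split: s 'CTAs' cooperate on each output tile, each doing an
    equal slice of that tile's MAC-loop iterations along the k axis; the
    partial accumulators are then reduced (the 'fixup', serial here)."""
    m, k = A.shape
    n = B.shape[1]
    C = np.zeros((m, n))
    iters_per_tile = k // blk_k
    iters_per_split = -(-iters_per_tile // s)              # ceiling division
    for mm in range(0, m, blk_m):
        for nn in range(0, n, blk_n):
            for split in range(s):                         # s CTAs per tile
                begin = split * iters_per_split
                end = min(iters_per_tile, begin + iters_per_split)
                for it in range(begin, end):
                    kk = it * blk_k
                    # fixup reduction of this split's partial sums
                    C[mm:mm+blk_m, nn:nn+blk_n] += (
                        A[mm:mm+blk_m, kk:kk+blk_k] @ B[kk:kk+blk_k, nn:nn+blk_n])
    return C

A = np.arange(16.0).reshape(4, 4)
assert np.allclose(gemm_fixed_split(A, np.eye(4), s=2), A)
```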
Notably, it functions identically to the _data-parallel_ decomposition when the splitting factor \(s=1\). The _fixed-split_ decomposition is also featured in CUTLASS and cuBLAS. The splitting factor is implemented as a runtime parameter, allowing a single kernel executable to support multiple work volumes while retaining the ideal blocking factors for optimal data sharing and latency hiding. However, as illustrated in Figure 1(a), the prospect of achieving a perfect quantization from a uniform tile-splitting is unlikely. Furthermore, the extra overheads of communication and synchronization scale with both the overall problem size as well as the splitting factor. ``` 1:_shared_accum[BLK_M_BLK_N] 2:iters_per_tile \(\leftarrow\) [R/BLK_K] 3:iters_per_split \(\leftarrow\) [iters_per_tile/s] 4:\(\triangleright\) instantiate \(s\) CTAs per output tile 5:fork CTA[x,y] in [ [ /m/BLK_M] \(\times\) [/n/BLK_N], s] do 6:\(\triangleright\) perform the range of MAC iterations for this split 7:iter \(\leftarrow\) y x \(\mathtt{iters\_per\_split}\) 8:iter_end \(\leftarrow\) min(iters_per_tile, iter + \(\mathtt{iters\_per\_split}\)) 9:accum \(\leftarrow\) MacLoop(x, iter \(\mathtt{\_end}\)) 10:\(\triangleright\) consolidate partial-sums across CTAs 11:if y \(\neq\) 0 then 12:\(\triangleright\) store accumulators to temporary global storage 13:StoreParatils(paratils[x,y], accum) 14:Signal(flags[x,y]) 15:else 16:\(\triangleright\) accumulate partial sums from other CTAs contributing to this tile 17:forcta \(\leftarrow\) 1 to \(s\)do 18:wait(flags[x,cta]) 19:accum \(\leftarrow\) accum + LoadParatils(paratils[x,cta]) 20:endfor 21:\(\triangleright\) store accumulators to output tile 22:StoreTile(C, tile_id, accum) 23:endif 24:join ``` **Algorithm 4**_Fixed-split_ GPU GEMM with splitting factor \(s\). ## 4. Our _Stream-K_ Decomposition Our _Stream-K_ decomposition is a tile-splitting parallelization in which the splitting seams are completely dissociated from the tiling structure itself. Although we employ familiar blocking and tiling strategies for data reuse, we instead quantize the GEMM computation into MAC-loop iterations, i.e., small volumes of CTA-wide BLK_M \(\times\) BLK_N \(\times\) BLK_K work. As presented in Algorithm 5, _Stream-K_ evenly partitions the GEMM's aggregate workload of MAC-loop iterations across a constant-sized grid of \(g\) CTAs. Each CTA's range of MAC-loop iterations is mapped contiguously into the \(m\to n\to k\) linearization of the GEMM shape, crossing output-tile boundaries as it may. Should a given CTA's starting and/or ending iterations not coincide with tile boundaries (as is expected to be the common case), it must consolidate its partial results with those of the other CTA(s) also covering that tile. In this basic implementation, each output tile in \(\mathbf{C}\) is written by the CTA that performed that tile's \(k=0\) MAC-loop iteration. Before it can do so, however, it must accumulate any partial sums shared from other CTAs in temporary global storage. Notably, _Stream-K_'s communication, synchronization, and global storage overheads are independent of problem size, scaling instead with the number of CTAs \(g\). A secondary benefit of _Stream-K_ is that synchronization-waiting is likely negligible when the number of output tiles is greater than the number of CTAs. 
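The bookkeeping at the heart of _Stream-K_ can likewise be sketched in a few lines (our own host-side Python illustration with hypothetical names; the actual kernel performs this arithmetic per CTA on the device). It reproduces the four-SM example worked through next:

```python
def stream_k_partition(num_tiles, iters_per_tile, g):
    """Split num_tiles * iters_per_tile MAC-loop iterations evenly
    (within one) across g CTAs; report each CTA's contiguous iteration
    range and the output tiles that range touches."""
    total_iters = num_tiles * iters_per_tile
    assignments = []
    for cta in range(g):
        iter_begin = (cta * total_iters) // g
        iter_end = ((cta + 1) * total_iters) // g
        first_tile = iter_begin // iters_per_tile
        last_tile = (iter_end - 1) // iters_per_tile
        assignments.append((iter_begin, iter_end,
                            list(range(first_tile, last_tile + 1))))
    return assignments

# 384x384x128 GEMM blocked 128x128x4: 9 tiles x 32 iterations = 288 in total.
for cta, (b, e, tiles) in enumerate(stream_k_partition(9, 32, 4)):
    print(f"CTA {cta}: iterations [{b}, {e})  covers tiles {tiles}")
```

In this configuration each CTA receives exactly 72 iterations (a \(128\times 128\times 288\) work volume), and only tiles 2, 4, and 6 are shared between neighboring CTAs.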
In this regime, each output tile is covered by at most two CTAs, and the tile-processing skew ensures that the accumulating CTA will not need its peer contributions until well after those collaborators have finished producing them. Continuing our earlier example, Figure 1(b) illustrates the basic _Stream-K_ execution schedule of the \(384\times 384\times 128\) GEMM problem on a hypothetical four-SM GPU. To fully occupy the GPU, we launch \(g=4\) CTAs. Assuming BLK_M = 128, BLK_N = 128, and BLK_K = 4, each CTA is tasked with a \(128\times 128\times 288\) work volume comprising 72 MAC-loop iterations. This results in a 100% quantization efficiency, as all four SMs will execute the same number of MAC instructions. Additionally, the work volume of a single MAC-loop iteration is 32\(\times\) smaller than that of an entire output tile. Consequently, a 32-way _fixed-split_ decomposition would also provide a 100% quantization efficiency, but at the expense of an 8\(\times\) larger "fixup" overhead. Furthermore, _Stream-K_ is better able to hide the latency of inter-CTA synchronization due to the temporal skew between writers and readers when sharing partial sums. _Stream-K_ also generalizes to both _fixed-split_ and _data-parallel_ decompositions. When the grid size \(g\) is an even multiple of the number of output tiles, _Stream-K_ functions exactly as the _fixed-split_ decomposition. Similarly, when \(g\) equals the number of output tiles, _Stream-K_ behaves identically to the _data-parallel_ decomposition. We take advantage of this generalization to create an optimized hybridization of the _Stream-K_ decomposition in following section (5.2). ## 5. Implementation Details The work decomposition we introduced in the last section can be instantiated in a number of different ways to suit the needs of different hardware architectures and software library designs. Our implementation targets NVIDIA GPUs and is designed to be integrated into existing libraries like cuBLAS and CUTLASS. In this section, we describe how we configure the kernels we launch and introduce a hybridization scheme that helps ensure users achieve maximum GEMM performance across the widest possible range of problem shapes. We also emphasize that these are truly internal implementation details. They are completely transparent to the user of a BLAS-like library and do not alter the library's interface. The only observable impact is the improved performance characteristics that we analyze in Section 6. ### Kernel Configuration The tile size chosen for blocking the GEMM computation is, of course, a critical parameter controlling the performance of the GEMM kernel. For modern NVIDIA GPUs, appropriate tile sizes are determined by the shape of matrices supported by the GPU's Tensor Cores. Based on extensive empirical experience, we selected the smallest CTA-wide tile size capable of achieving 99% of the GPU's peak TFLOP/s for very large GEMM volumes for each supported precision. For the NVIDIA A100 GPU used in our experiments, these sizes are 64\(\times\)64\(\times\)16 for FP64 problems and 128\(\times\)128\(\times\)32 for FP16\(\rightarrow\)32 problems. Achieving maximal GEMM performance from _Stream-K_ parallelization also requires some degree of dynamic problem-specific configuration. Before launching a kernel we choose a grid size likely to yield the best performance on the specific problem shape at hand. 
This is in contrast to ensemble-based approaches which accommodate diverse problem shapes through the static generation of many kernel variants based on workload decomposition and blocking factor. Our grid size selection heuristic is based on a simple analytical model that minimizes the cost of reading, writing, and accumulating partial sums while equally distributing the MAC-loop iterations per CTA. Details of this analytical model are provided in the supplementary material (Appendix A.1). Parameters to the model are trivially chosen with empirical measurements and need only be done once per target architecture. The resulting parameters can then be compiled statically into the library. Again, this is in contrast to ensemble-based approaches that rely on potentially complex heuristics and machine learning models for kernel selection at run time. ### Data-parallel Hybridization The basic _Stream-K_ decomposition can, in certain cases, exhibit tile-processing skew that leads to potentially adverse effects on cache performance. When the number of output tiles \(t\) is not an even multiple of the grid size \(g\), the starting \(k\)-offset for the first MAC-loop iteration in each CTA will be different. Depending on the sizes and shapes of the input matrices and blocking factors, this skew may preclude these fragments from seeing reuse across CTAs in the GPU's cache structure. In Figure 2(a), for example, the initial \(k\)-axis fragment offsets for each of the four CTAs will be \(k=0\), \(k=32\), \(k=64\), and \(k=96\), respectively. Furthermore, this 32-element skew between CTAs will persist for the duration of the GEMM computation. Tile-processing skew is a direct consequence of _Stream-K_'s workload balancing strategy. However, we can take measures to limit its duration by applying _Stream-K_'s iteration balancing to a smaller, tile-aligned region of the total iteration domain such that the remaining tiles can be produced in full, temporally aligned waves. The simplest hybrid scheme is the "_data-parallel_ + one-tile _Stream-K_" schedule illustrated in Figure 2(b). It applies iteration balancing only among the tiles otherwise remaining for a final, partially full _data-parallel_ wave. The total number of full waves is \(w=\lfloor t/p\rfloor\), where \(t\) is the number of output tiles and \(p\) is the number of SM cores in the GPU. Consequently, each _Stream-K_ CTA receives an even share of iterations that is less than one tile's worth. Unfortunately, this strategy has little ability to hide the synchronization latency for the exchange of partial sums when three or more CTAs cover the same tile. In these scenarios, the accumulating CTA may be forced to wait for the contributions of other CTAs to become visible, as all but the last will be completing their final iterations at roughly the same time. Furthermore, the basic version of our scheme for aggregating partials is serialized within a single CTA, and thus will likely cause SM workload imbalance when the number of contributing CTAs per tile is large. We address these problems with our "two-tile _Stream-K + data-parallel_" hybrid schedule, illustrated in Figure 2(c). It performs one fewer full data-parallel wave in exchange for each _Stream-K_ CTA receiving more than one tile's worth of iterations (but fewer than two). This provides much better latency hiding when \(w\geq 2\), and each accumulating CTA will only need to receive partials from one other contributing CTA. Otherwise, it behaves identically to the "_DP + one tile SK_" schedule. 
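A rough sketch of how the hybrid schedules carve up the tile space (ours, with hypothetical names; the paper's actual grid-size selection model appears in its Appendix A.1):

```python
def hybrid_schedule(num_tiles, num_sms):
    """Split the output tiles between full data-parallel waves and a
    Stream-K region, following the "two-tile Stream-K + data-parallel"
    idea: give one full wave back to Stream-K so that each Stream-K CTA
    receives between one and two tiles' worth of iterations."""
    full_waves = num_tiles // num_sms          # w = floor(t / p)
    remainder = num_tiles % num_sms
    if remainder == 0:                         # perfect quantization: pure data-parallel
        return full_waves, 0, 0.0
    if full_waves >= 2:                        # two-tile Stream-K + data-parallel
        dp_waves, sk_tiles = full_waves - 1, remainder + num_sms
    else:                                      # falls back to "DP + one-tile SK"
        dp_waves, sk_tiles = full_waves, remainder
    return dp_waves, sk_tiles, sk_tiles / num_sms   # tiles' worth per Stream-K CTA

# 896x384x128 GEMM with 128x128 output tiles on a hypothetical four-SM GPU:
# 7 x 3 = 21 tiles -> 4 full data-parallel waves plus 5 tiles balanced
# across 4 Stream-K CTAs (1.25 tiles' worth of iterations each).
print(hybrid_schedule(21, 4))
```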
This hybrid approach results in both improved memory access patterns and latency hiding. It also shows the versatility of the generic _Stream-K_ looping structure to implement different scheduling policies within the same kernel instance. ## 6. Performance Evaluation We have implemented our _Stream-K_ decomposition using NVIDIA's CUTLASS library of CUDA C++ template abstractions for authoring GEMM-like computations. CUTLASS provides the optimized equivalent of the CTA-wide MacLoop() subroutine in Algorithm 3, which performs blocking, tiling, and software-pipelined data movement that is analogous to the closed-source cuBLAS and cuDNN implementations. Our evaluation encompasses both (1) double-precision FP64 GEMM, and (2) mixed-precision FP16\(\rightarrow\)32 GEMM. For the latter, the input matrices \(\mathbf{A}\) and \(\mathbf{B}\) comprise half-precision FP16 values, yet the internal accumulation and output matrix \(\mathbf{C}\) values are single-precision FP32. _Hardware environment._ Our test GPU is the NVIDIA A100, which contains 108 SM cores. For measurement stability, we lock the power envelope at 400 W and SM clocks at 1005 MHz (\(\sim\)71% of their dynamic peak). This establishes FP64 tensor-core peak throughput of 13.9 TFLOP/s, and mixed FP16\(\rightarrow\)32 tensor-core peak throughput of 222.3 TFLOP/s. _Dataset._ Our test corpus intends to approximate the enormous breadth and scope of device-wide GEMM problems that GPU math kernel libraries are designed to accommodate. As shown in Figure 4, we evaluate 32,824 different problem sizes and shapes, log-sampled at random within a domain of \(m\), \(n\), and \(k\) matrix dimensions whose volume spans six orders of magnitude. Figure 4. The test domain of 32,824 GEMM problem shapes and sizes used for performance evaluation. Figure 3. Basic _Stream-K_ vs. hybrid execution schedules for \(896\times 384\times 128\) GEMM across a hypothetical four-SM GPU. _Methodology._ For both GEMM precisions, we build a single _Stream-K_ kernel that has been specialized per the guidelines in Section 5. Furthermore, these kernels implement our "two-tile _Stream-K + data-parallel_" hybrid decomposition. Our evaluation compares each _Stream-K_ kernel with: 1. the default _data-parallel_ CUTLASS kernel of the same blocking factor; 2. the cuBLAS ensemble for that precision (CUDA 11.6); and 3. an idealized oracle that will always select the highest performing _data-parallel_ CUTLASS blocking factor to execute for a given GEMM instance. For FP64 problems, this oracle selects among the ensemble of {(32\(\times\)32\(\times\)16), (32\(\times\)64\(\times\)16), (64\(\times\)64\(\times\)16), (64\(\times\)128\(\times\)16), (128\(\times\)128\(\times\)16)} blocking factor specializations. For FP16\(\rightarrow\)32, it selects among the ensemble of {(64\(\times\)64\(\times\)64), (64\(\times\)128\(\times\)32), (128\(\times\)128\(\times\)32), (128\(\times\)256\(\times\)32)} blocking factor specializations. These specializations are an open-source strict subset of the corresponding cuBLAS GEMM kernel ensembles. The "roofline" plots of Figure 5(a) and Figure 4(a) highlight the spread of performance produced by the singleton _data-parallel_ CUTLASS kernels. They plot the percentage of FP64 and FP16\(\rightarrow\)32 processor utilization as a function of computational intensity. Ideally, a GEMM implementation's performance response would manifest as a narrow band that adheres tightly to the machine's bandwidth- and compute-bound performance ceilings.
Here, the _data-parallel_ kernels exhibit a fairly large dynamic range for any given regime of arithmetic intensity. In contrast, the performance responses from the equivalent _Stream-K_ kernels in Figure 5(d) and Figure 4(d) are much tighter. These observations are corroborated by Table 1 and Table 2, which show the _Stream-K_ kernels outperforming their _data-parallel_ FP64 and FP16\(\rightarrow\)32 equivalents by an average of 1.23\(\times\) and 1.63\(\times\), respectively. For extreme strong-scaling scenarios where \(m\times n\) is small and \(k\) is large, our _Stream-K_ kernels demonstrate up to 5.63\(\times\) and 14.7 \(\times\) speedup, respectively. The second columns of Table 1 and Table 2 compare our _Stream-K_ performance with that of cuBLAS. On average, our FP64 and FP16\(\rightarrow\)32 _Stream-K_ GEMM kernels respectively deliver 6% and 13% greater throughput than their corresponding cuBLAS ensembles, with peak improvement of 2.55\(\times\) and 6.74\(\times\). This is a significant improvement over the breadth of 32K GEMM problem shapes and sizes with 20\(\times\)_less_ executable code (a single kernel for each precision) than NVIDIA's vendor GEMM library, cuBLAS. Furthermore, the contrast between the FP64 and FP16\(\rightarrow\)32 cuBLAS performance responses (Figure 5(b) and Figure 4(b)) versus those of our hypothetical CUTLASS oracle ensembles (Figure 5(c) and Figure 4(c)) reveal the difficulties of designing kernel selection heuristics that deliver consistently good performance. Despite having access to the same blocking factor specializations, cuBLAS exhibits substantially wider dynamic ranges than the idealized _data-parallel_ CUTLASS oracle. The performance spreads of our _Stream-K_ kernels are narrower still, achieving up to 4.6\(\times\) the idealized oracle performance and underscoring their ability to achieve utilization levels that are simply not possible from tile-centric work decompositions. Finally, we observe regimes of small, bandwidth-bound problem shapes where our largish blocking factors do not compete well against cuBLAS. However, if we restrict our scope to the domain of compute-bound problems (i.e., FP64 problems having compute intensity > 150 ops/byte and FP16 \(\rightarrow\) 32 problems > 400 ops/byte), Figure 6(a) and Figure 6(b) demonstrate that our singleton _Stream-K_ kernels achieve unilaterally higher performance than the cuBLAS ensembles. The "noisy" relative performance in the regimes below these thresholds is not surprising, as _Stream-K_ is attempting to make memory-bound computations run faster by adding more memory workload. This suggests a few avenues for future work, namely separate cost-modeling for the memory-bound regime and/or the bundling of a second _Stream-K_ kernel having smaller tile size into a two-kernel ensemble. ## 7. Conclusion We presented _Stream-K_, a novel parallel workload decomposition technique for scheduling general matrix multiplication (GEMM) and similar computations on wide architectures such as GPUs. Unlike other tile-splitting techniques, the MAC-loop iteration is our unit of workload quantization across processor cores. This affords excellent strong scaling and workload balancing because its cost is (1) a constant with respect to the problem shape, and (2) substantially smaller than that of an entire output tile. \begin{table} \begin{tabular}{c c c c c} \hline \hline & vs. & vs. & vs. & vs. 
\\ & CUTLASS & cuBLAS & cuBLAS & CUTLASS \\ & 128\(\times\) 128\(\times\) 32 & & \textgreater{} 150 ops/B & oracle \\ \hline **Average** & 1.63\(\times\) & 1.13\(\times\) & 1.15\(\times\) & 1.12\(\times\) \\ **StdDev** & 1.46 & 0.45 & 0.12 & 0.37 \\ **Min** & 0.80\(\times\) & 0.64\(\times\) & 0.98\(\times\) & 0.61\(\times\) \\ **Max** & 14.7\(\times\) & 6.74\(\times\) & 1.85\(\times\) & 4.63\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 2. _Stream-K_ FP16\(\rightarrow\)32 Relative Performance \begin{table} \begin{tabular}{c c c c c} \hline \hline & vs. & vs. & vs. \\ & CUTLASS & cuBLAS & cuBLAS & CUTLASS \\ & 128\(\times\) 128\(\times\) 32 & & \textgreater{} 150 ops/B & oracle \\ \hline **Average** & 1.63\(\times\) & 1.13\(\times\) & 1.15\(\times\) & 1.12\(\times\) \\ **StdDev** & 1.46 & 0.45 & 0.12 & 0.37 \\ **Min** & 0.80\(\times\) & 0.64\(\times\) & 0.98\(\times\) & 0.61\(\times\) \\ **Max** & 14.7\(\times\) & 6.74\(\times\) & 1.85\(\times\) & 4.63\(\times\) \\ \hline \hline \end{tabular} \end{table} Table 1. _Stream-K_ FP64 Relative Performance Furthermore, _Stream-K_ produces an \(O(p)\) number of splitting seams that are bound by the number of processor cores. Consequently, the overheads of strong scaling and workload balancing scale with processor width rather than problem size. This is a welcome feature for many applications that cannot afford to allocate large amounts of temporary storage equivalent to the problem output. Finally, we evaluated our _Stream-K_ approach across a broad spectrum of GEMM shapes and sizes. We showed that a single blocking configuration of _Stream-K_ can (1) achieve levels of absolute performance that match and/or exceed that of NVIDIA's cuBLAS library, even when the latter is operating at near-peak processor utilization, and (2) do so with much higher levels of performance consistency. Additionally, _Stream-K_ is an attractive option for library construction and maintenance, as it presents an opportunity to reduce distribution sizes by an order of magnitude and removes the need for complex handcoded heuristics or machine learning models for kernel selection without compromising performance. _Stream-K_ is open-sourced within CUTLASS 2.11 ([https://github.com/NVIDIA/cutlass](https://github.com/NVIDIA/cutlass)) and the performance shown within this paper can be reproduced when compiled using CUDA 11.8. For future works, we identify cache-aware, tile-access patterns such as Morton Order, an avenue for optimization. We also believe that _Stream-K_ decomposition could provide a similar improved performance response for other GEMM-like workloads that struggle with the same quantization inefficiencies. Figure 5. FP16\(\rightarrow\)FP32 GEMM “roofline” performance utilization landscapes on NVIDIA A100 across 32K GEMM problem shapes and sizes. ## Acknowledgments This material is based upon work supported by Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-18-3-0007. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. Government. Distribution Statement "A" (Approved for Public Release, Distribution Unlimited). We would like to acknowledge Louis Feng, Valentin Andrei, Zhongyi Lin and Serban D. Porumbescu for their feedback on early drafts of the paper.
2307.07470
Charm-Meson $t$-channel Singularities in an Expanding Hadron Gas
We study the time evolution of the numbers of charm mesons after the kinetic freezeout of the expanding hadron gas produced by the hadronization of the quark-gluon plasma from a central heavy-ion collision. The $\pi D$ reaction rates have contributions from a $D^\ast$ resonance in the $s$ channel. The $\pi D^\ast$ reaction rates are enhanced by $t$-channel singularities from an intermediate $D$. The contributions to reaction rates from $D^\ast$ resonances and $D$-meson $t$-channel singularities are sensitive to thermal mass shifts and thermal widths. In the expanding hadron gas, the $t$-channel singularities are regularized by the thermal $D$ widths. After kinetic freezeout, the thermal $D$ widths are dominated by coherent pion forward scattering. The contributions to $\pi D^\ast$ reaction rates from $t$-channel singularities are inversely proportional to the pion number density, which decreases to 0 as the hadron gas expands. The $t$-channel singularities produce small but significant changes in charm-meson ratios from those predicted using the known $D^\ast$-decay branching fractions.
Eric Braaten, Roberto Bruschini, Li-Ping He, Kevin Ingles, Jun Jiang
2023-07-14T16:51:28Z
http://arxiv.org/abs/2307.07470v1
# Charm-Meson \(t\)-channel Singularities ###### Abstract We study the time evolution of the numbers of charm mesons after the kinetic freezeout of the expanding hadron gas produced by the hadronization of the quark-gluon plasma from a central heavy-ion collision. The \(\pi D\) reaction rates have contributions from a \(D^{*}\) resonance in the \(s\) channel. The \(\pi D^{*}\) reaction rates are enhanced by \(t\)-channel singularities from an intermediate \(D\). The contributions to reaction rates from \(D^{*}\) resonances and \(D\)-meson \(t\)-channel singularities are sensitive to thermal mass shifts and thermal widths. In the expanding hadron gas, the \(t\)-channel singularities are regularized by the thermal \(D\) widths. After kinetic freezeout, the thermal \(D\) widths are dominated by coherent pion forward scattering. The contributions to \(\pi D^{*}\) reaction rates from \(t\)-channel singularities are inversely proportional to the pion number density, which decreases to 0 as the hadron gas expands. The \(t\)-channel singularities produce small but significant changes in charm-meson ratios from those predicted using the known \(D^{*}\)-decay branching fractions. Charm mesons, effective field theory, heavy-ion collisions. Introduction The charm mesons that are most easily observed in high-energy experiments are the pseudoscalar mesons \(D^{+}\) and \(D^{0}\) and the vector mesons \(D^{*+}\) and \(D^{*0}\). The \(D\)'s have very long lifetimes because they decay by the weak interactions. The \(D^{*}\)'s are resonances whose widths are several orders of magnitude narrower than those of most hadron resonances. This remarkable feature arises because the \(D^{*}\)-\(D\) mass splittings are very close to the pion mass \(m_{\pi}\), which limits the phase space available for those decays \(D^{*}\to D\pi\) that are kinematically allowed. The soft pion in the rest frame of the \(D^{*}\) suppresses the rate for a hadronic decay \(D^{*}\to D\pi\), making it comparable to that for a radiative decay \(D^{*}\to D\gamma\). Another consequence of \(D^{*}\)-\(D\) mass splittings being approximately equal to \(m_{\pi}\) is that there are charm-meson reactions with a \(t\)-channel singularity. A \(t\)-channel singularity is a divergence in the rate for a reaction in which an unstable particle decays and one of the particles from its decay is scattered [1]. The singularity arises because the scattered particle can be on shell. The adjective "\(t\)-channel" refers to the fact that in the case of a \(2\to 2\) reaction, the scattered particle is exchanged in the \(t\) channel. The existence of \(t\)-channel singularities was first pointed out by Peierls in 1961 in the case of \(\pi N^{*}\) scattering through the exchange of a nucleon [2]. An example of a reaction with a \(t\)-channel singularity in the Standard Model of particle physics is \(\nu_{e}Z^{0}\to\nu_{e}Z^{0}\), which can proceed through the exchange of \(\bar{\nu}_{e}\), which is one of the decay products in \(Z^{0}\to\nu_{e}\bar{\nu}_{e}\). The tree-level cross section diverges when the center-of-mass energy is greater than \(\sqrt{2}\,M_{Z}\), because the \(\bar{\nu}_{e}\) can be on shell. Another reaction with a \(t\)-channel singularity is \(\mu^{+}\mu^{-}\to W^{+}e^{-}\bar{\nu}_{e}\), which can proceed through exchange of \(\nu_{\mu}\), which is among the decay products in \(\mu^{-}\to\nu_{\mu}e^{-}\bar{\nu}_{e}\). 
Melnikov and Serbo solved the divergence problem by taking into account the finite transverse sizes of the colliding \(\mu^{+}\) and \(\mu^{-}\) beams [3]. A general discussion of \(t\)-channel singularities has been presented by Grzadkowski, Iglicki, and Mrowczynski [1]. They pointed out that if a reaction with a \(t\)-channel singularity occurs in a thermal medium, the divergence is regularized by the thermal width of the exchanged particle. The most divergent term in the reaction rate is replaced by a term inversely proportional to the thermal width. A general discussion of the thermal regularization of \(t\)-channel singularities has recently been presented by Iglicki [4]. The simplest charm-meson reactions with a \(t\)-channel singularity are \(\pi D^{*}\to\pi D^{*}\). There are \(t\)-channel singularities in 6 of the 10 scattering channels: the elastic scattering of \(\pi^{0}D^{*+}\), \(\pi^{+}D^{*+}\), and \(\pi^{0}D^{*0}\) and the inelastic reactions \(\pi^{0}D^{*+}\to\pi^{+}D^{*0}\), \(\pi^{+}D^{*0}\to\pi^{0}D^{*+}\), and \(\pi^{-}D^{*+}\to\pi^{0}D^{*0}\). These reactions can proceed through the decay \(D^{*}\to D\pi\) followed by the inverse decay \(\pi D\to D^{*}\). The \(t\)-channel singularity arises because the intermediate \(D\) can be on shell. The cross section diverges when the center-of-mass energy squared, \(s\), is in a narrow interval close to the threshold. In the case of the elastic scattering reaction \(\pi D^{*}\to\pi D^{*}\), the \(t\)-channel singularity region is \[2M_{*}^{2}-M^{2}+2m_{\pi}^{2}<s<(M_{*}^{2}-m_{\pi}^{2})^{2}/M^{2}, \tag{1}\] where \(M_{*}\) and \(M\) are the masses of \(D^{*}\) and \(D\). The lower endpoint of the interval is above the threshold \((M_{*}+m_{\pi})^{2}\) by approximately \(2M\delta\), where \(\delta=M_{*}-M-m_{\pi}\). The small energy difference \(\delta\) is comparable to isospin splittings. The difference between the upper and lower endpoints is approximately \(8(M_{*}/M)m_{\pi}\delta\), which has a further suppression factor of \(m_{\pi}/M\). The interval in the center-of-mass energy \(\sqrt{s}\) is largest for the reaction \(\pi^{0}D^{*0}\to\pi^{0}D^{*0}\) extending from 6.1 MeV to 8.1 MeV above the threshold \(M_{*}+m_{\pi}=2141.8\) MeV. Since the \(t\)-channel singularity arises because the intermediate \(D\) can be on-shell, the divergence in the cross section could be regularized by taking into account the tiny decay width \(\Gamma\) of the \(D\), which would replace the divergent term by a term with a factor \(1/\Gamma\). However, the resulting enormous cross section is unphysical. One reason is that the widths of the incoming and outgoing \(D^{*}\) are larger than \(\Gamma\) by about 8 orders of magnitude, so the \(D^{*}\) widths are more relevant than the width of \(D\). An obvious question is whether the \(t\)-channel singularities in charm-meson reactions have any observable consequences. One situation in which there may be observable consequences is the production of charm mesons in relativistic heavy-ion collisions. A central heavy-ion collision is believed to produce a hot dense region of quark-gluon plasma in which quarks and gluons are deconfined. The quark-gluon plasma expands and cools until it reaches the temperature for the crossover transition to a hadron resonance gas in which the quarks and gluons are confined into hadrons. 
After hadronization, the hadron resonance gas continues to expand and cool until it reaches kinetic freezeout, after which the momentum distributions of the hadrons are no longer affected by scattering. After kinetic freezeout, the hadron gas continues to expand as the hadrons free-stream away from the interaction region. The \(t\)-channel singularities in charm-meson reactions could have significant effects either during the expansion and cooling of the hadron resonance gas between hadronization and kinetic freezeout or during the expansion of the hadron gas after kinetic freezeout. In the hadron gas produced by a heavy-ion collision, \(t\)-channel singularities are regularized by the thermal widths of the hadrons. The divergent term in the rate for a reaction with a \(t\)-channel singularity is replaced by a term inversely proportional to the thermal width of the hadron that can be on shell. Between hadronization and kinetic freezeout, the thermal widths are determined by the temperature. After kinetic freezeout, the thermal widths are determined by the temperature at kinetic freezeout and by the density of the system, which decreases as the hadron gas expands. In this paper, we restrict our study of the effects of \(t\)-channel singularities in charm-meson reactions to the expanding hadron gas after kinetic freezeout. The restriction to after kinetic freezeout offers many simplifications. The only hadrons in the hadron resonance gas that remain are the most stable ones whose lifetimes \(\tau\) are long enough that \(c\tau\) is larger than the size of the hadron gas, whose order of magnitude is 10 fm. The most abundant hadrons by far are pions. The temperature at kinetic freezeout is low enough that the interactions of charm mesons and pions can be described by a chiral effective field theory. The relevant charm mesons are \(D^{+}\), \(D^{0}\), \(D^{*+}\), and \(D^{*0}\). The decays of \(D^{*+}\) and \(D^{*0}\), whose lifetimes satisfy \(c\tau>2000\) fm, occur long after kinetic freezeout. The dominant contribution to the thermal width of a charm meson comes from the coherent forward scattering of pions and is proportional to the pion number density \(\mathfrak{n}_{\pi}\), which decreases to 0 as the hadron gas expands. A \(D\)-meson \(t\)-channel singularity therefore gives a contribution to the reaction rate inversely proportional to \(\mathfrak{n}_{\pi}\). The factor of \(1/\mathfrak{n}_{\pi}\) can cancel a multiplicative factor of \(\mathfrak{n}_{\pi}\) in a term in a rate equation, increasing the importance of that term at late times. In Ref. [5], we showed that \(D\)-meson \(t\)-channel singularities in the reactions \(\pi D^{*}\to\pi D^{*}\) produce significant modifications to the ratios of charm mesons produced by heavy-ion collisions. In this paper, we present the details of the calculations that lead to this surprising result. The rest of the paper is organized as follows. In Section II, we establish our notation for various properties of charm mesons. In Section III, we describe the hadron resonance gas produced by a heavy-ion collision and we present a simple model for its time evolution. In Section IV, we calculate the mass shifts and thermal widths of charm meson in a pion gas. In Section V, we calculate reaction rates of charm meson and pions in a pion gas. In Section VI, we solve the rate equations for the charm-meson number densities in the expanding hadron gas produced by a heavy-ion collision after kinetic freezeout. 
We show that \(D\)-meson \(t\)-channel singularities produce small but significant changes in the ratios of charm-meson abundances. We summarize our results in Section VII. In Appendix A, we give the Feynman rules for heavy-hadron \(\chi\)EFT used in the calculations in Sections IV and V. In Appendix B, we calculate a thermal average over the pion momentum distribution that is sensitive to isospin splittings. ## II Charm Mesons In this section, we introduce notation for the masses and decay widths of charm mesons. We also describe simple relations between numbers of charm mesons that involve \(D^{*}\) branching fractions. ### Masses and widths We denote the masses of the pseudoscalar charm mesons \(D^{+}\) and \(D^{0}\) by \(M_{+}\) and \(M_{0}\) and the masses of the vector charm mesons \(D^{*+}\) and \(D^{*0}\) by \(M_{*+}\) and \(M_{*0}\). We denote the \(D^{*a}-D^{b}\) mass difference by \(\Delta_{ab}=M_{*a}-M_{b}\). The average of the four mass differences is \(\Delta=141.3\) MeV. We denote the masses of the pions \(\pi^{\pm}\) and \(\pi^{0}\) by \(m_{\pi+}\) and \(m_{\pi 0}\). We sometimes also denote the mass of the pion produced in the transition \(D^{*a}\to D^{b}\pi\) by \(m_{\pi ab}\): \(m_{\pi ab}=m_{\pi 0}\) if \(a=b\), \(m_{\pi ab}=m_{\pi+}\) if \(a\neq b\). When isospin splittings can be neglected, we take the pion mass \(m_{\pi}\) to be the average over the three pion flavors: \(m_{\pi}=138.0\) MeV. Many reaction rates are sensitive to the difference between a \(D^{*}\) mass and a \(D\pi\) scattering threshold. The differences for the transitions \(D^{*}\to D\pi\) that conserve electric charge are \[\Delta_{00}-m_{\pi 0} = 7.04\pm 0.03\ {\rm MeV}, \tag{2a}\] \[\Delta_{+0}-m_{\pi+} = 5.855\pm 0.002\ {\rm MeV},\] (2b) \[\Delta_{++}-m_{\pi 0} = 5.63\pm 0.02\ {\rm MeV},\] (2c) \[\Delta_{0+}-m_{\pi+} = -2.38\pm 0.03\ {\rm MeV}. \tag{2d}\] The negative value of \(\Delta_{0+}-m_{\pi+}\) implies that the decay \(D^{*0}\to D^{+}\pi^{-}\) is kinematically forbidden. We denote the total decay widths of the vector charm mesons \(D^{*+}\) and \(D^{*0}\) by \(\Gamma_{*+}\) and \(\Gamma_{*0}\). The decay width of \(D^{*+}\) is measured. The decay width of \(D^{*0}\) can be predicted using Lorentz invariance, chiral symmetry, isospin symmetry, and measured \(D^{*}\) branching fractions: \[\frac{{\rm Br}[D^{*0}\to D^{0}\pi^{0}]\ \Gamma_{*0}}{{\rm Br}[D^{*+}\to D^{0}\pi^{+}]\,\Gamma_{*+}} = \frac{\lambda^{3/2}(M_{*0}^{2},M_{0}^{2},m_{\pi 0}^{2})/M_{*0}^{5}}{2 \,\lambda^{3/2}(M_{*+}^{2},M_{0}^{2},m_{\pi+}^{2})/M_{*+}^{5}}, \tag{3}\] where \(\lambda(x,y,z)=x^{2}+y^{2}+z^{2}-2(xy+yz+zx)\). The branching fractions for the decays \(D^{*}\to D\pi\) are \[B_{+0} \equiv \mbox{Br}\big{[}D^{*+}\to D^{0}\pi^{+}\big{]}=(67.7\pm 0.5)\%, \tag{4a}\] \[B_{00} \equiv \mbox{Br}\big{[}D^{*0}\to D^{0}\pi^{0}\big{]}=(64.7\pm 0.9)\%,\] (4b) \[B_{++} \equiv \mbox{Br}\big{[}D^{*+}\to D^{+}\pi^{0}\big{]}=(30.7\pm 0.5)\%. \tag{4c}\] The \(D^{*}\) decay widths are \[\Gamma_{*+} \equiv \Gamma[D^{*+}]=83.4\pm 1.8\ \mbox{keV}, \tag{5a}\] \[\Gamma_{*0} \equiv \Gamma[D^{*0}]=55.4\pm 1.5\ \mbox{keV}. \tag{5b}\] The \(D^{*}\) radiative decay rates are \[\Gamma_{*+,\gamma} \equiv \Gamma\big{[}D^{*+}\to D^{+}\gamma\big{]}=1.3\pm 0.3\ \mbox{keV}, \tag{6a}\] \[\Gamma_{*0,\gamma} \equiv \Gamma\big{[}D^{*0}\to D^{0}\gamma\big{]}=19.6\pm 0.7\ \mbox{keV}. 
\tag{6b}\] The decay widths of the spin-0 charm mesons \(D^{+}\) and \(D^{0}\) are smaller than those for \(D^{*}\) by about 8 orders of magnitude, because they only decay through weak interactions. The interactions of low-energy pions with momenta at most comparable to \(m_{\pi}\) can be described by chiral effective field theory (\(\chi\)EFT) [6]. The self-interactions of pions in \(\chi\)EFT at leading order (LO) are determined by the pion decay constant \(f_{\pi}\). It can be determined from the partial decay rate for \(\pi^{+}\) into \(\mu^{+}\,\nu_{\mu}\): \[\Gamma\big{[}\pi^{+}\to\mu^{+}\nu_{\mu}\big{]}=\frac{1}{8\pi}|V_{ud}|^{2}G_{F} ^{2}f_{\pi}^{2}\frac{m_{\mu}^{2}(m_{\pi^{+}}^{2}-m_{\mu}^{2})^{2}}{m_{\pi^{+}} ^{3}}. \tag{7}\] From the measured decay rate, we obtain \(f_{\pi}=131.7\) MeV. The interactions of charm mesons with low-energy pions can be described by heavy-hadron \(\chi\)EFT (HH\(\chi\)EFT) [7; 8; 9]. The first-order corrections in HH\(\chi\)EFT include terms suppressed by \(m_{\pi}/M\) and \(\Delta/M\). Isospin splittings can be treated as second-order corrections. The partial decay rate for \(D^{*}\to D\pi\) in HH\(\chi\)EFT at LO is sensitive to isospin splittings through a multiplicative factor \((\Delta^{2}-m_{\pi}^{2})^{3/2}\). Isospin splittings can be taken into account in the partial decay rate for \(D^{*a}\to D^{b}\pi\) by replacing \(\Delta\) by \(\Delta_{ab}=M_{*a}-M_{b}\) and \(m_{\pi}\) by the mass \(m_{\pi ab}\) of the emitted pion. The resulting expression for the partial decay rate is \[\Gamma\big{[}D^{*a}\to D^{b}\pi\big{]}=\frac{g_{\pi}^{2}}{12\pi\,f_{\pi}^{2}} \left(2-\delta_{ab}\right)\,\left(\Delta_{ab}^{2}-m_{\pi ab}^{2}\right)^{3/2} \theta(\Delta_{ab}-m_{\pi ab}). \tag{8}\] The dimensionless coupling constant \(g_{\pi}\) can be determined from measurements of the decay \(D^{*+}\to D^{0}\pi^{+}\): \(g_{\pi}=0.520\pm 0.006\). ### Charm-meson numbers The numbers of charm hadrons created in a high-energy collision must be inferred from the numbers that are detected. The decay of \(D^{*0}\) always produces \(D^{0}\). The decay of \(D^{*+}\) produces \(D^{0}\) and \(D^{+}\) with branching fractions \(B_{+0}\) and \(1-B_{+0}\). We denote the numbers of \(D^{0}\), \(D^{+}\), \(D^{*0}\), and \(D^{*+}\) observed in some kinematic region by \(N_{D^{0}}\), \(N_{D^{+}}\), \(N_{D^{*0}}\), and \(N_{D^{*+}}\). The observed numbers of \(D^{0}\) and \(D^{+}\) can be predicted in terms of the numbers \((N_{D^{*}})_{0}\) and \((N_{D^{*a}})_{0}\) before \(D^{*}\) decays and the branching fraction \(B_{+0}\): \[N_{D^{0}} = \left(N_{D^{0}}\right)_{0}+\left(N_{D^{*0}}\right)_{0}+B_{+0} \left(N_{D^{*+}}\right)_{0}, \tag{9a}\] \[N_{D^{+}} = \left(N_{D^{+}}\right)_{0}+0+\left(1-B_{+0}\right)\left(N_{D^{*+} }\right)_{0}. \tag{9b}\] The last two terms in each equation come from \(D^{*0}\) and \(D^{*+}\) decays, respectively. The difference between the numbers of \(D^{0}\) and \(D^{+}\) can be expressed as \[N_{D^{0}}-N_{D^{+}}=2B_{+0}\left(N_{D^{*+}}\right)_{0}+\left(N_{D^{0}}-N_{D^{ +}}\right)_{0}+\left(N_{D^{*0}}-N_{D^{*+}}\right)_{0}. \tag{10}\] The simple relations in Eqs. (9) have been assumed in all previous analyses of charm-meson production. We will show in this paper that these relations can be modified by \(t\)-channel singularities. In a high-energy hadron collision, the numbers of \(D^{0}\) and \(D^{+}\) created in some kinematic region should be approximately equal by isospin symmetry: \((N_{D^{0}})_{0}\approx(N_{D^{+}})_{0}\). 
Similarly, the numbers of \(D^{*0}\) and \(D^{*+}\) created should be approximately equal: \((N_{D^{*0}})_{0}\approx(N_{D^{*+}})_{0}\). The deviations from isospin symmetry in the charm cross section should be negligible, because isospin splittings are tiny compared to the energy available for producing additional hadrons. The decays of bottom hadrons give isospin-violating contributions to charm-meson production, but the bottom cross section is much smaller than the charm cross section at present-day colliders. The charm mesons that are most easily observed at a hadron collider are \(D^{0}\), \(D^{+}\), and \(D^{*+}\), because they have significant decay modes with all charged particles. If the only reactions of charm mesons after their production are \(D^{*}\) decays, the ratios of the observed numbers of \(D^{0}\), \(D^{+}\), and \(D^{*+}\) are determined by the vector/pseudoscalar ratio before \(D^{*}\) decays, which we denote by \((N_{D^{*}}/N_{D})_{0}\). Assuming isospin symmetry, that ratio can be expressed in terms of the observed numbers \(N_{D^{0}}\), \(N_{D^{+}}\), and \(N_{D^{*+}}\): \[\left(\frac{N_{D^{*}}}{N_{D}}\right)_{0}\approx\frac{2N_{D^{*+}}}{N_{D^{0}}+N _{D^{+}}-2\,N_{D^{*+}}}. \tag{11}\] Isospin symmetry also implies that there is a combination of the three observed numbers that is completely determined by \(B_{+0}\): \[\frac{N_{D^{0}}-N_{D^{+}}}{N_{D^{*+}}}\approx 2\,B_{+0}=1.35\pm 0.01. \tag{12}\] Deviations from this prediction must come either from initial conditions that deviate from isospin symmetry or from charm-meson reactions other than \(D^{*}\) decays that also violate isospin symmetry. Reactions with \(t\)-channel singularities are examples of such reactions. ## III Heavy-Ion Collisions In this section, we present a simple model for the hadron resonance gas produced by a central relativistic heavy-ion collision. We describe the Statistical Hadronization Model for the abundances of hadrons produced by a heavy-ion collision. Finally we describe the number densities of pions and charm mesons both before and after the kinetic freezeout of the hadron gas. ### Expanding hadron gas The central collision of relativistic heavy ions is believed to produce a quark-gluon plasma (QGP) consisting of deconfined quarks and gluons which then evolves into a hadron resonance gas (HRG) consisting of hadrons. A heavy-ion collision involves multiple stages: the collisions of the Lorentz-contracted nucleons in the nuclei, the formation and thermalization of the QGP, the expansion and cooling of the QGP, the hadronization of the QGP into the HRG, the expansion and cooling of the HRG as most of the resonances decay, the kinetic freezeout of the HRG when its density becomes too low for collisions to change momentum distributions, and finally the expansion of the resulting hadron gas by the free-streaming of hadrons. For each stage, complicated phenomenological models have been developed to provide quantitative descriptions [10; 11; 12; 13; 14]. A natural variable to describe the space-time evolution of the system created by the heavy-ion collision is the proper time \(\tau\) since the collision. A simple phenomenological model that may describe the essential features of the system between the equilibration of the QGP and the kinetic freezeout of the HRG is a homogeneous system with volume \(V(\tau)\) in thermal equilibrium at temperature \(T(\tau)\). We denote the proper time just after hadronization by \(\tau_{H}\) and the proper time at kinetic freezeout by \(\tau_{\rm kf}\). 
The volume increases from \(V_{H}\) at \(\tau_{H}\) to \(V_{\rm kf}\) at \(\tau_{\rm kf}\), while the temperature decreases from \(T_{H}\) to \(T_{\rm kf}\). These proper times, volumes, and temperatures can be determined by fitting the outputs of simplified hydrodynamic models for heavy-ion collisions. Values of the volumes \(V_{H}\) and \(V_{\rm kf}\) and the temperatures \(T_{H}\) and \(T_{\rm kf}\) for various heavy-ion colliders are given in Refs. [15; 16]. An explicit parametrization of the volume \(V(\tau)\) can be obtained by assuming the boost-invariant longitudinal expansion proposed by Bjorken [17] and an accelerated transverse expansion caused by the pressure of the QGP before hadronization and by the pressure of the HRG after hadronization [18]. The parametrization of \(V(\tau)\) for the HRG between hadronization and kinetic freezeout is [19] \[V(\tau)=\pi\big{[}R_{H}+v_{H}(\tau-\tau_{H})+a_{H}(\tau-\tau_{H})^{2}/2\big{]} ^{2}\,c\tau\qquad(\tau_{H}<\tau<\tau_{\rm kf}), \tag{13}\] where \(R_{H}\), \(v_{H}\), and \(a_{H}\) are the transverse radius, velocity, and acceleration at \(\tau_{H}\). If the transverse velocity \(v_{H}+a_{H}(\tau-\tau_{H})\) reaches the speed of light before kinetic freezeout, the subsequent transverse expansion proceeds at the constant velocity \(c\). The temperature \(T(\tau)\) can be determined by assuming isentropic expansion. The parametrization of \(T(\tau)\) for the HRG between hadronization and kinetic freezeout in Ref. [20] is \[T(\tau)=T_{H}+(T_{\rm kf}-T_{H})\left(\frac{\tau-\tau_{H}}{\tau_{\rm kf}- \tau_{H}}\right)^{4/5}\qquad(\tau_{H}<\tau<\tau_{\rm kf}). \tag{14}\] The parameters in \(V(\tau)\) and \(T(\tau)\) for central Pb-Pb collisions at 5.02 TeV are given in Ref. [21]. Hadronization and kinetic freezeout occur at the proper times \(\tau_{H}=10.2\) fm/\(c\) and \(\tau_{\rm kf}=21.5\) fm/\(c\). Between hadronization and kinetic freezeout, the temperature decreases from \(T_{H}=156\) MeV to \(T_{\rm kf}=115\) MeV. The transverse radius increases from \(R_{H}=13.0\) fm to 24.0 fm. The transverse speed increases from \(v_{H}=0.78\,c\) to \(c\) at \(\tau=12.7\) fm/\(c\) and then remains constant at \(v_{\rm kf}=c\). After kinetic freezeout, the system continues to expand, but the momentum distributions of the hadrons are those for a fixed temperature: \(T(\tau)=T_{\rm kf}\). A simple model for the volume \(V(\tau)\) is continued longitudinal expansion at the speed of light and transverse expansion at the same speed \(v_{\rm kf}\) as at kinetic freezeout: \[V(\tau)=\pi\left[R_{\rm kf}+v_{\rm kf}(\tau-\tau_{\rm kf})\right]^{2}c\tau\qquad (\tau>\tau_{\rm kf}). \tag{15}\] We assume the system remains homogeneous throughout the expanding volume \(V(\tau)\). In the absence of further interactions, the number density for each stable hadron would decrease in proportion to \(1/V(\tau)\) as \(\tau\) increases. Charm quarks and antiquarks are created in the hard collisions of the nucleons that make up the heavy ions. Charm quarks are assumed to quickly thermalize with the QGP at the temperature \(T(\tau)\). They are not in chemical equilibrium, because the temperature of the QGP is too low for gluon and light-quark collisions to create charm quark-antiquark pairs. The low density of charm quarks suppresses the annihilation of charm quarks and antiquarks, so the charm-quark and charm-antiquark numbers are essentially conserved. 
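To make the expansion model concrete, the following minimal sketch implements Eqs. (13)-(15) with the central Pb-Pb parameters quoted above. The transverse acceleration \(a_{H}\) is not quoted directly; here it is fixed (an assumption of this sketch) by requiring the transverse speed to reach \(c\) at \(\tau=12.7\) fm/\(c\).

```python
import numpy as np

# Sketch of the expansion model in Eqs. (13)-(15) for central Pb-Pb collisions.
tau_H, tau_kf = 10.2, 21.5      # fm/c, hadronization and kinetic freezeout
R_H, v_H = 13.0, 0.78           # fm, and transverse speed in units of c
T_H, T_kf = 156.0, 115.0        # MeV
tau_c = 12.7                    # fm/c, where the transverse speed reaches c
a_H = (1.0 - v_H)/(tau_c - tau_H)   # assumed, ~0.088 c per fm/c

def radius(tau):
    """Transverse radius R(tau) between hadronization and freezeout."""
    if tau <= tau_c:
        return R_H + v_H*(tau - tau_H) + 0.5*a_H*(tau - tau_H)**2
    R_c = R_H + v_H*(tau_c - tau_H) + 0.5*a_H*(tau_c - tau_H)**2
    return R_c + (tau - tau_c)      # constant speed c after tau_c

def volume(tau):
    """V(tau) in fm^3: Eq. (13) before freezeout, Eq. (15) after."""
    if tau <= tau_kf:
        return np.pi*radius(tau)**2 * tau
    return np.pi*(radius(tau_kf) + (tau - tau_kf))**2 * tau   # v_kf = c

def temperature(tau):
    """T(tau) in MeV from Eq. (14); constant T_kf after freezeout."""
    return T_kf if tau >= tau_kf else T_H + (T_kf - T_H)*((tau - tau_H)/(tau_kf - tau_H))**0.8

print(radius(tau_kf))       # ~24 fm, the transverse radius quoted at freezeout
print(temperature(tau_kf))  # 115 MeV
```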
Conservation of charm-quark number determines the charm-quark fugacity \(g_{c}(\tau)\) in terms of the temperature \(T(\tau)\) and the volume \(V(\tau)\). After hadronization, charm hadrons are in thermal equilibrium with the HRG at the temperature \(T(\tau)\). Their number densities evolve according to rate equations consistent with the conservation of charm-quark number. The charm hadrons are assumed to remain in thermal equilibrium until kinetic freezeout, after which they free-stream to the detector. ### Statistical Hadronization Model The Statistical Hadronization Model (SHM) is a model for the abundances of hadrons produced by a heavy-ion collision [22]. According to the SHM, the hadronization of the QGP into the HRG occurs while they are in chemical and thermal equilibrium with each other at a specific hadronization temperature \(T_{H}\) that can be identified with the temperature of the crossover between the QGP and the HRG. At hadronization, the number density of any spin state of a light hadron depends only on the hadron mass and the temperature \(T_{H}\). (At sufficiently high rapidity or at lower heavy-ion collision energies, a number density can also depend on the baryon chemical potential.) The SHM takes into account the subsequent decays of hadron resonances, which increase the abundances of the lighter and more stable hadrons. The SHM does not take into account the scattering reactions that allow the HRG to remain in thermal equilibrium after hadronization. The SHM can also describe the abundances of charm hadrons produced by a heavy-ion collision [23]. According to the SHM, charm hadrons are created during hadronization while the QGP and HRG are in thermal equilibrium at the temperature \(T_{H}\). At hadronization, the number density of any spin state of a charm hadron is determined only by its mass, the hadronization temperature \(T_{H}\), and multiplicative factors of the charm-quark fugacity \(g_{c}\). The number density of a charm hadron with a single charm quark or antiquark is larger than the number density in chemical equilibrium by the factor \(g_{c}\). The number density of a hadron whose heavy constituents consist of \(n\) charm quarks and antiquarks is larger than the number density in chemical equilibrium by \(g_{c}^{n}\)[24]. The SHM gives simple predictions for charm-hadron ratios at hadronization. Since the mass of a charm hadron is so large compared to \(T(\tau)\), its momentum distribution in the HRG can be approximated by a relativistic Boltzmann distribution. The cham-hadron fugacity enters simply as a multiplicative factor. At hadronization, the charm-hadron fugacity is the product of the charm-quark fugacity \(g_{c}\) and the number of spin states. The factor of \(g_{c}\) cancels in ratios of charm-hadron number densities. The ratio of the numbers of vector and pseudoscalar charm mesons at hadronization is predicted to be \[\frac{N_{D^{*}}}{N_{D}}=3\,\frac{M_{*}^{2}K_{2}(M_{*}/T_{H})}{M^{2}K_{2}(M/T_{H})}, \tag{16}\] where \(M\) and \(M_{*}\) are the masses of \(D\) and \(D^{*}\), which we take to be the isospin averages of the masses of the pseudoscalar and vector charm mesons, respectively. At the hadronization temperature \(T_{H}=156\) MeV, the vector/pseudoscalar ratio is predicted to be \(N_{D^{*}}/N_{D}=1.339\). Ratios of the charm-hadron number densities for isospin partners are given by equations analogous to Eq. (16) but without the factor of 3. 
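A quick numerical evaluation of Eq. (16) is given by the sketch below; the isospin-averaged masses are PDG values supplied for illustration.

```python
import numpy as np
from scipy.special import kv   # modified Bessel function K_nu

# SHM vector/pseudoscalar ratio at hadronization, Eq. (16).
M, M_star, T_H = 1867.2, 2008.6, 156.0   # MeV, isospin-averaged D and D* masses (assumed)
print(3.0*M_star**2*kv(2, M_star/T_H)/(M**2*kv(2, M/T_H)))   # ~1.34

# The analogous ratio without the factor of 3 gives an isospin-partner ratio,
# e.g. N_D0/N_D+ at hadronization:
M_D0, M_Dp = 1864.84, 1869.66
print(M_D0**2*kv(2, M_D0/T_H)/(M_Dp**2*kv(2, M_Dp/T_H)))     # ~1.03
```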
The predicted ratio for pseudoscalar charm mesons at hadronization is \(N_{D^{0}}/N_{D^{+}}=1.028\). The predicted ratio for vector charm mesons at hadronization is \(N_{D^{*0}}/N_{D^{*+}}=1.020\). The SHM predictions for charm-hadron ratios are modified from the simple predictions at hadronization by the feeddown from the decays of higher charm-hadron resonances. The SHM has been applied to Pb-Pb collisions at a nucleon-nucleon center-of-mass energy \(\sqrt{s_{NN}}=5.02\) TeV in Ref. [24] for various centrality bins; we choose to focus only on the most central collisions. The charm-quark fugacity at hadronization has been determined to be \(g_{c}=29.6\pm 5.2\). Predictions for the multiplicities \(dN/dy\) for 4 charm mesons and 2 charm baryons at midrapidity (\(|y|<\frac{1}{2}\)) are given in Table 1 of Ref. [24]. The expanding hadron gas is modeled by a "core" in which the formation of charm hadrons is described by the SHM and a "corona" in which their formation is described by that in \(pp\) collisions. For collisions in the centrality range 0-10%, the predicted multiplicities \(dN/dy\) from the core for \(D^{0}\), \(D^{+}\), and \(D^{*+}\) are 6.02, 2.67, and 2.36, respectively, with error bars consistent with those from a multiplicative factor of \(g_{c}=29.6\pm 5.2\). The error bars on ratios of the multiplicities should be much smaller than 18%, but they cannot be determined from the results presented in Ref. [24]. The predicted additional multiplicities \(dN/dy\) from the corona for \(D^{0}\), \(D^{+}\), and \(D^{*+}\) are 0.396, 0.175, and 0.160, respectively. The effect of the corona is to increase all three multiplicities by about 7%. An SHM prediction for the vector/pseudoscalar ratio before \(D^{*}\) decays can be obtained by inserting the predicted total multiplicities for \(D^{0}\), \(D^{+}\), and \(D^{*+}\) into Eq. (11): \((N_{D^{*}}/N_{D})_{0}=1.194\). This is significantly smaller than the ratio 1.339 at hadronization predicted by Eq. (16), but it also includes feeddown effects from decays of higher resonances. The SHM prediction for the ratio \((N_{D^{0}}-N_{D^{+}})/N_{D^{*+}}\) is 1.42. This is larger than the isospin-symmetry prediction 1.35 in Eq. (12) by about 5%. This, in turn, is larger than the thermal isospin-symmetry deviations at hadronization predicted by the SHM, which are less than 3%. ### Pion momentum distributions The temperature \(T\) of the HRG is comparable to the pion mass \(m_{\pi}\). By isospin symmetry, the pions \(\pi^{-}\), \(\pi^{0}\), and \(\pi^{+}\) all have the same number density \(\mathfrak{n}_{\pi}\). The number density for pions in chemical and thermal equilibrium at temperature \(T\) is \[\mathfrak{n}_{\pi}^{\rm(eq)}=\int\frac{d^{3}q}{(2\pi)^{3}}\frac{1}{e^{\beta\omega_{q}}-1}, \tag{17}\] where \(\omega_{q}=\sqrt{m_{\pi}^{2}+q^{2}}\) and \(\beta=1/T\) is the inverse temperature. At the kinetic freezeout temperature \(T_{\rm kf}=115\) MeV, the equilibrium number density is \(\mathfrak{n}_{\pi}^{\rm(kf)}=1/(3.95\ {\rm fm})^{3}\). Between hadronization and kinetic freezeout, the pions are in chemical and thermal equilibrium. The temperature \(T(\tau)\) of the HRG decreases as the proper time \(\tau\) increases. The momentum distribution \(\mathfrak{f}_{\pi}\) of the pions is the Bose-Einstein distribution: \[\mathfrak{f}_{\pi}(\omega_{q})=\frac{1}{e^{\beta\omega_{q}}-1}\qquad(\tau_{H}<\tau<\tau_{\rm kf}), \tag{18}\] where \(\beta=1/T(\tau)\). The temperature \(T(\tau)\) can be parametrized as in Eq. (14).
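As a cross-check of the equilibrium pion density in Eq. (17) at the kinetic freezeout temperature, the sketch below evaluates the integral numerically; the conversion factor \(\hbar c\) is supplied here for illustration.

```python
import numpy as np
from scipy.integrate import quad

hbarc = 197.327          # MeV fm, conversion factor (assumed)
m_pi, T_kf = 138.0, 115.0  # MeV, isospin-averaged pion mass and freezeout temperature

# Equilibrium number density of one pion flavor, Eq. (17).
integrand = lambda q: q**2/(np.exp(np.sqrt(m_pi**2 + q**2)/T_kf) - 1.0)
n_kf = quad(integrand, 0.0, 50*T_kf)[0]/(2*np.pi**2)/hbarc**3   # fm^-3

print(n_kf, n_kf**(-1.0/3.0))   # ~0.016 fm^-3, i.e. 1/(3.95 fm)^3

# Bose-Einstein occupancy at omega = m_pi at T_kf; this is the factor
# f_pi(m_pi) ~ 0.43 that appears later in the thermal widths.
print(1.0/(np.exp(m_pi/T_kf) - 1.0))   # ~0.431
```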
After kinetic freezeout, the temperature remains constant: \(T(\tau>\tau_{\rm kf})=T_{\rm kf}\). The pion number density decreases in inverse proportion to the volume \(V(\tau)\) of the expanding hadron gas: \[\mathfrak{n}_{\pi}(\tau)=\frac{V_{\rm kf}}{V(\tau)}\mathfrak{n}_{\pi}^{\rm(kf)}\qquad(\tau>\tau_{\rm kf}), \tag{19}\] where \(\mathfrak{n}_{\pi}^{\rm(kf)}\) is the equilibrium pion number density in Eq. (17) at the temperature \(T_{\rm kf}\) and \(V_{\rm kf}\) is the volume of the hadron gas at kinetic freezeout. The volume \(V(\tau)\) can be parametrized as in Eq. (15). The normalization of the momentum distribution \(\mathfrak{f}_{\pi}\) of the pions is determined by the pion number density \(\mathfrak{n}_{\pi}\): \[\mathfrak{f}_{\pi}(\omega_{q})=\frac{\mathfrak{n}_{\pi}}{\mathfrak{n}_{\pi}^{\rm(kf)}}\,\frac{1}{e^{\beta_{\rm kf}\,\omega_{q}}-1}\qquad(\tau>\tau_{\rm kf}), \tag{20}\] where \(\beta_{\rm kf}=1/T_{\rm kf}\). We use angular brackets to denote the average over the momentum distribution of a pion. The thermal average of a function \(F(\mathbf{q})\) of the pion momentum is \[\big{\langle}F(\mathbf{q})\big{\rangle}=\int\frac{d^{3}q}{(2\pi)^{3}}\,\mathfrak{f}_{\pi}(\omega_{q})\,F(\mathbf{q})\bigg{/}\int\frac{d^{3}q}{(2\pi)^{3}}\,\mathfrak{f}_{\pi}(\omega_{q}). \tag{21}\] The thermal average depends on the temperature \(T\). After kinetic freezeout, the pion number density \(\mathfrak{n}_{\pi}\) cancels in the thermal average in Eq. (21). If the thermal average is sensitive to the flavor \(i\) of the pion, the pion energy in Eq. (21) should be replaced by \(\omega_{iq}=\sqrt{m_{\pi i}^{2}+q^{2}}\). The multiplicities of \(\pi^{+}\) and \(\pi^{-}\) produced by Pb-Pb collisions at the LHC with \(\sqrt{s_{NN}}=5.02\) TeV have been measured by the ALICE collaboration [25]. The pion multiplicity averaged over \(\pi^{+}\) and \(\pi^{-}\) from collisions in the centrality range 0-10% is \[dN_{\pi}/dy=769\pm 34. \tag{22}\] The total pion multiplicity for \(\pi^{+}\), \(\pi^{-}\), and \(\pi^{0}\) is 3 times larger. A fit of the SHM to hadron abundances at midrapidity in Pb-Pb collisions at the LHC with \(\sqrt{s_{NN}}=2.76\) TeV has been presented in Ref. [26]. The central values of the SHM fits for the multiplicities of \(\pi^{+}\) and \(\pi^{-}\) are lower than the data by about 10%, which is comparable to the experimental error bars. ### Charm-meson momentum distributions We denote the number densities of the charm mesons \(D^{+}\), \(D^{0}\), \(D^{*+}\), and \(D^{*0}\) in the hadron gas by \(\mathfrak{n}_{D^{+}}\), \(\mathfrak{n}_{D^{0}}\), \(\mathfrak{n}_{D^{*+}}\), and \(\mathfrak{n}_{D^{*0}}\), respectively. Since charm-meson masses are so much larger than the temperature \(T\), the momentum distributions of the charm mesons can be approximated by relativistic Boltzmann distributions. If the charm mesons were in both chemical and thermal equilibrium, their number densities would be determined by the temperature \(T\): \[\mathfrak{n}_{D^{a}}^{(\text{eq})} = \int\frac{d^{3}p}{(2\pi)^{3}}\exp\bigl{(}-\,\beta\sqrt{M_{a}^{2}+p^{2}}\,\bigr{)}=\frac{M_{a}^{2}\,K_{2}(M_{a}/T)}{2\pi^{2}/T}, \tag{23a}\] \[\mathfrak{n}_{D^{*a}}^{(\text{eq})} = 3\int\frac{d^{3}p}{(2\pi)^{3}}\exp\bigl{(}-\,\beta\sqrt{M_{*a}^{2}+p^{2}}\,\bigr{)}=\frac{3\,M_{*a}^{2}\,K_{2}(M_{*a}/T)}{2\pi^{2}/T}. \tag{23b}\] However the charm mesons in the expanding hadron gas are not in chemical equilibrium.
The number densities \(\mathfrak{n}_{D^{a}}(\tau)\) and \(\mathfrak{n}_{D^{*a}}(\tau)\) evolve with the proper time according to rate equations consistent with the conservation of charm-quark number. The momentum distributions of the charm mesons are \[\mathfrak{f}_{D^{a}}(\mathbf{p}) = \frac{\mathfrak{n}_{D^{a}}}{\mathfrak{n}_{D^{a}}^{(\text{eq})}} \,\exp\bigl{(}-\,\beta\sqrt{M_{a}^{2}+p^{2}}\,\bigr{)}, \tag{24a}\] \[\mathfrak{f}_{D^{*a}}(\mathbf{p}) = 3\,\frac{\mathfrak{n}_{D^{*a}}}{\mathfrak{n}_{D^{*a}}^{(\text{eq })}}\,\exp\bigl{(}-\,\beta\sqrt{M_{*a}^{2}+p^{2}}\,\bigr{)}. \tag{24b}\] Before kinetic freezeout, the number densities \(\mathfrak{n}_{D^{a}}(\tau)\) and \(\mathfrak{n}_{D^{*a}}(\tau)\) evolve according to rate equations that take into account charm-meson reactions and the expanding volume \(V(\tau)\). After kinetic freezeout, the temperature remains constant at \(T_{\text{kf}}\), so \(\beta=\beta_{\text{kf}}\). In the absence of further interactions, \(\mathfrak{n}_{D^{a}}(\tau)\) and \(\mathfrak{n}_{D^{*a}}(\tau)\) would decrease in proportion to \(1/V(\tau)\) as \(\tau\) increases, just like the pion number density in Eq. (19). At very large proper times (\(c\tau>2,000\) fm), the \(D^{*}\)'s decay into \(D\)'s. The multiplicities of charm hadrons in central Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\) TeV have been predicted using SHM in Ref. [24]. For collisions in the centrality range 0-10%, the central values of the predicted multiplicities \(dN/dy\) at midrapidity for \(D^{0}\), \(D^{+}\), and \(D^{*+}\) are 6.42, 2.84, and 2.52. No prediction was given for the multiplicity of \(D^{*0}\). We can estimate the multiplicity of \(D^{*0}\) by assuming that the ratio of the numbers of \(D^{*0}\) and \(D^{*+}\) is the same as at hadronization: \[\frac{N_{*0}}{N_{*+}}=\frac{M_{*0}^{2}\,K_{2}(M_{*0}/T_{H})}{M_{*+}^{2}\,K_{2 }(M_{*+}/T_{H})}. \tag{25}\] For the hadronization temperature \(T_{H}=156\) MeV, this ratio is 1.020. The estimated multiplicities for \(D^{*0}\) and \(D^{*+}\) are \[(dN_{D^{*0}}/dy)_{0}=2.57,\qquad(dN_{D^{*+}}/dy)_{0}=2.52. \tag{26}\] The SHM predictions for the multiplicities for \(D^{0}\) and \(D^{+}\) take into account \(D^{*}\) decays. We obtain the predictions for the multiplicities before \(D^{*}\) decays by using Eqs. (9) with \(B_{+0}=67.7\%\): \[(dN_{D^{0}}/dy)_{0}=2.14,\qquad(dN_{D^{+}}/dy)_{0}=2.03. \tag{27}\] The ratio of a charm-meson multiplicity in Eqs. (26) or (27) to the pion multiplicity in Eq. (22) can be identified with the ratio of the charm-meson number density to the pion number density at kinetic freezeout \[\frac{(dN_{D^{(*)a}}/dy)_{0}}{dN_{\pi}/dy}=\frac{\mathfrak{n}_{D^{(*)a}}(\tau_{ \rm{kf}})}{\mathfrak{n}_{\pi}(\tau_{\rm{kf}})}. \tag{28}\] ## IV Mass shifts and thermal widths In this section, we determine the mass shifts and thermal widths of pions and charm mesons in a hadron gas at temperatures near that of kinetic freezeout. The dominant effects from the hadronic medium come from coherent pion forward scattering. ### Coherent pion forward scattering When a particle propagates through a medium, its properties are modified by the interactions with the medium. The modifications can be described by the self-energy \(\Pi(p)\), which depends on the energy and momentum of the particle and also on the properties of the medium. The real part of \(\Pi(p)\) at \(\mathbf{p}=0\) determines the shift in the rest mass of the particle. The imaginary part of \(\Pi(p)\) at \(\mathbf{p}=0\) determines the thermal width of the particle at rest. 
If the particle is in thermal equilibrium with the medium, its self-energy can be calculated using thermal field theory. To be more specific, we consider the self-energy \(\Pi_{D}(p)\) of a pseudoscalar charm meson \(D\). The one-loop Feynman diagrams for \(\Pi_{D}(p)\) in HH\(\chi\)EFT are shown in Fig. 1.

Figure 1: One-loop Feynman diagrams for the \(D\) self-energy in HH\(\chi\)EFT. The \(D\), \(D^{*}\), and \(\pi\) are represented by solid, double (solid+dashed), and dashed lines, respectively.

The first diagram can be expressed as the sum of a vacuum contribution and a thermal contribution from pions. The second diagram can be expressed as the sum of a vacuum contribution, a thermal contribution from pions, and a thermal contribution from vector charm mesons \(D^{*}\). At temperatures relevant to the hadron gas, thermal contributions from vector charm mesons are severely suppressed by a Boltzmann factor \(\exp(-M_{*}/T)\). The thermal contributions from pions can be expressed as an integral over the pion momentum \(\mathbf{q}\) weighted by the Bose-Einstein distribution \(1/(e^{\beta\omega_{q}}-1)\), where \(\omega_{q}=\sqrt{m_{\pi}^{2}+q^{2}}\). The thermal contribution from pions to the \(D\) self-energy can be calculated alternatively from the tree diagrams for \(\pi D\) scattering in Fig. 2. At this order, the thermal contribution from pions comes from coherent pion forward scattering. If a pion with flavor \(k\) and momentum \(\mathbf{q}\) is scattered back into the state with the same flavor \(k\) and momentum \(\mathbf{q}\), the initial many-body state is the charm meson plus the medium (which includes the pion with flavor \(k\) and momentum \(\mathbf{q}\)), and the final many-body state is also the charm meson plus the medium. Since the initial state is the same for all \(\mathbf{q}\) and the final state is also the same, the pion-forward-scattering amplitudes must be added coherently for all momenta \(\mathbf{q}\) and all pion flavors \(k\). The \(D\) self-energy from coherent pion forward scattering can be obtained from the negative of the \(\mathcal{T}\)-matrix element by weighting it by \(\mathfrak{f}_{\pi}(\omega_{q})/(2\omega_{q})\), where \(\mathfrak{f}_{\pi}(\omega_{q})\) is the pion momentum distribution and \(1/(2\omega_{q})\) is a normalization factor, integrating over the pion momentum \(\mathbf{q}\) with measure \(d^{3}q/(2\pi)^{3}\), and summing over the three pion flavors. If the pions are in chemical and thermal equilibrium at temperature \(T\), the pion momentum distribution is the Bose-Einstein distribution in Eq. (18). However, this prescription for the self-energy from coherent pion forward scattering applies equally well to any medium in which the pions have a momentum distribution \(\mathfrak{f}_{\pi}(\omega_{q})\). The thermal contribution from pions to the \(D\) self-energy can be obtained directly from the \(D\) self-energy diagrams in Fig. 1 by making a simple substitution for the pion propagator in the loop: \[\frac{i}{q^{2}-m_{\pi}^{2}+i\epsilon}\longrightarrow\mathfrak{f}_{\pi}(|q_{0}|)\,2\pi\delta(q^{2}-m_{\pi}^{2}). \tag{29}\] The delta function can be expressed as \[\delta(q^{2}-m_{\pi}^{2})=\sum_{\pm}\theta(\pm q_{0})\,\frac{1}{2\omega_{q}}\,\delta(|q_{0}|-\omega_{q}). \tag{30}\] This substitution is referred to as the cutting of the pion line. The cutting of the pion line in the first diagram in Fig. 1 is 0, because the vertex is 0 when the incoming and outgoing pions have the same flavor.
The cutting of the pion line in the second diagram in Fig. 1 gives the last two forward-scattering diagrams in Fig. 2. They come from the positive and negative regions of \(q_{0}\), respectively.

Figure 2: Feynman diagrams for the \(D\) self-energy from coherent pion forward scattering in HH\(\chi\)EFT at LO. The empty circles indicate an incoming and outgoing pion with the same flavor and the same 3-momentum. These diagrams can be obtained by cutting the pion lines in the diagrams in Fig. 1. The second diagram has a \(D^{*}\) resonance contribution.

### Pions The thermal mass shift and the thermal width for a pion in a pion gas can be calculated using \(\chi\)EFT. The mass shift for a pion in thermal equilibrium was first calculated using \(\chi\)EFT at LO by Gasser and Leutwyler [27]. The pion thermal width was calculated in the low-density limit using \(\chi\)EFT at NLO by Goity and Leutwyler [28]. A complete calculation of the self-energy of a pion in thermal equilibrium in \(\chi\)EFT at NLO was presented by Schenk [29]. It was used to obtain the pion mass shift and the pion thermal width. The pion mass shift at NLO has also been calculated by Toublan [30]. The pion self-energy in \(\chi\)EFT at LO is given by the one-loop Feynman diagram in the left panel of Fig. 3. The thermal contribution to the pion self-energy can also be obtained from the Feynman diagram for coherent pion forward scattering in the right panel of Fig. 3. The self-energy \(\Pi_{\pi}(p_{0},p)\) of a pion with 4-momentum \((p_{0},\mathbf{p})\) can be obtained from the negative of the amplitude \({\cal A}_{ik,jk}(p_{0},\mathbf{p},\mathbf{q})\) for forward scattering of an on-shell pion with flavor \(k\) and 3-momentum \(\mathbf{q}\) by weighting it by \(\mathfrak{f}_{\pi}(\omega_{q})/(2\omega_{q})\), integrating over \(\mathbf{q}\), and summing over the three pion flavors \(k\): \[\Pi_{\pi}(p_{0},p)\,\delta^{ij}=-\sum_{k}\int\!\!\frac{d^{3}q}{(2\pi)^{3}2\omega_{q}}\,\mathfrak{f}_{\pi}(\omega_{q})\,{\cal A}_{ik,jk}(p_{0},\mathbf{p},\mathbf{q}). \tag{31}\] The amplitude \({\cal A}_{ik,jk}\) at LO does not depend on \(\mathbf{q}\): \[{\cal A}_{ik,jk}(p_{0},p)=-\frac{2}{3f_{\pi}^{2}}\big{[}(2p_{0}^{2}-2p^{2}+m_{\pi}^{2})\,\delta^{ij}-2(p_{0}^{2}-p^{2}+2m_{\pi}^{2})\,\delta^{ik}\delta^{jk}\big{]}. \tag{32}\] The pion self-energy at LO is \[\Pi_{\pi}(p_{0},p)=\frac{1}{3f_{\pi}^{2}}\,(4p_{0}^{2}-4p^{2}-m_{\pi}^{2})\,\mathfrak{n}_{\pi}\left\langle\frac{1}{\omega_{q}}\right\rangle, \tag{33}\] where the angular brackets denote the thermal average over the pion momentum distribution defined in Eq. (21). On the pion mass shell, \(p_{0}^{2}-p^{2}=m_{\pi}^{2}\), the self-energy reduces to \[\Pi_{\pi}=\frac{m_{\pi}^{2}}{f_{\pi}^{2}}\,\mathfrak{n}_{\pi}\left\langle\frac{1}{\omega_{q}}\right\rangle. \tag{34}\] The pion mass shift, which is \(\Pi_{\pi}/(2m_{\pi})\) at this order, is therefore \[\delta m_{\pi}=\frac{m_{\pi}}{2f_{\pi}^{2}}\,\mathfrak{n}_{\pi}\left\langle\frac{1}{\omega_{q}}\right\rangle. \tag{35}\]
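A minimal numerical sketch of the LO mass shift above is given below; the value it returns is an estimate of this sketch (it is not a number quoted in the text), but a shift of this size is consistent with the decrease of the in-medium mass differences given later in Eqs. (55).

```python
import numpy as np
from scipy.integrate import quad

m_pi, T_kf, f_pi = 138.0, 115.0, 131.7   # MeV; pion mass, freezeout T, pion decay constant

# LO thermal tadpole n_pi <1/omega_q> = int d^3q/(2 pi)^3 f_BE(omega)/omega
omega = lambda q: np.sqrt(m_pi**2 + q**2)
tadpole = quad(lambda q: q**2/(omega(q)*(np.exp(omega(q)/T_kf) - 1.0)),
               0.0, 50*T_kf)[0]/(2*np.pi**2)     # MeV^2

print(m_pi/(2*f_pi**2)*tadpole)   # delta m_pi ~ 1.5 MeV at T = 115 MeV (Eq. (35))
```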
### Pseudoscalar charm mesons The contributions to the thermal mass shift and thermal width of a pseudoscalar charm meson in a pion gas from coherent pion forward scattering can be calculated using HH\(\chi\)EFT. #### iii.3.1 \(D\) self-energy In HH\(\chi\)EFT at LO, the reaction \(\pi D\to\pi D\) proceeds through the three diagrams in Fig. 4. The 4-momentum of \(D\) can be expressed as \(P=Mv+p\), where \(v\) is the velocity 4-vector and \(p\) is the residual 4-momentum. The amplitude for the transition \(D^{a}(p)\pi^{i}(q)\to D^{b}(p^{\prime})\pi^{j}(q^{\prime})\) is \[{\cal A}_{ai,bj}(p,q,q^{\prime}) = \frac{1}{2f_{\pi}^{2}}\,[\sigma^{i},\sigma^{j}]_{ab}\,v\!\cdot\!(q +q^{\prime}) \tag{36}\] \[-\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\left((\sigma^{i}\sigma^{j})_{ab} \,\frac{-q\!\cdot\!q^{\prime}+(v\!\cdot\!q)\,(v\!\cdot\!q^{\prime})}{v\!\cdot \!(p+q)-\Delta+i\Gamma_{*}/2}+(\sigma^{j}\sigma^{i})_{ab}\,\frac{-q\!\cdot\!q ^{\prime}+(v\!\cdot\!q)\,(v\!\cdot\!q^{\prime})}{v\!\cdot\!(p-q^{\prime})- \Delta+i\Gamma_{*}/2}\right).\] We have inserted the \(D^{*}\) width in the denominators to allow for the possibility that the \(D^{*}\) can be on shell. In the case of the forward scattering of \(\pi^{k}(q)\) to \(\pi^{k}(q)\), the amplitude reduces to a function of \(v\!\cdot\!p\) and \(v\!\cdot\!q\) and it is diagonal in \(a\) and \(b\). The diagonal entry is \[{\cal A}_{ak,ak}(v\!\cdot\!p,v\!\cdot\!q)=-\frac{2g_{\pi}^{2}}{f_{\pi}^{2}}\, \frac{[(v\!\cdot\!q)^{2}-m_{\pi}^{2}](\Delta-v\!\cdot\!p)}{(v\!\cdot\!q)^{2}- (\Delta-v\!\cdot\!p)^{2}+i\,(\Delta-v\!\cdot\!p)\Gamma_{*}}. \tag{37}\] Since \(\Gamma_{*}\ll\Delta\), we have omitted the terms proportional to \(\Gamma_{*}\) in the numerator and to \(\Gamma_{*}^{2}\) in the denominator. The \(D\) self energy \(\Pi_{D}(v\!\cdot\!p)\) in HH\(\chi\)EFT at LO is the sum of the two one-loop diagrams in Fig. 1. The contribution from coherent pion forward scattering is the sum of the three tree diagrams in Fig. 2. The coherent sum of the first diagram over pion flavors is 0. The \(D\) self energy can be obtained from the amplitude in Eq. (37) by multiplying it by \(-1/2\), weighting it by \({\mathfrak{f}}_{\pi}(\omega_{q})/(2\omega_{q})\), integrating over the momentum \({\mathbf{q}}\), and summing over the three pion flavors \(k\). We choose the velocity 4-vector \(v\) of the charm meson to be the same as the 4-vector that defines the thermal frame in which the pion momentum distribution is Eq. (18) before kinetic freezeout and Eq. (20) after kinetic freezeout. The pion energy is \(v\!\cdot\!q=\omega_{q}\). The \(D\) self-energy is \[\Pi_{D}(v\!\cdot\!p)=\frac{3g_{\pi}^{2}}{f_{\pi}^{2}}\,\int\!\!\frac{d^{3}q}{(2 \pi)^{3}2\omega_{q}}{\mathfrak{f}}_{\pi}(\omega_{q})\frac{(\omega_{q}^{2}-m_{ \pi}^{2})(\Delta-v\!\cdot\!p)}{\omega_{q}^{2}-(\Delta-v\!\cdot\!p)^{2}+i\,( \Delta-v\!\cdot\!p)\Gamma_{*}}. \tag{38}\] Figure 4: Feynman diagrams for \(\pi D\to\pi D\) in HH\(\chi\)EFT at LO. The second diagram has a \(D^{*}\) resonance contribution. Since the charm-meson mass difference \(\Delta=M_{*}-M\) is approximately equal to the pion mass \(m_{\pi}\), the self-energy is sensitive to isospin splittings when the \(D\) is close to the mass shell \(v\cdot p=0\). The isospin splittings can be taken into account by reintroducing a sum over the flavors \(c\) of the intermediate \(D^{*}\). In the self energy in Eq. 
(38), the factor \(\sum_{k}(\sigma^{k}\sigma^{k})_{aa}=3\delta_{aa}\) from the pion vertices is replaced by \(\sum_{k}\sum_{c}(\sigma^{k})_{ac}(\sigma^{k})_{ca}=\sum_{c}(2-\delta_{ac})\). The mass difference \(\Delta\) is replaced by \(\Delta_{ac}\) and the pion energy is replaced by \(\omega_{caq}=\sqrt{m_{\pi ca}^{2}+q^{2}}\). The \(D^{a}\) self energy is \[\Pi_{D^{a}}(v\!\cdot\!p)=\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\,\sum_{c}(2-\delta_{ac})\,\int\!\!\frac{d^{3}q}{(2\pi)^{3}2\omega_{caq}}\mathfrak{f}_{\pi}(\omega_{caq})\frac{q^{2}\,(\Delta_{ca}-v\!\cdot\!p)}{\omega_{caq}^{2}-(\Delta_{ca}-v\!\cdot\!p)^{2}+i\,(\Delta_{ca}-v\!\cdot\!p)\Gamma_{*c}}, \tag{39}\] where \(q^{2}\) is the square of the 3-momentum. #### iii.2.2 Mass shift and thermal width The mass shift \(\delta M_{a}\) and the thermal width \(\delta\Gamma_{a}\) for the charm meson \(D^{a}\) in HH\(\chi\)EFT at LO are obtained by evaluating the \(D^{a}\) self energy on the mass shell \(v\!\cdot\!p=0\): \[\Pi_{D^{a}}(v\!\cdot\!p=0)=\delta M_{a}-i\,\delta\Gamma_{a}/2. \tag{40}\] The \(D^{a}\) self energy with isospin splittings in Eq. (39) evaluated on the mass shell is \[\Pi_{D^{a}}(0)=\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\,\sum_{c}(2-\delta_{ac})\,\Delta_{ca}\int\!\!\frac{d^{3}q}{(2\pi)^{3}2\omega_{caq}}\mathfrak{f}_{\pi}(\omega_{caq})\frac{q^{2}}{q^{2}-q_{ca}^{2}+i\Delta_{ca}\Gamma_{*c}}, \tag{41}\] where \(q_{ca}^{2}=\Delta_{ca}^{2}-m_{\pi ca}^{2}\). Since \(\Delta_{ca}\Gamma_{*c}\ll|q^{2}-q_{ca}^{2}|\) except in a very narrow range of \(q^{2}\), the expressions for \(\delta M_{a}\) and \(\delta\Gamma_{a}\) can be simplified by taking the limit \(\Gamma_{*c}\to 0\). The \(D^{a}\) mass shift in the limit \(\Gamma_{*c}\to 0\) can be expressed in terms of an average over the pion momentum distribution of a function of \(q\) that involves a principal-value distribution: \[\delta M_{a}=\frac{g_{\pi}^{2}}{2f_{\pi}^{2}}\,\mathfrak{n}_{\pi}\,\sum_{c}(2-\delta_{ac})\,\Delta_{ca}\left\langle\frac{q^{2}}{\omega_{caq}}\,\mathcal{P}\frac{1}{q^{2}-q_{ca}^{2}}\right\rangle. \tag{42}\] The corresponding \(D^{a}\) thermal width in the limit \(\Gamma_{*c}\to 0\) involves a delta function instead of the principal value: \[\delta\Gamma_{a}=\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\,\mathfrak{n}_{\pi}\,\sum_{c}(2-\delta_{ac})\,\Delta_{ca}\left\langle\frac{q^{2}}{\omega_{caq}}\,\pi\,\delta(q^{2}-q_{ca}^{2})\right\rangle. \tag{43}\] The mass shift and thermal width of \(D^{a}\) can be expanded in powers of isospin splittings using the methods in Appendix B. The leading term in the expansion of the mass shift is the same for \(D^{+}\) and \(D^{0}\): \[\delta M\approx\frac{3g_{\pi}^{2}}{2f_{\pi}^{2}}\,m_{\pi}\,\mathfrak{n}_{\pi}\left\langle\frac{1}{\omega_{q}}\right\rangle. \tag{44}\] The leading term in the expansion of the \(D^{a}\) thermal width is \[\delta\Gamma_{a}\approx 3\,\mathfrak{f}_{\pi}(m_{\pi})\,\sum_{c}\Gamma\big{[}D^{*c}\to D^{a}\pi\big{]}, \tag{45}\] where \(\Gamma[D^{*c}\to D^{a}\pi]\) is the partial decay rate of \(D^{*c}\) in Eq. (8). In a pion gas with temperature \(T_{\rm kf}=115\) MeV, the \(D\) mass shift in Eq. (44) is \(\delta M=1.257\) MeV. The thermal widths for \(D^{+}\) and \(D^{0}\) in Eq. (45) are \(\delta\Gamma_{+}=32.6\) keV and \(\delta\Gamma_{0}=118.9\) keV.
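The quoted numbers can be reproduced with a short numerical sketch of Eqs. (8), (44), and (45); the charm-meson and pion masses below are PDG values supplied for illustration, while \(g_{\pi}\) and \(f_{\pi}\) are the values given in Section II.

```python
import numpy as np
from scipy.integrate import quad

g_pi, f_pi, T = 0.520, 131.7, 115.0            # coupling, pion decay constant (MeV), T_kf
m_pi0, m_pip, m_pi = 134.977, 139.570, 138.0   # pion masses (MeV, assumed PDG values)
M_Dsp, M_Ds0, M_Dp, M_D0 = 2010.26, 2006.85, 1869.66, 1864.84  # D*+, D*0, D+, D0

def gamma_Dstar(delta, m, iso):
    """Partial width for D* -> D pi in MeV, Eq. (8); delta = M_D* - M_D."""
    return g_pi**2/(12*np.pi*f_pi**2)*iso*max(delta**2 - m**2, 0.0)**1.5

G_pp = gamma_Dstar(M_Dsp - M_Dp, m_pi0, 1)   # D*+ -> D+ pi0, ~25 keV
G_p0 = gamma_Dstar(M_Dsp - M_D0, m_pip, 2)   # D*+ -> D0 pi+, ~56 keV
G_00 = gamma_Dstar(M_Ds0 - M_D0, m_pi0, 1)   # D*0 -> D0 pi0, ~36 keV

occ = 1.0/(np.exp(m_pi/T) - 1.0)             # f_pi(m_pi) ~ 0.431 at T_kf
print(3*occ*G_pp*1e3)                        # delta Gamma_+ ~ 33 keV, Eq. (45)
print(3*occ*(G_00 + G_p0)*1e3)               # delta Gamma_0 ~ 119 keV, Eq. (45)

# Leading-order D mass shift, Eq. (44): (3 g_pi^2 m_pi / 2 f_pi^2) n_pi <1/omega>
omega = lambda q: np.sqrt(m_pi**2 + q**2)
tadpole = quad(lambda q: q**2/(omega(q)*(np.exp(omega(q)/T) - 1.0)),
               0.0, 50*T)[0]/(2*np.pi**2)
print(3*g_pi**2*m_pi/(2*f_pi**2)*tadpole)    # delta M ~ 1.26 MeV
```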
### Vector charm mesons The contributions to the thermal mass shift and thermal width of a vector charm meson in a pion gas from coherent pion forward scattering can be calculated using HH\(\chi\)EFT. #### iii.4.1 \(D^{*}\) self-energy In HH\(\chi\)EFT at LO, the reaction \(\pi D^{*}\to\pi D^{*}\) proceeds through the five diagrams in Fig. 5.

Figure 5: Feynman diagrams for \(\pi D^{*}\to\pi D^{*}\) in HH\(\chi\)EFT at LO. The third diagram produces a \(D\)-meson \(t\)-channel singularity in the reaction rate.

The 4-momentum of \(D^{*}\) can be expressed as \(P=Mv+p\), where \(v\) is the velocity 4-vector and \(p\) is the residual 4-momentum. The amplitude for the transition \(\pi^{i}(q)D^{*a}(p)\to\pi^{j}(q^{\prime})D^{*b}(p^{\prime})\) is \[{\cal A}^{\mu\nu}_{ai,bj} = -\frac{1}{2f_{\pi}^{2}}g^{\mu\nu}\left[\sigma^{i},\sigma^{j}\right]_{ab}v\cdot(q+q^{\prime}) -\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\left[(\sigma^{i}\sigma^{j})_{ab}\frac{q^{\mu}q^{\prime\nu}}{v\cdot(p+q)+i\Gamma/2}+(\sigma^{j}\sigma^{i})_{ab}\frac{q^{\prime\mu}q^{\nu}}{v\cdot(p-q^{\prime})+i\Gamma/2}\right] +\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\epsilon^{\mu\rho\lambda}(v)\,\epsilon^{\nu\sigma}{}_{\lambda}(v)\left[(\sigma^{i}\sigma^{j})_{ab}\frac{q_{\rho}q^{\prime}_{\sigma}}{v\cdot(p+q)-\Delta}+(\sigma^{j}\sigma^{i})_{ab}\frac{q^{\prime}_{\rho}q_{\sigma}}{v\cdot(p-q^{\prime})-\Delta}\right], \tag{46}\] where \(\epsilon^{\mu\rho\lambda}(v)=\epsilon^{\mu\rho\lambda\alpha}v_{\alpha}\). We have inserted the \(D\) width in the denominators of the \(D\) propagators to allow for the possibility that the \(D\) can be on shell. In the case of the forward scattering of \(\pi^{k}(q)\) to \(\pi^{k}(q)\), the amplitude is diagonal in \(a\) and \(b\). The diagonal entry is \[{\cal A}^{\mu\nu}_{ak,ak} = \frac{2g_{\pi}^{2}}{f_{\pi}^{2}}\left[q^{\mu}q^{\nu}\frac{v\cdot p}{(v\!\cdot\!q)^{2}-(v\!\cdot\!p)^{2}-i\,v\!\cdot\!p\,\Gamma} -\epsilon^{\mu\lambda}(v,q)\,\epsilon^{\nu}{}_{\lambda}(v,q)\,\frac{v\cdot p-\Delta}{(v\!\cdot\!q)^{2}-(\Delta-v\!\cdot\!p)^{2}}\right], \tag{47}\] where \(\epsilon^{\mu\lambda}(v,q)=\epsilon^{\mu\lambda\alpha\beta}v_{\alpha}q_{\beta}\). Since \(\Gamma\ll\Delta\), we have omitted terms proportional to \(\Gamma\) in the numerator and to \(\Gamma^{2}\) in the denominator. The self-energy tensor \(\Pi^{\mu\nu}\) of a vector meson \(D^{*}\) in HH\(\chi\)EFT at LO is the sum of the three one-loop Feynman diagrams in Fig. 6.

Figure 6: One-loop Feynman diagrams for the \(D^{*}\) self-energy in HH\(\chi\)EFT.

The contribution from coherent pion forward scattering can be obtained by cutting the pion lines using the prescription in Eq. (29). The cut of the first diagram in Fig. 6 is zero, because the coherent sum over pion flavors is 0. The cuts of the last two diagrams in Fig. 6 give the four tree diagrams in Fig. 7.

Figure 7: Feynman diagrams for the \(D^{*}\) self energy from coherent pion forward scattering in HH\(\chi\)EFT at LO. These diagrams can be obtained by cutting the pion lines in the last two diagrams in Fig. 6. The second diagram in the first row produces a \(D\)-meson \(t\)-channel singularity.

By rotational symmetry, the contribution to \(\Pi^{\mu\nu}\) from the coherent forward scattering of a pion with 4-momentum \(q\) is a linear combination of \(g^{\mu\nu}\), \(q^{\mu}q^{\nu}\), \(v^{\mu}q^{\nu}+q^{\mu}v^{\nu}\), and \(v^{\mu}v^{\nu}\). However the tensor structure of the \(D^{*}\) propagator in Eq. (50) ensures that only the \(-g^{\mu\nu}+v^{\mu}v^{\nu}\) component contributes to the \(D^{*}\) self energy \(\Pi_{D^{*}}(v\!\cdot\!p)\). That component can be obtained from the tensor \({\cal A}^{\mu\nu}_{ak,ak}\) in Eq. (47) by contracting it with \((-g^{\mu\nu}+v^{\mu}v^{\nu})/3\). The \(D^{*}\) self energy can
be obtained from that component by multiplying it by \(-1/2\), weighting it by \(\mathfrak{f}_{\pi}(\omega_{q})/(2\omega_{q})\), integrating over \(\mathbf{q}\), and summing over the three pion flavors \(k\): \[\Pi_{D^{*}}(v\!\cdot\!p) = -\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\int\!\!\frac{d^{3}q}{(2\pi)^{3}2\omega_{q}}\mathfrak{f}_{\pi}(\omega_{q})\,(\omega_{q}^{2}-m_{\pi}^{2})\left(\frac{v\!\cdot\!p}{\omega_{q}^{2}-(v\!\cdot\!p)^{2}-i\,v\!\cdot\!p\,\Gamma}+\frac{2(v\!\cdot\!p-\Delta)}{\omega_{q}^{2}-(v\!\cdot\!p-\Delta)^{2}}\right). \tag{48}\] Since the charm-meson mass difference \(\Delta=M_{*}-M\) is approximately equal to the pion mass \(m_{\pi}\), the self energy is sensitive to isospin splittings when the \(D^{*}\) is close to the mass shell \(v\!\cdot\!p=\Delta\). The isospin splittings can be taken into account by reintroducing a sum over the flavors \(c\) of the intermediate \(D\) or \(D^{*}\). In the self energy in Eq. (48), the factor \(\sum_{k}(\sigma^{k}\sigma^{k})_{aa}=3\delta_{aa}\) from the pion vertices is replaced by \(\sum_{k}\sum_{c}(\sigma^{k})_{ac}(\sigma^{k})_{ca}=\sum_{c}(2-\delta_{ac})\). The isospin splittings in the denominators of the propagators can be taken into account in the first term in Eq. (48) by replacing \(v\!\cdot\!p\) by \(v\!\cdot\!p-M_{c}+M\) and \(\Gamma\) by \(\Gamma_{c}\). They can be taken into account in the second term by replacing \(v\!\cdot\!p-\Delta\) by \(v\!\cdot\!p-M_{*c}+M\). The mass-shell condition \(v\!\cdot\!p=\Delta\) is modified to \(v\!\cdot\!p=M_{*a}-M\). #### iv.2.2 Mass shift and thermal width The mass shift \(\delta M_{*a}\) and the thermal width \(\delta\Gamma_{*a}\) for the charm meson \(D^{*a}\) in HH\(\chi\)EFT at LO are obtained by evaluating the self-energy on the mass shell: \[\Pi_{D^{*a}}(v\!\cdot\!p=\Delta)=\delta M_{*a}-i\,\delta\Gamma_{*a}/2. \tag{49}\] If isospin splittings are taken into account in Eq. (48), the \(D^{*a}\) self-energy on the mass shell is \[\Pi_{D^{*a}}(\Delta) = -\frac{g_{\pi}^{2}}{3f_{\pi}^{2}}\sum_{c}(2-\delta_{ac})\!\int\!\!\frac{d^{3}q}{(2\pi)^{3}2\omega_{acq}}\mathfrak{f}_{\pi}(\omega_{acq})\left(\frac{q^{2}\Delta_{ac}}{q^{2}-q_{ac}^{2}-i\Delta_{ac}\Gamma_{c}}+\frac{2q^{2}(M_{*a}-M_{*c})}{\omega_{acq}^{2}}\right), \tag{50}\] where \(\omega_{acq}=\sqrt{m_{\pi ac}^{2}+q^{2}}\) and \(q_{ac}^{2}=\Delta_{ac}^{2}-m_{\pi ac}^{2}\). In the second term inside the parentheses, we have omitted the term \(-(M_{*a}-M_{*c})^{2}\) in the denominator, because \(M_{*+}-M_{*0}\ll m_{\pi}\). Since \(\Delta_{ac}\Gamma_{c}\ll|q^{2}-q_{ac}^{2}|\) except in a very narrow range of \(q^{2}\), the expressions for \(\delta M_{*a}\) and \(\delta\Gamma_{*a}\) can be simplified by taking the limit \(\Gamma_{c}\to 0\). The resulting \(D^{*a}\) mass shift is \[\delta M_{*a} = -\frac{g_{\pi}^{2}}{6f_{\pi}^{2}}\,\mathfrak{n}_{\pi}\sum_{c}(2-\delta_{ac})\left[\Delta_{ac}\left\langle\frac{q^{2}}{\omega_{acq}}\,{\cal P}\frac{1}{q^{2}-q_{ac}^{2}}\right\rangle+2(M_{*a}-M_{*c})\left\langle\frac{q^{2}}{\omega_{acq}^{3}}\right\rangle\right]. \tag{51}\]
The corresponding \(D^{*a}\) thermal width in the limit \(\Gamma_{c}\to 0\) is \[\delta\Gamma_{*a}=\frac{g_{\pi}^{2}}{3f_{\pi}^{2}}\,\mathfrak{n}_{\pi}\sum_{c}(2-\delta_{ac})\,\Delta_{ac}\left\langle\frac{q^{2}}{\omega_{acq}}\,\pi\,\delta(q^{2}-q_{ac}^{2})\right\rangle. \tag{52}\] This thermal width comes from the coherent pion forward scattering diagram with a \(D\) in the \(t\) channel in Fig. 7. Note that the sum \(\delta\Gamma_{*+}+\delta\Gamma_{*0}\) is equal to \(1/3\) of the sum \(\delta\Gamma_{+}+\delta\Gamma_{0}\) from Eq. (45). In a pion gas with temperature \(T_{\rm kf}=115\) MeV, the mass shifts for \(D^{*+}\) and \(D^{*0}\) are \(\delta M_{*+}=-0.478\) MeV and \(\delta M_{*0}=-0.417\) MeV. The thermal widths for \(D^{*+}\) and \(D^{*0}\) are \(\delta\Gamma_{*+}=32.7\) keV and \(\delta\Gamma_{*0}=14.6\) keV. The mass shift and thermal width of \(D^{*a}\) can be expanded in powers of isospin splittings using the methods in Appendix B. The leading term in the expansion of the mass shift is the same for \(D^{*+}\) and \(D^{*0}\) and it differs from the mass shift \(\delta M\) for \(D^{+}\) and \(D^{0}\) in Eq. (44) by the multiplicative factor \(-1/3\): \[\delta M_{*}\approx-\delta M/3. \tag{53}\] The leading term in the expansion of the \(D^{*a}\) thermal width is \[\delta\Gamma_{*a}\approx\mathfrak{f}_{\pi}(m_{\pi})\,\sum_{c}\Gamma\big{[}D^{*a}\to D^{c}\pi\big{]}, \tag{54}\] where \(\Gamma[D^{*a}\to D^{c}\pi]\) is the \(D^{*}\) partial decay rate in Eq. (8). In a pion gas with temperature \(T_{\rm kf}=115\) MeV, the \(D^{*}\) mass shift in Eq. (53) is \(\delta M_{*}=-0.419\) MeV and the \(D^{*+}\) and \(D^{*0}\) thermal widths in Eq. (54) are \(\delta\Gamma_{*+}=35.2\) keV and \(\delta\Gamma_{*0}=15.3\) keV. ### Expanding hadron gas Thermal mass shifts and thermal widths have significant effects on some reaction rates for pions and charm mesons in the expanding hadron gas created by a heavy-ion collision. The pion mass shift \(\delta m_{\pi}\) in \(\chi\)EFT at LO is given in Eq. (35). The charm-meson mass shifts \(\delta M_{a}\) for \(D^{a}\) and \(\delta M_{*a}\) for \(D^{*a}\) in HH\(\chi\)EFT at LO are given in Eqs. (42) and (51). We will use the simpler approximations for the charm-meson mass shifts in Eqs. (44) and (53). The mass shifts in the hadron gas before kinetic freezeout are determined by the temperature \(T\). The mass shifts after kinetic freezeout are determined by the pion number density \(\mathfrak{n}_{\pi}\). Some reaction rates are sensitive to mass differences through a factor of \(M_{*}-M-m_{\pi}\) raised to a power. The four relevant mass differences in the vacuum are given in Eqs. (2). The thermal mass shifts for \(D^{*}\), \(D\), and \(\pi\) are given in Eqs. (53), (44), and (35).
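The sketch below reproduces the leading-order \(D^{*}\) thermal widths in Eq. (54) and shows how the three mass shifts combine into the slope of the in-medium mass differences used next in Eqs. (55); the masses are again PDG values supplied for illustration.

```python
import numpy as np
from scipy.integrate import quad

g_pi, f_pi, T = 0.520, 131.7, 115.0            # inputs from Section II and T_kf
m_pi0, m_pip, m_pi = 134.977, 139.570, 138.0   # pion masses (MeV, assumed PDG values)

def gamma_Dstar(delta, m, iso):   # Eq. (8), result in MeV
    return g_pi**2/(12*np.pi*f_pi**2)*iso*max(delta**2 - m**2, 0.0)**1.5

occ = 1.0/(np.exp(m_pi/T) - 1.0)  # f_pi(m_pi) ~ 0.431 at T_kf
G_star_p = gamma_Dstar(2010.26-1869.66, m_pi0, 1) + gamma_Dstar(2010.26-1864.84, m_pip, 2)
G_star_0 = gamma_Dstar(2006.85-1864.84, m_pi0, 1)
print(occ*G_star_p*1e3, occ*G_star_0*1e3)   # ~35 keV and ~15 keV, cf. Eq. (54)

# Shift of Delta_ab - m_pi at kinetic freezeout:
# (delta M_* - delta M) - delta m_pi = -(4/3) delta M - delta m_pi
omega = lambda q: np.sqrt(m_pi**2 + q**2)
tadpole = quad(lambda q: q**2/(omega(q)*(np.exp(omega(q)/T) - 1.0)),
               0.0, 50*T)[0]/(2*np.pi**2)
dM = 3*g_pi**2*m_pi/(2*f_pi**2)*tadpole     # Eq. (44), ~1.26 MeV
dm_pi = m_pi/(2*f_pi**2)*tadpole            # Eq. (35), ~1.55 MeV
print(-(4.0/3.0)*dM - dm_pi)                # ~ -3.2 MeV, cf. Eqs. (55)
```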
The mass differences in the hadron gas after kinetic freezeout decrease linearly with \(\mathfrak{n}_{\pi}\) with the same slope: \[\Delta_{00}-m_{\pi 0} \approx +7.04\ {\rm MeV}-\big{(}3.23\ {\rm MeV}\big{)}\,\mathfrak{n}_{\pi}/\mathfrak{n}_{\pi}^{(\rm kf)}, \tag{55a}\] \[\Delta_{+0}-m_{\pi+} \approx +5.86\ {\rm MeV}-\big{(}3.23\ {\rm MeV}\big{)}\,\mathfrak{n}_{\pi}/\mathfrak{n}_{\pi}^{(\rm kf)}, \tag{55b}\] \[\Delta_{++}-m_{\pi 0} \approx +5.63\ {\rm MeV}-\big{(}3.23\ {\rm MeV}\big{)}\,\mathfrak{n}_{\pi}/\mathfrak{n}_{\pi}^{(\rm kf)}, \tag{55c}\] \[\Delta_{0+}-m_{\pi+} \approx -2.38\ {\rm MeV}-\big{(}3.23\ {\rm MeV}\big{)}\,\mathfrak{n}_{\pi}/\mathfrak{n}_{\pi}^{(\rm kf)}, \tag{55d}\] where \(\mathfrak{n}_{\pi}^{(\rm kf)}\) is the pion number density at kinetic freezeout. The signs of the mass differences in Eqs. (55) imply that the decays \(D^{*0}\to D^{0}\pi^{0}\), \(D^{*+}\to D^{0}\pi^{+}\), and \(D^{*+}\to D^{+}\pi^{0}\) are always kinematically allowed in the expanding hadron gas after kinetic freezeout, while the decay \(D^{*0}\to D^{+}\pi^{-}\) is always forbidden. The partial widths of the charm mesons from the decays \(D^{*}\to D\pi\) are given in Eq. (8): \[\Gamma_{D^{*+}\to D^{+}\pi} = \frac{g_{\pi}^{2}}{12\pi\,f_{\pi}^{2}}\left(\Delta_{++}^{2}-m_{\pi 0}^{2}\right)^{3/2}, \tag{56a}\] \[\Gamma_{D^{*+}\to D^{0}\pi} = \frac{g_{\pi}^{2}}{6\pi\,f_{\pi}^{2}}\left(\Delta_{+0}^{2}-m_{\pi+}^{2}\right)^{3/2}, \tag{56b}\] \[\Gamma_{D^{*0}\to D^{0}\pi} = \frac{g_{\pi}^{2}}{12\pi\,f_{\pi}^{2}}\left(\Delta_{00}^{2}-m_{\pi 0}^{2}\right)^{3/2}, \tag{56c}\] \[\Gamma_{D^{*0}\to D^{+}\pi} = 0, \tag{56d}\] where \(\Delta_{ab}=M_{*a}-M_{b}\) is the \(D^{*a}\)-\(D^{b}\) mass difference. In the vacuum, the masses \(M_{*a}\), \(M_{b}\), and \(m_{\pi ab}\) are constants. In the hadron gas, the mass shifts from coherent pion forward scattering can be taken into account by replacing \(\Delta_{ab}\) in Eqs. (56) by \(\Delta_{ab}+\delta M_{*}-\delta M\), where \(\delta M\) and \(\delta M_{*}\) are the charm-meson mass shifts in Eqs. (44) and (53), and replacing \(m_{\pi i}\) by \(m_{\pi i}+\delta m_{\pi}\), where \(\delta m_{\pi}\) is the pion mass shift in Eq. (35). In the expanding hadron gas after kinetic freezeout, the terms \(\Delta_{ab}^{2}-m_{\pi ab}^{2}\) in Eqs. (56) are quadratic functions of \(\mathfrak{n}_{\pi}\). The thermal width \(\Gamma_{a}\) of \(D^{a}\) from coherent pion forward scattering is given in Eq. (45). The thermal widths for \(D^{+}\) and \(D^{0}\) are \[\Gamma_{+} = 3\,\mathfrak{f}_{\pi}(m_{\pi})\,\Gamma_{D^{*+}\to D^{+}\pi}, \tag{57a}\] \[\Gamma_{0} = 3\,\mathfrak{f}_{\pi}(m_{\pi})\,\big{(}\Gamma_{D^{*0}\to D^{0}\pi}+\Gamma_{D^{*+}\to D^{0}\pi}\big{)}. \tag{57b}\] In the hadron gas before kinetic freezeout, the factor \(\mathfrak{f}_{\pi}(m_{\pi})\) depends on the temperature \(T\). In the hadron gas after kinetic freezeout at the temperature \(T_{\rm kf}=115\) MeV, \(\mathfrak{f}_{\pi}(m_{\pi})=0.431\,\mathfrak{n}_{\pi}/\mathfrak{n}_{\pi}^{(\rm kf)}\), where \(\mathfrak{n}_{\pi}^{(\rm kf)}\) is the pion number density at kinetic freezeout. The thermal widths \(\Gamma_{a}\) in Eqs. (57) also depend on \(T\) or \(\mathfrak{n}_{\pi}\) through the factors of \((\Delta_{ab}^{2}-m_{\pi ab}^{2})^{3/2}\) in \(\Gamma_{D^{*a}\to D^{b}\pi}\). The thermal correction \(\delta\Gamma_{*a}\) to the width for \(D^{*a}\) is given in Eq. (54).
The total widths for \(D^{*+}\) and \(D^{*0}\) are \[\Gamma_{*+} = \left[1+\mathfrak{f}_{\pi}(m_{\pi})\right]\big{(}\Gamma_{D^{*+}\to D^{+}\pi}+\Gamma_{D^{*+}\to D^{0}\pi}\big{)}+\Gamma_{*+,\gamma}, \tag{58a}\] \[\Gamma_{*0} = \left[1+\mathfrak{f}_{\pi}(m_{\pi})\right]\Gamma_{D^{*0}\to D^{0}\pi}+\Gamma_{*0,\gamma}, \tag{58b}\] where \(\Gamma_{*+,\gamma}\) and \(\Gamma_{*0,\gamma}\) are the radiative decay rates in Eqs. (6). The terms with the factor \(\mathfrak{f}_{\pi}(m_{\pi})\) come from coherent pion forward scattering. In the hadron gas before kinetic freezeout, \(\mathfrak{f}_{\pi}(m_{\pi})\) depends on \(T\). In the hadron gas after kinetic freezeout, \(\mathfrak{f}_{\pi}(m_{\pi})=0.431\,\mathfrak{n}_{\pi}/\mathfrak{n}_{\pi}^{(\rm kf)}\). The thermal widths \(\Gamma_{*a}\) in Eqs. (58) also depend on \(T\) or \(\mathfrak{n}_{\pi}\) through the factors of \((\Delta_{ab}^{2}-m_{\pi ab}^{2})^{3/2}\) in \(\Gamma_{D^{*a}\to D^{b}\pi}\). The thermal widths for the charm mesons after kinetic freezeout at the temperature \(T_{\rm kf}=115\) MeV are shown as functions of the pion number density \(\mathfrak{n}_{\pi}\) in Fig. 8. The thermal widths of \(D^{+}\) and \(D^{0}\) are given in Eqs. (57). The thermal widths of \(D^{*+}\) and \(D^{*0}\) are given in Eqs. (58). The thicker curves in Fig. 8 take into account the thermal mass shifts of pions and charm mesons in the partial decay rates for \(D^{*a}\to D^{b}\pi\) in Eqs. (56). The thinner straight lines in Fig. 8 are obtained by setting the masses of pions and charm mesons in those partial decay rates equal to their vacuum values. The effects of the thermal mass shifts are large. At kinetic freezeout, the thermal widths of \(D^{+}\) and \(D^{0}\) are 9.1 keV and 40.2 keV. As \(\mathfrak{n}_{\pi}\) decreases from \(\mathfrak{n}_{\pi}^{\rm(kf)}\) to \(0\), those decay rates increase to the maximum values \(10.6\,\)keV and \(42.9\,\)keV near \(0.74\,\mathfrak{n}_{\pi}^{\rm(kf)}\) and then decrease to \(0\). At kinetic freezeout, the thermal widths of \(D^{*+}\) and \(D^{*0}\) are \(35.6\,\)keV and \(39.9\,\)keV. As \(\mathfrak{n}_{\pi}\) decreases from \(\mathfrak{n}_{\pi}^{\rm(kf)}\) to \(0\), those decay rates increase to the vacuum values in Eqs. (5) to within errors. The decrease in the thermal widths of \(D^{*+}\) and \(D^{*0}\) with increasing \(\mathfrak{n}_{\pi}\) may be counterintuitive, but it is a consequence of the decreasing phase space available for the decay because of the decreasing mass differences in Eqs. (55).

Figure 8: Thermal widths for the charm mesons in the hadron gas after kinetic freezeout as functions of the pion number density \(\mathfrak{n}_{\pi}\): \(D^{+}\) (dashed red), \(D^{0}\) (dashed blue), \(D^{*+}\) (solid red), and \(D^{*0}\) (solid blue). The thicker curves include the effects of mass shifts from coherent pion forward scattering. The thinner straight lines ignore the thermal mass shifts.

## V Reaction rates In this section, we calculate reaction rates for charm mesons in a pion gas. The results are applied to the hadron gas from a heavy-ion collision after kinetic freezeout. ### \(D^{*}\leftrightarrow D\pi\) The decays of \(D^{*}\) into \(D\pi\) are 1-body reactions that give contributions to the rate equations for the number densities of \(D^{*}\) in a pion gas that are not suppressed by any powers of the pion number density. The partial decay rate in the vacuum for \(D^{*a}\to D^{b}\pi\) in HH\(\chi\)EFT at LO is given in Eq. (8). This rate is nonzero only if \(\Delta_{ab}>m_{\pi ab}\), and it is sensitive to the masses through the factor of \((\Delta_{ab}^{2}-m_{\pi ab}^{2})^{3/2}\).
partial decay rate in the pion gas by taking into account the mass shifts from coherent pion forward scattering. The charm-meson mass difference \(\Delta_{ab}\) is shifted by \(\delta M_{*}-\delta M\), where \(\delta M_{*}\) and \(\delta M\) are given by Eqs. (53) and (44). The pion mass \(m_{\pi ab}\) is shifted by \(\delta m_{\pi}\), which is given in Eq. (35).

The radiative decays of \(D^{*}\) into \(D\gamma\) are also 1-body reactions. The partial decay rates for \(D^{*}\to D\gamma\) in the vacuum are not sensitive to masses, because the \(D^{*}\)-\(D\) mass differences are much larger than the mass shifts. The radiative decay rates in the pion gas can therefore be approximated by their values in the vacuum in Eqs. (6).

A vector charm meson \(D^{*}\) can be produced in a pion gas by the inverse decay \(\pi D\to D^{*}\). The reaction rate in the vacuum for \(D^{a}\pi\to D^{*b}\) averaged over the three pion flavors is
\[v\sigma\big{[}\pi D^{a}\to D^{*b}\big{]}=\frac{\pi g_{\pi}^{2}}{6f_{\pi}^{2}}\left(2-\delta_{ab}\right)\frac{q_{ba}^{2}}{\Delta_{ba}}\,\delta(\omega_{baq}-\Delta_{ba}), \tag{59}\]
where \(\omega_{baq}=\sqrt{m_{\pi ba}^{2}+q^{2}}\) and \(q_{ba}^{2}=\Delta_{ba}^{2}-m_{\pi ba}^{2}\). The reaction rate in the pion gas is obtained by averaging over the momentum distributions of the incoming \(D\) and \(\pi\). The average over the \(D\) momentum distribution has no effect, because the reaction rate in Eq. (59) does not depend on the charm-meson momentum. The average over the pion momentum distribution can be evaluated using the delta function in Eq. (59):
\[\big{\langle}v\sigma\big{[}\pi D^{a}\to D^{*b}\big{]}\big{\rangle}=\big{(}\mathfrak{f}_{\pi}(\Delta_{ba})/\mathfrak{n}_{\pi}\big{)}\,\Gamma\big{[}D^{*b}\to D^{a}\pi\big{]}, \tag{60}\]
where \(\Gamma[D^{*b}\to D^{a}\pi]\) is the decay rate in Eq. (8). Since \(\Delta_{ba}\) is large compared to isospin splittings, it can be approximated by the average \(\Delta\) over the four \(D^{*b}\to D^{a}\pi\) transitions or alternatively by the pion mass \(m_{\pi}\). The reaction rates for \(\pi D^{a}\to D^{*b}\) in the hadron gas near or after kinetic freezeout are
\[\langle v\sigma_{\pi D^{+}\to D^{*+}}\rangle = \left[\mathfrak{f}_{\pi}(m_{\pi})/\mathfrak{n}_{\pi}\right]\Gamma_{D^{*+}\to D^{+}\pi}, \tag{61a}\]
\[\langle v\sigma_{\pi D^{0}\to D^{*+}}\rangle = \left[\mathfrak{f}_{\pi}(m_{\pi})/\mathfrak{n}_{\pi}\right]\Gamma_{D^{*+}\to D^{0}\pi}, \tag{61b}\]
\[\langle v\sigma_{\pi D^{0}\to D^{*0}}\rangle = \left[\mathfrak{f}_{\pi}(m_{\pi})/\mathfrak{n}_{\pi}\right]\Gamma_{D^{*0}\to D^{0}\pi}, \tag{61c}\]
\[\langle v\sigma_{\pi D^{+}\to D^{*0}}\rangle = 0. \tag{61d}\]
Before kinetic freezeout, the factor \(\mathfrak{f}_{\pi}(m_{\pi})/\mathfrak{n}_{\pi}\) is determined by the temperature \(T\). After kinetic freezeout at the temperature \(T_{\rm kf}=115\) MeV, that factor has the constant value \(0.431/\mathfrak{n}_{\pi}^{\rm(kf)}\) independent of \(\mathfrak{n}_{\pi}\).

Figure 9: Feynman diagram for \(\pi D\to D^{*}\) in HH\(\chi\)EFT at LO. The dashed line is a \(\pi\), the solid line is a \(D\), and the double (solid+dashed) line is a \(D^{*}\).

### \(\pi D\to\pi D\)

The reaction \(\pi D^{a}\to\pi D^{b}\) can change the flavor of a pseudoscalar charm meson. The Feynman diagrams for this reaction in HH\(\chi\)EFT at LO are shown in Fig. 4.
The reaction rate has a \(D^{*}\) resonance contribution from the second diagram in Fig. 4 that is sensitive to isospin splittings and to the \(D^{*}\) width. A simple expression for the nonresonant contribution to the reaction rate can be obtained by setting the \(D^{*}\)-\(D\) mass splitting \(\Delta\) equal to the pion mass \(m_{\pi}\) and then taking the limit as the \(D^{*}\) width \(\Gamma_{*}\) approaches 0. A simple expression for the resonant contribution to the reaction rate can be obtained by isolating the term with the factor \(1/\Gamma_{*}\). We approximate the reaction rate by the sum of the nonresonant reaction rate and the resonant reaction rate. The \({\cal T}\)-matrix element for \(\pi^{i}D^{a}\to\pi^{j}D^{b}\) in the zero-width limit is obtained from the amplitude in Eq. (36) by setting \(\Gamma_{*}=0\) and by putting the external legs on shell by setting \(v\cdot p=0\), \(v\cdot q=\omega_{q}\), and \(v\cdot q^{\prime}=\omega_{q^{\prime}}\): \[{\cal T}_{ai,bj}=\frac{1}{2f_{\pi}^{2}}\,[\sigma^{i},\sigma^{j}]_{ab}\,( \omega_{q}+\omega_{q^{\prime}})-\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\left[(\sigma^ {i}\sigma^{j})_{ab}\,\frac{\mathbf{q}\!\cdot\!\mathbf{q}^{ \prime}}{\omega_{q}-\Delta}-(\sigma^{j}\sigma^{i})_{ab}\,\frac{\mathbf{q}\!\cdot\!\mathbf{q}^{\prime}}{\omega_{q^{\prime}}+\Delta} \right]. \tag{62}\] We can ignore the recoil of \(D\) and set \(|\mathbf{q}^{\prime}|=|\mathbf{q}|\). The non-resonant reaction rate can be obtained by taking the limit \(\Delta\to m_{\pi}\). The non-resonant reaction rate for \(\pi D^{a}\to\pi D^{b}\) averaged over incoming pion flavors is \[v\sigma[\pi D^{a}\to\pi D^{b}]_{\rm nonres}=\frac{1}{12\pi f_{\pi}^{4}}\big{[} 2(2-\delta_{ab})\,(1+g_{\pi}^{4}/3)\,\omega_{q}^{2}+\delta_{ab}\,g_{\pi}^{4}\, m_{\pi}^{2}\big{]}\frac{q}{\omega_{q}}, \tag{63}\] where \(q\) is the 3-momentum of the incoming pion. The reaction rate in the pion gas can be obtained by averaging over the momentum distributions of the incoming \(D\) and \(\pi\): \[\left\langle v\sigma[\pi D^{a}\to\pi D^{b}]_{\rm nonres}\right\rangle=\frac{1} {12\pi f_{\pi}^{4}}\left[2(2-\delta_{ab})\,(1+g_{\pi}^{4}/3)\left\langle\omega _{q}q\right\rangle_{\mathbf{q}}+\delta_{ab}\,g_{\pi}^{4}\,m_{\pi}^{2 }\left\langle\frac{q}{\omega_{q}}\right\rangle_{\mathbf{q}}\right], \tag{64}\] where the angular brackets represents the average over the Bose-Einstein distribution for the pion defined in Eq. (21). The second diagram in Fig. 4 with \(D^{*c}\) in the \(s\) channel gives a resonance contribution to the reaction rate proportional to \(1/\Gamma_{*c}\) if \(\Delta_{ac}>m_{\pi ac}\). In the square of the matrix element for the scattering of a \(D\) with momentum \(Mv+p\) and a \(\pi\) with momentum \(q\), the resonance contribution can be isolated by making a simple substitution for the product of the \(D^{*}\) propagator and its complex conjugate: \[\frac{1}{v\!\cdot\!(p+q)-\Delta+i\Gamma_{*}/2}\left(\frac{1}{v\!\cdot\!(p+q)- \Delta+i\Gamma_{*}/2}\right)^{\!*}\longrightarrow\frac{2\pi}{\Gamma_{*}}\, \delta\big{(}v\!\cdot\!(p+q)-\Delta\big{)}. \tag{65}\] The resonant reaction rate for \(\pi D^{a}\to\pi D^{b}\) averaged over the flavors of the incoming pion is \[v\sigma\big{[}\pi D^{a}\to\pi D^{b}\big{]}_{\rm res} = \frac{g_{\pi}^{4}}{72f_{\pi}^{4}}\sum_{c}(2-\delta_{ac})(2- \delta_{bc})\frac{q_{ca}^{2}\,q_{cb}^{3}}{\Delta_{ca}\Gamma_{*c}}\,\theta( \Delta_{cb}-m_{\pi cb})\,\delta(\omega_{caq}-\Delta_{ca}), \tag{66}\] where \(q\) is the 3-momentum of the incoming pion. 
Using the expressions for \(\Gamma[D^{*}\to D\pi]\) in Eq. (8) and \(v\sigma[\pi D\to D^{*}]\) in Eq. (59), the singular term in the reaction rate can be expressed as \[v\sigma\big{[}\pi D^{a}\to\pi D^{b}\big{]}_{\rm res}=\sum_{c}\frac{1}{\Gamma_{*c }}\,v\sigma[\pi D^{a}\to D^{*c}]\,\Gamma[D^{*c}\to D^{b}\pi]. \tag{67}\] The reaction rate in the pion gas can be evaluated by using the thermal average of \(v\sigma[\pi D\to D^{*}]\) in Eq. (60): \[\big{\langle}v\sigma\big{[}\pi D^{a}\to\pi D^{b}\big{]}_{\rm res}\big{\rangle} =\frac{\mathfrak{f}_{\pi}(\Delta)}{\mathfrak{n}_{\pi}}\,\sum_{c}\frac{\Gamma \big{[}D^{*c}\to D^{a}\pi\big{]}\,\Gamma\big{[}D^{*c}\to D^{b}\pi\big{]}}{ \Gamma_{*c}}. \tag{68}\] The reaction rate for \(\pi D^{a}\to\pi D^{b}\) in the pion gas can be approximated by the sum of the non-resonant reaction rate in Eq. (64) and the resonant reaction rate in Eq. (68). The reaction rates in the hadron gas near or after kinetic freezeout are \[\langle v\sigma_{\pi D^{0}\to\pi D^{0}}\rangle = (0.496+0.188\,g_{\pi}^{4})\,\frac{m_{\pi}^{2}}{f_{\pi}^{4}}+ \frac{\mathfrak{f}_{\pi}(m_{\pi})}{\mathfrak{n}_{\pi}}\,\left(\frac{\Gamma_{D ^{*0}\to D^{0}\pi}^{2}}{\Gamma_{*0}}+\frac{\Gamma_{D^{*+}\to D^{0}\pi}^{2}}{ \Gamma_{*+}}\right), \tag{69a}\] \[\langle v\sigma_{\pi D^{0}\to\pi D^{+}}\rangle = (0.991+0.330\,g_{\pi}^{4})\,\frac{m_{\pi}^{2}}{f_{\pi}^{4}}+ \frac{\mathfrak{f}_{\pi}(m_{\pi})}{\mathfrak{n}_{\pi}}\,\frac{\Gamma_{D^{*+} \to D^{0}\pi}\,\Gamma_{D^{*+}\to D^{+}\pi}}{\Gamma_{*+}},\] (69b) \[\langle v\sigma_{\pi D^{+}\to\pi D^{0}}\rangle = (0.991+0.330\,g_{\pi}^{4})\,\frac{m_{\pi}^{2}}{f_{\pi}^{4}}+ \frac{\mathfrak{f}_{\pi}(m_{\pi})}{\mathfrak{n}_{\pi}}\,\frac{\Gamma_{D^{*+} \to D^{0}\pi}\,\Gamma_{D^{*+}\to D^{+}\pi}}{\Gamma_{*+}},\] (69c) \[\langle v\sigma_{\pi D^{+}\to\pi D^{+}}\rangle = (0.496+0.188\,g_{\pi}^{4})\,\frac{m_{\pi}^{2}}{f_{\pi}^{4}}+ \frac{\mathfrak{f}_{\pi}(m_{\pi})}{\mathfrak{n}_{\pi}}\,\frac{\Gamma_{D^{*+} \to D^{+}\pi}^{2}}{\Gamma_{*+}}. \tag{69d}\] The dimensionless numbers in the first terms depend only on \(m_{\pi}/T\), which we have evaluated at \(T_{\rm kf}=115\) MeV. Before kinetic freezeout, the factor \(\mathfrak{f}_{\pi}(m_{\pi})/\mathfrak{n}_{\pi}\) is determined by \(T\). After kinetic freezeout at \(T_{\rm kf}=115\) MeV, that factor has the constant value \(0.431/\mathfrak{n}_{\pi}^{(\rm kf)}\) independent of \(\mathfrak{n}_{\pi}\). There can also be dependence on \(T\) or \(\mathfrak{n}_{\pi}\) through the mass shifts in \(\Gamma_{D^{*a}\to D^{b}\pi}\) and through the factors of \(1/\Gamma_{*c}\). The \(D^{*}\) resonance terms in the reaction rates for \(\pi D^{a}\to\pi D^{b}\) in Eqs. (69) can be obtained from the reaction rates for \(\pi D^{a}\to D^{*c}\) from Eqs. (61) by multiplying by the branching fraction \(\Gamma_{D^{*c}\to D^{b}\pi}/\Gamma_{*c}\) and summing over the two flavors of \(D^{*c}\). Thus if the reaction rates for \(\pi D\to D^{*}\) in Eqs. (61) and \(\pi D\to\pi D\) in Eqs. (69) are both included in a rate equation, the contributions of \(\pi D\to D^{*}\) in which \(D^{*}\) subsequently decays to \(D\pi\) are double counted. The only contributions of \(\pi D\to D^{*}\) that are not double counted are those in which \(D^{*}\) subsequently decays to \(D\gamma\). The double counting can be avoided by replacing the reaction rates for \(\pi D\to D^{*}\) in Eqs. 
(61) by the contributions from the subsequent radiative decay of \(D^{*}\):
\[\langle v\sigma_{\pi D^{+}\to D^{+}\gamma}\rangle = \big{[}\mathfrak{f}_{\pi}(m_{\pi})/\mathfrak{n}_{\pi}\big{]}\,\Gamma_{D^{*+}\to D^{+}\pi}\,\big{(}\Gamma_{*+,\gamma}/\Gamma_{*+}\big{)}, \tag{70a}\]
\[\langle v\sigma_{\pi D^{0}\to D^{+}\gamma}\rangle = \big{[}\mathfrak{f}_{\pi}(m_{\pi})/\mathfrak{n}_{\pi}\big{]}\,\Gamma_{D^{*+}\to D^{0}\pi}\,\big{(}\Gamma_{*+,\gamma}/\Gamma_{*+}\big{)}, \tag{70b}\]
\[\langle v\sigma_{\pi D^{0}\to D^{0}\gamma}\rangle = \big{[}\mathfrak{f}_{\pi}(m_{\pi})/\mathfrak{n}_{\pi}\big{]}\,\Gamma_{D^{*0}\to D^{0}\pi}\,\big{(}\Gamma_{*0,\gamma}/\Gamma_{*0}\big{)}, \tag{70c}\]
\[\langle v\sigma_{\pi D^{+}\to D^{0}\gamma}\rangle = 0, \tag{70d}\]
where \(\Gamma_{*+,\gamma}\) and \(\Gamma_{*0,\gamma}\) are the radiative decay rates in the vacuum in Eqs. (6).

### \(\pi D^{*}\leftrightarrow\pi D\)

The reactions \(\pi D^{*}\leftrightarrow\pi D\) can change vector charm mesons into pseudoscalar charm mesons and vice versa. The reaction \(\pi D^{*}\to\pi D\) is exothermic, releasing a mass energy comparable to \(m_{\pi}\). Since this is large compared to isospin splittings, isospin splittings can be neglected. Relatively simple expressions for the reaction rates can be obtained by taking the limit \(\Delta\to m_{\pi}\). The square of the matrix element for \(\pi(q)D^{*a}\to\pi(q^{\prime})D^{b}\) averaged over \(D^{*}\) spins and averaged/summed over pion flavors is
\[\overline{|{\cal M}|^{2}}=\frac{g_{\pi}^{4}}{9f_{\pi}^{4}}\frac{(\mathbf{q}\times\mathbf{q^{\prime}})^{2}}{\omega_{q}^{2}\omega_{q^{\prime}}^{2}}\big{[}2(2-\delta_{ab})(\omega_{q}-\omega_{q^{\prime}})^{2}+3\delta_{ab}(\omega_{q}+\omega_{q^{\prime}})^{2}\big{]}. \tag{71}\]
The reaction rate can be reduced to
\[v\sigma\big{[}\pi D^{*a}\to\pi D^{b}\big{]}=\frac{g_{\pi}^{4}}{216\pi f_{\pi}^{4}}\frac{q^{2}\big{[}(\omega_{q}+\Delta)^{2}-m_{\pi}^{2}\big{]}^{3/2}}{\omega_{q}^{3}(\omega_{q}+\Delta)^{2}}\big{[}3\delta_{ab}(2\omega_{q}+\Delta)^{2}+2(2-\delta_{ab})\Delta^{2}\big{]}. \tag{72}\]
The reaction rate in the pion gas in the limit \(\Delta\to m_{\pi}\) is
\[\big{\langle}v\sigma\big{[}\pi D^{*a}\to\pi D^{b}\big{]}\big{\rangle} = \frac{g_{\pi}^{4}}{216\pi f_{\pi}^{4}}\left\langle\frac{q^{2}}{(\omega_{q}+m_{\pi})^{2}}\left(\frac{\omega_{q}+2m_{\pi}}{\omega_{q}}\right)^{3/2}\big{[}3\delta_{ab}(2\omega_{q}+m_{\pi})^{2}+2(2-\delta_{ab})m_{\pi}^{2}\big{]}\right\rangle_{\mathbf{q}}. \tag{73}\]
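The angular brackets in Eq. (73) are averages over the thermal pion momentum distribution and can be evaluated by one-dimensional quadrature. The sketch below assumes that Eq. (21) is the standard single-particle Bose-Einstein momentum average, and it uses an isospin-averaged pion mass and \(T_{\rm kf}=115\) MeV; both choices are assumptions of the sketch, as are the helper names `bose`, `thermal_avg`, and `kernel`. It computes the dimensionless coefficients multiplying \(g_{\pi}^{4}m_{\pi}^{2}/f_{\pi}^{4}\) for the flavor-diagonal and flavor-changing cases, which can be compared with the freezeout values quoted in Eqs. (76) below; with these inputs they land near 0.24 and 0.006.

```python
# Sketch: numerical evaluation of the thermal average in Eq. (73), assuming Eq. (21) is the
# standard single-particle Bose-Einstein momentum average. The pion mass and freezeout
# temperature below are assumptions of this sketch.
import numpy as np
from scipy.integrate import quad

T_kf = 115.0   # MeV (assumed kinetic freezeout temperature)
m_pi = 138.0   # MeV (assumed isospin-averaged pion mass)

def bose(q):
    """Bose-Einstein occupation number at pion energy omega_q = sqrt(m_pi^2 + q^2)."""
    return 1.0 / np.expm1(np.hypot(m_pi, q) / T_kf)

def thermal_avg(X):
    """<X(q)> over the pion momentum distribution."""
    num = quad(lambda q: q**2 * bose(q) * X(q), 0.0, 3000.0)[0]
    den = quad(lambda q: q**2 * bose(q), 0.0, 3000.0)[0]
    return num / den

def kernel(q, same_flavor):
    """Integrand of Eq. (73) with the prefactor g_pi^4/(216 pi f_pi^4) stripped off."""
    w = np.hypot(m_pi, q)
    delta = 1.0 if same_flavor else 0.0
    bracket = 3.0 * delta * (2.0 * w + m_pi)**2 + 2.0 * (2.0 - delta) * m_pi**2
    return q**2 / (w + m_pi)**2 * ((w + 2.0 * m_pi) / w)**1.5 * bracket

for same in (True, False):
    coeff = thermal_avg(lambda q, s=same: kernel(q, s)) / (216.0 * np.pi * m_pi**2)
    print(("a = b:  " if same else "a != b: ") + f"coefficient of g^4 m_pi^2/f_pi^4 = {coeff:.3f}")
```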
The reaction rates for \(\pi D^{*a}\to\pi D^{b}\) in the hadron gas after kinetic freezeout are \[\langle v\sigma_{\pi D^{*0}\to\pi D^{0}}\rangle=\langle v\sigma_{\pi D ^{*+}\to\pi D^{+}}\rangle = 0.243\,g_{\pi}^{4}\,m_{\pi}^{2}/f_{\pi}^{4}, \tag{76a}\] \[\langle v\sigma_{\pi D^{*0}\to\pi D^{+}}\rangle=\langle v\sigma_{ \pi D^{*+}\to\pi D^{0}}\rangle = 0.006\,g_{\pi}^{4}\,m_{\pi}^{2}/f_{\pi}^{4}. \tag{76b}\] The dimensionless numerical factors depend only on \(m_{\pi}/T\), which has been evaluated at \(T_{\rm{kf}}=115\) MeV. ### \(\pi D^{*}\to\pi D^{*}\) The reaction \(\pi D^{*}\to\pi D^{*}\) can change the flavor of a vector charm meson. The five Feynman diagrams for this reaction in HH\(\chi\)EFT at LO are shown in Fig. 5. The third diagram, which proceeds through an intermediate \(D\), produces a \(t\)-channel singularity in the reaction rate that is proportional to \(1/\Gamma\) in the limit \(\Gamma\to 0\). The singularity comes from the decay \(D^{*}\to\pi D\) followed by the inverse decay \(\pi D\to D^{*}\). A relatively simple expression for the nonsingular contribution to the reaction rate is obtained by setting \(\Delta=m_{\pi}\) and then taking the limit \(\Gamma\to 0\). A simple expression for the resonant contribution to the reaction rate can be obtained by isolating the term with the factor \(1/\Gamma\). We approximate the reaction rate by the sum of the nonsingular reaction rate and the singular reaction rate. The \({\cal T}\)-matrix element for \(\pi^{i}D^{*a}\to\pi^{j}D^{*b}\) in the zero-width limit is obtained from the amplitude in Eq. (46) by contracting it with the \(D^{*}\) polarization vectors, setting \(\Gamma=0\), and then putting the external legs on shell by setting \(v\cdot p=\Delta\), \(v\cdot q=\omega_{q}\), and \(v\cdot q^{\prime}=\omega_{q^{\prime}}\): \[\varepsilon_{\mu}{\cal T}^{\mu\nu}_{ai,bj}\varepsilon_{\nu}^{ \prime*} = -\frac{1}{2f_{\pi}^{2}}\left[\sigma^{i},\sigma^{j}\right]_{ab}( \omega_{q}+\omega_{q^{\prime}})(\mathbf{\varepsilon}\cdot\mathbf{\varepsilon^{\prime*}}) \tag{77}\] \[-\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\left[(\sigma^{i}\sigma^{j})_{ab} \frac{(\mathbf{\varepsilon}\cdot\mathbf{q})\left(\mathbf{q^{\prime}}\cdot\mathbf{\varepsilon^{\prime*}}\right)}{ \omega_{q}+\Delta}-(\sigma^{j}\sigma^{i})_{ab}\frac{(\mathbf{ \varepsilon}\cdot\mathbf{q^{\prime}})\left(\mathbf{q} \cdot\mathbf{\varepsilon^{\prime*}}\right)}{\omega_{q^{\prime}}- \Delta}\right]\] \[+\frac{g_{\pi}^{2}}{f_{\pi}^{2}}\left[(\sigma^{i}\sigma^{j})_{ab} \frac{(\mathbf{\varepsilon}\times\mathbf{q})\cdot(\mathbf{q^{\prime}}\times\mathbf{\varepsilon^{\prime*}})}{\omega_{q }}-(\sigma^{j}\sigma^{i})_{ab}\frac{(\mathbf{\varepsilon}\times\mathbf{q^{\prime}})\cdot(\mathbf{q}\times\mathbf{ \varepsilon^{\prime*}})}{\omega_{q^{\prime}}}\right].\] We can ignore the recoil of \(D^{*}\) and set \(|\mathbf{q^{\prime}}|=|\mathbf{q}|\). The non-singular contribution to the reaction rate can be obtained by taking the limit \(\Delta\to m_{\pi}\). 
The reaction rate averaged over incoming pion flavors and incoming \(D^{*}\) spins is \[v\sigma[\pi D^{*a}\to\pi D^{*b}]_{\rm{nonsing}} = \frac{1}{36\pi f_{\pi}^{4}}\Big{\{}(2-\delta_{ab})\big{[}6\omega _{q}^{2}+2g_{\pi}^{4}(m_{\pi}^{2}+q^{4}/\omega_{q}^{2})\big{]} \tag{78}\] \[\qquad\qquad+\delta_{ab}\,g_{\pi}^{4}(3\omega_{q}^{2}+q^{4}/ \omega_{q}^{2})\Big{\}}\frac{q}{\omega_{q}}.\] The reaction rate in the pion gas is obtained by averaging over the momentum distributions of the incoming \(D\) and \(\pi\): \[\big{\langle}v\sigma[\pi D^{*a}\to\pi D^{*b}]_{\rm{nonsing}} \big{\rangle} = \frac{1}{36\pi f_{\pi}^{4}}\bigg{(}\big{[}6(2-\delta_{ab})+2(2+ \delta_{ab})g_{\pi}^{4}\big{]}\langle\omega_{q}q\rangle_{\mathbf{q}} \tag{79}\] \[\qquad\qquad\qquad-4g_{\pi}^{4}m_{\pi}^{2}\left\langle\frac{q}{ \omega_{q}}\right\rangle_{\mathbf{q}}+(4-\delta_{ab})g_{\pi}^{4}m_{ \pi}^{4}\left\langle\frac{q}{\omega_{q}^{3}}\right\rangle_{\mathbf{q}} \bigg{)}.\] The third diagram in Fig. 5 produces a \(t\)-channel singularity, because the intermediate \(D\) can be on shell. In the square of the matrix element for an incoming \(D^{*}\) with momentum \((M+\Delta)v+p\) and an outgoing pion with momentum \(q^{\prime}\), the \(t\)-channel singularity can be isolated by making a simple substitution for the product of the \(D\) propagator and its complex conjugate: \[\frac{1}{v\!\cdot\!(\Delta v+p-q^{\prime})+i\Gamma/2}\left(\frac{1}{v\!\cdot\!( \Delta v+p-q^{\prime})+i\Gamma/2}\right)^{\!\!*}\longrightarrow\frac{2\pi}{ \Gamma}\,\delta\big{(}\Delta+v\!\cdot\!p-v\!\cdot\!q^{\prime}\big{)}. \tag{80}\] The \(t\)-channel singularity contribution to the reaction rate for \(\pi(q)D^{*a}(p)\to\pi(q^{\prime})D^{*b}(p^{\prime})\) averaged over incoming \(D^{*}\) spins and over incoming pion flavors is \[v\sigma\big{[}\pi D^{*a}\to\pi D^{*b}\big{]}_{\rm sing}\;=\;\frac{g_{\pi}^{4}} {72\,f_{\pi}^{4}}\sum_{c}(2-\delta_{ac})(2-\delta_{bc})\frac{q_{bc}^{2}\,q_{ ac}^{3}}{\Delta_{bc}\Gamma_{c}}\,\delta(\omega_{bcq}-\Delta_{bc})\theta( \Delta_{ac}-m_{\pi ac}). \tag{81}\] Using the expressions for \(\Gamma[D^{*}\to D\pi]\) in Eq. (8) and \(v\sigma[D\pi\to D^{*}]\) in Eq. (59), the singular term in the reaction rate can be expressed as \[v\sigma\big{[}\pi D^{*a}\to\pi D^{*b}\big{]}_{\rm sing}=\sum_{c}\frac{1}{ \Gamma_{c}}\,\Gamma\big{[}D^{*a}\to D^{c}\pi\big{]}\,v\sigma\big{[}D^{c}\pi \to D^{*b}\big{]}. \tag{82}\] The reaction rate in the pion gas can be evaluated using the thermal average of \(v\sigma[D\pi\to D^{*}]\) in Eq. (60): \[\Big{\langle}v\sigma\big{[}\pi D^{*a}\to\pi D^{*b}\big{]}_{\rm sing}\Big{\rangle} =\frac{\mathfrak{f}_{\pi}(\Delta)}{\mathfrak{n}_{\pi}}\sum_{c}\frac{\Gamma \big{[}D^{*a}\to D^{c}\pi\big{]}\,\Gamma\big{[}D^{*b}\to D^{c}\pi\big{]}}{ \Gamma_{c}}. \tag{83}\] This differs from the resonant term in the reaction rate for \(\pi D\to\pi D\) in Eq. (68) in that the sum is over \(D\) flavors instead of \(D^{*}\) flavors and that the product of \(D^{*}\) partial decay rates is divided by a \(D\) decay rate \(\Gamma_{c}\) instead of a \(D^{*}\) decay rate \(\Gamma_{*c}\). The reaction rates for \(\pi D^{*a}\to\pi D^{*b}\) in the hadron gas near or after kinetic freezeout can be approximated by the sum of the nonsingular reaction rates in Eq. (79) and the \(t\)-channel singularity reaction rate in Eq. 
(83): \[\langle v\sigma_{\pi D^{*0}\to\pi D^{*0}}\rangle = (0.496+0.469\,g_{\pi}^{4})\,\frac{m_{\pi}^{2}}{f_{\pi}^{4}}+ \frac{\mathfrak{f}_{\pi}(m_{\pi})}{\mathfrak{n}_{\pi}}\,\frac{\Gamma_{D^{*0} \to D^{0}\pi}^{2}}{\Gamma_{0}}, \tag{84a}\] \[\langle v\sigma_{\pi D^{*0}\to\pi D^{*+}}\rangle = (0.991+0.306\,g_{\pi}^{4})\,\frac{m_{\pi}^{2}}{f_{\pi}^{4}}+ \frac{\mathfrak{f}_{\pi}(m_{\pi})}{\mathfrak{n}_{\pi}}\,\frac{\Gamma_{D^{*0} \to D^{0}\pi}\,\Gamma_{D^{*+}\to D^{0}\pi}}{\Gamma_{0}},\] (84b) \[\langle v\sigma_{\pi D^{*+}\to\pi D^{*0}}\rangle = (0.991+0.306\,g_{\pi}^{4})\,\frac{m_{\pi}^{2}}{f_{\pi}^{4}}+ \frac{\mathfrak{f}_{\pi}(m_{\pi})}{\mathfrak{n}_{\pi}}\,\frac{\Gamma_{D^{*0} \to D^{0}\pi}\,\Gamma_{D^{*+}\to D^{0}\pi}}{\Gamma_{0}},\] (84c) \[\langle v\sigma_{\pi D^{*+}\to\pi D^{*+}}\rangle = (0.496+0.469\,g_{\pi}^{4})\,\frac{m_{\pi}^{2}}{f_{\pi}^{4}}+ \frac{\mathfrak{f}_{\pi}(m_{\pi})}{\mathfrak{n}_{\pi}}\,\left(\frac{\Gamma_{D^{ *+}\to D^{0}\pi}^{2}}{\Gamma_{0}}+\frac{\Gamma_{D^{*+}\to D^{+}\pi}^{2}}{ \Gamma_{+}}\right). \tag{84d}\] The dimensionless numbers in the first terms depend only on \(m_{\pi}/T\), which we have evaluated at \(T_{\rm kf}=115\) MeV. Before kinetic freezeout, the factor \(\mathfrak{f}_{\pi}(m_{\pi})/\mathfrak{n}_{\pi}\) is determined by \(T\). After kinetic freezeout at \(T_{\rm kf}=115\) MeV, that factor has the constant value \(0.431/\mathfrak{n}_{\pi}^{(\rm kf)}\) independent of \(\mathfrak{n}_{\pi}\). There can also be dependence on \(T\) or \(\mathfrak{n}_{\pi}\) through the mass shifts in \(\Gamma_{D^{*a}\to D^{b}\pi}\) and through the factors of \(1/\Gamma_{c}\). After kinetic freezeout, the most dramatic dependences on \(\mathfrak{n}_{\pi}\) can be made explicit by inserting the expressions for the thermal widths of \(D^{+}\) and \(D^{0}\) in Eqs. (57). The resulting expression for the reaction rate for \(\pi D^{*0}\to\pi D^{*+}\) (or \(\pi D^{*+}\to\pi D^{*0}\)) is \[\langle v\sigma_{\pi D^{*0}\to\pi D^{*+}}\rangle=(0.991+0.306\,g_{\pi}^{4})\, \frac{m_{\pi}^{2}}{f_{\pi}^{4}}+\frac{1}{3\mathfrak{n}_{\pi}}\,\frac{\Gamma_{D ^{*0}\to D^{0}\pi}\,\Gamma_{D^{*+}\to D^{0}\pi}}{\Gamma_{D^{*0}\to D^{0}\pi} +\Gamma_{D^{*+}\to D^{0}\pi}}. \tag{85}\] At kinetic freezeout, the \(t\)-channel singularity term is smaller than the nonsingular term by the factor \(0.0003\). However the multiplicative factor of \(1/\mathfrak{n}_{\pi}\) makes the \(t\)-channel singularity term increase dramatically as the hadron gas expands. It becomes equal to the nonsingular term when \(\mathfrak{n}_{\pi}\) decreases by the factor \(0.0009\). Since the volume \(V(\tau)\) of the hadron gas increases roughly as \(\tau^{3}\), this corresponds to an increase in its linear dimensions by about a factor of 10. Many of the \(\pi D^{(*)}\to\pi D^{(*)}\) scattering reactions change the flavor or spin of the charm meson. The reaction rates in the hadron gas after kinetic freezeout at \(T_{\rm lsf}=115\) MeV for incoming neutral charm mesons \(D^{0}\) or \(D^{*0}\) are shown as functions of the pion number density \(\mathfrak{n}_{\pi}\) in Fig. 11. For each of these reactions, there is another one with an incoming charged charm meson \(D^{+}\) or \(D^{*+}\) that has the same reaction rate. The reaction rate for \(\pi D^{0}\to\pi D^{+}\) is given in Eq. (69b). The reaction rates for \(\pi D^{0}\to\pi D^{*0}\) and \(\pi D^{0}\to\pi D^{*+}\) are given in Eqs. (75a) and (75b). The reaction rates for \(\pi D^{*0}\to\pi D^{0}\) and \(\pi D^{*0}\to\pi D^{+}\) are given in Eqs. (76a) and (76b). 
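The competition between the two terms of Eq. (85) above can also be illustrated numerically. In the sketch below, the coupling \(g_{\pi}\), the pion decay constant, the freezeout pion density per flavor, and the vacuum partial widths are all assumptions of the sketch (chosen so that the vacuum \(D^{*+}\) width is roughly reproduced), not values quoted in this paper; the helper names `gamma_med` and `singular` are likewise constructs of the illustration. With these inputs the \(t\)-channel singularity term is a few \(\times 10^{-4}\) of the nonsingular term at freezeout and overtakes it after \(\mathfrak{n}_{\pi}\) has dropped by roughly three orders of magnitude, consistent with the estimates quoted above.

```python
# Sketch: relative size of the two terms of Eq. (85) as the pion gas dilutes. The coupling,
# pion decay constant, freezeout pion density, and vacuum partial widths are assumptions of
# this sketch, chosen to be consistent with the vacuum D*+ width.
from scipy.optimize import brentq

g_pi, f_pi, m_pi = 0.52, 130.2, 138.0     # assumed coupling; f_pi and m_pi in MeV
n_kf = 1.26e5                             # assumed pion density per flavor at T_kf = 115 MeV, in MeV^3
gap_vac = {"00": 7.04, "+0": 5.86}        # vacuum Delta - m_pi for D*0 -> D0 pi0 and D*+ -> D0 pi+ (MeV)
gamma_vac = {"00": 0.0358, "+0": 0.0565}  # assumed vacuum partial widths in MeV

def gamma_med(ch, x):
    """In-medium Gamma(D* -> D0 pi) in MeV, scaling the (Delta^2 - m^2)^(3/2) factor of Eqs. (56)."""
    gap = gap_vac[ch] - 3.23 * x          # Eqs. (55): the phase-space gap shrinks with x = n_pi/n_kf
    return gamma_vac[ch] * max(gap / gap_vac[ch], 0.0) ** 1.5

def singular(x):
    """t-channel singularity term of Eq. (85), in MeV^-2."""
    g1, g2 = gamma_med("00", x), gamma_med("+0", x)
    return g1 * g2 / (g1 + g2) / (3.0 * n_kf * x)

nonsingular = (0.991 + 0.306 * g_pi**4) * m_pi**2 / f_pi**4

print("singular/nonsingular at freezeout:", singular(1.0) / nonsingular)
print("terms equal at n_pi/n_kf =", brentq(lambda x: singular(x) - nonsingular, 1e-6, 1.0))
```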
The largest reaction rates in Fig. 11 are for reactions that change the charm-meson flavor only. The reaction rate for \(\pi D^{0}\to\pi D^{+}\) has a \(D^{*+}\) resonance contribution that makes it increase as \(\mathfrak{n}_{\pi}\) decreases. However the \(D^{*+}\) resonance contribution is about 3 orders of magnitude smaller than the nonresonant term, so the decrease is not visible in Fig. 11. The reaction rate for \(\pi D^{*0}\to\pi D^{*+}\) has a \(D^{0}\)\(t\)-channel singularity contribution that makes it diverge as \(\mathfrak{n}_{\pi}\to 0\). The other reaction rates in Fig. 11 are constant functions of \(\mathfrak{n}_{\pi}\). The rates in Fig. 11 for reactions that change the charm-meson spin are suppressed by more than 1.5 orders of magnitude. The rates for reactions that change both the flavor and spin of the charm meson are suppressed by more than 3 orders of magnitude. ## VI Evolution of charm-meson ratios In this section, we calculate the evolution of the charm-meson number densities in the expanding hadron gas from a heavy-ion collision after kinetic freezeout. ### Rate equations The evolution of the number density \(\mathfrak{n}_{D^{(*)}}(\tau)\) of a charm meson in the expanding hadron gas with the proper time \(\tau\) can be described by a first order differential equation. The number density decreases because of the increasing volume \(V(\tau)\), but it can also be changed by reactions. The time derivative of \(\mathfrak{n}_{D^{(*)}}\) has positive contributions from reactions with \(D^{(*)}\) in the final state and negative contributions from reactions with \(D^{(*)}\) in the initial state. Near kinetic freezeout, the most important reactions involve pions, because pions are by far the most abundant hadrons in the hadron gas. After kinetic freezeout, most interactions have a negligible effect on the number density \(\mathfrak{n}_{D^{(*)}}(\tau)\) of a charm meson. The charm-meson number density decreases in proportion to \(1/V(\tau)\), like the pion number density \(\mathfrak{n}_{\pi}(\tau)\) in Eq. (19). The effect of the increasing volume can be removed from the differential equation by considering the rate equation for the ratio of number densities \(\mathfrak{n}_{D^{(*)}}/\mathfrak{n}_{\pi}\). The remaining terms in the rate equation come from reactions that change the spin or flavor of the charm meson. The reaction rate is multiplied by a factor of the number density for every particle in the initial state. The reaction rate is determined by the temperature, which is fixed at the kinetic freezeout temperature \(T_{\rm kf}\), the charm-meson number density \(\mathfrak{n}_{D^{(*)}}\), and the pion number density \(\mathfrak{n}_{\pi}(\tau)\), which decreases as \(1/V(\tau)\). Some reaction rates in the expanding hadron gas are sensitive to the thermal mass shifts and the thermal widths of the hadrons. The greatest sensitivities to the thermal mass shifts for charm mesons and pions are in reactions whose rates are proportional to a power of \(M_{D^{*}}-M_{D}-m_{\pi}\), such as \(D^{*}\to D\pi\) decay rates. The greatest sensitivity to the thermal width \(\Gamma_{a}\) of a pseudoscalar charm meson \(D^{a}\) comes from \(D\)-meson \(t\)-channel singularities, which can produce contributions to reaction rates proportional to \(1/\Gamma_{a}\) in the limit \(\Gamma_{a}\to 0\). 
The most relevant reactions for charm mesons in the expanding hadron gas include the decays \(D^{*}\to D\pi\) and \(D^{*}\to D\gamma\), the scattering reactions \(\pi D\to\pi D\), \(\pi D\to D\gamma\), and \(\pi D^{*}\to\pi D^{*}\) that change the charm-meson flavor, and the scattering reactions \(\pi D\to\pi D^{*}\) and \(\pi D^{*}\to\pi D\) that change the charm-meson spin. The rate equations for the number densities of the pseudoscalar charm mesons \(D^{a}\) and the vector charm mesons \(D^{*a}\) are \[{\mathfrak{n}}_{\pi}\frac{d}{d\tau}\left(\frac{{\mathfrak{n}}_{D^{ a}}}{{\mathfrak{n}}_{\pi}}\right) = \left[1+{\mathfrak{f}}_{\pi}(m_{\pi})\right]\sum_{b}\Gamma_{D^{*b }\to D^{a\pi}}\,{\mathfrak{n}}_{D^{*b}}+\Gamma_{*a,\gamma}\,{\mathfrak{n}}_{D ^{*a}} \tag{86a}\] \[+3\sum_{b\neq a}\left[\,\langle v\sigma_{\pi D^{b}\to\pi D^{a}} \rangle\,\left({\mathfrak{n}}_{D^{b}}-{\mathfrak{n}}_{D^{a}}\right)+\left\langle v \sigma_{\pi D^{b}\to D^{a}\gamma}\right\rangle\,{\mathfrak{n}}_{D^{b}}- \left\langle v\sigma_{\pi D^{a}\to D^{b}\gamma}\right\rangle\,{\mathfrak{n}}_ {D^{a}}\right]{\mathfrak{n}}_{\pi}\] \[+3\sum_{b}\left[\,\langle v\sigma_{\pi D^{b}\to\pi D^{a}}\rangle\, \,{\mathfrak{n}}_{D^{*b}}-\langle v\sigma_{\pi D^{a}\to\pi D^{b}}\rangle\,{ \mathfrak{n}}_{D^{a}}\right]{\mathfrak{n}}_{\pi}+\ldots,\] The partial decay rates \(\Gamma_{D^{*a}\to D^{b}\pi}\) and \(\Gamma_{*a,\gamma}\) are given in Eqs. (56) and (6). The reaction rates \(\langle v\sigma_{\pi D^{a}\to\pi D^{b}}\rangle\) and \(\langle v\sigma_{\pi D^{a}\to D^{b}\gamma}\rangle\), which have \(D^{*}\) resonance contributions, are given in Eqs. (69) and (70). The reaction rates \(\langle v\sigma_{\pi D^{a}\to\pi D^{*b}}\rangle\) and \(\langle v\sigma_{\pi D^{*a}\to\pi D^{b}}\rangle\) are given in Eqs. (75) and (76). The reaction rates \(\langle v\sigma_{\pi D^{*a}\to\pi D^{*b}}\rangle\), which have \(D\)-meson \(t\)-channel singularities, are given in Eqs. (84). The rate equations in Eqs. (86) are consistent with the conservation of charm-quark number, which implies that the sum of the ratios of the number densities for all four charm mesons remains constant: \[{\mathfrak{n}}_{\pi}\frac{d}{d\tau}\left(\frac{{\mathfrak{n}}_{D^{0}}+{ \mathfrak{n}}_{D^{+}}+{\mathfrak{n}}_{D^{*0}}+{\mathfrak{n}}_{D^{*+}}}{{ \mathfrak{n}}_{\pi}}\right)=0. \tag{87}\] Given initial conditions on the ratios \({\mathfrak{n}}_{D^{(*)}}(\tau)/{\mathfrak{n}}_{\pi}(\tau)\) of charm-meson and pion number densities, the rate equations in Eqs. (86) can be integrated to determine the ratios at larger \(\tau\). As our initial conditions on the ratio at kinetic freezeout, we take the ratio of the multiplicity of the charm meson before \(D^{*}\) decays and the pion multiplicity: \[\frac{{\mathfrak{n}}_{D^{a}}(\tau_{\rm ff})}{{\mathfrak{n}}_{\pi} (\tau_{\rm ff})} = \frac{(dN_{D^{a}}/dy)_{0}}{dN_{\pi}/dy}, \tag{88a}\] \[\frac{{\mathfrak{n}}_{D^{*a}}(\tau_{\rm ff})}{{\mathfrak{n}}_{\pi} (\tau_{\rm ff})} = \frac{(dN_{D^{*a}}/dy)_{0}}{dN_{\pi}/dy}. \tag{88b}\] In the case of Pb-Pb collisions in the centrality range 0-10% at \(\sqrt{s_{NN}}=5.02\,\)TeV, the multiplicity \(dN_{\pi}/dy\) for a single pion flavor measured by the ALICE collaboration is given in Eq. (22) [25]. The multiplicities \((dN_{D^{*a}}/dy)_{0}\) and \((dN_{D^{a}}/dy)_{0}\) for charm mesons before \(D^{*}\) decays inferred from SHM predictions are given in Eqs. (26) and (27). 
The resulting initial values of the ratios of charm-meson and pion number densities at kinetic freezeout for \(D^{0}\), \(D^{+}\), \(D^{*0}\), and \(D^{*+}\) are 0.00278, 0.00264, 0.00334, and 0.00328, respectively. The solutions to the rate equations in Eqs. (86) with the initial conditions in Eqs. (88) are shown in Fig. 12. The ratios \(\mathfrak{n}_{D^{*0}}/\mathfrak{n}_{\pi}\) and \(\mathfrak{n}_{D^{*+}}/\mathfrak{n}_{\pi}\) decrease exponentially to \(0\) on time scales comparable to the \(D^{*}\) lifetimes. The ratio \(N_{D^{0}}/N_{D^{+}}\) of the numbers of \(D^{0}\) and \(D^{+}\) is predicted to increase from \(1.053\) at kinetic freezeout to about \(2.092\) at the detector. The naive prediction for the effects of \(D^{*}\) decays on the numbers \(N_{D^{0}}\) and \(N_{D^{+}}\) at the detector can be obtained by inserting the initial conditions at kinetic freezeout into Eqs. (9). The naive prediction for the ratio \(N_{D^{0}}/N_{D^{+}}\) at the detector is \(2.255\). This is about \(10\%\) larger than the ratio from solving the rate equations. Thus the rate equations in Eqs. (86) must include reactions other than \(D^{*}\) decays whose effects are not negligible after kinetic freezeout.

### Asymptotic Evolution

As \(\tau\) increases, the pion number density \(\mathfrak{n}_{\pi}(\tau)\) decreases to \(0\) as \(1/V(\tau)\). As \(\mathfrak{n}_{\pi}\) approaches \(0\), most of the reaction rates in Eqs. (86) approach the finite constant reaction rates in the vacuum. The exceptions are \(\langle v\sigma_{\pi D^{*0}\to\pi D^{*+}}\rangle\) and \(\langle v\sigma_{\pi D^{*+}\to\pi D^{*0}}\rangle\), which are given in Eqs. (84b) and (84c). They have contributions with a factor of \(1/\Gamma_{0}\) from a \(D\)-meson \(t\)-channel singularity. Since \(\Gamma_{0}\), which is given in Eq. (57b), decreases to \(0\) in proportion to \(\mathfrak{n}_{\pi}\) as \(\mathfrak{n}_{\pi}\to 0\), these reaction rates increase as \(1/\mathfrak{n}_{\pi}\). The limiting behaviors of the reaction rates \(\langle v\sigma_{\pi D^{*b}\to\pi D^{*a}}\rangle\) as \(\mathfrak{n}_{\pi}\to 0\) in Eqs. (84) are
\[\langle v\sigma_{\pi D^{*0}\to\pi D^{*0}}\rangle \longrightarrow \frac{1}{3\,\mathfrak{n}_{\pi}}\,\frac{\left(B_{00}\,\Gamma_{*0}\right)^{2}}{B_{00}\,\Gamma_{*0}+B_{+0}\,\Gamma_{*+}}, \tag{89a}\]
\[\langle v\sigma_{\pi D^{*0}\to\pi D^{*+}}\rangle \longrightarrow \frac{1}{3\,\mathfrak{n}_{\pi}}\,\frac{\left(B_{00}\,\Gamma_{*0}\right)\left(B_{+0}\,\Gamma_{*+}\right)}{B_{00}\,\Gamma_{*0}+B_{+0}\,\Gamma_{*+}}, \tag{89b}\]
\[\langle v\sigma_{\pi D^{*+}\to\pi D^{*0}}\rangle \longrightarrow \frac{1}{3\,\mathfrak{n}_{\pi}}\,\frac{\left(B_{00}\,\Gamma_{*0}\right)\left(B_{+0}\,\Gamma_{*+}\right)}{B_{00}\,\Gamma_{*0}+B_{+0}\,\Gamma_{*+}}, \tag{89c}\]
\[\langle v\sigma_{\pi D^{*+}\to\pi D^{*+}}\rangle \longrightarrow \frac{1}{3\,\mathfrak{n}_{\pi}}\left(\frac{\left(B_{+0}\,\Gamma_{*+}\right)^{2}}{B_{00}\,\Gamma_{*0}+B_{+0}\,\Gamma_{*+}}+B_{++}\,\Gamma_{*+}\right), \tag{89d}\]
where \(\Gamma_{*a}\) is the decay rate for \(D^{*a}\) in the vacuum given in Eqs. (5) and \(B_{ab}\) is the branching fraction for \(D^{*a}\to D^{b}\pi\) given in Eqs. (4). The factors of \(1/\mathfrak{n}_{\pi}\) in these asymptotic reaction rates can cancel explicit factors of \(\mathfrak{n}_{\pi}\) in the rate equations. At large times \(\tau\), the only terms in the rate equation that survive are 1-body terms with a single factor of a number density \(\mathfrak{n}_{D}\) or \(\mathfrak{n}_{D^{*}}\).
There are 1-body terms from the decays \(D^{*}\to D\pi\) and \(D^{*}\to D\gamma\). The \(t\)-channel singularities produce additional 1-body terms. If the 1-body terms from \(D\)-meson \(t\)-channel singularities are taken into account, the asymptotic rate equations become \[\mathfrak{n}_{\pi}\frac{d}{d\tau}\left(\frac{\mathfrak{n}_{D^{+}} }{\mathfrak{n}_{\pi}}\right) \longrightarrow \left(1-B_{+0}\right)\Gamma_{*+}\,\mathfrak{n}_{D^{*+}}, \tag{90a}\] \[\mathfrak{n}_{\pi}\frac{d}{d\tau}\left(\frac{\mathfrak{n}_{D^{0}} }{\mathfrak{n}_{\pi}}\right) \longrightarrow \Gamma_{*0}\,\mathfrak{n}_{D^{*0}}+B_{+0}\,\Gamma_{*+}\, \mathfrak{n}_{D^{*+}},\] (90b) \[\mathfrak{n}_{\pi}\frac{d}{d\tau}\left(\frac{\mathfrak{n}_{D^{*+}} }{\mathfrak{n}_{\pi}}\right) \longrightarrow -\big{(}\Gamma_{*+}+\gamma\big{)}\,\mathfrak{n}_{D^{*+}}+\gamma \,\mathfrak{n}_{D^{*0}},\] (90c) \[\mathfrak{n}_{\pi}\frac{d}{d\tau}\left(\frac{\mathfrak{n}_{D^{*0}} }{\mathfrak{n}_{\pi}}\right) \longrightarrow -\big{(}\Gamma_{*0}+\gamma\big{)}\,\mathfrak{n}_{D^{*0}}+\gamma \,\mathfrak{n}_{D^{*+}}, \tag{90d}\] where the rate \(\gamma\) is \[\gamma=\frac{1}{1/(B_{00}\,\Gamma_{*0})+1/(B_{+0}\,\Gamma_{*+})}=21.9\ \mathrm{keV}. \tag{91}\] The terms in Eqs. (90) with the factor \(\gamma\) come from \(D\)-meson \(t\)-channel singularities. The asymptotic rate equations in Eqs. (90) can be solved analytically. If the numbers of \(D^{0}\), \(D^{+}\), \(D^{*0}\), and \(D^{*+}\) at kinetic freezeout are \((N_{D^{0}})_{0}\), \((N_{D^{+}})_{0}\), \((N_{D^{*0}})_{0}\), and \((N_{D^{*+}})_{0}\), the predicted asymptotic numbers of \(D^{0}\) and \(D^{+}\) are \[N_{0} = \big{(}N_{0}\big{)}_{0}+\big{(}N_{*0}\big{)}_{0}+B_{+0}\big{(}N_{ *+}\big{)}_{0}-\frac{\left(1-B_{+0}\right)\gamma}{\Gamma_{*+}\Gamma_{*0}+ \left(\Gamma_{*+}\!+\!\Gamma_{*0}\right)\gamma}\left[\Gamma_{*+}\,\big{(}N_{* 0}\big{)}_{0}\!-\!\Gamma_{*0}\,\big{(}N_{*+}\big{)}_{0}\right], \tag{92a}\] \[N_{+} = \big{(}N_{+}\big{)}_{0}+\left(1\!-\!B_{+0}\right)\big{(}N_{*+} \big{)}_{0}+\frac{\left(1-B_{+0}\right)\gamma}{\Gamma_{*+}\Gamma_{*0}+\left( \Gamma_{*+}\!+\!\Gamma_{*0}\right)\gamma}\left[\Gamma_{*+}\,\big{(}N_{*0} \big{)}_{0}\!-\!\Gamma_{*0}\,\big{(}N_{*+}\big{)}_{0}\right], \tag{92b}\] where \(\gamma\) is given in Eq. (91). The coefficients of \((N_{*0})_{0}\) and \((N_{*0})_{+}\) in Eqs. (92) depend only on \(B_{+0}\), \(B_{00}\), and the ratio \(\Gamma_{*0}/\Gamma_{*+}\). The prediction for the difference between the numbers of \(D^{0}\) and \(D^{+}\) are \[N_{0}-N_{+} = 2\,B_{+0}\left(N_{*+}\right)_{0}+\left(N_{0}-N_{+}\right)_{0}+ \left(N_{*0}-N_{*+}\right)_{0} \tag{93}\] \[-\frac{2(1-B_{+0})\,\gamma}{\Gamma_{*+}\Gamma_{*0}+\left(\Gamma_ {*+}\!+\!\Gamma_{*0}\right)\gamma}\left[\Gamma_{*+}\left(N_{*0}\right)_{0}- \Gamma_{*0}\left(N_{*+}\right)_{0}\right].\] If we impose the isospin-symmetry approximations \((N_{0})_{0}\approx(N_{+})_{0}\) and \((N_{*0})_{0}\approx(N_{*+})_{0}\), the difference reduces to \[N_{0}-N_{+}\approx 2\left(B_{+0}-\frac{\left(1-B_{+0}\right)\left(\Gamma_ {*+}-\Gamma_{*0}\right)\gamma}{\Gamma_{*+}\Gamma_{*0}+\left(\Gamma_{*+}\!+\! \Gamma_{*0}\right)\gamma}\right)\left(N_{*+}\right)_{0}. \tag{94}\] The two terms in the parantheses come from \(D^{*}\) decays and the \(D\)-meson \(t\)-channel singularity, respectively. The effect of \(D\)-meson \(t\)-channel singularities is to reduce the coefficient of \((N_{*+})_{0}\) from \(1.35\pm 0.01\), which includes the effects of \(D^{*}\) decays only, to \(1.30\pm 0.01\). 
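Because Eqs. (90) are linear with constant coefficients, they can be integrated directly and checked against the closed forms in Eqs. (92). The short sketch below does both, using indicative vacuum \(D^{*}\) widths and branching fractions that are assumptions of the sketch rather than values quoted in this paper, together with the freezeout ratios given earlier in this section; both methods give the same asymptotic \(D^{0}/D^{+}\) ratio, close to the 2.18 discussed in the following paragraph.

```python
# Sketch: integrate the asymptotic rate equations (90) numerically and compare with the
# closed forms (92). The D* widths and branching fractions are indicative values assumed
# for this sketch; the proper time tau is measured in keV^-1.
from scipy.integrate import solve_ivp

G_sp, G_s0 = 83.4, 55.3         # assumed vacuum widths of D*+ and D*0 in keV
B_p0, B_00 = 0.677, 0.647       # assumed branching fractions for D*+ -> D0 pi and D*0 -> D0 pi
gamma = 1.0 / (1.0 / (B_00 * G_s0) + 1.0 / (B_p0 * G_sp))   # Eq. (91), roughly 21.9 keV

def rhs(tau, y):
    """Right-hand sides of Eqs. (90) for the ratios (D+, D0, D*+, D*0)."""
    Dp, D0, Dsp, Ds0 = y
    return [(1.0 - B_p0) * G_sp * Dsp,
            G_s0 * Ds0 + B_p0 * G_sp * Dsp,
            -(G_sp + gamma) * Dsp + gamma * Ds0,
            -(G_s0 + gamma) * Ds0 + gamma * Dsp]

y0 = [0.00264, 0.00278, 0.00328, 0.00334]   # freezeout ratios for D+, D0, D*+, D*0 quoted above
sol = solve_ivp(rhs, (0.0, 1.0), y0, rtol=1e-10, atol=1e-14)
ratio_ode = sol.y[1, -1] / sol.y[0, -1]

# Closed-form asymptotic numbers from Eqs. (92), for comparison:
den = G_sp * G_s0 + (G_sp + G_s0) * gamma
shift = (1.0 - B_p0) * gamma * (G_sp * y0[3] - G_s0 * y0[2]) / den
N0 = y0[1] + y0[3] + B_p0 * y0[2] - shift
Np = y0[0] + (1.0 - B_p0) * y0[2] + shift

print("D0/D+ from integrating Eqs. (90):", ratio_ode)
print("D0/D+ from Eqs. (92):            ", N0 / Np)
```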
The change in the coefficient is small but statistically significant. We use our initial conditions on the ratios of the charm-meson/pion number densities at kinetic freezeout to illustrate the effect of \(t\)-channel singularities on the ratio \(N_{0}/N_{+}\) of the observed numbers of \(D^{0}\) and \(D^{+}\). The ratio before \(D^{*}\) decays is \((N_{D^{0}})_{0}/(N_{D^{+}})_{0}=1.053\). The predicted numbers of \(D^{0}\) and \(D^{+}\) at the detector are given in Eqs. (92). Their ratio is predicted to increase to \(2.178\pm 0.016\) at the detector, where the error bar is from \(B_{+0}\), \(B_{00}\), \(\Gamma_{*0}\), and \(\Gamma_{*+}\) only. The naive ratio \(N_{D^{0}}/N_{D^{+}}\) ignoring \(t\)-channel singularities, which is obtained using Eqs. (9), is \(2.255\pm 0.014\). The difference between the predicted ratio taking into account \(t\)-channel singularities and the naive prediction is \(-0.077\pm 0.006\), which differs from \(0\) by about \(13\) standard deviations. ## VII Conclusion The reactions \(\pi D^{*}\to\pi D^{*}\) have \(t\)-channel singularities in \(6\) of the \(10\) scattering channels. These reactions can proceed at tree level through the decay \(D^{*}\to\pi D\) followed by the inverse decay \(\pi D\to D^{*}\). The \(t\)-channel singularity appears because the intermediate \(D\) can be on shell. The tree-level cross section diverges inside a narrow interval of the center-of-mass energy near the threshold, which is given by Eq. (1) if the incoming and outgoing \(\pi\) and \(D\) have the same flavor. If the singularity is regularized by inserting the width \(\Gamma\) of the \(D\) into its propagator, the cross section has a term with a factor of the \(D\) lifetime \(1/\Gamma\). The resulting enormous cross section is unphysical, because the lifetimes of the incoming and outgoing \(D^{*}\) are many orders of magnitude smaller than the \(D\) lifetime. A more physical regularization of the \(t\)-channel singularity in the reaction rate for \(\pi D^{*}\to\pi D^{*}\) could perhaps be obtained by a resummation of loop diagrams. There are \(n\)-loop diagrams with \(n+1\)\(D\) propagators, \(n\)\(D^{*}\) propagators, and \(n\) pion propagators in which all \(2n+1\) charm-meson propagators are nearly on shell. A resummation of these diagrams to all orders could produce a regularization of the \(t\)-channel singularity that is determined by the \(D^{*}\) width. As pointed out by Grzadkowski, Iglicki, and Mrowczynski in Ref. [1], the thermal widths of particles in a medium provide a physical regularization of \(t\)-channel singularities. In a hadronic medium, the \(t\)-channel singularity in the reaction rate for \(\pi D^{*}\to\pi D^{*}\) is regularized by the thermal width of \(D\). A physical example of such a hadronic medium is the hadron gas produced by a central relativistic heavy-ion collision. The effects of the hadron gas are particularly simple near and after kinetic freezeout, because it can be accurately approximated by a pion gas. The thermal widths of the charm mesons come primarily from coherent pion forward scattering. At leading order in the pion interactions, the thermal widths \(\Gamma_{+}\) of \(D^{+}\) and \(\Gamma_{0}\) of \(D^{0}\) are given by Eqs. (57). Before kinetic freezeout, the \(D\) widths are determined by the decreasing temperature \(T\). After kinetic freezeout, the \(D\) widths are determined by the kinetic freezeout temperature and the decreasing pion number density \(\mathfrak{n}_{\pi}\). 
In a hadronic medium, the \(t\)-channel singularities in the reaction rates for \(\pi D^{*}\to\pi D^{*}\) produce terms in the average reaction rates \(\langle v\sigma_{\pi D^{*}\to\pi D^{*}}\rangle\) inversely proportional to the thermal widths \(\Gamma_{+}\) and \(\Gamma_{0}\). These terms come from \(\pi D^{*}\) scattering in regions near the threshold, and they are sensitive to differences \(\Delta-m_{\pi}\) between \(D^{*}-D\) mass differences in the medium and pion masses in the medium. There are also nonsingular contributions from \(\pi D^{*}\) scattering that are determined primarily by the temperature \(T\) and are insensitive to the values of \(\Delta-m_{\pi}\). We found a simple prescription for the nonsingular reaction rates that allowed the total reaction rate to be approximated by the sum of the nonsingular term and the \(t\)-channel singularity terms. Our prescription for the nonsingular reaction rate is simply the rate in the limit \(\Delta\to m_{\pi}\). The resulting expressions for the reaction rates \(\langle v\sigma_{\pi D^{*a}\to\pi D^{*b}}\rangle\) in the pion gas after kinetic freezeout are given in Eqs. (84). After kinetic freezeout, the most dramatic dependence of reaction rates for \(\pi D^{*}\to\pi D^{*}\) on \(\mathfrak{n}_{\pi}\) comes from a multiplicative factor of \(1/\mathfrak{n}_{\pi}\). This dependence for \(\langle v\sigma_{\pi D^{*0}\to\pi D^{*+}}\rangle\) is made explicit in Eq. (85). In rate equations for the evolution of number densities, a reaction rate is multiplied by the number density of each of the incoming particles. In the hadron gas after kinetic freezeout, all the number densities are decreasing in inverse proportion to the expanding volume \(V(\tau)\). Thus \(n\)-body reactions, which have \(n\geq 2\) incoming particles, are suppressed compared to decays, which are 1-body reactions, by the additional factors of the number densities. A 2-body reaction whose rate is proportional to an inverse power of a number density provides an exception, because its effects in the rate equation can be comparable to 1-body terms. In particular, the effects of the reactions \(\pi D^{*}\to\pi D^{*}\) can be comparable to 1-body terms because of the multiplicative factor of \(1/\mathfrak{n}_{\pi}\) in the \(t\)-channel singularity term. Rate equations for the number density ratios \(\mathfrak{n}_{D^{a}}/\mathfrak{n}_{\pi}\) and \(\mathfrak{n}_{D^{*a}}/\mathfrak{n}_{\pi}\) are given in Eqs. (86). The numerical solutions of these rate equations after kinetic freezeout for specific initials conditions motivated by the Statistical Hadronization Model are illustrated in Fig. 12. There is a small but significant difference between the asymptotic charm-meson ratios and the naive predictions from Eqs. (9), which take into account only the decays of \(D^{*0}\) and \(D^{*+}\). The asymptotic forms of the rate equations in the limit \(\mathfrak{n}_{\pi}\to 0\) are given in Eqs. (90). The only rates that remain in that limit are 1-body terms, but those terms come not only from \(D^{*}\) decays but also from the \(t\)-channel singularities in \(\pi D^{*}\to\pi D^{*}\) reactions. The rates are completely determined by \(D^{*}\) decay rates and \(D^{*}\) branching fractions in the vacuum. From the analytic solutions of the asymptotic rate equations, we deduced the simple approximations in Eqs. (92) for the asymptotic numbers of \(D^{0}\) and \(D^{+}\) given the numbers of the \(D^{0}\), \(D^{+}\), \(D^{*0}\), and \(D^{*+}\) at kinetic freezeout. 
The new terms from \(t\)-channel singularities are those with a factor of the rate \(\gamma\). The predicted deviations of charm-meson ratios from the naive predictions from Eqs. (9) are small but much larger than the statistical errors from the \(D^{*}\) decay rates and branching fractions. The analytic predictions from Eqs. (92) give good approximations to numerical solutions of the more accurate rate equations in Eqs. (86). There are other charm-meson reactions with \(t\)-channel singularities including \(\pi D^{*}\to\pi\pi D\) and \(\pi\pi D\to\pi D^{*}\). The reaction \(\pi D^{*}\to\pi\pi D\) has a pion \(t\)-channel singularity from the decay \(D^{*}\to D\pi\) followed by the scattering \(\pi\pi\to\pi\pi\). In a hadronic medium, the \(t\)-channel singularity is regularized by the thermal width \(\Gamma_{\pi}\) of the pion. In a pion gas, the leading term in \(\Gamma_{\pi}\) is proportional to \(\mathfrak{n}_{\pi}\). Our preliminary result for the \(t\)-channel singularity contribution to the reaction rate for \(\pi D^{*}\to\pi\pi D\) can be reduced to \(\Gamma_{D^{*}\to D\pi}/\mathfrak{n}_{\pi}\), which is comparable to the \(t\)-channel singularity term in the reaction rate for \(\pi D^{*0}\to\pi D^{*+}\) in Eq. (85). This suggests that the contributions from pion \(t\)-channel singularities may be comparable to those from \(D\)-meson \(t\)-channel singularities. There have been previous studies of the effects of a thermal hadronic medium on charm mesons [31; 32; 33; 34]. In these studies, isospin splittings have been ignored and therefore the possibility of \(t\)-channel singularities has not been considered. It might be worthwhile to look for other aspects of the thermal physics of charm mesons in which the effects of \(t\)-channel singularities are significant. One such aspect is the production of the exotic heavy hadrons \(X(3872)\) and \(T_{c}^{+}(3875)\). Their tiny binding energies relative to a charm-meson-pair threshold imply that they are loosely bound charm-meson molecules. In previous studies of the production of charm-meson molecules, it has been assumed that they are produced before kinetic freezeout [35; 36; 37; 38; 39; 40]. It is possible that \(t\)-channel singularities could have a significant effect on their production after kinetic freezeout. The problem of \(t\)-channel singularities is an unavoidable aspect of reactions involving unstable particles. Unstable particles are ubiquitous in hadronic physics. In the Standard Model of particle physics, the weak bosons and the Higgs are unstable particles. Most models of physics beyond the Standard Model have unstable particles. We have identified a simple aspect of charm-meson physics in which the effects of \(t\)-channel singularities are significant. This provides encouragement to look for other effects of \(t\)-channel singularities in hadronic, nuclear, and particle physics. ###### Acknowledgements. KI would like to thank Ulrich Heinz for many helpful discussions during the early stages of this project. This work was supported in part by the U.S. 
Department of Energy under grant DE-SC0011726, by the Ministry of Science, Innovation and Universities of Spain under grant BES-2017-079860, by the National Natural Science Foundation of China (NSFC) under grant 11905112, by the Natural Science Foundation of Shandong Province of China under grant ZR2019QA012, by the Alexander von Humboldt Foundation, and by NSFC and the Deutsche Forschungsgemeinschaft (DFG) through the Sino-German Collaborative Research Center TRR110 (NSFC grant 12070131001, DFG Project-ID 196253076-TRR110). ## Appendix A Feynman rules for HH\(\chi\)Eft In \(\chi\)EFT, the propagator for a pion with momentum \(p\) and isospin indices \(i,j\) is \[\boxed{\frac{i\,\delta^{ij}}{p^{2}-m_{\pi}^{2}+i\epsilon}.} \tag{101}\] At LO in \(\chi\)EFT, the only interaction parameter for pions is the pion decay constant \(f_{\pi}\). The four-pion vertex is \[\pi^{i}(p)\pi^{j}(q)\to\pi^{m}(p^{\prime})\pi^{n}(q^{\prime}):\] \[\boxed{\frac{2i}{f_{\pi}^{2}}\left[s\,\delta^{ij}\delta^{mn}+t\, \delta^{im}\delta^{jn}+u\,\delta^{in}\delta^{jm}-\frac{3m_{\pi}^{2}+Q^{2}}{3}( \delta^{ij}\delta^{mn}+\delta^{im}\delta^{jn}+\delta^{in}\delta^{jm})\right],}\] (A2) where the Mandelstam variables are \(s=(p+q)^{2}\), \(t=(p-p^{\prime})^{2}\), and \(u=(p-q^{\prime})^{2}\) and \(Q^{2}=p^{2}+q^{2}+p^{\prime\,2}+q^{\prime\,2}-4m_{\pi}^{2}\). The interactions of charm mesons with pions can be described using heavy-hadron chiral effective field theory (HH\(\chi\)EFT). In HH\(\chi\)EFT, the 4-momentum of a charm meson is expressed as the sum of \(Mv\), with \(v\) a velocity 4-vector that satisfies \(v^{2}=1\), and a residual 4-momentum \(p\). The propagator for a pseudoscalar charm meson \(D\) with momentum \(Mv+p\) and isospin indices \(a,b\) is \[\boxed{\frac{i\,\delta_{ab}}{2(v\cdot p+i\epsilon)}.}\] (A3) In amplitudes that are sensitive to isospin splittings, \(v\cdot p\) in the propagator for \(D^{a}\) should be replaced by \(v\cdot p-(M_{a}-M)\). If \(D^{a}\) can be on its mass shell, the term \(+i\epsilon\) in the denominator should be replaced by \(+i\Gamma_{a}/2\), where \(\Gamma_{a}\) is the width of \(D^{a}\). The propagator for a vector charm meson \(D^{*}\) with momentum \(Mv+p\) and isospin indices \(a,b\) is \[\boxed{\frac{i\,\delta_{ab}\left(-g_{\mu\nu}+v_{\mu}v_{\nu}\right)}{2(v\cdot p -\Delta+i\epsilon)},}\] (A4) where \(\Delta=M_{*}-M\) is the mass difference between the vector and pseudoscalar meson. The mass shell for \(D^{(*)}\) is \(v\cdot p=\Delta\). In amplitudes that are sensitive to isospin splittings, \(v\cdot p-\Delta\) in the propagator for \(D^{*a}\) should be replaced by \(v\cdot p-(M_{*a}-M)\). If \(D^{*a}\) can be on its mass shell, the term \(+i\epsilon\) in the denominator should be replaced by \(+i\Gamma_{*a}/2\), where \(\Gamma_{*a}\) is the width of \(D^{*a}\). The interactions between charm meson and pions in HH\(\chi\)EFT at LO are determined by the pion decay constant \(f_{\pi}\) and a dimensionless coupling constant \(g_{\pi}\). 
The vertices for \(D^{(*)}\,\pi\to D^{(*)}\,\pi\) are \[D^{a}\,\pi^{i}(q) \longrightarrow D^{b}\,\pi^{j}(q^{\prime}): \boxed{+\frac{i}{2f_{\pi}^{2}}\,v\!\cdot\!(q+q^{\prime})\,[ \sigma^{i},\sigma^{j}]_{ab},}\] (A5a) \[D^{*a}_{\mu}\,\pi^{i}(q) \longrightarrow D^{*b}_{\nu}\,\pi^{j}(q^{\prime}): \boxed{-\frac{i}{2f_{\pi}^{2}}\,g^{\mu\nu}\,v\!\cdot\!(q+q^{ \prime})\,[\sigma^{i},\sigma^{j}]_{ab}.}\] (A5b) The vertices for \(D^{(*)}\to D^{(*)}\pi(q)\) are \[D^{*a}_{\mu} \longrightarrow D^{b}\,\pi^{i}(q): \boxed{+i\frac{\sqrt{2}\,g_{\pi}}{f_{\pi}}\,\sigma^{i}_{ab}\,q^{ \mu},}\] (A6a) \[D^{a} \longrightarrow D^{*b}_{\mu}\,\pi^{i}(q): \boxed{-i\frac{\sqrt{2}\,g_{\pi}}{f_{\pi}}\,\sigma^{i}_{ab}\,q^{ \mu},}\] (A6b) \[D^{*a}_{\mu} \longrightarrow D^{*b}_{\nu}\,\pi^{i}(q): \boxed{+i\frac{\sqrt{2}g_{\pi}}{f_{\pi}}\,\sigma^{i}_{ab}\,\epsilon ^{\mu\nu\alpha\beta}\,v_{\alpha}\,q_{\beta}.}\] (A6c) The vertices for \(D^{(*)}\pi(q)\to D^{(*)}\) are obtained by replacing \(q\) by \(-q\). Our convention for the Levi-Civita tensor in Eq. (100c) is \(\epsilon_{0123}=+1\). ## Appendix B Integrals over the Momentum of a Thermal Pion In this Appendix, we evaluate the integrals over the pion momentum that appear in the on-shell charm-meson self energies in HH\(\chi\)EFT at LO. ### \(i\) \(\epsilon\) Prescriptions The on-shell self energies for the charm mesons \(D^{a}\) and \(D^{*a}\) are given in Eqs. (41) and (50). The thermal average that appears in these self energies is \[{\cal F}_{cd}=\left\langle\frac{q^{2}}{\omega_{cdq}\left(q^{2}-q_{cd}^{2}+i \epsilon\right)}\right\rangle, \tag{101}\] where \(\omega_{cdq}=\sqrt{m_{\pi cd}^{2}+q^{2}}\), \(q_{cd}^{2}=\Delta_{cd}^{2}-m_{\pi cd}^{2}\), \(\Delta_{cd}\) is the \(D^{*c}\)-\(D^{d}\) mass splitting, and \(m_{\pi cd}\) is the mass of the pion produced by the transition \(D^{*c}\to D^{d}\pi^{i}\). The angular brackets represent the average over the Bose-Einstein momentum distribution of the pion, which is defined in Eq. (21) as the ratio of momentum integrals. The numerator of the thermal average \({\cal F}_{cd}\) in Eq. (101) can be expressed as an integral over a single momentum variable of the form \[{\cal F}(\sigma)=\lim_{\epsilon\to 0^{+}}\int_{0}^{\infty}{\rm d}q\,\frac{F(q^{2}) }{q^{2}-\sigma+i\epsilon}, \tag{102}\] where \(F(q^{2})\) is a smooth, real-valued function that decreases as \(q^{4}\) as \(q\to 0\) and decreases exponentially to \(0\) as \(q\to\infty\). The real parameter \(\sigma\), which can be positive or negative, is small compared to the scale of \(q^{2}\) set by \(F(q^{2})\). We would like to expand \({\cal F}(\sigma)\) in powers of \(\sigma\). The function \({\cal F}(\sigma)\) can be expressed as the sum of a principal-value integral and the integral of a delta function: \[{\cal F}(\sigma) = \int_{0}^{\infty}{\rm d}q\,F(q^{2})\left({\cal P}\frac{1}{q^{2}- \sigma}-i\pi\,\delta(q^{2}-\sigma)\right) \tag{103}\] \[= \int_{0}^{\infty}{\rm d}q\,\frac{F(q^{2})-F(\sigma)}{q^{2}- \sigma}-i\,\frac{\pi}{2\sqrt{\sigma}}\;F(\sigma)\,\theta(\sigma).\] We have used an identity to express the principal-value integral in terms of an ordinary integral. The Taylor expansion of the real part of \({\cal F}(\sigma)\) can be obtained by expanding the integrand in the second line of Eq. 
(103) as a Taylor expansion in \(\sigma\): \[{\rm Re}\big{[}{\cal F}(\sigma)\big{]} = \int_{0}^{\infty}{\rm d}q\,\frac{F(q^{2})-F(0)}{q^{2}}+\sigma\int _{0}^{\infty}{\rm d}q\,\frac{F(q^{2})-F(0)-F^{\prime}(0)\,q^{2}}{q^{4}} \tag{104}\] \[+\,\sigma^{2}\int_{0}^{\infty}{\rm d}q\,\frac{F(q^{2})-F(0)-F^{ \prime}(0)\,q^{2}-\frac{1}{2}\,F^{\prime\prime}(0)\,q^{4}}{q^{6}}+\ldots.\] The first term \(F(q^{2})/(q^{2})^{n}\) in each integrand can be obtained simply by expanding the left side of Eq. (26) in powers of \(\sigma\). The remaining terms in the integrand subtract the divergent terms in the Laurent expansion of \(F(q^{2})/(q^{2})^{n}\) in \(q^{2}\). ### Integral over Momentum The thermal average \(\mathcal{F}_{cd}\) is defined in Eq. (24). If \(\Delta_{cd}>m_{\pi cd}\), its real part can be expressed in terms of a principal-value integral that can be reduced to the form in the first term of the second line of Eq. (26): \[\mathrm{Re}\big{[}\mathcal{F}_{cd}\big{]}\;=\;\frac{1}{2\pi^{2}\,\mathfrak{n}_ {\pi}}\int_{0}^{\infty}dq\left(\frac{q^{4}}{\omega_{cdq}}\mathfrak{f}_{\pi}( \omega_{cdq})-\frac{q_{cd}^{4}}{\Delta_{cd}}\mathfrak{f}_{\pi}(\Delta_{cd}) \right)\frac{1}{q^{2}-q_{cd}^{2}}, \tag{27}\] where \(\omega_{cdq}=\sqrt{m_{\pi cd}^{2}+q^{2}}\) and \(q_{cd}^{2}=\Delta_{cd}^{2}-m_{\pi cd}^{2}\). The denominator has a zero at \(q_{cd}=\sqrt{\Delta_{cd}^{2}-m_{\pi cd}^{2}}\). The subtraction of the numerator makes the integral convergent. If \(\Delta_{cd}<m_{\pi cd}\), the subtraction at \(\omega_{cdq}=\Delta_{cd}\) is not necessary because the integral is convergent. The imaginary part of \(\mathcal{F}_{cd}\) is nonzero only if \(\Delta_{cd}>m_{\pi cd}\). It can be evaluated analytically using a delta function as in Eq. (26): \[\mathrm{Im}\big{[}\mathcal{F}_{cd}\big{]}\;=\;-\frac{1}{4\pi\,\mathfrak{n}_{ \pi}}\left[\frac{\mathfrak{f}_{\pi}(\Delta_{cd})}{\Delta_{cd}}\,q_{cd}^{3} \right]\theta\big{(}\Delta_{cd}-m_{\pi cd}\big{)}. \tag{28}\] ### Expansion in Isospin Splittings The thermal average over the Bose-Einstein distribution for a pion is defined in Eq. (21). The thermal average \(\mathcal{F}_{cd}\) defined in Eq. (24) depends on \(\Delta_{cd}^{2}-m_{\pi cd}^{2}\), which is linear in isospin splittings. The thermal average can be expanded in powers of \(\Delta_{cd}^{2}-m_{\pi cd}^{2}\) using the results presented in Section B.1. The real part can be expanded in integer powers of isospin splittings divided by \(m_{\pi}\). The leading term in the expansion of the real part of \(\mathcal{F}_{cd}\) is \[\mathrm{Re}\big{[}\mathcal{F}_{cd}\big{]}\;\approx\;\left\langle\frac{1}{ \omega_{q}}\right\rangle. \tag{29}\] The imaginary part of \(\mathcal{F}_{cd}\) is nonzero only if \(\Delta_{cd}>m_{\pi cd}\). It can be expanded in half-integer powers of isospin splittings divided by \(m_{\pi}\). The leading term in the expansion of the imaginary part is \[\mathrm{Im}\big{[}\mathcal{F}_{cd}\big{]}\;\approx\;\frac{\mathfrak{f}_{\pi} (m_{\pi})}{4\pi\,\mathfrak{n}_{\pi}}\left(-\frac{1}{m_{\pi}}\,q_{cd}^{3} \right)\theta\big{(}\Delta_{cd}-m_{\pi cd}\big{)}. \tag{30}\]
2304.11742
Six-Functor Formalisms I : Constructing functors using category of simplices
This article is first in a series of papers where we reprove the statements in constructing the Enhanced Operation Map and the abstract six-functor formalism developed by Liu-Zheng. In this paper, we prove a theorem regarding constructing functors between simplicial sets using the category of simplices. We shall reprove the statement using the language of marked simplicial sets and studying injective model structure on functor categories. The theorem is a crucial tool and will be used repeatedly in reproving the $\infty$-categorical compactification and constructing the so called Enhanced Operation Map in the forthcoming articles.
Chirantan Chowdhury
2023-04-23T20:22:21Z
http://arxiv.org/abs/2304.11742v1
# Six-Functor Formalisms I : Constructing functors using category of simplices. ###### Abstract This article is first in a series of papers where we reprove the statements in constructing the Enhanced Operation Map and the abstract six-functor formalism developed by Liu-Zheng. In this paper, we prove a theorem regarding constructing functors between simplicial sets using the category of simplices. We shall reprove the statement using the language of marked simplicial sets and studying injective model structure on functor categories. The theorem is a crucial tool and will be used repeatedly in reproving the \(\infty\)-categorical compactification and constructing the so called Enhanced Operation Map in the forthcoming articles. ###### Contents * 1 Introduction * 2 Marked Simplicial Sets and Cartesian Model Structure * 2.1 Marked Simplicial Sets * 2.2 Cartesian Model Structure * 3 Functors from the Category of simplices * 3.1 Properties and model structure on \((\mathrm{Set}_{\Delta})^{\mathcal{J}}\) * 3.2 The category of simplices. * 3.3 The mapping functor. * 4 The main theorem * 4.1 Statement and the proof. * 4.2 Relating the main theorem to 2-categorical compactifications. Appendices * A Remarks on absolute pullbacks. * B Remarks on anodynes. ## 1 Introduction The six-functor formalism was developed by Grothendieck and others in order to understand duality statements. The initial setup was developed in the setting of triangulated categories. In the recent years with the development of \(\infty\)-categories due to Lurie ([10],[11] and [12]), six-functor formalism are formulated in the setting of stable presentable \(\infty\)-categories. This was developed by Liu and Zheng ([8] and [9]) in order to extend six operations for derived categories of \(\ell\)-adic sheaves from schemes to algebraic stacks. Since then, there have been many developments of six functor formalism in many contexts in algebraic, arithmetic geometry and motivic homotopy theory([5], [13], [14], [7] and [1]). Unfortunately, the papers [8] and [9] are not published for quite some time and a lot of recent developments are based on crucial results developed in these papers. The main goal of the series of articles (this paper and [2], [3]) is to reprove these main results assuming the language of higher categories developed by Lurie. The two papers written by Liu and Zheng involves simplicial tools to extend the six functor formalism from smaller to bigger categories (schemes to algebraic stacks for example). The crucial ground-breaking idea due to them is constructing the _Enhanced Operation Map_ which is a morphism between simplicial sets encoding all the six functors and important properties like base change, projection formula. They also developed an algorithm called the DESCENT program which is a method to extend the formalism from a smaller subcategory to the bigger category using the Enhanced Operation Map. Recently, Mann's thesis ([13]) uses a simpler convenient notion of a _3-functor formalism_ and proves analogous results of the DESCENT program in order to construct six functor formalism for \(\mathfrak{p}\)-torsion etale sheaves associated to a small \(\mathfrak{v}\)-stack. Although the notion of \(\mathfrak{3}\)-functor formalism is much easier to explain and understand, but the construction of such formalisms still uses crucial theorems proved by Liu and Zheng which are very technical and hard to understand. 
Liu and Zheng prove a technical statement ([8, Proposition 2.1.4]) which is used repeatedly to construct the Enhanced Operation Map. In this article, we reprove a simpler version of this technical statement (Theorem 4.1.1) which is sufficient for studying the six-functor formalism. Let us motivate this technical statement by revisiting the extraordinary pushforward map in the context of etale cohomology of schemes. Let \(\mathfrak{f}:\mathsf{X}\to\mathsf{Y}\) be a separated morphism of finite type between quasi-compact and quasi-separated schemes and let \(\mathsf{\Lambda}\) be a torsion ring. We have the extraordinary pushforward map \[\mathfrak{f}_{!}:\mathcal{D}(\mathsf{X},\mathsf{\Lambda})\to\mathcal{D}(\mathsf{Y},\mathsf{\Lambda})\] which when restricted to open immersions is the map \(\mathfrak{f}_{\#}\) and to proper morphisms the map \(\mathfrak{f}_{*}\). The construction of \(\mathfrak{f}_{!}\) involves the general theory of gluing two pseudofunctors developed by Deligne ([4, Section 3]). Let us briefly recall the setup of the construction. **Definition 1.0.1**.: For any morphism \(\mathfrak{f}\) as above, we consider the \(2\)-category of compactifications \(\operatorname{Sch}^{\mathrm{comp}}\) whose objects are schemes and whose morphisms are triangles (1) i.e. factorizations of a morphism \(\mathsf{X}\to\mathsf{Y}\) into an open immersion \(\mathsf{j}\) followed by a proper morphism \(\mathfrak{p}\). It is important to note that one can compose morphisms of this form due to Nagata's theorem of compactification. Then one can define a pseudo-functor \(\mathsf{F}_{\mathsf{c}}:\operatorname{Sch}^{\mathrm{comp}}\to\operatorname{Cat}_{1}\) which sends a scheme \(\mathsf{X}\) to \(\mathsf{D}(\mathsf{X},\mathsf{\Lambda})\) and a triangle of the form above to the composition \(\mathfrak{p}_{*}\circ\mathsf{j}_{\#}\) (here \(\operatorname{Cat}_{1}\) denotes the \(2\)-category of categories). The theory of gluing in \(2\)-categories tells us that the functor \(\mathsf{F}_{\mathsf{c}}\) can be extended to a functor \(\mathfrak{f}_{!}\) from the category \(\operatorname{Sch}^{\prime}\) whose objects are schemes and whose morphisms are separated morphisms of finite type. In other words, the diagram (2) commutes. We shall reprove this statement (Corollary 4.2.1) by the end of the article using Theorem 4.1.1. The main points involved in the above construction are : 1. For any morphism \(f\in Sch^{\prime}\), the category of compactifications \(\operatorname{Comp}(f)\) (the fiber of the projection \(pr\) over \(f\)) is filtered. In other words, given any two compactifications, there is a third compactification which maps to both of them (this is shown in proving Corollary 4.2.1). 2. Any two compactifications give equivalent functors between the derived categories. In other words, for two compactifications \((j,p)\) and \((j^{\prime},p^{\prime})\) we have \(p_{*}\circ j_{\#}\cong p^{\prime}_{*}\circ j^{\prime}_{\#}\). Thus we have a functor between categories: \[\operatorname{Comp}(f)\to Fun^{\cong}([1],Cat_{1})\] (3) where \(Fun^{\cong}([1],Cat_{1})\) is the largest groupoid inside \(Fun([1],Cat_{1})\). In the setting of \(\infty\)-categories, we replace the category \(Sch^{comp}\) by a simplicial set \(\delta^{*}_{2}N(Sch^{\prime})^{cart}_{P,O}\). The \(n\)-simplices of \(\delta^{*}_{2}N(Sch^{\prime})^{cart}_{P,O}\) are \(n\times n\) grids of the form (4) where vertical arrows are open, horizontal arrows are proper and each square is a pullback square. 
Also one has a natural morphism \(p_{Sch^{\prime}}:\delta^{*}_{2}N(Sch^{\prime})^{cart}_{P,O}\to N(Sch^{\prime})\) induced by composition along the diagonal. The result of gluing that we shall reprove in the next article ([2]) is the following: **Theorem 1.0.2**.: _[_8_, Corollary 0.3]_ _Let \(F:\delta^{*}_{2}N(Sch^{\prime})^{cart}_{P,O}\to\mathcal{D}\) be a functor where \(\mathcal{D}\) is an \(\infty\)-category. Then there exists a functor \(F^{\prime}:N(Sch^{\prime})\to\mathcal{D}\) such that the diagram_ (5) _commutes._ The above theorem and the discussion above on the level of \(2\)-categories pose the following question : **Question 1.0.3**.: Given a morphism of simplicial sets \(i:K^{\prime}\to K\) and a morphism \(f^{\prime}:K^{\prime}\to\mathcal{C}\) where \(\mathcal{C}\) is an \(\infty\)-category, under what conditions can we construct a morphism \(f:K\to\mathcal{C}\) such that (6) commutes? Let us try to reformulate the conditions for gluing on the level of \(2\)-categories in the setting of simplicial sets: 1. For every morphism \(f\), there exists a filtered category \(\operatorname{Comp}(f)\). In the setting of simplicial sets, this must generalize to associating a weakly contractible simplicial set (a filtered category is weakly contractible) to every \(\sigma:\Delta^{n}\to K\). This motivates the use of _the category of simplices_. The category of simplices consists of objects which are pairs \((n,\sigma)\) where \(n\geq 0\) and \(\sigma:\Delta^{n}\to K\) (see Definition 3.2.1 for the precise definition). The above condition can be reformulated into the existence of a functor \[\mathcal{N}:(\Delta_{/K})^{op}\to\operatorname{Set}_{\Delta}\] (7) such that \(\mathcal{N}(n,\sigma)\) is weakly contractible. 2. The functor in Eq. (3) can be realized as a morphism in the functor category \(\operatorname{Fun}((\Delta_{/K})^{op},\operatorname{Set}_{\Delta})\). The left hand side of the functor is replaced by the functor \(\mathcal{N}\) in the above paragraph. A generalization of \(\operatorname{Fun}^{\cong}([1],\mathcal{D})\) is the _mapping functor_ (Definition 3.3.1) \[\operatorname{Map}[K,\mathcal{C}]:(\Delta_{/K})^{op}\to\operatorname{Set}_{\Delta}\ \ (n,\sigma)\to\operatorname{Fun}^{\cong}(\Delta^{n},\mathcal{C}).\] (8) Then the simplicial analogue of Eq. (3) is the existence of a natural transformation \[\alpha:\mathcal{N}\to\operatorname{Map}[K,\mathcal{C}].\] (9) We need some more notation in order to state the theorem. For any functor \(F\in\operatorname{Fun}((\Delta_{/K})^{op},\operatorname{Set}_{\Delta})\), let \(\Gamma(F)=\lim_{(\Delta_{/K})^{op}}F\in\operatorname{Set}_{\Delta}\) (Definition 3.1.4) and let \(i^{*}F\) be the functor \((\Delta_{/K^{\prime}})^{op}\to(\Delta_{/K})^{op}\xrightarrow{F}\operatorname{Set}_{\Delta}\) (Remark 3.1.8). The theorem is stated as follows: **Theorem 1.0.4**.: _(Theorem 4.1.1) Let \(K^{\prime},K\) be simplicial sets and \(\mathcal{C}\) be an \(\infty\)-category. Let \(f^{\prime}:K^{\prime}\to\mathcal{C}\) and \(i:K^{\prime}\to K\) be morphisms of simplicial sets. Let \(\mathcal{N}\in(\operatorname{Set}_{\Delta})^{(\Delta_{/K})^{op}}\) and \(\alpha:\mathcal{N}\to\operatorname{Map}[K,\mathcal{C}]\) be a natural transformation. If_ 1. _(Weak contractibility) for_ \((n,\sigma)\in\Delta_{/K}\)_,_ \(\mathcal{N}(n,\sigma)\) _is weakly contractible,_ 2. 
_(Comptability with_ \(f^{\prime}\)_) there exists_ \(\omega\in\Gamma(i^{*}\mathcal{N})_{\lozenge}\) _such that_ \(\Gamma(i^{*}\alpha)(\omega)=f^{\prime}\)__ _then there exists a map_ \(f:K\to\mathcal{C}\) _such that the following diagram_ (10) _commutes. In other words,_ \(f^{\prime}\cong f\circ i\) _in_ \(\operatorname{Fun}(K^{\prime},\mathcal{C})\)_._ The above theorem describes some conditions in order to ensure the construction of the functor \(f\). In the theorem of \(\infty\)-categorical compactification and other theorems, we shall verify these conditions in order to construct functors. We briefly describe the sections of the article: 1. In Section 2, we recall relevant notions of marked simplicial sets due to Lurie ([10, Section 3.1]). In particular we review the Cartesian model structure on marked simplicial sets. 2. In Section 3, we firstly study the injective model structure on the functor category \(\operatorname{Fun}(\mathcal{J},\operatorname{Set}_{\Delta})\) where \(\mathcal{J}\) is an ordinary category. We review the category of simplices and the mapping functor (Definition 3.3.1). The section ends with proving an important proposition showing that the mapping functor is a fibrant object in the injective model structure (Proposition 3.3.5). This proposition helps us to prove Theorem 4.1.1 in the following section. 3. In Section 4, we state and proof the theorem (Theorem 4.1.1). We also see how gluing in two categories in the context of extraordinary pushforward map can be reproved using Theorem 4.1.1. 4. Appendices A and B concern on proving lemmas which are used in proving Proposition 3.3.5. ### Acknowledgments The paper was written while the author was a Post Doc at the University of Duisburg-Essen, supported by the ERC Grant "Quadratic Refinements in Algebraic Geometry". He would like to thank Alessandro D'Angelo for helpful discussions regarding the technical lemmas involved in the article. He would also like to express his gratitude to all members of ESAGA group in Essen. ## 2 Marked Simplicial Sets and Cartesian Model Structure In this section, we are going to recall definitions and results on marked simplicial sets ([10, Section 3.1]). Marked simplicial sets are simplicial sets with additional "markings". These play an important role in constructing the Straightening-Unstraightening functor (the \(\infty\)-categorical formalism of Grothendieck's construction of fibered categories). We shall also review the Cartesian model structure on marked simplicial sets which is needed for proving the Grothendieck's construction in \(\infty\)-categorical setting. ### Marked Simplicial Sets **Definition 2.1.1**.: [10, Definition 3.1.0.1] A _marked simplicial set_ is a pair \((X,\mathcal{E})\) where \(X\) is a simplicial set and \(\mathcal{E}\) is a set of edges \(X\) which contains every degenerate edge. We will say that an edge of \(X\) will be called marked if it belongs to \(\mathcal{E}\). A morphism \(f:(X,\mathcal{E})\to(X^{\prime},\mathcal{E}^{\prime})\) of marked simplicial sets is a map of simplicial sets \(f:X\to X^{\prime}\) having the property that \(f(\mathcal{E})\subset\mathcal{E}^{\prime}\), The category of marked simplicial sets will be denoted by \(\operatorname{Set}^{+}_{\Delta}\). Below are the typical notations used in the category of marked simplicial sets. **Notation 2.1.2**.: 1. Let \(S\) be simplicial set. Then let \(S^{\flat}\) denote the marked simplicial set \(S,s_{0}(S_{0}))\). 
Thus \(S^{\flat}\) is the simplicial set \(S\) with the markings containing only the degenerate edges of \(S\). 2. For a simplicial set \(S\), let \(S^{\sharp}\) denote the marked simplicial set \((S,S_{1})\). Thus \(S^{\sharp}\) is the marked simplicial set where every edge is marked. 3. For a Cartesian fibration \(p:X\to S\), let \(X^{\natural}\) denote the marked simplicial set \((X,\mathcal{E})\) where \(\mathcal{E}\) is the set of \(p\)-Cartesian edges. **Remark 2.1.3**.: For an \(\infty\)-category \(\mathcal{C}\), the unique map \(p_{\mathcal{C}}:\mathcal{C}\to\Delta^{0}\) is a Cartesian fibration where the \(p_{\mathcal{C}}\)-Cartesian edges are the equivalences in \(\mathcal{C}\). Thus \(\mathcal{C}^{\natural}\) is the marked simplicial set \((\mathcal{C},\mathcal{E}_{\mathcal{C}})\) where \(\mathcal{E}_{\mathcal{C}}\) is the set of equivalences in \(\mathcal{C}\). The category of marked simplicial sets is Cartesian closed ([10, Section 3.1.3]). In other words, it has an internal mapping object which is defined as follows: **Definition 2.1.4**.: Let \(X,Y\in\operatorname{Set}^{+}_{\Delta}\). Then there exists an internal mapping object \(Y^{X}\in\operatorname{Set}^{+}_{\Delta}\) equipped with an "evaluation map" (a morphism of marked simplicial sets) \[e_{X,Y}:Y^{X}\times X\to Y\] such that for every \(Z\in\operatorname{Set}^{+}_{\Delta}\), the map \(e_{X,Y}\) induces bijections \[\operatorname{Hom}_{\operatorname{Set}^{+}_{\Delta}}(Z,Y^{X})\xrightarrow{e_{X,Y}}\operatorname{Hom}_{\operatorname{Set}^{+}_{\Delta}}(Z\times X,Y).\] Given the internal mapping object \(Y^{X}\), we can associate two simplicial sets \(\operatorname{Map}^{\flat}(X,Y)\) and \(\operatorname{Map}^{\sharp}(X,Y)\) which are defined as follows: \[\operatorname{Map}^{\flat}(X,Y)_{n} =\operatorname{Hom}_{\operatorname{Set}^{+}_{\Delta}}((\Delta^{n})^{\flat}\times X,Y)\] \[\operatorname{Map}^{\sharp}(X,Y)_{n} =\operatorname{Hom}_{\operatorname{Set}^{+}_{\Delta}}((\Delta^{n})^{\sharp}\times X,Y).\] **Remark 2.1.5**.: Some remarks on the internal mapping object are as follows: 1. By definition, we see that \(\operatorname{Map}^{\sharp}(X,Y)\subset\operatorname{Map}^{\flat}(X,Y)\). 2. For a marked simplicial set \(X_{\mathcal{E}}=(X,\mathcal{E})\) and an \(\infty\)-category \(\mathcal{C}\), \(\operatorname{Map}^{\flat}(X_{\mathcal{E}},\mathcal{C}^{\natural})\) is the full subcategory of the functor category \(\operatorname{Fun}(X,\mathcal{C})\) spanned by objects \(f:X\to\mathcal{C}\) which send \(\mathcal{E}\) to equivalences in \(\mathcal{C}\). Hence \(\operatorname{Map}^{\flat}(X_{\mathcal{E}},\mathcal{C}^{\natural})\) is an \(\infty\)-category. 3. For a marked simplicial set \(X_{\mathcal{E}}=(X,\mathcal{E})\) and an \(\infty\)-category \(\mathcal{C}\), the simplicial set \(\operatorname{Map}^{\sharp}(X_{\mathcal{E}},\mathcal{C}^{\natural})\) is the largest Kan complex contained in \(\operatorname{Map}^{\flat}(X_{\mathcal{E}},\mathcal{C}^{\natural})\). In case \(X_{\mathcal{E}}=X^{\flat}\), the simplicial set \(\operatorname{Map}^{\sharp}(X^{\flat},\mathcal{C}^{\natural})\) is the largest Kan complex contained inside the \(\infty\)-category \(\operatorname{Map}^{\flat}(X^{\flat},\mathcal{C}^{\natural})=\operatorname{Fun}(X,\mathcal{C})\). 4. We have a pair of adjoint functors : \[(-)^{\flat}:\operatorname{Set}_{\Delta}\leftrightarrows\operatorname{Set}_{\Delta}^{+}:\text{forgetful}\] (11) 

### Cartesian Model Structure

The category of marked simplicial sets admits a model structure called the Cartesian Model structure. 
**Definition 2.2.1**.: [10, Proposition 3.1.3.7] There exists a left proper combinatorial model structure on the category of marked simplicial sets \(\operatorname{Set}_{\Delta}^{+}\) called the _Cartesian Model Structure_ which is described as follows: 1. The cofibrations in \(\operatorname{Set}_{\Delta}^{+}\) are the morphisms \(f:(X,\mathcal{E}_{X})\to(Y,\mathcal{E}_{Y})\) which are cofibrations when regarded as morphisms of the underlying simplicial sets (i.e. monomorphisms). 2. The weak equivalences are the morphisms \(f:X\to Y\) which are _Cartesian equivalences_, i.e. for every \(\infty\)-category \(\mathcal{C}\) the natural morphism \[\operatorname{Map}^{\flat}(Y,\mathcal{C}^{\natural})\to\operatorname{Map}^{\flat}(X,\mathcal{C}^{\natural})\] (12) is an equivalence of \(\infty\)-categories. 3. The fibrations are those morphisms which have the right lifting property with respect to every map which is a cofibration and a Cartesian equivalence. **Remark 2.2.2**.: In the Cartesian Model structure, the fibrant objects are \(\mathcal{C}^{\natural}\) where \(\mathcal{C}\) is an \(\infty\)-category ([10, Proposition 3.1.4.1]). We have the notion of anodyne morphisms in the category of simplicial sets, which are exactly the trivial cofibrations in the Kan model structure. In the context of marked simplicial sets, we have the notion of a marked anodyne. **Definition 2.2.3**.: [10, Definition 3.1.1.1] The class of _marked anodyne morphisms_ in \(\operatorname{Set}_{\Delta}^{+}\) is the smallest weakly saturated class of morphisms with the following properties: 1. For each \(0<i<n\), the morphism \((\Lambda_{i}^{n})^{\flat}\hookrightarrow(\Delta^{n})^{\flat}\) is a marked anodyne. 2. Let \(\mathcal{E}\) denote the set of degenerate edges of \(\Delta^{n}\) together with the final edge \(\Delta^{\{n-1,n\}}\). Then for \(n>0\), the morphism \[(\Lambda_{n}^{n},(\Lambda_{n}^{n})_{1}\cap\mathcal{E})\to(\Delta^{n},\mathcal{E})\] (13) is a marked anodyne. 3. The inclusion \[(\Lambda_{1}^{2})^{\sharp}\coprod_{(\Lambda_{1}^{2})^{\flat}}(\Delta^{2})^{\flat}\to(\Delta^{2})^{\sharp}\] (14) is a marked anodyne. 4. For every Kan complex \(K\), the morphism \(K^{\flat}\to K^{\sharp}\) is a marked anodyne. **Example 2.2.4**.: Marked anodynes are Cartesian equivalences ([10, Remark 3.1.3.]). The following proposition describes the morphisms in \(\operatorname{Set}^{+}_{\Delta}\) which admit the right lifting property along marked anodynes. **Proposition 2.2.5**.: _[_10_, Proposition 3.1.1.6]_ _A map \(p:X\to S\) in \(\operatorname{Set}^{+}_{\Delta}\) admits the right lifting property along marked anodyne morphisms iff the following conditions are satisfied :_ 1. _The map_ \(p\) _is an inner fibration of simplicial sets._ 2. _An edge_ \(e\) _of_ \(X\) _is marked iff_ \(p(e)\) _is marked and_ \(e\) _is_ \(p\)_-Cartesian._ 3. _For every object_ \(y\in X\) _and a marked edge_ \(\bar{e}:\bar{x}\to p(y)\) _in_ \(S\)_, there exists a marked edge_ \(e:x\to y\) _such that_ \(p(e)=\bar{e}\)_._ _In particular, Cartesian fibrations \(p:X\to S\) such that the marked edges of \(X\) are the \(p\)-Cartesian edges satisfy the conditions above._ An important property of marked anodyne morphisms is that they are stable under pushout products with arbitrary cofibrations. 
**Proposition 2.2.6**.: _[_10_, Proposition 3.1.2.3]_ _If \(f:X\to X^{\prime}\) is a marked anodyne and \(g:Y\to Y^{\prime}\) is a cofibration in \(\operatorname{Set}^{+}_{\Delta}\), then the pushout product_ \[f\boxed{g}:X\times Y^{\prime}\coprod_{X\times Y}X^{\prime}\times Y\to X^{ \prime}\times Y^{\prime} \tag{15}\] _is a marked anodyne._ The following proposition gives a Quillen adjunction between model structures on simplicial sets and marked simplicial sets. **Proposition 2.2.7**.: _The pair of adjoint functors :_ \[(-)^{\sharp}:\operatorname{Set}_{\Delta}\rightleftarrows\operatorname{Set}^{ +}_{\Delta}:G \tag{16}\] _where \(G(Y,E_{Y})=Y_{E_{Y}}\) the subsimplicial set of \(Y\) spanned by edges in \(E_{Y}\) is a Quillen adjunction between the Kan Model structure on \(\operatorname{Set}_{\Delta}\) and the Cartesian Model structue on \(\operatorname{Set}^{+}_{\Delta}\)._ Proof.: At first, we have the bijection of \(\operatorname{Hom}\) sets \[\operatorname{Hom}_{\operatorname{Set}^{+}_{\Delta}}(X^{\sharp},(Y,E_{Y})) \cong\operatorname{Hom}_{\operatorname{Set}_{\Delta}}(X,Y_{E_{Y}}). \tag{17}\] This is true because given a morphism \(X\to Y_{E_{Y}}\), it induces a morphism \(X^{\sharp}\to(Y,E_{Y})\). In order to show it is a Quillen adjunction, we show that \((-)^{\sharp}\) sends cofibrations to cofibrations and trivial cofibrations to trivial cofibrations. Firstly, \((-)^{\sharp}\) sends cofibrations to cofibrations. This is true because the cofibrations are defined as cofibrations in the underlying map of simplicial sets. Thus we need to show that \(p^{\sharp}:X^{\sharp}\to Y^{\sharp}\) sends anodyne morphisms to Cartesian equivalences (they are already cofibrations). In other words, we need to show that for an \(\infty\)-category \(\mathcal{C}\), we need to show \[\operatorname{Map}^{\flat}(Y^{\sharp},\mathcal{C}^{\sharp})\to\operatorname{ Map}^{\flat}(X^{\sharp},\mathcal{C}^{\sharp}) \tag{18}\] is a trivial Kan fibration. Unravelling the definition of Kan fibration, we need to show the existence of the dotted arrow: (19) Let \(K\) be the largest Kan complex inside \(\mathcal{C}\). Then the morphism of \(\alpha\) on the level of simplicial sets can be rewritten as \[Y\times\partial\Delta^{n}\coprod_{X\times\partial\Delta^{n}}X\times\Delta^{n} \xrightarrow{\alpha}K \tag{20}\] As the pushout product of an anodyne and a monomorphism of simplicial sets is anodyne, it admits lifting along the morphism \(K\to\Delta^{0}\). Thus we have an extension \[Y\times\Delta^{n}\xrightarrow{\alpha^{\prime}}K. \tag{21}\] Rewritting in terms of marked simplicial sets, we get a morphism \[\alpha^{\prime}:Y^{\sharp}\times(\Delta^{n})^{\flat}\to\mathcal{C}^{\natural} \tag{22}\] extending \(\alpha\). This completes the proof of the proposition. ## 3 Functors from the Category of simplices In this section, we introduce the main tools needed for stating the main statement. Firstly, we introduce the Model structure on the category of functors from a small category \(\mathcal{J}\) to \(\operatorname{Set}_{\Delta}\). Then we introduce the category of simplicies \(\Delta_{/K}\) associated to a simplicial set \(K\) and introduce the functor \(\operatorname{Map}[K,\mathcal{C}]\). The final goal of this section is to prove that functor \(\operatorname{Map}[K,\mathcal{C}]\) is fibrant in the model structure \((\operatorname{Set}_{\Delta})^{\Delta/K}\). We are mainly referring to [8, Section 2] ### Properties and model structure on \((\operatorname{Set}_{\Delta})^{\mathcal{J}}\). 
**Definition 3.1.1**.: Let \(\mathcal{J}\) be a (small) ordinary category. Let \((\operatorname{Set}_{\Delta})^{\mathcal{J}}\) be the category where objects are functors from \(\mathcal{J}\to\operatorname{Set}_{\Delta}\) and morphisms are natural transformations. **Definition 3.1.2**.: As the Quillen model structure on \(\operatorname{Set}_{\Delta}\) is combinatorial and \(\mathcal{J}\) is a small category, by [10, Proposition A.2.8.2], there exists an injective model structure on \((\operatorname{Set}_{\Delta})^{\mathcal{J}}\) which is defined by the following classes of morphisms: 1. A morphism \(i:F\to G\) is called _anodyne_ if for every object \(j\in J\), the morphism of simplicial sets \(i(j):F(j)\to G(j)\) is anodyne in \(\operatorname{Set}_{\Delta}\). 2. A morphism \(i:F\to G\) is called a _weak equivalence_ if for every object \(j\in J\), the morphism of simplicial sets \(i(j):F(j)\to G(j)\) is a weak equivalence in \(\operatorname{Set}_{\Delta}\). 3. A morphism \(i:F\to G\) is said to be _injective fibration_ if it has right lifting property with respect to anodyne morphisms. **Notation 3.1.3**.: For every simplicial set \(X\), we have the constant simplicial set functor \(c(X):=X_{\mathcal{J}}\) defined by sending any object \(j\) to the simplicial set \(X\). The association is functorial and thus we have a functor : \[c:\operatorname{Set}_{\Delta}\to(\operatorname{Set}_{\Delta})^{\mathcal{J}}\] **Definition 3.1.4**.: We define the _global section functor_ \[\Gamma:(\operatorname{Set}_{\Delta})^{\mathcal{J}}\to\operatorname{Set}_{\Delta}\] as follows : \[\Gamma(\mathsf{F})=(\Gamma(\mathsf{F})_{\mathsf{n}}\coloneqq\operatorname{Hom}_{ (\operatorname{Set}_{\Delta})^{\mathcal{J}}}(\Delta_{\mathcal{J}}^{\mathsf{n} },\mathsf{F}))_{\mathsf{n}}.\] **Example 3.1.5**.: Let \(\mathsf{F}\) be the constant functor \(\mathsf{c}(\mathsf{X})\) where \(\mathsf{X}\in\operatorname{Set}_{\Delta}\). Let us compute \(\Gamma(\mathsf{F})\). The \(\mathsf{n}\)-simplices of \(\Gamma(\mathsf{F})\) are given by the set of natural transformations from \(\Delta_{\mathsf{I}}^{\mathsf{n}}\to\mathsf{c}(\mathsf{X})\). Every such natural transformation is equivalent to give a single map \(\Delta^{\mathsf{n}}\to\mathsf{X}\). In particular the \(\mathsf{n}\)-simplices of \(\Gamma(\mathsf{F})\) are given by \(\mathsf{n}\)-simplices of \(\mathsf{X}\).Thus \(\Gamma(\mathsf{F})=\mathsf{X}\). In particular, we prove that \(\Gamma\circ\mathsf{c}=\operatorname{id}_{\operatorname{Set}_{\Delta}}\). **Remark 3.1.6**.: Recall from classical category theory, given a complete category \(\mathcal{C}\) and an small category \(\mathrm{I}\), we have the pair of adjoint functors: \[\mathsf{c}:\mathcal{C}\leftrightarrows\operatorname{Fun}(\mathrm{I}, \mathcal{C}):\lim\] where \(\lim\) is the functor which takes an object which is a functor \(\mathsf{F}:\mathrm{I}\to\mathcal{C}\) to its limit \(\lim(\mathsf{F})\in\mathcal{C}\). Let \(\mathcal{C}=\operatorname{Set}_{\Delta}\) and \(\mathrm{I}=\mathcal{J}\). As the category of simplicial sets is complete, we see that \[\Gamma=\lim.\] In the other words, the global section functor is the limit functor which takes every functor to its limit in the category of simplicial sets. 
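Remark 3.1.6 identifies the global section functor with the limit. As a concrete illustration of this (at the level of a single simplicial degree, where \(\Gamma(F)_{n}\) is the limit of the sets \(F(j)_{n}\)), the following sketch computes the limit of a set-valued functor as the set of compatible families. The three-object category, the diagram, and all names in the code are hypothetical choices made only for this example.

```python
# Minimal sketch: the limit of a Set-valued functor on the cospan category a -> c <- b,
# computed as the set of compatible families.  This is how each level Gamma(F)_n of the
# global section functor is computed.  The category and the functor are invented.
from itertools import product

objects = ["a", "b", "c"]
arrows = {("a", "c"): lambda x: x % 3,        # image of the arrow a -> c under F
          ("b", "c"): lambda y: (y + 1) % 3}  # image of the arrow b -> c under F
F = {"a": range(6), "b": range(4), "c": range(3)}  # values of F on objects (finite sets)

def limit(F, arrows, objects):
    """Families (x_o)_o with F(f)(x_source) == x_target for every arrow f."""
    families = []
    for values in product(*(F[o] for o in objects)):
        x = dict(zip(objects, values))
        if all(f(x[s]) == x[t] for (s, t), f in arrows.items()):
            families.append(x)
    return families

lim_F = limit(F, arrows, objects)
print(len(lim_F), "compatible families, e.g.", lim_F[0])
# For the constant functor c(X) (every object mapped to X, every arrow to the identity),
# the compatible families are exactly the elements of X: Gamma(c(X)) = X, as in Example 3.1.5.
```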
**Proposition 3.1.7**.: _The pair of functors:_ \[\mathsf{c}:\operatorname{Set}_{\Delta}\leftrightarrows(\operatorname{Set}_{\Delta})^{\mathcal{J}}:\Gamma\] _is a Quillen adjunction with the injective model structure on \((\operatorname{Set}_{\Delta})^{\mathcal{J}}\) and the Quillen model structure on \(\operatorname{Set}_{\Delta}\)._ Proof.: We already know that \(\Gamma\) and \(\mathsf{c}\) are adjoint to each other. In order to show it is a Quillen adjunction, we need to show that \(\mathsf{c}\) preserves cofibrations and trivial cofibrations. By definition \(\mathsf{c}\) takes an anodyne morphism \(\mathsf{X}\to\mathsf{S}\) to \(\mathsf{X}_{\mathcal{J}}\to\mathsf{S}_{\mathcal{J}}\), which is anodyne in the injective model structure (as pointwise it is the anodyne morphism \(\mathsf{X}\to\mathsf{S}\)). It also preserves cofibrations by a similar argument. **Remark 3.1.8**.: Let \(\mathsf{g}:\mathcal{J}\to\mathcal{J}^{\prime}\) be a functor between small categories. This induces a pullback functor on the level of functor categories: \[\mathsf{g}^{*}:(\operatorname{Set}_{\Delta})^{\mathcal{J}^{\prime}}\to(\operatorname{Set}_{\Delta})^{\mathcal{J}}.\] Thus for any element \(\mathcal{M}\in(\operatorname{Set}_{\Delta})^{\mathcal{J}^{\prime}}\), we have a morphism of simplicial sets \[(-)\circ\mathsf{g}:\Gamma(\mathcal{M})\to\Gamma(\mathsf{g}^{*}\mathcal{M}).\] Also for any morphism \(\Psi:\mathcal{N}\to\mathcal{M}\) in \((\operatorname{Set}_{\Delta})^{\mathcal{J}^{\prime}}\), we have the following commutative diagram of simplicial sets: (23) ### The category of simplices. **Definition 3.2.1**.: Let \(K\) be a simplicial set. Then the _category of simplices over \(K\)_ is the category consisting of : 1. Objects : \((n,\sigma)\) where \(n\geq 0\) and \(\sigma\in K_{n}\). 2. Morphisms: \(p:(n,\sigma)\to(m,\sigma^{\prime})\) is a morphism \(p:[n]\to[m]\) such that \(\sigma^{\prime}\circ p=\sigma\) (viewing \(\sigma\) and \(\sigma^{\prime}\) as maps \(\Delta^{n}\to K\) and \(\Delta^{m}\to K\)). **Remark 3.2.2**.: The definition of the category of simplices is functorial, i.e. a morphism of simplicial sets \(K\to K^{\prime}\) induces a functor between the categories of simplices \[g^{*}:\Delta_{/K}\to\Delta_{/K^{\prime}}.\] **Lemma 3.2.3**.: _Let \(K\) be a simplicial set. Then the colimit of the functor_ \[\Delta_{/K}\to\operatorname{Set}_{\Delta}\] _given by_ \[(n,\sigma)\to\Delta^{n}\] _is given by \(K\)._ Proof.: This follows from the fact that \(K\) is the same as \(\operatorname{colim}_{\sigma:\Delta^{n}\to K}\Delta^{n}\) ([6, Lemma 3.1.3]). ### The mapping functor. **Definition 3.3.1**.: [8, Notation 2.6] Let \(K\) be a simplicial set and \(\mathcal{C}\) be an \(\infty\)-category. The _mapping functor_ \[\operatorname{Map}[K,\mathcal{C}]:(\Delta_{/K})^{\operatorname{op}}\to\operatorname{Set}_{\Delta}\] is defined as follows: \[(n,\sigma)\to\operatorname{Map}^{\sharp}((\Delta^{n})^{\flat},\mathcal{C}^{\natural})\cong\operatorname{Fun}^{\cong}(\Delta^{n},\mathcal{C}).\] **Remark 3.3.2**.: If \(g:K\to K^{\prime}\) is a map of simplicial sets, then the pullback functor \[g^{*}:(\operatorname{Set}_{\Delta})^{(\Delta_{/K^{\prime}})^{\operatorname{op}}}\to(\operatorname{Set}_{\Delta})^{(\Delta_{/K})^{\operatorname{op}}}\] sends \(\operatorname{Map}[K^{\prime},\mathcal{C}]\) to \(\operatorname{Map}[K,\mathcal{C}]\). 
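To make Definition 3.2.1 concrete, the sketch below enumerates the low-dimensional simplices of \(K=\Delta^{1}\) (an \(n\)-simplex of \(\Delta^{1}\) is an order-preserving map \([n]\to[1]\), encoded as a non-decreasing tuple) and checks the condition \(\sigma^{\prime}\circ p=\sigma\) that defines a morphism of \(\Delta_{/K}\). The tuple encoding and the helper names are assumptions made only for this illustration.

```python
# Minimal sketch of the category of simplices of K = Delta^1.
# An n-simplex of Delta^1 is a non-decreasing (n+1)-tuple with entries in {0, 1};
# a morphism (n, sigma) -> (m, sigma') is a monotone p: [n] -> [m] with sigma' o p = sigma.
from itertools import product

def simplices(n):
    """All n-simplices of Delta^1: non-decreasing (n+1)-tuples in {0, 1}."""
    return [t for t in product((0, 1), repeat=n + 1)
            if all(a <= b for a, b in zip(t, t[1:]))]

for n in range(3):
    print(f"{n}-simplices of Delta^1:", simplices(n))

def is_morphism(p, sigma, sigma_prime):
    """p is the tuple (p(0), ..., p(n)); the condition is sigma'(p(i)) == sigma(i)."""
    monotone = all(a <= b for a, b in zip(p, p[1:]))
    return monotone and all(sigma_prime[p[i]] == sigma[i] for i in range(len(sigma)))

# The degeneracy s^0: (1, (0, 0)) -> (0, (0,)) is the morphism given by p = (0, 0):
print(is_morphism((0, 0), (0, 0), (0,)))   # True
# By contrast, p = (0, 0) is not a morphism (1, (0, 1)) -> (0, (0,)):
print(is_morphism((0, 0), (0, 1), (0,)))   # False
```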
**Lemma 3.3.3**.: _For a simplicial set \(K\) and an \(\infty\)-category \(\mathcal{C}\), we have the following equality of simplicial sets:_ \[\Gamma(\operatorname{Map}[K,\mathcal{C}])=\operatorname{Map}^{\sharp}(K^{ \flat},\mathcal{C}).\] Proof.: By definition of \(\Gamma\), the set of \(m\)-simplices of \(\Gamma(\operatorname{Map}[K,\mathcal{C}])\) is the limit of the functor : \[\Delta_{/K}\to\operatorname{Sets}\ \ ;\ \ (n,\sigma)\to\operatorname{Hom}_{ \operatorname{Set}^{+}_{\Delta}}((\Delta^{m})^{\sharp}\times(\Delta^{n})^{ \flat},\mathcal{C}^{\natural}).\] On the other hand, the \(m\)-simplices of \(\operatorname{Map}^{\sharp}(K^{\flat},\mathcal{C}^{\natural})\) is the set \[\operatorname{Hom}_{\operatorname{Set}^{+}_{\Delta}}((\Delta^{m})^{\sharp} \times K^{\flat},\mathcal{C}^{\natural}).\] Thus we need to show that \[\lim_{(n,\sigma)}\operatorname{Hom}_{\operatorname{Set}^{+}_{\Delta}}(( \Delta^{m})^{\sharp}\times(\Delta^{n})^{\flat},\mathcal{C}^{\natural})= \operatorname{Hom}_{\operatorname{Set}^{+}_{\Delta}}((\Delta^{m})^{\sharp} \times\operatorname{colim}_{(n,\sigma)}(\Delta^{n})^{\flat},\mathcal{C}^{ \natural})\] is same as the set \(\operatorname{Hom}_{\operatorname{Set}^{+}_{\Delta}}((\Delta^{m})^{\sharp} \times K^{\flat},\mathcal{C}^{\natural})\). This reduces down to showing that the colimit of the functor \[\Delta_{/K}\to\operatorname{Set}_{\Delta}^{+}\ \ (n,\sigma)\to(\Delta^{n})^{\flat}\] is given by \(K^{\flat}\). By Eq. (11), we know that \((-)^{\flat}\) preserves colimits. Thus we need to show that the colimit of the functor \[\Delta_{/K}\to\operatorname{Set}_{\Delta}\ \ (n,\sigma)\to(\Delta^{n})\] is \(K\). This is precisely Lemma 3.2.3. **Remark 3.3.4**.: By above lemma, we see that the global section of the functor \(\operatorname{Map}[K,\mathcal{C}]\) is given by \(\operatorname{Map}^{\sharp}(K^{\flat},\mathcal{C}^{\natural})\) which is the largest Kan complex contained in \(\operatorname{Fun}(K,\mathcal{C})\). Objects in the connected components of this Kan complex yield us functors \(K\to\mathcal{C}\) which are equivalent to one another. Thus in order to construct a functor \(K\to\mathcal{C}\) upto equivalence, it is same to give an object in the connected part of \(\Gamma(\operatorname{Map}[K,\mathcal{C}])\). The following proposition is a crucial property of the functor \(\operatorname{Map}[K,\mathcal{C}]\). **Proposition 3.3.5**.: _Let \(K\) be a simplicial set and \(\mathcal{C}\) be an \(\infty\)-category, the functor \(\operatorname{Map}[K,\mathcal{C}]\) is injectively fibrant with respect to the injective model structure on \((\operatorname{Set}_{\Delta})^{(\Delta_{/K})^{\circ p}}\). In other words for every commutative diagram in \((\operatorname{Set}_{\Delta})^{(\Delta_{/K})^{\circ p}}\)_ (24) _where \(i:\mathcal{N}\to\mathcal{M}\) is an anodyne, there exists a dotted arrow \(\beta\) making the diagram commutative._ **Remark 3.3.6**.: Some remarks on the proposition are as follows: 1. If we only consider the functor from \(\Delta_{/K}^{\operatorname{nd}}\), then Proposition 3.3.5 still holds. The proof gets much simpler as the technical part is to deal with the non-degenerate simplices. However \(\Delta_{/K}^{\operatorname{nd}}\) is functorial with respect to the monomorphism of simplicial sets. This is generally not the case in the situation of compactifications and other cases. Thus we really need to prove it on the level of \(\Delta_{/K}\), 2. The proof that we give is rewriting the proof given by Liu-Zheng in [8, Proposition 2.8]. 
We try to present a cleaner version of the proof, explaining the technicalities. Proof.: Let \(\Delta_{/K}^{\leq n}\) be the subcategory of \(\Delta_{/K}\) spanned by \((m,\sigma)\) where \(m\leq n\). We construct the functor \(\beta\) by induction on \(n\). In other words, we construct \(\beta_{n}\) on \(\Delta_{/K}^{\leq n}\). 1. **The case \(n=0\):** Let \((0,\sigma)\) be an element of \(\Delta_{/K}\). We need to construct a map \[\beta(0,\sigma):\mathcal{M}(0,\sigma)\to\operatorname{Map}^{\sharp}((\Delta^{0})^{\flat},\mathcal{C}^{\natural})\quad\text{such that}\quad\beta(0,\sigma)\circ i(0,\sigma)=\alpha(0,\sigma).\] (25) As \(\operatorname{Map}^{\sharp}((\Delta^{0})^{\flat},\mathcal{C}^{\natural})\) is the largest Kan complex contained inside \(\mathcal{C}\), it admits the right lifting property with respect to anodynes. As \(i(0,\sigma)\) is anodyne, the statement holds. 2. **Induction step \(n=k-1\Rightarrow n=k\)**: Let \((k,\sigma)\) be an object of \(\Delta_{/K}\). We need to construct a map \[\beta(k,\sigma):\mathcal{M}(k,\sigma)\to\operatorname{Map}^{\sharp}((\Delta^{k})^{\flat},\mathcal{C}^{\natural})\] (26) such that the map is compatible with the face maps \(d^{k}_{i}:(k-1,\tau_{i})\to(k,\sigma)\) and the degeneracy maps \(s^{k-1}_{j}:(k,\sigma)\to(k-1,\rho_{j})\). This means the above map \(\beta(k,\sigma)\) must make the following squares commute \(\forall\,i,j\) : (27) The commutativity of the squares forces the following conditions : 1. The first square gives that we have a map \[\beta(k,\sigma)^{\deg}:\mathcal{M}(k,\sigma)^{\deg}\to\operatorname{Map}^{\sharp}((\Delta^{k})^{\flat},\mathcal{C}^{\natural}).\] In other words, we have a map \[\beta(k,\sigma)^{\deg}:(\mathcal{M}(k,\sigma)^{\deg})^{\sharp}\times(\Delta^{k})^{\flat}\to\mathcal{C}^{\natural}.\] 2. 
The second square, for all \(i\), gives that combining the maps \(\beta(k-1,\tau_{i})\) we have a map \[\beta(k,\sigma)^{\operatorname{face}}:\mathcal{M}(k,\sigma)\to\operatorname{Map}^{\sharp}((\partial\Delta^{k})^{\flat},\mathcal{C}^{\natural}).\] In other words, we have a map of marked simplicial sets: \[\beta(k,\sigma)^{\operatorname{face}}:\mathcal{M}(k,\sigma)^{\sharp}\times(\partial\Delta^{k})^{\flat}\to\mathcal{C}^{\natural}. \tag{30}\] Note that the map \(\beta(k,\sigma)\) can be written as a map of marked simplicial sets: \[\beta(k,\sigma):(\mathcal{M}(k,\sigma))^{\sharp}\times(\Delta^{k})^{\flat}\to\mathcal{C}^{\natural}.\] The maps \(\beta(k,\sigma)^{\deg}\) and \(\beta(k,\sigma)^{\operatorname{face}}\), along with the map \[\alpha(k,\sigma):(\mathcal{N}(k,\sigma))^{\sharp}\times(\Delta^{k})^{\flat}\to\mathcal{C}^{\natural},\] give us a map \[\beta(k,\sigma)^{\prime}:\mathcal{M}(k,\sigma)^{\prime}\to\mathcal{C}^{\natural}, \tag{31}\] where \[\mathcal{M}(k,\sigma)^{\prime}:=(\mathcal{N}(k,\sigma)\cup\mathcal{M}(k,\sigma)^{\deg})^{\sharp}\times(\Delta^{k})^{\flat}\coprod_{(\mathcal{N}(k,\sigma)\cup\mathcal{M}(k,\sigma)^{\deg})^{\sharp}\times(\partial\Delta^{k})^{\flat}}(\mathcal{M}(k,\sigma))^{\sharp}\times(\partial\Delta^{k})^{\flat}. \tag{32}\] Hence the construction of \(\beta(k,\sigma)\) is equivalent to the existence of a map \(\beta(k,\sigma):\mathcal{M}(k,\sigma)^{\sharp}\times(\Delta^{k})^{\flat}\to\mathcal{C}^{\natural}\) extending \(\beta(k,\sigma)^{\prime}\) along the inclusion \[j:\mathcal{M}(k,\sigma)^{\prime}\hookrightarrow\mathcal{M}(k,\sigma)^{\sharp}\times(\Delta^{k})^{\flat}. \tag{33}\] As \(\mathcal{C}^{\natural}\) is fibrant in the Cartesian model structure (Remark 2.2.2), it suffices to show that \(j\) is a trivial cofibration. Notice that \(j\) is the pushout product of \(j_{1}^{\sharp}:(\mathcal{N}(k,\sigma)\cup\mathcal{M}(k,\sigma)^{\deg})^{\sharp}\hookrightarrow\mathcal{M}(k,\sigma)^{\sharp}\) and the cofibration \(j_{2}:(\partial\Delta^{k})^{\flat}\hookrightarrow(\Delta^{k})^{\flat}\). If we show that \(j_{1}^{\sharp}\) is a trivial cofibration, then \(j\), being the pushout product of a trivial cofibration and a cofibration, is a trivial cofibration. Thus we show that \(j_{1}^{\sharp}\) is a trivial cofibration in \(\operatorname{Set}^{+}_{\Delta}\). By Proposition 2.2.7, we see that it suffices to show that \(j_{1}\) is anodyne in \(\operatorname{Set}_{\Delta}\). **Claim 3.3.7**.: If \(j_{1}^{\prime}:\mathcal{N}(k,\sigma)^{\deg}\to\mathcal{M}(k,\sigma)^{\deg}\) is anodyne, then \(j_{1}\) is anodyne. Proof.: We have the following commutative diagram (34) where the square is a pushout square. Thus \(j_{1}^{\prime\prime}\), being a pushout of \(j_{1}^{\prime}\), is anodyne. As \(j_{1}^{\prime\prime}\) and \(i(k,\sigma)\) are anodyne and \(j_{1}\) is a monomorphism, by Lemma B.0.1, we get that \(j_{1}\) is anodyne. **Notation 3.3.8**.: Let \(s:(k,\sigma)\to(k-1,\rho)\) be a degeneracy map. Then let \(\mathcal{M}(k)_{s}\) denote the image of \(\mathcal{M}(s)\). We prove the following claim : **Claim 3.3.9**.: Let \(s_{1},\cdots,s_{l}\) be a collection of degeneracy maps where \(s_{j}:(k,\sigma)\to(k-1,\rho_{j})\). Then the morphism \[\cup_{j=1}^{l}\mathcal{N}(k)_{s_{j}}\to\cup_{j=1}^{l}\mathcal{M}(k)_{s_{j}} \tag{35}\] is anodyne. **The claim implies that \(j_{1}^{\prime}\) is anodyne:** applying the above claim to the \(k\) distinct degeneracy maps gives us that \(j_{1}^{\prime}:\mathcal{N}(k,\sigma)^{\deg}=\cup_{j=0}^{k-1}\mathcal{N}(k)_{s_{j}}\to\mathcal{M}(k,\sigma)^{\deg}\) is anodyne, hence completing the proof. 
Proof of Claim 3.3.9.: We prove this by induction on \(l\). * **The case \(l=1\):** In this case, we see that \(\mathcal{N}(k)_{s_{1}}\cong\mathcal{N}(k-1,\rho_{1})\). This is true because \(\mathcal{N}(s_{1})\) is a monomorphism, so its image is isomorphic to its domain. Thus the map \(\mathcal{N}(k)_{s_{1}}\to\mathcal{M}(k)_{s_{1}}\) is isomorphic to the map \(\mathcal{N}(k-1,\rho_{1})\to\mathcal{M}(k-1,\rho_{1})\), which is anodyne. * **The induction step \(l=m-1\implies l=m\):** For every \(1\leq i\leq m-1\), we consider the pair of morphisms \((s_{i},s_{m})\). By Lemma A.0.3, there exists an absolute pushout square in \(\Delta_{/K}\)
\[\begin{CD}(k,\sigma)@>{s_{i}}>>(k-1,\rho_{i})\\ @V{s_{m}}VV@VVV\\ (k-1,\rho_{m})@>>>(k-2,\rho_{im}).\end{CD}\]
Thus for \(\mathcal{N}\) (similarly for \(\mathcal{M}\)), we have a pullback square (as \(\mathcal{N}\) and \(\mathcal{M}\) are functors from \((\Delta_{/K})^{\operatorname{op}}\)):
\[\begin{CD}\mathcal{N}(k-2,\rho_{im})@>>>\mathcal{N}(k-1,\rho_{i})\\ @VVV@VV{\mathcal{N}(s_{i})}V\\ \mathcal{N}(k-1,\rho_{m})@>{\mathcal{N}(s_{m})}>>\mathcal{N}(k,\sigma).\end{CD}\]
For every split epimorphism \(s:(n,\zeta)\rightarrow(n^{\prime},\zeta^{\prime})\), we have the diagram : \[\begin{CD}\mathcal{N}(n^{\prime},\zeta^{\prime})@>{\mathcal{N}(s)}>>\mathcal{N}(n,\zeta)@>{\mathcal{N}(d)}>>\mathcal{N}(n^{\prime},\zeta^{\prime})\\ @V{i(n^{\prime},\zeta^{\prime})}VV@V{i(n,\zeta)}VV@VV{i(n^{\prime},\zeta^{\prime})}V\\ \mathcal{M}(n^{\prime},\zeta^{\prime})@>{\mathcal{M}(s)}>>\mathcal{M}(n,\zeta)@>{\mathcal{M}(d)}>>\mathcal{M}(n^{\prime},\zeta^{\prime}).\end{CD} \tag{42}\] where \(d\) is a section of \(s\). As \(i(n,\zeta)\) is a monomorphism, applying Lemma A.0.2 gives us that the left square is a pullback square. Applying this fact to the epimorphisms \(s_{m}:(k,\sigma)\rightarrow(k-1,\rho_{m})\) and \(s^{\prime}_{i}:(k-1,\rho_{m})\rightarrow(k-2,\rho_{im})\), we get the pullback squares : \[\begin{CD}\mathcal{N}(k-2,\rho_{im})@>{\mathcal{N}(s^{\prime}_{i})}>>\mathcal{N}(k-1,\rho_{m})@>{\mathcal{N}(s_{m})}>>\mathcal{N}(k,\sigma)\\ @V{i(k-2,\rho_{im})}VV@V{i(k-1,\rho_{m})}VV@VV{i(k,\sigma)}V\\ \mathcal{M}(k-2,\rho_{im})@>{\mathcal{M}(s^{\prime}_{i})}>>\mathcal{M}(k-1,\rho_{m})@>{\mathcal{M}(s_{m})}>>\mathcal{M}(k,\sigma).\end{CD} \tag{43}\] This gives us a pullback square: \[\begin{CD}\mathcal{N}(k-2,\rho_{im})@>{\mathcal{N}(s^{\prime}_{i})}>>\mathcal{N}(k-1,\rho_{m})\\ @V{i(k-2,\rho_{im})}VV@VV{i(k,\sigma)\circ\mathcal{N}(s_{m})}V\\ \mathcal{M}(k-2,\rho_{im})@>{\mathcal{M}(s^{\prime}_{i}s_{m})}>>\mathcal{M}(k,\sigma).\end{CD} \tag{44}\] Taking the intersection of the images of the maps in this pullback square, we get that \[\mathcal{N}(k)_{s_{m}}\cap\mathcal{M}(k)_{s^{\prime}_{i}s_{m}}=\mathcal{N}(k)_{s^{\prime}_{i}s_{m}}. \tag{45}\] Thus the conditions of Lemma B.0.2 are verified. Hence \(f_{3}\) (in the notation of Lemma B.0.2) is anodyne. This completes the proof of Claim 3.3.9. This also completes the induction step for \(n=k\). Hence we have constructed the map \(\beta(k,\sigma)\) lifting \(\alpha(k,\sigma)\). 

## 4 The main theorem

In the last section of the article, we prove the main theorem. 

### Statement and the proof.

**Theorem 4.1.1**.: _Let \(K^{\prime},K\) be simplicial sets and \(\mathcal{C}\) be an \(\infty\)-category. 
Let \(f^{\prime}:K^{\prime}\rightarrow\mathcal{C}\) and \(i:K^{\prime}\rightarrow K\) be morphisms of simplicial sets. Let \(\mathcal{N}\in(\operatorname{Set}_{\Delta})^{(\Delta_{/K})^{\operatorname{op}}}\) and \(\alpha:\mathcal{N}\rightarrow\operatorname{Map}[K,\mathcal{C}]\) be a natural transformation. If_ 1. _(Weak contractibility) for_ \((n,\sigma)\in\Delta_{/K}\)_,_ \(\mathcal{N}(n,\sigma)\) _is weakly contractible,_ 2. _(Compatibility with_ \(f^{\prime}\)_) there exists_ \(\omega\in\Gamma(i^{*}\mathcal{N})_{0}\) _such that_ \(\Gamma(i^{*}\alpha)(\omega)=f^{\prime}\)_,_ _then there exists a map \(f:K\to\mathcal{C}\) such that the following diagram_ (46) _commutes. In other words, \(f^{\prime}\cong f\circ i\) in \(Fun(K^{\prime},\mathcal{C})\)._ Proof.: Let \[\mathcal{N}^{\triangleright}:(\Delta_{/K})^{\operatorname{op}}\to\operatorname{Set}_{\Delta}\ \ ;\ \ (n,\sigma)\mapsto\mathcal{N}(n,\sigma)^{\triangleright}.\] The canonical morphism \[\eta:\mathcal{N}\to\mathcal{N}^{\triangleright}\] is anodyne. This is because for every weakly contractible simplicial set \(K\), the inclusion \(K\to K^{\triangleright}\) is anodyne ([10, Corollary 4.4.4.10]). As \(\operatorname{Map}[K,\mathcal{C}]\) is injectively fibrant (Proposition 3.3.5), the map \(\alpha\) has the following factorization : (47) Let \(z\) be the cone point of the simplicial set \(\Gamma(\mathcal{N}^{\triangleright})\). Then we define \[f:=\Gamma(\beta)(z):K\to\mathcal{C}.\] We need to check the compatibility of \(f\) with \(f^{\prime}\). Pulling back along \(i\), we get that \(i^{*}\alpha\) has the following factorization: (48) Let \(z^{\prime}\) be the cone point of \(\Gamma(i^{*}\mathcal{N}^{\triangleright})\). Note that we have the following commutative diagram of simplicial sets (Remark 3.1.8): (49) Note that \(z\circ i=z^{\prime}\). Thus we have that \[\Gamma(i^{*}\beta)(z^{\prime})=\Gamma(i^{*}\beta)(z\circ i)=\Gamma(\beta)(z)\circ i=f\circ i. \tag{50}\] As the cone point is connected to every vertex, we see that \(\Gamma(i^{*}\eta)(\omega)\) and \(z^{\prime}\) are connected. Thus \[\Gamma(i^{*}\beta\circ i^{*}\eta)(\omega)=\Gamma(i^{*}\alpha)(\omega)=f^{\prime}\ \ \text{and}\ \ \Gamma(i^{*}\beta)(z^{\prime})=\Gamma(\beta)(z)\circ i=f\circ i\ \ (\text{Eq.\ (50)})\] are connected vertices in the Kan complex \(\operatorname{Map}^{\sharp}((K^{\prime})^{\flat},\mathcal{C}^{\natural})\). Thus we have \[f^{\prime}\cong f\circ i\ \ \text{in}\ \ \operatorname{Fun}(K^{\prime},\mathcal{C}).\] **Remark 4.1.2**.: We point out some remarks on the theorem : 1. In the proof of the theorem, we used [10, Corollary 4.4.4.10]. The corollary says the following: let \[\mathsf{h}:\mathsf{S}\to\mathsf{K}\] (51) be a map of simplicial sets where \(\mathsf{S}\) is a weakly contractible simplicial set and \(\mathsf{K}\) is a Kan complex. Then the morphism \(\mathsf{h}\) admits a colimit : \[\mathsf{h}^{\prime}:\mathsf{S}^{\triangleright}\to\mathsf{K}.\] (52) 2. Let us explain how \(\mathsf{f}\) is evaluated on an arbitrary \(\mathsf{n}\)-simplex \(\sigma:\Delta^{\mathsf{n}}\to\mathsf{K}\). 
Evaluating \(\alpha\) at the obejts \((\mathsf{n},\sigma)\in\Delta_{/\mathsf{K}}\), we get the following morphism : \[\alpha(\mathsf{n},\sigma):\mathcal{N}(\mathsf{n},\sigma)\to\operatorname{Fun}^ {\cong}(\Delta^{\mathsf{n}},\mathcal{C}).\] (53) As \(\mathcal{N}(\mathsf{n},\sigma)\) is weakly contractible, we have an extension : \[\mathcal{N}(\mathsf{n},\sigma)^{\triangleright}\xrightarrow{\alpha^{ \prime}(\mathsf{n},\sigma)}\operatorname{Fun}^{\cong}(\Delta^{\mathsf{n}}, \mathcal{C}).\] (54) We then define \(\mathsf{f}(\sigma)=\alpha^{\prime}(\mathsf{n},\sigma)(z)=\lim\alpha(\mathsf{n},\sigma)\) where \(z\) is the cone point. If there exists \(\sigma^{\prime}\in\mathsf{K}^{\prime}_{\mathsf{n}}\) mapping to \(\sigma\), then the second condition of the theorem says that \(\mathsf{f}^{\prime}(\sigma^{\prime})\cong\mathsf{f}(\sigma)\) in \(\operatorname{Fun}(\Delta^{\mathsf{n}},\mathcal{C})\). ### Relating the main theorem to 2-categorical compactifications. In this subsection, we see how Theorem 4.1.1 reproves Deligne's compactfications with some assumptions. We are working in the context of derived categories of \(\ell\)-adic sheaves and assume that we have \((-)_{\#}\) for open immersions and \((-)_{*}\) for proper morphisms. Let \(\operatorname{Sch}^{\mathrm{comp}}\) be the 2-category from Definition 1.0.1. Let \(\mathsf{F}_{\mathsf{c}}\) be the morphism : \[\operatorname{Sch}^{\mathrm{comp}}\to\operatorname{Cat}_{\mathsf{ I}} \tag{55}\] with the following properties: 1. Let \(\mathsf{X}\in\operatorname{Sch}^{\mathrm{comp}}\), then \(\mathsf{F}_{\mathsf{c}}(\mathsf{X})=\mathcal{D}(\mathsf{X},\Lambda)\) where \(\mathcal{D}(\mathsf{X},\Lambda)\) is the derived category of constructible \(\ell\)-adic sheaves with values in coefficient ring \(\Lambda\). 2. For a compactification \((\mathsf{j},\mathsf{p})\), we have \(\mathsf{F}_{\mathsf{c}}(\mathsf{j},\mathsf{p})=\mathsf{p}_{*}\mathsf{j}_{\#}\). 3. Let \(\mathsf{g}:(\mathsf{j},\mathsf{p})\to(\mathsf{j}^{\prime},\mathsf{p}^{\prime})\) be a 2-morphism between two compactifications, then we have \(\mathsf{F}_{\mathsf{c}}(\mathsf{g})\) is an natural transformation which is an isomorphism. This is often called the _support property_. We have the usual projection morphism \[\operatorname{pr}:\operatorname{Sch}^{\mathrm{comp}}\to\operatorname{Sch}^{ \prime}. \tag{56}\] **Corollary 4.2.1**.: _Let \(\mathsf{F}_{\mathsf{c}}\) be the morphism with properties mentioned above. Then \(\mathsf{F}_{\mathsf{c}}\) extends to a functor \(\mathsf{f}_{!}:\operatorname{Sch}^{\prime}\to\operatorname{Cat}_{\mathsf{ I}}\) such that the following diagram commutes_ Proof.: Taking Duskin Nerves on the categories we get the functor : \[\mathsf{N}^{\mathsf{D}}_{\bullet}(\mathsf{F}_{\mathsf{c}}):\mathsf{N}^{\mathsf{D}}_ {\bullet}(\operatorname{Sch}^{\mathrm{comp}})\to\mathsf{N}^{\mathsf{D}}_{ \bullet}(\operatorname{Cat}_{1})\subset\operatorname{Cat}_{\infty}. \tag{57}\] Let \(\mathsf{K}=\mathsf{N}^{\mathsf{D}}_{\bullet}(\operatorname{Sch}^{\prime})= \mathsf{N}(\operatorname{Sch}^{\prime})\). Define : \[\mathcal{N}:(\Delta_{/\mathsf{K}})^{\mathrm{op}}\to\operatorname{Set}_{\Delta} \ \ (\mathsf{n},\sigma)\to\operatorname{Komp}(\sigma)) \tag{58}\] where \(\operatorname{Komp}(\sigma)=\mathsf{N}(\operatorname{pr})^{-1}(\sigma)\). 
Note that \(\operatorname{Komp}(\sigma)\) is the nerve of the category \(\operatorname{Comp}(\sigma)\) whose objects are chains of \(\mathsf{n}\) composable compactifications over \(\sigma\) and whose morphisms are maps between the compactifications. The natural transformation \(\alpha\) is defined by \[\alpha(\mathsf{n},\sigma):\mathcal{N}(\mathsf{n},\sigma)\to\operatorname{Fun}^{\cong}(\Delta^{\mathsf{n}},\mathsf{N}^{\mathsf{D}}_{\bullet}(\operatorname{Cat}_{1}))\ \ \tau\in\operatorname{Comp}(\sigma)\mapsto\mathsf{F}_{\mathsf{c}}(\tau). \tag{59}\] Note that the image of \(\alpha(\mathsf{n},\sigma)\) lies in the Kan complex because of the third condition on the morphism \(\mathsf{F}_{\mathsf{c}}\) (the support property). We need to check the conditions of Theorem 4.1.1: 1. We need to show that \(\mathsf{N}(\operatorname{Comp}(\sigma))\) is weakly contractible. As filtered categories are weakly contractible, we show that the category \(\operatorname{Comp}(\sigma)\) is filtered. As \(\sigma\) is a chain of \(\mathsf{n}\) composable morphisms in \(\operatorname{Sch}^{\prime}\), it is enough to show that \(\operatorname{Comp}(\mathsf{f})\) is filtered, where \(\sigma\) is the one-simplex \(\mathsf{f}:\mathsf{X}\to\mathsf{Y}\). This boils down to showing that given any two compactifications \((\mathsf{j},\mathsf{p})\) and \((\mathsf{j}^{\prime},\mathsf{p}^{\prime})\), there exists a third compactification \((\mathsf{j}^{\prime\prime},\mathsf{p}^{\prime\prime})\) mapping to both of them. We have the following commutative diagram, where \(\mathsf{W}\) is the fiber product over \(\mathsf{Y}\) of the two compactifications, \(\mathsf{g}:\mathsf{X}\to\mathsf{W}\) is the induced map, and \(\mathsf{r},\mathsf{s}\) are the projections from \(\mathsf{W}\) to the compactifications of \((\mathsf{j},\mathsf{p})\) and \((\mathsf{j}^{\prime},\mathsf{p}^{\prime})\); the morphisms \(\mathsf{r},\mathsf{s}\) are proper. Choose a compactification of \(\mathsf{g}:\mathsf{X}\xrightarrow{\mathsf{j}^{\prime\prime}}\mathsf{Z}^{\prime\prime}\xrightarrow{\mathsf{q}}\mathsf{W}\). Then the diagram \[\mathsf{X}\xrightarrow{\mathsf{j}^{\prime\prime}}\mathsf{Z}^{\prime\prime}\xrightarrow{\mathsf{p}^{\prime\prime}}\mathsf{Y}\] (60) where \(\mathsf{p}^{\prime\prime}=\mathsf{p}\circ\mathsf{r}\circ\mathsf{q}\) is a compactification of \(\mathsf{f}\) which admits morphisms to \((\mathsf{j},\mathsf{p})\) and \((\mathsf{j}^{\prime},\mathsf{p}^{\prime})\) via \(\mathsf{r}\circ\mathsf{q}\) and \(\mathsf{s}\circ\mathsf{q}\) respectively. 2. The second condition follows from the construction of \(\alpha\). We need to define \(\omega\in\Gamma(\mathsf{N}(\operatorname{pr})^{*}\mathcal{N})_{\mathsf{0}}\) such that \(\Gamma(\mathsf{N}(\operatorname{pr})^{*}\alpha)(\omega)=\mathsf{N}^{\mathsf{D}}_{\bullet}(\mathsf{F}_{\mathsf{c}})\). Defining \(\omega\) is equivalent to giving a compatible collection of \(\mathsf{0}\)-simplices of \(\mathcal{N}(\mathsf{N}(\operatorname{pr})(\mathsf{n},\tau))\). The collection \(\tau\in\mathcal{N}(\mathsf{N}(\operatorname{pr})(\mathsf{n},\tau))\) does the job. Applying Theorem 4.1.1, we get an extension \(\mathsf{N}^{\mathsf{D}}_{\bullet}(\mathsf{F}_{!}):\mathsf{N}(\operatorname{Sch}^{\prime})\to\mathsf{N}^{\mathsf{D}}_{\bullet}(\operatorname{Cat}_{1})\). Passing to the homotopy categories, we get the functor \[\mathsf{F}_{!}:\operatorname{Sch}^{\prime}\to\operatorname{Cat}_{1}. \tag{61}\] This completes the proof of the corollary. **Remark 4.2.2**.: If we unravel the definition of the functor \(F_{!}\), we get the following: let \(f:X\to Y\) be a morphism in \(Sch^{\prime}\); then \[F_{!}(f)\cong\varprojlim_{(j,p)\in Comp(f)}p_{*}\,j_{\#}. \tag{62}\] In the setting of \(\infty\)-categories, the proof of the theorem of \(\infty\)-categorical compactification is more technical than the one on the level of \(2\)-categories. 
The proof requires a study of specific combinatorial simplicial sets related to compactifications and pullback squares. All of this will be explained in detail in the next article ([2]), where the notation again follows the work of Liu-Zheng in [8]. ## Appendix A Remarks on absolute pullbacks. **Definition A.0.1**.: Let \(\mathcal{C}\) be a \(1\)-category. A square in \(\mathcal{C}\) is said to be an _absolute pullback (pushout)_ if every functor \(F:\mathcal{C}\to\mathcal{D}\) sends the square to a pullback (pushout) square in \(\mathcal{D}\). **Lemma A.0.2**.: _Let \(\mathcal{C}\) be a \(1\)-category. Suppose we have a diagram in \(\mathcal{C}\) of the form_ (63) _where_ 1. \(s\circ d=id_{A}\)_,_ 2. \(s^{\prime}\circ d^{\prime}=id_{A^{\prime}}\)_,_ 3. \(j\) _is a monomorphism (epimorphism)._ _Then the left (right) square is a pullback (pushout). If \(j\) is a split monomorphism (epimorphism), then the left (right) square is an absolute pullback (pushout)._ Proof.: Let us prove that the left square is a pullback square. Let \(X\) be an object in \(\mathcal{C}\) and suppose we have the following diagram (64) such that \(j\circ p=d^{\prime}\circ q\). The aim is to construct a morphism \(\tau\) such that 1. \(d\circ\tau=p\), 2. \(i\circ\tau=q\). Define \(\tau:=s\circ p\). Let us verify the conditions mentioned above. 1. \(i\circ\tau=q\): \[i\circ\tau=i\circ s\circ p\] \[=s^{\prime}\circ j\circ p\] \[=s^{\prime}\circ d^{\prime}\circ q\] \[=q.\] 2. \(d\circ\tau=p\): as \(j\) is a monomorphism, it is enough to show \(j\circ d\circ\tau=j\circ p\). \[j\circ d\circ\tau=j\circ d\circ s\circ p\] \[=d^{\prime}\circ i\circ s\circ p\] \[=d^{\prime}\circ s^{\prime}\circ j\circ p\] \[=d^{\prime}\circ s^{\prime}\circ d^{\prime}\circ q\] \[=d^{\prime}\circ q\] \[=j\circ p.\] This proves that the left square is a pullback square. In order to prove it is an absolute pullback, let \(F:\mathcal{C}\to\mathcal{D}\) be a functor. In order to prove that \(F\) applied to the left square is still a pullback square, we shall repeat the proof above. Let \(X^{\prime}\) be an object of \(\mathcal{D}\) and suppose we have the following diagram: (65) such that \(F(j)\circ p=F(d^{\prime})\circ q\). We define \(\tau:=F(s)\circ p\). In order to prove that \(F(i)\circ\tau=q\), the argument is the same as explained before. For the second equality, the fact that \(j\) is a split monomorphism implies there exists \(j^{\prime}:B^{\prime}\to B\) such that \(j^{\prime}\circ j=id_{B}\). We get the analogous equality \(F(j)\circ F(d)\circ\tau=F(j)\circ p\). Composing with \(F(j^{\prime})\) on the left, we get that \(F(d)\circ\tau=p\). The following lemma is about the existence of absolute pushouts in the category of simplices. **Lemma A.0.3**.: _Let \(K\) be a simplicial set and let \((n,\sigma)\in\Delta_{/K}\). Let \(s_{i}^{n-1}:(n,\sigma)\to(n-1,\sigma_{i})\) and \(s_{j}^{n-1}:(n,\sigma)\to(n-1,\sigma_{j})\) be two distinct epimorphisms induced by the degeneracy maps. Then the pushout of \(s_{i}^{n-1}\) along \(s_{j}^{n-1}\) exists in \(\Delta_{/K}\). Moreover, it is an absolute pushout._ Proof.: Without loss of generality assume \(i<j\). By the simplicial identities in \(\Delta\) we have the following diagram (66) where all the squares are commutative. Notice that we have the simplicial identities \(s_{i}^{n-1}\circ d_{i}^{n}=id_{(n-1,\sigma_{i})}\) and \(s_{i}^{n-2}\circ d_{i}^{n-1}=id_{(n-2,\sigma_{ij})}\) ([6, Section 3.1]). 
As \(s_{j}^{n-1}\) is a split epimorphism (\(s_{j}^{n-1}\circ d_{j}^{n}=id_{(n-1,\sigma_{j})}\)), applying Lemma A.0.2 gives us that the pushout exists and that the pushout square is an absolute pushout. ## Appendix B Remarks on anodynes. **Lemma B.0.1**.: _Let_ \[\tikzfig{fig:b1} \tag{67}\] _be a commutative diagram in \(\operatorname{Set}_{\Delta}\). If \(f,h\) are anodynes and \(g\) is a monomorphism of simplicial sets, then \(g\) is anodyne._ Proof.: By the two-out-of-three property of weak equivalences, we get that \(g\) is a weak equivalence. As \(g\) is a monomorphism and hence a cofibration, \(g\) is a trivial cofibration, i.e., an anodyne. The following lemma will be used in proving Proposition 3.3.5. **Lemma B.0.2**.: _Let_ \[\tikzfig{fig:b2} \tag{68}\] _be a cube in \(\operatorname{Set}_{\Delta}\) with the following properties:_ 1. \(f_{0},f_{1}\) _and_ \(f_{2}\) _are anodynes._ 2. _The front and back squares are pushout squares._ 3. _The morphism_ \(f_{01}:B_{0}\coprod_{A_{0}}A_{1}\to B_{1}\) _is a monomorphism._ _Then \(f_{3}\) is anodyne._ Proof.: Let \(B_{1}^{\prime}:=A_{1}\coprod_{A_{0}}B_{0}\). Consider the cube: \[\tikzfig{fig:b2} \tag{69}\] where all six squares are pushouts. As anodynes are preserved under pushouts, we get that \(f^{\prime}_{1}\) and \(f^{\prime}_{3}\) are anodyne morphisms. The cube Eq. (69) decomposes Eq. (68) into a bigger diagram: (70) As \(f^{\prime}_{1}\) and \(f_{1}\) are anodynes and \(f_{01}\) is a monomorphism, by Lemma B.0.1, we get that \(f_{01}\) is anodyne. As the squares in the front face of the bigger cube are all pushouts, we get that \(f_{23}\) is anodyne (being the pushout of \(f_{01}\)). Thus \(f_{3}=f_{23}\circ f^{\prime}_{3}\) is anodyne.
2305.06117
Gauss sums and Van der Geer--Van der Vlugt curves
We study Van der Geer--Van der Vlugt curves in a ramification-theoretic view point. We give explicit formulae on L-polynomials of these curves. As a result, we show that these curves are supersingular and give sufficient conditions for these curves to be maximal or minimal.
Daichi Takeuchi, Takahiro Tsushima
2023-05-10T13:08:39Z
http://arxiv.org/abs/2305.06117v1
# Gauss sums and Van der Geer-Van der Vlugt curves ###### Abstract We study Van der Geer-Van der Vlugt curves in a ramification-theoretic view point. We give explicit formulae on \(L\)-polynomials of these curves. As a result, we show that these curves are supersingular and give sufficient conditions for these curves to be maximal or minimal. + Footnote †: 2020 _Mathematics Subject Classification_. Primary: 11G20, 11S15, 14F20; Secondary: 11F85. + Footnote †: 2020 _Mathematics Subject Classification_. Primary: 11G20, 11S15, 14F20; Secondary: 11F85. ## 1 Introduction Let \(q\) be a power of a prime number \(p_{0}\) and \(\mathbb{F}_{q}\) be a finite field with \(q\) elements. Let \(\mathbb{F}\) be an algebraic closure of \(\mathbb{F}_{q}\). Let \(\mathrm{Fr}_{q}\colon\mathbb{F}\to\mathbb{F};\ x\mapsto x^{q^{-1}}\) be the geometric Frobenius automorphism. Let \(\ell\nmid p\) be a prime number. A smooth projective geometrically connected curve \(C\) over \(\mathbb{F}_{q}\) is said to be supersingular if all the eigenvalues of \(\mathrm{Fr}_{q}\) on \(H^{1}(C_{\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\) are \(q^{1/2}\) times roots of unity. By Tate's theorem, \(C\) is supersingular if and only if the Jacobian of \(C\) is isogenous to a product of supersingular elliptic curves over \(\mathbb{F}\). Let \(p\) be a power of \(p_{0}\) and suppose that \(q\) is a power of \(p\). Let \(R(x)=\sum_{i=0}^{e}a_{i}x^{p^{i}}\in\mathbb{F}_{q}[x]\) be an additive polynomial of degree \(p^{e}\). Let \(C_{R}\) be the affine curve over \(\mathbb{F}_{q}\) defined by \(y^{p}-y=xR(x)\) in \(\mathbb{A}_{\mathbb{F}_{q}}^{2}=\operatorname{Spec}\mathbb{F}_{q}[x,y]\). Let \(\overline{C}_{R}\) denote the smooth compactification of \(C_{R}\). We call \(\overline{C}_{R}\) the Van der Geer-Van der Vlugt curve. Assume that \[(p_{0},e)\neq(2,0),\] which guarantees that the genus of \(\overline{C}_{R}\) is positive. In [4], Van der Geer and Van der Vlugt showed that the family \(\{\overline{C}_{R}\}_{R}\) has various interesting properties. Among them, they proved that they are supersingular in the case where \(p\) is a prime number. This is shown by constructing an algorithm to take explicit quotients of \(C_{R}\). The proof is complicated in the case where \(p\) is even. In the case where \(p\) is an odd prime number, a detailed proof of this theorem is given in [2]. This theorem is broadly used in Number theory and Coding theory. In this paper, by using tools from \(\ell\)-adic cohomology theory, we give another method to describe the \(L\)-polynomials of \(\overline{C}_{R}\) which is simple and can be applied regardless of the parity of the characteristic of \(\mathbb{F}_{q}\). We start with observing that \(C_{R}\) admits an action of a certain Heisenberg group. Using this group action, we decompose the cohomology group \(H^{1}(\overline{C}_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\) into the direct sum of \(1\)-dimensional representations of \(\operatorname{Gal}(\mathbb{F}/\mathbb{F}_{q})\). Then, applying Laumon's product formula for epsilon factors ([5]) to each direct summand, we compute the Frobenius eigenvalues in terms of epsilon factors of characters. It is classically known that the epsilon factors of characters are calculated by Gauss sums. As a consequence, we can show that \(\overline{C}_{R}\) is supersingular in the case where \(p\) is a power of a prime number. We describe our results more precisely. 
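Before doing so, we record an elementary numerical illustration (a toy aside, not used in the proofs): the Artin-Schreier fibre of \(y^{p}-y=xR(x)\) over a fixed \(x\in\mathbb{F}_{q}\) contains \(p\) points if \(\operatorname{Tr}_{\mathbb{F}_{q}/\mathbb{F}_{p}}(xR(x))=0\) and is empty otherwise, which makes the affine point count of \(C_{R}\) easy to check by hand. The following minimal Python sketch (the function name is ours) carries this out in the simplest case \(q=p\) prime, where the trace is the identity; extension fields, which are needed to test the maximality and minimality statements below, would require additional field arithmetic that we do not spell out here.

```python
# Elementary point count for the affine curve C_R : y^p - y = x*R(x) over F_p,
# assuming q = p is prime (so the trace map to F_p is the identity).
# The fibre over x has p points iff Tr(x*R(x)) = 0, and is empty otherwise.
def affine_points(p, a):
    """a[i] is the coefficient a_i of x^(p^i) in the additive polynomial R(x)."""
    count = 0
    for x in range(p):
        Rx = sum(ai * pow(x, p ** i, p) for i, ai in enumerate(a)) % p
        if (x * Rx) % p == 0:
            count += p
    return count

# Example: p = 3 and R(x) = x^3, i.e. a = [0, 1]; only the fibre over x = 0 is nonempty.
print(affine_points(3, [0, 1]))  # -> 3
```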
Let \(E_{R}(x):=R(x)^{p^{e}}+\sum_{i=0}^{e}(a_{i}x)^{p^{e-i}}\) and \[H_{R}:=\{(a,b)\in\mathbb{F}^{2}\mid E_{R}(a)=0,\ b^{p}-b=aR(a)\}.\] This set naturally has a group structure and acts on the curve \(C_{R}\). The center \(Z(H_{R})\) equals \(\{0\}\times\mathbb{F}_{p}\). Let \(A_{R}\subset H_{R}\) be a maximal abelian subgroup. Then \(A_{R}\) contains the center \(\mathbb{F}_{p}\). For a finite abelian group \(A\), let \(A^{\vee}:=\operatorname{Hom}(A,\overline{\mathbb{Q}}_{\ell}^{\times})\) denote the character group. For \(\psi\in\mathbb{F}_{p}^{\vee}\setminus\{1\}\), let \[A^{\vee}_{\psi}:=\{\xi\in A^{\vee}_{R}\mid\xi|_{\mathbb{F}_{p}}=\psi\}.\] Then the \(L\)-polynomial \[L_{\overline{C}_{R}/\mathbb{F}_{q}}(T):=\det(1-\operatorname{Fr}_{q}T;H^{1}( \overline{C}_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell}))\] has the following decomposition. **Theorem 1.1**.: We assume that \(A_{R}\subset\mathbb{F}_{q}^{2}\). For each \(\psi\in\mathbb{F}_{p}^{\vee}\setminus\{1\}\) and \(\xi\in A^{\vee}_{\psi}\), there exists a certain number \(\tau_{\xi}\) which is \(q^{1/2}\) times a root of unity such that a formula \[L_{\overline{C}_{R}/\mathbb{F}_{q}}(T)=\prod_{\psi\in\mathbb{F}_{p}^{\vee} \setminus\{1\}}\prod_{\xi\in A^{\vee}_{\psi}}(1-\tau_{\xi}T)\] holds. Consequently, \(\overline{C}_{R}\) is supersingular. We regard \(\tau_{\xi}\) as a Gauss sum attached to \(\xi\). An explicit formula for \(\tau_{\xi}\) in terms of local epsilon factor is given in Proposition 3.1. In Corollary A.9, we give another explicit formula for \(\tau_{\xi}\) without using epsilon factors. Using the Grothendieck trace formula and a mechanism of taking quotients of \(C_{R}\) by abelian subgroups of \(H_{R}\), we deduce this formula. A projective smooth geometrically connected curve \(C\) over \(\mathbb{F}_{q}\) is said to be \(\mathbb{F}_{q^{n}}\)-maximal (resp. \(\mathbb{F}_{q^{n}}\)-minimal) if \(|C(\mathbb{F}_{q^{n}})|=q^{n}+1+2g(C)q^{n/2}\) (resp. \(|C(\mathbb{F}_{q^{n}})|=q^{n}+1-2g(C)q^{n/2}\)), where \(g(C)\) denotes the genus of \(C\). In other words, \(C\) is \(\mathbb{F}_{q^{n}}\)-maximal (resp. \(\mathbb{F}_{q^{n}}\)-minimal) if and only if \(\operatorname{Fr}_{q^{n}}\) acts on \(H^{1}(C_{\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\) as scalar multiplication by \(-q^{n/2}\) (resp. \(q^{n/2}\)). Maximal curves are important in Coding theory. By evaluating the Gauss sums \(\{\tau_{\xi}\}_{\xi}\), Theorem 1.1 implies the following. **Theorem 1.2**.: We assume that \(A_{R}\subset\mathbb{F}_{q}^{2}\). 1. The curve \(\overline{C}_{R}\) is \(\mathbb{F}_{q^{4p_{0}}}\)-minimal. 2. Assume that \(f\) is odd and \(p_{0}\not\equiv 1\pmod{4}\). Then \(\overline{C}_{R}\) is \(\mathbb{F}_{q^{2p_{0}}}\)-maximal. 3. Assume that \(p_{0}=2\) and \(H_{R}\subset\mathbb{F}_{q}^{2}\). Then \(f\) is even and \(\overline{C}_{R}\) is \(\mathbb{F}_{q^{2}}\)-minimal. This work was supported by JSPS KAKENHI Grant Numbers 20K03529/21H00973 and by RIKEN Special Postdoctoral Researcher Program. ## 2 Van der Geer-Van der Vlugt curves The curve \(C_{R}\) admits a large automorphism group containing a Heisenberg group. We recall this fact briefly. Let \[E_{R}(x): =R(x)^{p^{e}}+\sum_{i=0}^{e}(a_{i}x)^{p^{e-i}},\] \[f_{R}(x,y): =-\sum_{i=0}^{e-1}\left(\sum_{j=0}^{e-i-1}(a_{i}x^{p^{i}}y)^{p^{j} }+(xR(y))^{p^{i}}\right)\in\mathbb{F}_{q}[x,y].\] We easily check that \[f_{R}(x,y)^{p}-f_{R}(x,y)=-x^{p^{e}}E_{R}(y)+xR(y)+yR(x). 
\tag{2.1}\] Let \(V_{R}:=\{x\in\mathbb{F}\mid E_{R}(x)=0\}\) and \(H_{R}:=\{(a,b)\in V_{R}\times\mathbb{F}\mid b^{p}-b=aR(a)\}\) be the group whose group law is defined by \[(a,b)\cdot(a^{\prime},b^{\prime})=(a+a^{\prime},b+b^{\prime}+f_{R}(a,a^{ \prime})).\] We recall some basic properties of \(H_{R}\). **Lemma 2.1**.: ([7, Lemma 2.6])__ 1. _The center_ \(Z(H_{R})\) _equals_ \(\{0\}\times\mathbb{F}_{p}\)_._ 2. _The quotient_ \(H_{R}/Z(H_{R})\) _is isomorphic to_ \(V_{R}\) _via_ \((a,b)\mapsto a\)_._ 3. _The mapping_ \(H_{R}\times H_{R}\to Z(H_{R});\ (x,y)\mapsto xyx^{-1}y^{-1}\) _induces a non-degenerate symplectic pairing_ \(\omega_{R}\colon V_{R}\times V_{R}\to\mathbb{F}_{p};\ (a,a^{\prime})\mapsto f_{R}(a,a^{ \prime})-f_{R}(a^{\prime},a)\)_._ Proof.: The assertions (1) and (2) are proved in [7, Lemma 2.6(1)]. The assertion (3) is a consequence of [7, Lemma 2.4] and [7, Lemma 2.6(2)]. **Lemma 2.2**.: Let \(h\in H_{R}\). The order of \(h\) divides \(p_{0}\) if \(p_{0}\) is odd and \(4\) if \(p_{0}=2\). Proof.: We write \(h=(a,b)\). Then \(h^{i}=(ia,ib+\binom{i}{2}f_{R}(a,a))\) for an integer \(i\geq 2\). Hence the claim follows. The element \(H_{R}\ni(a,b)\) acts on \(C_{R,\mathbb{F}}\ni(x,y)\) by \[(x,y)\cdot(a,b)=(x+a,y+f_{R}(x,a)+b). \tag{2.2}\] Let \(A_{R}\subset H_{R}\) be a maximal abelian subgroup: note that such subgroups are in \(1\) to \(1\) correspondence with those subgroups of \(V_{R}\) which are maximally totally isotropic with respect to \(\omega_{R}\). From now on, we always assume that \[(p_{0},e)\neq(2,0),\quad A_{R}\subset\mathbb{F}_{q}^{2}. \tag{2.3}\] There exists an additive polynomial \(F_{R}(x)\in\mathbb{F}_{q}[x]\) such that \(F_{R}\mid E_{R}\) and \(A_{R}\) is the inverse image of \(\{x\in\mathbb{F}\mid F_{R}(x)=0\}\subset V_{R}\) by \(H_{R}\to V_{R}\). On the latter condition in (2.3), we remark the following. **Lemma 2.3**.: Assume that \(p\neq 2\). The condition \(\{x\in\mathbb{F}\mid F_{R}(x)=0\}\subset\mathbb{F}_{q}\) implies that \(A_{R}\subset\mathbb{F}_{q}^{2}\). Proof.: Let \(a\in\mathbb{F}_{q}\) be an element satisfying \(F_{R}(a)=0\). Then \(2^{-1}f_{R}(a,a)\in\mathbb{F}_{q}\) and \((2^{-1}f_{R}(a,a))^{p}-2^{-1}f_{R}(a,a)=aR(a)\). Hence the claim follows. In the characteristic two case, the claim in the above lemma does not hold in general (cf. [4, Proposition (3.1)]). We consider the finite Galois etale morphism \[\phi\colon C_{R}\to\mathbb{A}_{\mathbb{F}_{q}}^{1};\ (x,y)\mapsto F_{R}(x),\] whose Galois group is \(A_{R}\). Let \(\psi\in\mathbb{F}_{p}^{\vee}\setminus\{1\}\) and \(A_{\psi}^{\vee}:=\{\xi\in A_{R}^{\vee}\mid\xi|_{\mathbb{F}_{p}}=\psi\}\). For \(\xi\in A_{\psi}^{\vee}\), let \(\mathscr{Q}_{\xi}\) denote the smooth sheaf on \(\mathbb{A}_{\mathbb{F}_{q}}^{1}\) defined by \(\xi\) and \(\phi\). Let \(\mathscr{L}_{\psi}(s)\) denote the Artin-Schreier sheaf on \(\mathbb{A}_{\mathbb{F}_{q}}^{1}=\operatorname{Spec}\mathbb{F}_{q}[s]\) defined by \(a^{p}-a=s\) and \(\psi\in\mathbb{F}_{p}^{\vee}\). For a morphism of schemes \(f\colon X\to\mathbb{A}_{\mathbb{F}_{q}}^{1}\), let \(\mathscr{L}_{\psi}(f)\) denote the pull-back of \(\mathscr{L}_{\psi}(s)\) by \(f\). We write \(\mathbb{A}^{1}\) for the affine line over \(\mathbb{F}\). **Lemma 2.4**.: 1. 
We have isomorphisms \[H_{\mathrm{c}}^{1}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell}) \simeq\bigoplus_{\psi\in\mathbb{F}_{p}^{\vee}\setminus\{1\}}H_{ \mathrm{c}}^{1}(\mathbb{A}^{1},\mathscr{L}_{\psi}(xR(x))),\] \[H_{\mathrm{c}}^{1}(\mathbb{A}^{1},\mathscr{L}_{\psi}(xR(x))) \simeq\bigoplus_{\xi\in A_{\psi}^{\vee}}H_{\mathrm{c}}^{1}( \mathbb{A}^{1},\mathscr{Q}_{\xi})\quad\text{for $\psi\in\mathbb{F}_{p}^{\vee} \setminus\{1\}$}.\] Moreover, we have \(\dim H_{\mathrm{c}}^{i}(\mathbb{A}^{1},\mathscr{Q}_{\xi})=0\) if \(i\neq 1\) and \(=1\) if \(i=1\). We have \(\deg\operatorname{Sw}_{\infty}(\mathscr{Q}_{\xi})=2\). 2. The canonical map \(H_{\mathrm{c}}^{1}(C_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\to H^{1}( \overline{C}_{R,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\) is an isomorphism. Proof.: We show (1). Let \[\phi_{1}\colon C_{R}\to\mathbb{A}_{\mathbb{F}_{q}}^{1};\ (x,y)\mapsto x,\quad \phi_{2}\colon\mathbb{A}_{\mathbb{F}_{q}}^{1}\to\mathbb{A}_{\mathbb{F}_{q}}^{1 };\ x\mapsto F_{R}(x).\] Then \(\phi=\phi_{2}\circ\phi_{1}\). We have \(\phi_{1*}\overline{\mathbb{Q}}_{\ell}\simeq\bigoplus_{\psi\in\mathbb{F}_{p}^{ \vee}}\mathscr{L}_{\psi}(xR(x))\) and \(\phi_{2*}\mathscr{L}_{\psi}(xR(x))\simeq\bigoplus_{\xi\in A_{\psi}^{\vee}} \mathscr{Q}_{\xi}\). Hence the isomorphisms in (1) follow. We easily check that \(H_{\mathrm{c}}^{i}(\mathbb{A}^{1},\mathscr{L}_{\psi}(xR(x)))=0\) for \(i\neq 1\) and \(\dim H_{\mathrm{c}}^{1}(\mathbb{A}^{1},\mathscr{L}_{\psi}(xR(x)))=p^{e}\) by the Grothendieck-Ogg-Shafarevich formula. Hence \(H_{\mathrm{c}}^{i}(\mathbb{A}^{1},\mathscr{Q}_{\xi})=0\) for \(i\neq 1\) and \(\dim H_{\mathrm{c}}^{1}(\mathbb{A}^{1},\mathscr{Q}_{\xi})=1\) by \(|A_{\psi}^{\vee}|=p^{e}\). Again by the Grothendieck-Ogg-Shafarevich formula, the last claim follows. Since the covering \(\phi_{1}\colon C_{R}\to\mathbb{A}_{\mathbb{F}_{q}}^{1}\) is totally ramified, \(\overline{C}_{R}\setminus C_{R}\) consists of one point. Hence (2) follows. We say that a polynomial \(f(x)\in\mathbb{F}_{q}[x]\) is reduced if \(\mathbb{F}_{q}[x]/(f(x))\) is reduced. Let \(\delta(y):=\sum_{i=0}^{d}b_{i}y^{p_{0}^{i}}\in\mathbb{F}_{q}[y]\setminus\{0\}\) be a reduced additive polynomial whose roots are contained in \(\mathbb{F}_{p}\). We can write \(y^{p}-y=\delta(\nu(y))\) with an additive polynomial \(\nu(y)\). Let \(V_{\delta}:=\{y\in\mathbb{F}\mid\delta(y)=0\}\). We have a surjective homomorphism \(\mathbb{F}_{p}\to V_{\delta};\ x\mapsto\nu(y)\). This induces an injection \(V_{\delta}^{\vee}\hookrightarrow\mathbb{F}_{p}^{\vee}\). Let \(C_{\delta}\) denote the smooth affine curve over \(\mathbb{F}_{q}\) defined by \(\delta(y)=xR(x)\). **Corollary 2.5**.: Let \(\tau_{\xi}\) be the eigenvalue of \(\operatorname{Fr}_{q}\) on \(H^{1}_{c}(\mathbb{A}^{1},\mathscr{Q}_{\xi})\). Then we have \[L_{\overline{C}_{\delta}/\mathbb{F}_{q}}(T)=\prod_{\psi\in V_{\delta}^{\vee} \setminus\{1\}}\prod_{\xi\in A_{\psi}^{\vee}}(1-\tau_{\xi}T).\] Proof.: Similarly as Lemma 2.4(1), we have \(H^{1}_{c}(C_{\delta,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\simeq\bigoplus_ {\psi\in V_{\delta}^{\vee}\setminus\{1\}}H^{1}_{c}(\mathbb{A}^{1},\mathscr{L}_ {\psi}(xR(x)))\). We have a sequence of finite morphisms \(C_{R}\to C_{\delta}\to\mathbb{A}^{1}_{\mathbb{F}_{q}}\) whose composite map is given by \(\phi_{1}\colon(x,y)\mapsto x\). This extends to maps between the compactifications \(\overline{C}_{R}\to\overline{C}_{\delta}\to\mathbb{P}^{1}_{\mathbb{F}_{q}}\). 
Since \(\overline{C}_{R}\setminus C_{R}\) consists of one \(\mathbb{F}_{q}\)-rational point, the same holds true for \(C_{\delta}\). Consequently, the natural map \(H^{1}_{c}(C_{\delta,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\to H^{1}( \overline{C}_{\delta,\mathbb{F}},\overline{\mathbb{Q}}_{\ell})\) is an isomorphism. We analyze \(\tau_{\xi}\) in the next section. ## 3 Evaluations of \(\tau_{\xi}\) ### Formulae for \(\tau_{\xi}\) in terms of local epsilon factors For a non-archimedean local field \(F\), let \(\mathcal{O}_{F}\) be its ring of integers and \(\mathfrak{m}_{F}\) denote its maximal ideal. Let \(U^{i}_{F}:=1+\mathfrak{m}^{i}_{F}\) for \(i\geq 1\). Let \(K:=\mathbb{F}_{q}((t))\). Let \(\operatorname{res}\colon\Omega^{1}_{K}\to\mathbb{F}_{q}\) denote the residue map. For a finite extension of fields \(E/F\), let \(\operatorname{Tr}_{E/F}\) denote the trace map from \(E\) to \(F\). We fix \(\psi\in\mathbb{F}^{\vee}_{p}\setminus\{1\}\). Let \(\psi_{\mathbb{F}_{q}}:=\psi\circ\operatorname{Tr}_{\mathbb{F}_{q}/\mathbb{F}_ {p}}\) and \[\Psi\colon K\to\overline{\mathbb{Q}}^{\times}_{\ell};\ x\mapsto\psi_{\mathbb{ F}_{q}}(\operatorname{res}(xdt)).\] We take a separable closure \(\overline{K}\) of \(K\) and \(W_{K}\) denote the Weil group of \(\overline{K}/K\). Let \(j\colon\mathbb{A}^{1}_{\mathbb{F}_{q}}\to\mathbb{P}^{1}_{\mathbb{F}_{q}}\) be the canonical inclusion and let \(x\) be the standard parameter on \(\mathbb{A}^{1}_{\mathbb{F}_{q}}\). We identify the local field of \(\mathbb{P}^{1}_{\mathbb{F}_{q}}\) at \(\infty\) with \(K\) by setting \(t=1/x\). Let \(\xi\in A_{\psi}^{\vee}\) be an extension of \(\psi\). Then \(\mathscr{Q}_{\xi}\) induces a smooth character of \(W_{K}\), which we denote by \(\xi\). By Lemma 2.4(1), we have \(\deg\operatorname{Sw}(\xi)=2\). Let \(c\in\mathfrak{m}_{K}^{-3}\setminus\mathfrak{m}_{K}^{-2}\). We consider the Gauss sum \[\tau(\xi,\Psi):=q^{-1}\sum_{x\in\mathcal{O}_{K}^{\times}/U^{3}_{K}}\xi^{-1}( cx)\Psi(cx).\] Note that the sum is independent of a choice of \(c\). The second assertion of the following proposition is a consequence of Laumon's product formula. **Proposition 3.1**.: 1. We have \(\xi(t)=1\). 2. We have \(\tau_{\xi}=-\tau(\xi,\Psi)\). Proof.: (1) Let \(F:=\mathbb{F}_{q}(x)\) and \(\mathbb{A}^{\times}_{F}\) be the idele group of \(F\). Let \(\tilde{\xi}\) denote the character \(\mathbb{A}^{\times}_{F}/F^{\times}\to\overline{\mathbb{Q}}^{\times}_{\ell}\) corresponding to \(\mathscr{Q}_{\xi}\). By Artin's reciprocity law, we have the following equality \[\tilde{\xi}(x_{0})\xi(1/t)=1,\] where \(x_{0}\) denotes the idele in \(\mathbb{A}^{\times}_{F}\) whose components are given as follows: at the place defined by \(0\in\mathbb{A}^{1}_{\mathbb{F}_{q}}\), it is \(x\). At the other places, they are \(1\). Since the stalk \(\mathscr{Q}_{\xi,0}\) at \(0\) is isomorphic to \(\mathscr{L}_{\psi}(xR(x))_{0},\) on which \(\mathrm{Fr}_{q}\) acts as the identity, we have \(\tilde{\xi}(x_{0})=1\). The assertion follows. (2) By Lemma 2.4(1) and [5, Theoreme (3.2.1.1)], we have \[\det(-\mathrm{Fr}_{q};H^{1}_{c}(\mathbb{A}^{1},\mathscr{Q}_{\xi}))=q\cdot \varepsilon_{\psi}(\mathbb{P}^{1}_{\mathbb{F}_{q},(\infty)},j_{!}\mathscr{Q}_{ \xi},-dx).\] Let \(t=x^{-1},\) which is a uniformizer at \(\infty.\) We have \[\varepsilon_{\psi}(\mathbb{P}^{1}_{\mathbb{F}_{q},(\infty)},j_{! }\mathscr{Q}_{\xi},-dx) =\varepsilon_{\psi}(\mathbb{P}^{1}_{\mathbb{F}_{q},(\infty)},j_{! 
}\mathscr{Q}_{\xi},t^{-2}dt)\] \[=\xi(t)^{-2}q^{-2}\varepsilon_{\psi}(\mathbb{P}^{1}_{\mathbb{F}_ {q},(\infty)},j_{!}\mathscr{Q}_{\xi},dt).\] By [3, (5.8.2)], we have \[\varepsilon_{\psi}(\mathbb{P}^{1}_{\mathbb{F}_{q},(\infty)},j_{!}\mathscr{Q} _{\xi},dt)=q\cdot\tau(\xi,\Psi).\] Then the assertion follows as \(\xi(t)=1\) by (1). ### Evaluation of Gauss sums Let \[G_{\psi}:=\sum_{x\in\mathbb{F}_{q}}\psi_{\mathbb{F}_{q}}(x^{2}).\] A similar result to the following lemma is found in [1, Proposition 8.7(ii)]. **Lemma 3.2**.: Assume that \(p\neq 2\). Let \((\cdot,\cdot)_{K}\) denote the quadratic Hilbert symbol over \(K\). Let \(\gamma\in K^{\times}/U_{K}^{2}\) be the unique element satisfying \[\xi\left(1+x+\frac{x^{2}}{2}\right)=\Psi(\gamma x)\quad\text{for $x\in \mathfrak{m}_{K}$}.\] Then we have \(\tau(\xi,\Psi)=\xi^{-1}(\gamma)\Psi(\gamma)(2\gamma,t)_{K}G_{\psi}.\) Proof.: The sum \(q\tau(\xi,\Psi)=\sum_{x\in\mathcal{O}_{K}^{\times}/U_{K}^{3}}\xi^{-1}(\gamma x )\Psi(\gamma x)\) can be computed as follows: \[\sum_{x\in\mathcal{O}_{K}^{\times}/U_{K}^{3}}\xi^{-1}(\gamma x) \Psi(\gamma x) =\sum_{\zeta\in\mathbb{F}_{q}^{\times},z\in\mathfrak{m}_{K}/ \mathfrak{m}_{K}^{3}}\xi^{-1}(\gamma\zeta(1+z))\Psi(\gamma\zeta(1+z))\] \[=\sum_{\zeta\in\mathbb{F}_{q}^{\times}}\xi^{-1}(\gamma\zeta)\Psi (\gamma\zeta)\sum_{z\in\mathfrak{m}_{K}/\mathfrak{m}_{K}^{3}}\xi^{-1}(1+z)\xi \left(1+\zeta z+\frac{(\zeta z)^{2}}{2}\right).\] The sum \(\sum_{z\in\mathfrak{m}_{K}/\mathfrak{m}_{K}^{3}}\xi^{-1}(1+z)\xi(1+\zeta z+ \frac{(\zeta z)^{2}}{2})\) can be written as \[\sum_{y\in\mathfrak{m}_{K}/\mathfrak{m}_{K}^{2},z\in\mathfrak{m} _{K}^{2}/\mathfrak{m}_{K}^{3}}\xi^{-1}(1+y)\xi^{-1}(1+z)\xi\left(1+\zeta y+ \frac{(\zeta y)^{2}}{2}\right)\xi(1+\zeta z)\] \[=\sum_{y\in\mathfrak{m}_{K}/\mathfrak{m}_{K}^{2}}\xi^{-1}(1+y)\xi \left(1+\zeta y+\frac{(\zeta y)^{2}}{2}\right)\sum_{z\in\mathfrak{m}_{K}^{2}/ \mathfrak{m}_{K}^{3}}\xi(1+(\zeta-1)z).\] The last part \(\sum_{z}\) is zero unless \(\zeta=1\). Therefore, we can compute \[q\tau(\xi,\Psi) =q\xi^{-1}(\gamma)\Psi(\gamma)\sum_{y\in\mathfrak{m}_{K}/\mathfrak{ m}_{K}^{2}}\xi^{-1}(1+y)\xi\left(1+y+\frac{y^{2}}{2}\right)\] \[=q\xi^{-1}(\gamma)\Psi(\gamma)\sum_{y\in\mathfrak{m}_{K}/ \mathfrak{m}_{K}^{2}}\Psi\left(\frac{\gamma y^{2}}{2}\right)=q\xi^{-1}(\gamma )\Psi(\gamma)(2\gamma,t)_{K}G_{\psi}.\] Hence the assertion follows. The following is shown in an elementary way. **Lemma 3.3**.: Let \(\tau\in\mathbb{Z}[i]\). Assume that \(|\tau|^{2}=2^{n}\) with an integer \(n\geq 1\). We have \[\frac{\tau}{2^{n/2}}\in\begin{cases}\left\{e^{\pm\frac{\pi i}{4}},e^{\pm\frac {3\pi i}{4}}\right\}&\text{if $n$ is odd,}\\ \left\{\pm 1,\pm i\right\}&\text{if $n$ is even.}\end{cases}\] Proof.: The ring \(\mathbb{Z}[i]\) has a unique prime ideal \((1+i)\) which lies over the ideal \((2)\) of \(\mathbb{Z}\). Hence we can write \(\tau=\zeta(1+i)^{n}\) where \(\zeta=\pm 1,\pm i.\) Since \(\frac{1+i}{\sqrt{2}}\) is a primitive \(8\)-th root of unity, the assertion follows. Let \(\mu_{n}:=\{x\in\overline{\mathbb{Q}}_{\ell}\mid x^{n}=1\}\) for an integer \(n\geq 1\). **Lemma 3.4**.: Assume that \(p\) is even. We have \(\tau_{\xi}\in\mathbb{Z}\left[\mu_{4}\right]\). Proof.: By Lemma 2.2, the image of \(\xi\) is contained in \(\mu_{4}\). Hence the claim follows from applying Grothendieck's trace formula to \(\tau_{\xi}=\operatorname{Tr}(\operatorname{Fr}_{q};H^{1}_{\mathrm{c}}( \mathbb{A}^{1},\mathscr{Q}_{\xi}))\). **Corollary 3.5**.: We write \(q=p_{0}^{f}\) with a prime number \(p_{0}\). 1. 
We have \(\tau_{\xi}\in\mu_{4p_{0}}q^{1/2}\). 2. Assume that \(f\) is odd and \(p_{0}\not\equiv 1\pmod{4}\). Then we have \(\tau(\xi,\Psi)^{2p_{0}}=-q^{p_{0}}\). Proof.: Assume that \(p_{0}\neq 2\). Lemma 2.2 implies that \(\xi^{-1}(c)\in\mu_{p_{0}}\). Lemma 3.2 implies that \(\tau_{\xi}^{2p_{0}}=(-1)^{f(p_{0}-1)/2}q^{p_{0}}\). Hence the claims \((1)\) and \((2)\) in this case follow. Assume that \(p_{0}=2\). By Proposition 3.1, Lemma 3.3 and Lemma 3.4, we know that \(\tau_{\xi}/q^{1/2}\in\mu_{8}\) and this value is a primitive \(8\)-th root of unity if \(2\nmid f\). ### Conclusion Now, we give a generalization of [4, Theorems (9.4) and (13.7)] (cf. [2, Proposition 8.5]). **Theorem 3.6**.: We write \(q=p_{0}^{f}\) with a prime number \(p_{0}\). 1. The curve \(\overline{C}_{R}\) is \(\mathbb{F}_{q^{4p_{0}}}\)-minimal. In particular, \(\overline{C}_{R}\) is supersingular. 2. Assume that \(f\) is odd and \(p_{0}\not\equiv 1\pmod{4}\). Then \(\overline{C}_{R}\) is \(\mathbb{F}_{q^{2p_{0}}}\)-maximal. Proof.: The claims follow from Lemma 2.4, Proposition 3.1 and Corollary 3.5. **Example 3.7**.: We give an example which fits into the situation in Theorem 3.6(2). Assume that \(q=p\) is a prime number satisfying \(p\not\equiv 1\pmod{4}\). Let \(R(x)=x^{p}-x\). We consider the curve \(\overline{C}_{R}\). We easily check \(\mathbb{F}_{p}\subset V_{R}\). The \(\mathbb{F}_{p}\)-subspace \(\mathbb{F}_{p}\subset V_{R}\) is totally isotropic. Then \(A_{R}=\mathbb{F}_{p}^{2}\). Clearly (2.3) is satisfied. **Corollary 3.8**.: Assume that \(p_{0}=2\) and \(H_{R}\subset\mathbb{F}_{q}^{2}\). Write \(q=p_{0}^{f}\). Then \(f\) is even and \(\overline{C}_{R}\) is \(\mathbb{F}_{q^{2}}\)-minimal. Proof.: Let the notation be as in the proof of Corollary 3.5. By Lemma 3.3, if \(f\) is odd, \(\tau_{\xi}/q^{1/2}\) is a primitive \(8\)-th root of unity. Hence it suffices to show \(\tau_{\xi}/q^{1/2}\in\{\pm 1\}\). We assume the contrary. Let \(\pi_{\psi}\) be the unique irreducible representation of \(H_{R}\) whose central character equals \(\psi\). Then \(\dim\pi_{\psi}=p^{e}\). Hence we have an isomorphism \(H_{\mathrm{c}}^{1}(\mathbb{A}^{1},\mathscr{L}_{\psi}(xR(x)))\simeq\pi_{\psi}\) as \(H_{R}\)-representations. By \(H_{R}\subset\mathbb{F}_{q}^{2}\), Schur's lemma implies that \(\mathrm{Fr}_{q}\) acts as a scalar multiplication on \(H_{\mathrm{c}}^{1}(\mathbb{A}^{1},\mathscr{L}_{\psi}(xR(x)))\): \[a_{\psi}:=\mathrm{Tr}(\mathrm{Fr}_{q};H_{\mathrm{c}}^{1}(\mathbb{A}^{1}, \mathscr{L}_{\psi}(xR(x))))=p^{e}\tau_{\xi}.\] By the assumption, we know that \(a_{\psi}\not\in\mathbb{Z}\). The Grothendieck trace formula implies that \[\mathrm{Tr}(\mathrm{Fr}_{q};H_{\mathrm{c}}^{1}(\mathbb{A}^{1},\mathscr{L}_{ \psi}(xR(x))))=-\sum_{x\in\mathbb{F}_{q}}\psi_{\mathbb{F}_{q}}(xR(x)).\] Since the image of \(\psi\) is \(\{\pm 1\}\), we have \(a_{\psi}\in\mathbb{Z}\), which is a contradiction. Hence the claims follow. **Remark 3.9**.: Assume that \(p_{0}=2\). In the proof of Lemma 3.4, we use only the Grothendieck trace formula. Hence we obtain Corollary 3.5, Theorem 3.6, and Corollary 3.8 without using Laumon's product formula in this case. ## Appendix A Another computation of \(\tau_{\xi}\) In this appendix, without using the local class field theory, we directly compute the exact value of \(\tau_{\xi}\) in the case \(p_{0}\neq 2\). 
### Taking quotients of \(C_{r}\) Let \((a,b)\in H_{R}\) be an element such that \[a\neq 0,\quad b=\frac{f_{R}(a,a)}{2}.\] (A.4) The following lemma is given in [4, Propositions (9.1) and (13.5)], [2, Proposition 7.2] and [7, Lemma 4.9] in the case where \(p\) is prime. This lemma gives an algorithm of taking quotients of \(C_{R}\) by certain abelian subgroups in \(H_{R}\). **Lemma A.1**.: Assume \(e\geq 1\). 1. Let \[\Delta_{0}(x):=-\frac{x}{a}\left(\frac{b}{a}x-f_{R}(x,a)\right)\] \[u:=x^{p}-a^{p-1}x,\quad v:=y-\Delta_{0}(x).\] (A.5) Then there exists an additive polynomial \(R_{1}(u)\in\mathbb{F}[u]\) of degree \(p^{e-1}\) satisfying \(v^{p}-v=uR_{1}(u)\). The leading coefficient of \(R_{1}(u)\) is \[\begin{cases}-\dfrac{a_{e}}{a^{p-1}}&\text{if $e>1$},\\ -\dfrac{a_{e}}{2a^{p-1}}&\text{if $e=1$}.\end{cases}\] 2. Let \(U_{a}:=\{(\xi a,\xi^{2}b)\in H_{R}\mid\xi\in\mathbb{F}_{p}\}\). The quotient \(C_{R}/U_{a}\) is isomorphic to \(C_{R_{1}}\). Proof.: We show the claims in just the same way as [7, Lemma 4.9]. By Lemma A.1(1), \[xR(x)=uR_{1}(u)+\Delta_{0}(x)^{p}-\Delta_{0}(x).\] (A.6) We write \(u(x)\) for \(u\). Let \((a^{\prime},b^{\prime})\in H_{R}\) be an element satisfying (A.4). Assume \(\omega_{R}(a,a^{\prime})=0\). Then \((a,b)\) commutes with \((a^{\prime},b^{\prime})\) by Lemma 2.1(3). Hence the action of \((a^{\prime},b^{\prime})\) induces the automorphism of \(C_{R_{1}}\simeq C_{R}/U_{a}\). **Lemma A.2**.: Let \(\pi(a^{\prime},b^{\prime}):=(u(a^{\prime}),2^{-1}f_{R_{1}}(u(a^{\prime}),u(a ^{\prime})))\in\mathbb{F}^{2}\). 1. We have \[\Delta_{0}(x+a^{\prime})+f_{R_{1}}(u(x),u(a^{\prime}))=\Delta_{0}(x)+\Delta_{0 }(a^{\prime})+f_{R}(x,a^{\prime}).\] (A.7) 2. We have \(\pi(a^{\prime},b^{\prime})\in H_{R_{1}}\). 3. The action of \((a^{\prime},b^{\prime})\) on \(C_{R}\) induces the automorphism \(\pi(a^{\prime},b^{\prime})\) on \(C_{R_{1}}\). Proof.: All the claims are shown in the same way as [7, Lemma 4.10 and (4.10)]. **Corollary A.3**.: Let the notation be as in Lemma A.1. Let \(A\subset V_{R}\) be a totally isotropic subspace of dimension \(d\) with respect to \(\omega_{R}\). Let \(a_{1},\dots,a_{d}\) be a basis of \(A\) over \(\mathbb{F}_{p}\). Assume \(a=a_{d}\). Then \(A^{\prime}:=\sum_{i=1}^{d-1}\mathbb{F}_{p}u(a_{i})\subset V_{R_{1}}\) is a totally isotropic subspace of dimension \(d-1\) with respect to \(\omega_{R_{1}}\). Proof.: Assume \(\sum_{i=1}^{d-1}x_{i}u(a_{i})=0\) with \(x_{i}\in\mathbb{F}_{p}\). Then \(u(\sum_{i=1}^{d-1}x_{i}a_{i})=0\). Hence \(\sum_{i=1}^{d-1}x_{i}a_{i}\in\mathbb{F}_{p}a\). This implies that \(x_{i}=0\) for every \(1\leq i\leq d-1\). Thus \(\dim_{\mathbb{F}_{p}}A^{\prime}=d-1\). Let \(1\leq i\neq j\leq d-1\). From (A.7) and \(\omega_{R}(a_{i},a_{j})=0\), it follows that \(\omega_{R_{1}}(u(a_{i}),u(a_{j}))=0\). Thus the claim follows. Let \(A\subset V_{R}\) be a maximal totally isotropic subspace with respect to \(\omega_{R}\). We identify \(A\) with an abelian subgroup of \(H_{R}\) by the group homomorphism \(A\hookrightarrow H_{R};\ a\mapsto(a,2^{-1}f_{R}(a,a))\). The following is a generalization of [2, Theorem 7.4] to the case where \(p\) is a power of a prime number. **Proposition A.4**.: Let \[c_{A}:=\begin{cases}(-1)^{e}\dfrac{a_{e}}{2}\prod_{\alpha\in A\setminus\{0\}} \alpha^{-1}&\text{if }e\geq 1,\\ a_{0}&\text{if }e=0.\end{cases}\] The quotient \(C_{R}/A\) is isomorphic to the curve defined by \(y^{p}-y=c_{A}x^{2}\). Proof.: We note \(\dim_{\mathbb{F}_{p}}A=e\). We take a basis \(\{a_{1},\dots,a_{e}\}\) of \(A\) over \(\mathbb{F}_{p}\). 
Let \(a:=a_{e}\). Lemma A.1 implies the finite etale morphism \(C_{R}\to C_{R_{1}}\). Let \(a^{\prime}_{i}:=u(a_{i})\in A^{\prime}=\sum_{i=1}^{e-1}\mathbb{F}_{p}a^{\prime }_{i}\subset V_{R_{1}}\), which is totally isotropic with respect to \(\omega_{R_{1}}\) by Corollary A.3. Taking \(a^{\prime}_{e-1}\) as \(a\) and applying Lemma A.1, we obtain a morphism \(C_{R_{1}}\to C_{R_{2}}\). We proceed this process. Thus we obtain a finite etale morphism \(\phi\colon C_{R}\to C_{R_{e}}\) of degree \(p^{e}\). By Lemma A.1(1), the curve \(C_{R_{e}}\) is defined by \(y^{p}-y=c_{A}x^{2}\). By Lemma A.2(3), the morphism \(\phi\) factors through \(C_{R}\to C_{R}/A\xrightarrow{\phi^{\prime}}C_{R_{e}}\). Since \(C_{R}\to C_{R}/A\) has degree \(p^{e}\), \(\phi^{\prime}\) is an isomorphism. **Lemma A.5**.: We write \(F_{R}(x)=\sum_{i=0}^{e}b_{i}x^{p^{i}}\). Then we have \(c_{A}=(-1)^{e+1}(a_{e}b_{e})/(2b_{0})\) if \(e\geq 1\). Proof.: By \(\prod_{\alpha\in A\setminus\{0\}}\alpha=-b_{0}/b_{e}\), the assertion follows from Proposition A.4. ### Value of \(\tau_{\xi}\) Let \(A:=\{x\in\mathbb{F}\mid F_{R}(x)=0\}\), which is a totally isotropic subspace of \(V_{R}\) with respect to \(\omega_{R}\) in Lemma 2.1(3). Assume (2.3). In particular, \[A\subset\mathbb{F}_{q}\cap V_{R}.\] (A.8) Hence there exists an additive polynomial \(a(x)\in\mathbb{F}_{q}[x]\) such that \(x^{q}-x=a(F_{R}(x))\). We write \(q=p^{s}\). For \(t\in\mathbb{F}_{q}\), we take \(x\in\mathbb{F}\) such that \(F_{R}(x)=t\) and let \[b(x,t):=\sum_{i=0}^{s-1}(xR(x))^{p^{i}}-f_{R}(x,x^{q}-x).\] (A.9) **Lemma A.6**.: The value \(b(x,t)\) is independent of the choice of \(x\), for which we write \(b(t)\). Furthermore, \((a(t),b(t))\in A_{R}\). Proof.: First note that \(F_{R}(a(x))=x^{q}-x\) and hence \(a(t)\in A\) by \(t\in\mathbb{F}_{q}\). Let \(y\in A\). Then \(\omega_{R}(a(t),y)=0\), since \(A\) is totally isotropic. We simply write \(\operatorname{Tr}\) for \(\operatorname{Tr}_{\mathbb{F}_{q}/\mathbb{F}_{p}}\). We have \(\operatorname{Tr}(yR(y))=2^{-1}\operatorname{Tr}(2yR(y))=2^{-1}\operatorname {Tr}(f_{R}(y,y)^{p}-f_{R}(y,y))=0\) using (2.1) and \(y\in\mathbb{F}_{q}\). Let \(x^{\prime}:=x+y\). By (A.8), we have \(y\in\mathbb{F}_{q}\cap V_{R}\). By \(F_{R}(x)=t\), \(x^{q}-x=a(t)\). Using (2.1), we compute \[b(x^{\prime},t)-b(x,t) =\sum_{i=0}^{s-1}(yR(x)+xR(y))^{p^{i}}+\operatorname{Tr}(yR(y))-f_ {R}(y,a(t))\] \[=\sum_{i=0}^{s-1}(f_{R}(x,y)^{p}-f_{R}(x,y))^{p^{i}}-f_{R}(y,a(t))\] \[=f_{R}(x^{q},y)-f_{R}(x,y)-f_{R}(y,a(t))=\omega_{R}(a(t),y)=0.\] Hence \(b(x,t)\) is independent of \(x\). Again by (2.1), (A.8) and \(x^{q}-x=a(t)\), we obtain \[b(t)^{p}-b(t)=(xR(x))^{q}-xR(x)-xR(a(t))-a(t)R(x)=a(t)R(a(t)).\] **Lemma A.7**.: Let \(\psi\in\mathbb{F}_{p}^{\vee}\setminus\{1\}\) and \(\xi\in A_{\psi}^{\vee}\). Then we have \[\tau_{\xi}=-\sum_{t\in\mathbb{F}_{q}}\xi(a(t),b(t)).\] Proof.: Let \(t\in\mathbb{F}_{q}\). We take \((x,y)\in C_{R}\) such that \(F_{R}(x)=t\). Recall that \(x^{q}-x=a(t)\). Clearly, \(y^{q}-y=\sum_{i=0}^{s-1}(y^{p}-y)^{p^{i}}=\sum_{i=0}^{s-1}(xR(x))^{p^{i}}\). Hence \(y^{q}=y+f_{R}(x,a(t))+b(t)\). Hence \((x^{q},y^{q})=(x,y)\cdot(a(t),b(t))\) by (2.2). By applying the Grothendieck trace formula to \(\tau_{\xi}=\operatorname{Tr}(\operatorname{Fr}_{q};H_{\mathrm{c}}^{1}( \mathbb{A}^{1},\mathscr{Z}_{\xi}))\), the assertion follows. By Proposition A.4, the curve \(C_{R}/A\) is defined by \(y^{p}-y=c_{A}x^{2}\). 
We consider the quotient morphism \[\pi\colon C_{R}\to C_{R}/A;\ (x,y)\mapsto(F_{R}(x),y-\Delta(x)).\] Then \[xR(x) =c_{A}F_{R}(x)^{2}+\Delta(x)^{p}-\Delta(x),\] (A.10) \[f_{R}(x,a)+\frac{f_{R}(a,a)}{2} =\Delta(x+a)-\Delta(x)\quad\text{for $a\in A$},\] where the second equality follows from \(\pi((x,y)\cdot(a,2^{-1}f_{R}(a,a)))=\pi(x,y)\) and (2.2). **Proposition A.8**.: We have \[b(t)-\frac{f_{R}(a(t),a(t))}{2}=\operatorname{Tr}_{\mathbb{F}_{q}/\mathbb{F} _{p}}(c_{A}t^{2})\] Proof.: Using (A.10), \(\Delta(x)\in\mathbb{F}_{q}[x]\) and \(x^{q}-x=a(t)\), we compute \[b(t)-\frac{f_{R}(a(t),a(t))}{2} =\operatorname{Tr}_{\mathbb{F}_{q}/\mathbb{F}_{p}}(c_{A}t^{2})+ \Delta(x^{q})-\Delta(x)-(\Delta(x+a(t))-\Delta(x))\] \[=\operatorname{Tr}_{\mathbb{F}_{q}/\mathbb{F}_{p}}(c_{A}t^{2}).\] We consider the character \[\xi^{\prime}\colon\mathbb{F}_{q}\to\overline{\mathbb{Q}}_{\ell}^{\times};\ t \mapsto\xi\left(a(t),\frac{f_{R}(a(t),a(t))}{2}\right).\] Let \(\eta\in\mathbb{F}_{q}^{\times}\) be the element such that \(\xi^{\prime}(t)=\psi_{\mathbb{F}_{q}}(\eta t)\) for \(t\in\mathbb{F}_{q}\). **Corollary A.9**.: Let \(\psi\in\mathbb{F}_{p}^{\vee}\setminus\{1\}\) and \(\xi\in A_{\psi}^{\vee}\). Let \(\left(\frac{x}{q}\right)=x^{\frac{q-1}{2}}\) for \(x\in\mathbb{F}_{q}^{\times}\). Then we have \[\tau_{\xi}=-\psi_{\mathbb{F}_{q}}\left(-\frac{\eta^{2}}{4c_{A}}\right)\cdot \left(\frac{c_{A}}{q}\right)G_{\psi}.\] Proof.: Lemma A.7 and Proposition A.8 imply that \[\tau_{\xi}=-\sum_{t\in\mathbb{F}_{q}}\psi_{\mathbb{F}_{q}}(c_{A}t^{2}+\eta t) =-\psi_{\mathbb{F}_{q}}\left(-\frac{\eta^{2}}{4c_{A}}\right)\cdot\left(\frac{c _{A}}{q}\right)G_{\psi}.\]
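The supersingularity statements ultimately rest on the Gauss sums above having absolute value \(q^{1/2}\). For the quadratic Gauss sum \(G_{\psi}=\sum_{x\in\mathbb{F}_{q}}\psi_{\mathbb{F}_{q}}(x^{2})\) entering Lemma 3.2 and Corollary A.9, this is easy to check numerically; the short Python sketch below (an illustrative aside with our own function name, taking \(q=p\) prime and \(\psi(x)=e^{2\pi ix/p}\)) confirms \(|G_{\psi}|^{2}=p\), together with Gauss's classical evaluation \(G_{\psi}=\sqrt{p}\) for \(p\equiv 1\pmod{4}\) and \(G_{\psi}=i\sqrt{p}\) for \(p\equiv 3\pmod{4}\).

```python
# Numerical check that the quadratic Gauss sum has modulus sqrt(p)
# (illustration for q = p prime, psi(x) = exp(2*pi*i*x/p)).
import cmath

def quadratic_gauss_sum(p):
    return sum(cmath.exp(2j * cmath.pi * (x * x % p) / p) for x in range(p))

for p in (3, 5, 7, 11, 13):
    G = quadratic_gauss_sum(p)
    print(p, round(abs(G) ** 2, 6), round(G.real, 6), round(G.imag, 6))
# |G_psi|^2 = p in every case; G_psi = sqrt(p) for p = 1 (mod 4) and i*sqrt(p) for p = 3 (mod 4).
```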
2310.05870
Detecting Multipartite Entanglement Patterns using Single Particle Green's Functions
We present a protocol for detecting multipartite entanglement in itinerant many-body electronic systems using single particle Green's functions. To achieve this, we first establish a connection between the quantum Fisher information (QFI) and single particle Green's functions by constructing a set of witness operators built out of single electron creation and destruction operators in a doubled system. This set of witness operators is indexed by a momenta $k$. We compute the QFI for these witness operators and show that for thermal ensembles it can be expressed as an auto-convolution of the single particle spectral function. We then apply our framework to a one-dimensional fermionic system to showcase its effectiveness in detecting entanglement in itinerant electron models. We observe that the detected entanglement level is sensitive to the wave vector associated with witness operator. Our protocol will permit detecting entanglement in many-body systems using scanning tunneling microscopy and angle-resolved photoemission spectroscopy, two spectroscopies that measure the single particle Green's function. It offers the prospect of the experimental detection of entanglement through spectroscopies beyond the established route of measuring the dynamical spin response.
Rajesh K. Malla, Andreas Weichselbaum, Tzu-Chieh Wei, Robert M. Konik
2023-10-09T17:06:29Z
http://arxiv.org/abs/2310.05870v3
# Detecting Multipartite Entanglement Patterns using Single Particle Green's Functions ###### Abstract We propose a protocol for detecting multipartite entanglement in itinerant many-body electronic systems using the quantum Fisher information (QFI). We establish a connection between the QFI and single-particle Green's functions by identifying a set of non-trivial witness operators. To construct these operators, we employ a doubling of the system wherein we introduce two identical copies of the original model. While the witness operator hops electrons between copies, the copies do not interact with one another. We apply this methodology to a finite-sized fermionic system and showcase its effectiveness in detecting entanglement in spinless itinerant electron models. We show that the detected entanglement level is sensitive to the wave vector associated with the hopping process. We also demonstrate the important role that symmetry has in detecting levels of entanglement. Our protocol paves the way for detecting entanglement in many-body systems using scanning tunneling microscopy and angle-resolved photoemission spectroscopy, thus offering exciting prospects beyond the detection of entanglement via the dynamical spin response accessed in neutron scattering experiments. _Introduction:_ Multipartite entanglement is a fundamental feature of quantum physics that describes the entanglement of more than two subsystems, i.e., particles, or various degrees of freedom [1; 2]. It encodes a higher level of complex correlations as compared to bipartite entanglement and enables applications in quantum metrology [3; 4], quantum communication and teleportation [5; 6; 7], as well as quantum computation [8; 9]. In many-body physics, quantum entanglement plays a central role in understanding the emerging collective phenomena including quantum spin liquidity [10; 11; 12], topological order [13; 14; 15], and quantum criticality [16; 17; 18; 19]. Detecting quantum entanglement beyond bi-partite entanglement [20; 21; 22] is challenging as the complexity and computational requirements increase with the number of subsystems. The quantum Fisher information (QFI), initially proposed as a concept in quantum metrology [23; 24; 25; 26], has been employed to quantify multipartite entanglement [27; 28; 29; 30]. Exploiting experimentally accessible quantities that connect to the QFI has been one of the most promising strategies for detecting multipartite entanglement. In particular, dynamical susceptibilities have recently proposed to be good candidates for quantifying multipartite entanglement [29] and have been used in inelastic neutron scattering experiments on magnetic systems [31; 32; 33; 34]. While multipartite entanglement has been studied in itinerant electronic systems [35; 36; 37; 38], the existing approaches do not connect single electron response functions to multipartite entanglement detection. It is our purpose here to introduce new protocols that address this omission. In this Letter, we present a theoretical protocol for experimentally detecting various _patterns_ of multipartite entanglement in itinerant electronic systems. Our approach utilizes the spectral information obtained from single electron probes such as scanning tunneling microscopy (STM) and angle-resolved photoemission spectroscopy (ARPES) experiments. Our protocol relies on computing the quantum Fisher information (QFI) of a witness operator connected to the single-particle Green's function. 
We show, however, that the naive choice of a witness operator involving a single fermion creation and destruction operator fails to detect the entanglement. To construct the necessary witness operators, we introduce a doubling trick. This involves preparing two copies of the original system where the witness operator hops between the copies, which do not otherwise interact. With this approach, we identify a parameterized set of non-trivial witness operators that can effectively quantify the entanglement. Importantly, the amount of detected entanglement is found to be dependent on the wave vector associated with the hopping. We find that comparing measured QFI values against QFI bounds over a set of wavevectors, \(0\leq k<2\pi\), detects entanglement more efficiently than focusing on any single \(k\). Our proposal thus provides a distinct approach for the experimental detection of multipartite entanglement using readily available tools whenever the single-particle Green's function can be measured and so opens up new avenues for investigating and understanding entanglement in itinerant electronic systems. _Background:_ The QFI of a state \(|\psi\rangle\) relative to an observable \(\mathcal{O}\), denoted by \(F_{Q}(|\psi\rangle,\mathcal{O})\), is a fundamental concept in quantum metrology [23]. It puts a bound on the precision in the quantum measurement of a phase \(\theta\) of a unitary operator, \(U=e^{i\theta\hat{\mathcal{O}}}\), with \(\hat{\mathcal{O}}\) being a Hermitian observable. The bound is known as the quantum Cramér-Rao bound, \((\Delta\theta)^{2}\geq 1/[mF_{Q}(\hat{\mathcal{O}},\rho)]\), where \(m\) is the number of independent measurements used to infer \(\theta\). The QFI for a mixed state \(\rho=\sum_{a}\rho_{a}|a\rangle\langle a|\) is defined by \(F_{Q}(\rho,\hat{\mathcal{O}})=2\sum_{aa^{\prime}}\frac{(\rho_{a}-\rho_{a^{\prime}})^{2}}{\rho_{a}+\rho_{a^{\prime}}}|\langle a|\hat{\mathcal{O}}|a^{\prime}\rangle|^{2}\). For a pure state, this simplifies to the variance \(F_{Q}(\rho,\hat{\mathcal{O}})=4(\langle\psi|\hat{\mathcal{O}}^{2}|\psi\rangle-\langle\psi|\hat{\mathcal{O}}|\psi\rangle^{2})\equiv 4\Delta^{2}\mathcal{O}\). From this, it is apparent that the QFI is tightly related to fluctuations, and thus to the precision with which a quantum state \(\rho\) can estimate the observable \(\hat{\mathcal{O}}\) itself. The value of the QFI for a given state can be related to the amount of multipartite entanglement it possesses [27]. This value is highly sensitive to the choice of witness operator \(\mathcal{O}\) [28; 29; 30]. The general form of the witness operator that we consider here is \(\hat{\mathcal{O}}=\sum_{i}\hat{\mathcal{O}}_{i}\), where \(\hat{\mathcal{O}}_{i}\) denotes a local operator acting on site \(i\) whose spectrum lies within the interval \((h_{i,\text{min}},h_{i,\text{max}})\). _If no symmetries are taken into account_, \((h_{i,\text{min}},h_{i,\text{max}})\) determine the bound that the QFI of a quantum state \(\rho\) must exceed in order for a certain amount of entanglement to be detected. Specifically, \(\rho\) is \((m+1)\)-partite entangled (with \(m\) being an integer) if its QFI satisfies \[F_{Q}(\rho,\hat{\mathcal{O}})>m\sum_{i}(h_{i,\text{max}}-h_{i,\text{min}})^{2}. \tag{1}\] Remarkably, the QFI can be connected to dynamical susceptibilities in many-body physics [29]: \[F_{Q}(|\psi\rangle,\hat{\mathcal{O}})=\frac{2}{\pi}\int_{-\infty}^{\infty}d\omega\tanh\left(\frac{\omega}{2T}\right)\text{ Im }\chi_{ret}(\omega,T). 
\tag{2}\] In this formulation, witness operators of the form, \(\mathcal{O}_{i}=\hat{S}_{i}\), have been connected to measurements of the dynamic spin structure factor [31; 32; 33; 34] to infer that low-dimension spin systems have a certain amount of entanglement. In this letter, our primary objective is to expand the detection of multipartite entanglement to alternative experimental spectroscopies focusing on single electron properties such as measured in STM and ARPES experiments. _Approach:_ While this might appear straightforward, the naive choice, building a witness operator out of single fermion operators, i.e., \(\mathcal{O}_{i}=c_{i}^{\dagger}+c_{i}\), does not work. Here \(h_{i,max/min}=\pm 1\) but because \(\mathcal{O}_{i}\mathcal{O}_{j}+\mathcal{O}_{j}\mathcal{O}_{i}=2\delta_{ij}\), \(F_{Q}(\mathcal{O})\) has \(4N\) as an upper bound and so according to Eq. 1 will always fail to detect any entanglement. Applying symmetries does not help in this case. If we impose a condition that the overall electron number is a good quantum number of a state, the bound given by Eq. 1 is potentially an overestimate, but then \(F_{Q}(|\psi\rangle,\mathcal{O})\) is _identically_ 4N regardless of quantum state \(|\psi\rangle\) and so again cannot discriminate between levels of entanglement. In part, the failure of the naive choice arises because the witness operator is fermionic and so has a trivial anti-commutation relation. To get around this limitation while still maintaining a connection to the single particle response function, we create two identical copies of our system, denoted as \(A\) and \(B\). The copies do not interact with one another. The density matrix of the doubled system, denoted as \(\rho_{D}\), is thus defined as the tensor product of the original density matrix, \(\rho\), for each copy, yielding \(\rho_{D}=\rho_{A}\otimes\rho_{B}=\rho\otimes\rho\). Because the two copies are non-interacting, the Hamiltonian of the doubled system denoted as \(H_{D}\), is expressed as \(H_{D}=H_{A}\otimes I_{B}\oplus I_{A}\otimes H_{B}\), where \(H_{A}=H_{B}=H\) represents the Hamiltonian of the original system, and \(I_{A,B}\) are the identity matrices corresponding to the Hilbert space of the respective copies. Most importantly, the dynamical information of the original system completely determines the dynamics of the doubled system, as the two copies are independent of each other. Within this framework, we propose that operators that hop between the two copies can serve as promising candidates for detecting multipartite entanglement. Such operators, involving fermion bilinears, are bosonic. Specifically, the witness operator can be expressed in the following form for an N-site spinless electron system: \[\hat{\mathcal{O}}(\bar{a})=\sum_{j=1}^{N}a_{j}\hat{c}_{jA}^{\dagger}\hat{c}_{ jB}+\text{H.c.}. \tag{3}\] Here, \(\bar{a}=(a_{1},\dots,a_{N})\) is a vector describing the hopping strength at each site, with \(\hat{c}_{jA/B}^{\dagger}\) and \(\hat{c}_{jA/B}\) the creation and annihilation operators at site \(j\) for copies \(A\) and \(B\). We assume hopping is local, i.e., it occurs between sites \(j\) of copies \(A\) and \(B\). This ensures our QFI satisfies additivity. For witness operators of the form Eq. 3, the quantity \(\text{Im}\chi_{ret}(\omega,T)\) in Eq. 2 relates to a convolution of the single particle spectral densities \(A_{i_{A}j_{A}}(\omega)\) and \(A_{i_{B}j_{B}}(\omega)\) (SM [39], Sec. S1). 
Since both copies are identical, we can remove the indices \(A\) and \(B\), and express the QFI in terms of the spectral density of the original system, \(A_{ij}(\omega)\). If we use this witness operator without imposing any symmetry constraints, we see that in order to detect \((m+1)\)-partite entanglement, the simple bound based on the local min/max eigenvalues requires \(F_{Q}\) to exceed \(4Nm\), as the min/max eigenvalues of the hopping operator are \(\mp 1\). However, it is easy to see numerically that, at least for \(a_{i}\)'s whose modulus is 1, \(F_{Q}(\mathcal{O})\) can never be larger than \(4N\). To turn \(F_{Q}(\mathcal{O})\) into an effective detector of entanglement, we need to go beyond simply doubling the system. We first consider the effects of symmetries upon the entanglement bounds. We consider how three symmetries alter the bounds: i) overall electron number; ii) electron number parity; and iii) time-reversal invariance. We will see that the first two of these symmetries can reduce the bound that the QFI has to exceed to see a certain amount of entanglement. The second alteration to the standard protocol is to determine entanglement bounds by looking not at a single witness operator, but instead at a continuum set of witness operators parameterized by the wavevector \(k\) determining the coefficients \(a_{j}\) in Eq. 3 via \(a_{j}(k)=e^{ikj}\). To see how the symmetries affect the entanglement bounds that the QFI has to exceed in order to detect a certain level of entanglement, we consider a toy example of a two-site system with spinless fermions. Let us consider wavefunctions where we have one electron and the electron number is a good quantum number. There are three possible one-fermion states: \(|\psi_{1}\rangle=c_{1}^{\dagger}|0\rangle\), \(|\psi_{2}\rangle=c_{2}^{\dagger}|0\rangle\), and \(|\psi_{3}\rangle=(u_{1}c_{1}^{\dagger}+u_{2}c_{2}^{\dagger})|0\rangle\), where \(|0\rangle\) is the vacuum state and \(u_{1}\), \(u_{2}\) are coefficients normalized as \(u_{1}^{2}+u_{2}^{2}=1\). The wavefunctions \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) have 1-partite entanglement, i.e., are unentangled, while \(|\psi_{3}\rangle\) has 2-partite entanglement [35]. To detect this entanglement, we examine the QFI corresponding to our witness operator \(\hat{\mathcal{O}}\) of the form Eq. 3. For any choice of \(a_{1}\) and \(a_{2}\), the QFI for the states \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) is zero. However, for \(|\psi_{3}\rangle\), it equals \(F_{Q}=8a_{1}^{2}u_{1}^{2}+8a_{2}^{2}u_{2}^{2}-8(a_{1}u_{1}^{2}+a_{2}u_{2}^{2})^{2}\). Notably, when \(u_{1}=u_{2}=1/\sqrt{2}\) (nominally the maximally entangled state in the one-electron sector), the QFI simplifies to \(F_{Q}(|\psi_{3}\rangle,\hat{\mathcal{O}})=2(a_{1}-a_{2})^{2}\). This analysis reveals that a suitable witness operator for a two-site system requires \(a_{1}\neq a_{2}\). Provided this is so, our witness operator successfully distinguishes between one-partite entanglement (\(F_{Q}=0\)) and two-partite entanglement (\(F_{Q}\neq 0\)) in this toy two-site model. Setting \(a_{1}=1\) and \(a_{2}=0\) yields a one-site witness operator, but one still capable of detecting entanglement between sites, showing that spatial entanglement can be detected in local probe scenarios like scanning tunneling microscopy (STM). Next, we expand our protocol to an N-site system of spinless fermions. 
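The two-site example can also be verified by brute force. The following minimal Python sketch (an illustrative check with our own function names, using a Jordan-Wigner representation of the four doubled modes \(1A,2A,1B,2B\)) builds the doubled state \(|\psi_{3}\rangle_{A}\otimes|\psi_{3}\rangle_{B}\) and the witness operator of Eq. 3, and evaluates \(F_{Q}=4(\langle\hat{\mathcal{O}}^{2}\rangle-\langle\hat{\mathcal{O}}\rangle^{2})\); it reproduces \(F_{Q}=2(a_{1}-a_{2})^{2}\) at \(u_{1}=u_{2}=1/\sqrt{2}\), and varying \(u_{1,2}\) and \(a_{1,2}\) reproduces the general expression quoted above.

```python
# Illustrative check of the two-site toy example in the doubled system
# (modes ordered 1A, 2A, 1B, 2B; Jordan-Wigner construction).
import numpy as np

def jw_ops(nmodes):
    """Return Jordan-Wigner annihilation operators c_0, ..., c_{nmodes-1}."""
    Z = np.diag([1.0, -1.0])                 # parity-string factor
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # |occupied> -> |empty>
    I2 = np.eye(2)
    ops = []
    for j in range(nmodes):
        factors = [Z] * j + [sm] + [I2] * (nmodes - j - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

c = jw_ops(4)                        # modes: 0=1A, 1=2A, 2=1B, 3=2B
cd = [op.conj().T for op in c]

u1 = u2 = 1.0 / np.sqrt(2.0)
a1, a2 = 1.0, 0.0                    # hopping strengths in the witness operator

vac = np.zeros(16); vac[0] = 1.0     # Fock vacuum |0000>
# |psi_3>_A x |psi_3>_B, normalized since u1^2 + u2^2 = 1
psi = (u1 * cd[0] + u2 * cd[1]) @ (u1 * cd[2] + u2 * cd[3]) @ vac

O = a1 * (cd[0] @ c[2] + cd[2] @ c[0]) + a2 * (cd[1] @ c[3] + cd[3] @ c[1])
FQ = 4.0 * (psi @ O @ O @ psi - (psi @ O @ psi) ** 2)
print(FQ, 2.0 * (a1 - a2) ** 2)      # both equal 2 for these parameters
```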
Next, we expand our protocol to an \(N\)-site system of spinless fermions. To quantify the degree of multipartite entanglement in a given quantum state, we first need to find the maximum attainable value of \(F_{Q}(|\psi\rangle,\mathcal{O}(k))\), for wavefunctions defined on the \(N\)-site system, employing our designated \(k\)-dependent witness operators. Although a full analytical determination of this maximal value as a function of \(k\) is unavailable, we can numerically compute it for finite \(N\). Similar to the 2-site case, a state with 1-partite entanglement has an \(F_{Q}\) that is identically zero. Therefore, any finite value of \(F_{Q}\) indicates that the wavefunction has at least two-partite entanglement. We now turn to determine the maximum for states with at least 2-partite entanglement. The expression for the QFI involving the witness operator in Eq. 3 with \(a_{j}(k)=e^{ijk}\) for a pure state \(|\psi\rangle\) with a definite particle number (DPN) of \(N_{e}\) electrons equals \[F_{Q}(|\psi\rangle,\hat{\mathcal{O}}(k))=8N_{e}-8\sum_{i,j=1}^{N}\cos(k(i-j))|\langle c_{i}^{\dagger}c_{j}\rangle|^{2}, \tag{4}\] where \(\langle...\rangle\) denotes the expectation value. We consider more general forms of \(F_{Q}\) where the wavefunction, \(|\psi\rangle\), has an indefinite electron number (IPN) in the supplementary material (SM [39], Sec. S2). This QFI is _additive_ as \(\mathcal{O}(k)\) involves hopping between the same sites \(i\) of copies A and B: if a wavefunction can be represented as the tensor product of \(M\) blocks, \(|\psi(N)\rangle=|\psi(N_{1})\rangle\otimes|\psi(N_{2})\rangle\otimes\cdots\otimes|\psi(N_{M})\rangle\), with \(\sum_{i=1}^{M}N_{i}=N\), the QFI for \(|\psi(N)\rangle\) satisfies \(F_{Q}(|\psi(N)\rangle,\hat{\mathcal{O}}(\bar{a}(k)))=\sum_{i=1}^{M}F_{Q}(|\psi(N_{i})\rangle,\mathcal{O}(\bar{a}(k)))\). While the QFI for this witness operator has \(4N\) as a strict upper bound, we will now show that for wavefunctions with DPN, the bounds for particular entanglement _patterns_ (to be explained) as a function of the wavevector \(k\) are less restrictive. For concreteness, we start with entanglement patterns in an 8-site system with DPN. Five such patterns for half-filling are given in Fig. 1: \(\{8\}\), \(\{6,2\}\), \(\{4,4\}\), \(\{4,2,2\}\), \(\{2,2,2,2\}\). Here the notation \(\{z_{1},z_{2},...\}\) indicates that we have a wavefunction that decomposes into a block with \(z_{1}\)-partite entanglement plus a block with \(z_{2}\)-partite entanglement, and so on. We exclude here patterns involving 1-partite entangled subblocks: \(\{6,1,1\}\), \(\{4,2,1,1\}\), \(\{4,1,1,1,1\}\), \(\{2,2,2,1,1\}\), \(\{2,2,1,1,1,1\}\), and \(\{2,1,1,1,1,1,1\}\). Subblocks with 1-partite entanglement with DPN contribute 0 to the QFI and so are trivial. We have determined numerically the maximal QFI density, \(f_{Q}(k)\equiv F_{Q}(\mathcal{O}(\bar{a}(k)))_{\text{max}}/4N\), for each entanglement pattern and report it in Fig. 1. Figure 1: The maximal QFI for various entanglement patterns is plotted as a function of \(k\) for an 8-site spinless fermionic system at half-filling, i.e., \(N_{e}=4\). The entanglement patterns are shown schematically in the inset. The system is split up into independent blocks where electrons are confined within a block, as visualized by the colored clouds. Figure 2: Shown are the maximal QFIs for a 4-site spinless fermionic system with and without charge conservation. The green curve corresponds to the case with a definite electron number (DPN) at half filling and is the same as the curve for 4-partite entanglement in Fig. 1. The red curve represents an indefinite electron number (IPN), while the two dashed curves represent the maximal QFI with an indefinite electron number but where the electron number parity is either even or odd. In Fig. 1 we see the maximal QFI density for each pattern is a non-trivial function of \(k\) that possesses reflection symmetry about \(k=\pi\). As already stated, the maximal possible value of the QFI is \(4N\) at any \(k\). This maximum is obtained at the commensurate wavevectors \(k=2\pi n/N\), where \(n=1,\cdots,N-1\). At \(k=0,2\pi\) the maximal QFI value is \(2N\), \(1/2\) the maximal possible value. We provide insights as to why these peaks occur at these wave vectors in Sec. S3 of [39]. These maximal values do not depend on the ordering of the subblocks within a pattern. For example, a pattern \(\{4,2,2\}\) has the same maxima as the patterns \(\{2,4,2\}\) or \(\{2,2,4\}\). We also see from Fig. 1 the effects of having a 2-partite subblock on the maximal QFI. At \(k=0,2\pi\), such 2-partite subblocks give 0 contribution to the maximal QFI. However, if a subblock has size \(n>2\), its contribution to the maximal QFI at \(k=0,2\pi\) is \(2n\). Correspondingly, if a pattern has \(n\) 2-partite subblocks its maximal value at \(k=0,2\pi\) is \(4(N/2-n)\). We stress here that in this way of computing the maximal QFI, we are not merely providing the data to determine if there is an entangled subblock of a certain size in the system. If the bound for a particular pattern in Fig. 1 is exceeded for a particular wavefunction, the entire pattern is excluded. So while \(\{4,4\}\) and \(\{4,2,2\}\) both have 4-partite entangled blocks, we can nonetheless, using the bounds of Fig. 1, distinguish between the two. Fig. 1 is constructed for an \(N=8\) site system. However, it has greater applicability to much larger system sizes. If we have a system with \(8K\) sites, \(K>1\), and its block structure is a repetition of one of the patterns in Fig. 1, i.e., \(\{8,\cdots,8\}\) with \(\{8\}\) repeated \(K\) times, then the maximal QFI density for this pattern in the \(8K\)-site system is identical to that of the same pattern in Fig. 1. The replication of a particular 8-site pattern only affects the maximal QFI through a trivial scale factor. This result follows from the additivity of the QFI - see Sec. S4 of [39]. To this point, we have only considered QFI maxima for wavefunctions that have DPN. In Fig. 2 we explore relaxing this condition for an \(N=4\) site system. We consider the maximal QFIs for wavefunctions with an indefinite electron number (IPN) and for wavefunctions that are eigenstates of electron number parity. We see that IPN wavefunctions have a maximal QFI of \(4N\) at the same \(k\)'s as DPN wavefunctions. However, the IPN case has a more complicated structure and additional values of \(k\) where the maximal QFI is itself maximal, including \(k=0\). We can understand the increase in the maximal QFI at \(k=0\) in going from DPN to IPN wavefunctions as a result of superconducting correlations being allowed - see Sec. S2 of [39]. If we examine wavefunctions with IPN but where electron parity is a good quantum number, we see odd parity wavefunctions in general have smaller maximal QFIs than even parity wavefunctions, apart from small regions on either side of \(k/2\pi=1/8,7/8\).
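Equation (4) turns any two-point function \(\langle c_{i}^{\dagger}c_{j}\rangle\) into a QFI curve \(F_{Q}(k)\) that can be compared against the pattern bounds above. The sketch below is a minimal illustration, assuming real hopping amplitudes and using the Slater-determinant correlation matrix of a non-interacting half-filled chain as a convenient stand-in for a DPN wavefunction; the open boundary condition and the \(k\)-grid are choices made here, not taken from the paper.

```python
import numpy as np

def qfi_density(G, ks):
    """Eq. (4): F_Q(k) = 8*N_e - 8*sum_{ij} cos(k*(i-j)) |G_ij|^2, returned as f_Q = F_Q/(4N)."""
    N = G.shape[0]
    Ne = np.real(np.trace(G))
    i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    absG2 = np.abs(G)**2
    return np.array([(8*Ne - 8*np.sum(np.cos(k*(i - j))*absG2))/(4*N) for k in ks])

# Illustration: two-point function of a non-interacting half-filled chain.
N, Ne, t = 8, 4, 1.0
h = np.zeros((N, N))
for n in range(N - 1):
    h[n, n+1] = h[n+1, n] = t          # nearest-neighbour hopping, open boundary for simplicity
e, phi = np.linalg.eigh(h)
occ = phi[:, :Ne]                       # lowest Ne single-particle orbitals
G = occ @ occ.T.conj()                  # G_ij = <c_i^dag c_j> of the Slater determinant
ks = np.linspace(0, 2*np.pi, 9)
for k, f in zip(ks, qfi_density(G, ks)):
    print(f"k/2pi = {k/(2*np.pi):.3f}   f_Q = {f:.3f}")
```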
Beyond questions of imposing electron number as a good quantum number upon the wavefunctions, we have examined whether the wavefunctions satisfying time reversal invariance (TRI), i.e., being real or complex, change the QFI maxima. We have found it does not despite the parameter space of non-TRI wavefunctions being double that of TRI wavefunctions. _Results:_ Having established the maximal QFIs for a variety of different entanglement patterns, we now turn to whether a given wavefunction of an itinerant electron model has a QFI that exceeds some of the bounds and so excludes a particular entanglement pattern. We apply our protocol to eigenstates of the \(t-U\) Hamiltonian for spinless fermions, \[H=t\sum_{i}(c_{i}^{\dagger}c_{i+1}+c_{i+1}^{\dagger}c_{i})+U\sum_{i}\hat{n}_{ i}\hat{n}_{i+1}, \tag{5}\] where \(t\) is the nearest neighbor hopping strength and \(U\) is the nearest-neighbor Coulomb interaction. Figure 3: The QFI of (a) ground state wavefunctions for various interaction strengths and (b) wavefunctions corresponding to higher excited states of a fixed interaction strength \(U=12\), are shown for the half-filled \(t\!-\!U\) Hamiltonian (5) with \(N=8\) sites (plot markers), alongside the maximal QFI curves for various entanglement patterns (solid lines; same as in Fig. 1 for reference). Parameter \(n\) indicates that a wavefunction is chosen from the \(n\)-th excited state manifold with \(n=0\) being the ground state manifold. The wavefunctions are obtained for periodic boundary condition (PBC) and \(t=1\). In Fig. 3, we plot the QFI for a mixture of states, both ground and excited state wavefunctions for different values of \(U\) alongside the QFI maxima obtained in Fig. 1. The QFIs for both ground state wavefunctions and excited state wavefunctions for any interaction strength, including \(U=0\), lie above the maximal QFI for {2,2,2,2} entanglement pattern. The higher entanglement pattern {4,2,2} is excluded for the ground state wave functions with interaction strength \(U=4\), 8, and 16, while the {6,2} pattern is excluded when the interaction strengths are \(U=8\) and 16. The results show that the patterns {4,4}, and {8} are not excluded for any ground state wavefunctions, see Fig. 3a. We also plot the QFI for wavefunctions corresponding to some of the excited states at \(U=12\) in Fig. 3b. We find that the wavefunctions from the second and fourth excited state manifold exceed the maximal QFI for the {4,4} pattern, so leaving only the {8} pattern (i.e., fully entangled) as a possibility. In conclusion, we have developed a protocol by which information contained in measurements of the single particle response function can be converted into entanglement bounds. This entails the analysis of a fictitious doubled system together with the determination of entanglement bounds that are not based on single site eigenvalues of a single local witness operator, but that are tied to a continuum set of witness operators (parameterized by a wavevector \(k\)) and multipartite entanglement patterns. Rather than bounds determining whether a single block of sites is entangled, we find bounds that exclude a particular partition of entanglement across the entire system. While we have focused on spatial entanglement, our method is agnostic as to the degrees of freedom being entangled. We can easily with our approach, for example, work in \(k\)-space, and ask whether entanglement exists across \(k\)-modes. _Acknowledgments.-_ This work was supported by the U.S. 
Department of Energy, Office of Basic Energy Sciences, under Contract No. DE-SC0012704.
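For completeness, here is a self-contained sketch of the kind of exact-diagonalization calculation that underlies this paper's Fig. 3 comparison: the \(t\!-\!U\) chain of Eq. (5) with periodic boundary conditions is diagonalized in the half-filled sector using a brute-force matrix representation of the fermion operators (adequate for \(N=8\)), and the resulting \(\langle c_{i}^{\dagger}c_{j}\rangle\) is fed into Eq. (4). The value \(U=12\) and the \(k\)-grid are illustrative, and degeneracies are broken arbitrarily by the eigensolver, so this is a sketch rather than a reproduction of the published curves.

```python
import numpy as np
from functools import reduce

N, Ne, t, U = 8, 4, 1.0, 12.0          # the model of Eq. (5); PBC, half filling

# Jordan-Wigner-style explicit matrices: c_j = Z x ... x Z x a x 1 x ... x 1
a = np.array([[0., 1.], [0., 0.]])
Z = np.diag([1., -1.])
I2 = np.eye(2)
def site_op(j):
    return reduce(np.kron, [Z]*j + [a] + [I2]*(N - j - 1))
c = [site_op(j) for j in range(N)]
n = [cj.T.conj() @ cj for cj in c]

H = np.zeros((2**N, 2**N))
for i in range(N):
    j = (i + 1) % N                     # periodic boundary condition
    H += t*(c[i].T.conj() @ c[j] + c[j].T.conj() @ c[i]) + U*(n[i] @ n[j])

# restrict to the half-filled sector and take its lowest eigenstate
Ntot = sum(n).diagonal()
idx = np.where(np.isclose(Ntot, Ne))[0]
w, v = np.linalg.eigh(H[np.ix_(idx, idx)])
psi = np.zeros(2**N)
psi[idx] = v[:, 0]

# two-point function and the QFI density of Eq. (4)
G = np.array([[psi @ (c[i].T.conj() @ c[j]) @ psi for j in range(N)] for i in range(N)])
ii, jj = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
for k in np.linspace(0, 2*np.pi, 9):
    fq = (8*Ne - 8*np.sum(np.cos(k*(ii - jj))*np.abs(G)**2))/(4*N)
    print(f"k/2pi = {k/(2*np.pi):.3f}   f_Q = {fq:.3f}")
```

The brute-force operator construction is deliberately chosen over a hand-coded Jordan-Wigner string bookkeeping: the anticommutation relations are exact by construction, so the periodic hopping term needs no special boundary treatment.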
2303.05948
Is it possible to consider a thin hologram as a slide?
A thin phase hologram with sinusoidal modulation of the refractive index is considered. The applicability of the approach is discussed, in which it is assumed that light passes through a hologram, as through a slide, which leads to a sinusoidal modulation of the wave phase. A consequence of this consideration are expressions for the diffraction orders amplitudes, which coincide with the expressions obtained in the Raman-Nath theory of light scattering by ultrasonic waves. The paper compares expressions for the amplitude of the first order of diffraction, obtained in the framework of several theoretical approaches, including perturbation theory in a rigorous formulation of the boundary value problem. As the thickness of the hologram tends to zero, the expression obtained in this way coincides with the expression obtained in the two-wave approximation of the coupled-wave theory. Both expressions satisfy the electrodynamic reciprocity theorem. Under the same conditions, the expression obtained as a result of the slide-like consideration does not satisfy the reciprocity theorem and differs from the indicated expressions by a factor equal to the ratio of the cosines of the diffracted and incident waves. It is concluded that considering a thin hologram as a slide can lead to noticeable errors.
Anatoly M. Smolovich
2023-03-10T14:31:03Z
http://arxiv.org/abs/2303.05948v1
## Abstract ## Abstract A thin phase hologram with sinusoidal modulation of the refractive index is considered. The applicability of the approach is discussed, in which it is assumed that light passes through a hologram, as through a slide, which leads to a sinusoidal modulation of the wave phase. A consequence of this consideration are expressions for the diffraction orders amplitudes, which coincide with the expressions obtained in the Raman-Nath theory of light scattering by ultrasonic waves. The paper compares expressions for the amplitude of the first order of diffraction, obtained in the framework of several theoretical approaches, including perturbation theory in a rigorous formulation of the boundary value problem. As the thickness of the hologram tends to zero, the expression obtained in this way coincides with the expression obtained in the two-wave approximation of the coupled-wave theory. Both expressions satisfy the electrodynamic reciprocity theorem. Under the same conditions, the expression obtained as a result of the slide-like consideration does not satisfy the reciprocity theorem and differs from the indicated expressions by a factor equal to the ratio of the cosines of the diffracted and incident waves. It is concluded that considering a thin hologram as a slide can lead to noticeable errors. thin hologram, wave equation, Raman-Nath diffraction, perturbation theory, coupled-wave theory ## 1 Introduction We are interested in approaches that make it possible to obtain simple approximate analytical expressions for the amplitudes of the diffraction orders of a thin hologram with a sinusoidal modulation of the refractive index. At first glance, it seems that the passage of a light wave through a hologram whose thickness tends to zero can be represented as the multiplication of the wave amplitude by some two-dimensional complex transmission function. In other words, it seems that the process is similar to the passage of a beam of light from a slide projector through a slide. With this consideration, the phase of the transmitted wave acquires a sinusoidal modulation [1]. The field in the half-space behind the grating can then be represented as a superposition of plane waves (diffraction orders) with amplitudes proportional to Bessel functions of the corresponding order [2-6]. The expressions obtained in this manner coincide with the well-known Raman-Nath expressions for the diffraction order amplitudes [7]. The use of the Raman-Nath expressions for thin holograms raises the following questions: a) the Raman-Nath theory derived for the case when the grating period significantly exceeds the wavelength of light, and in holography, these are usually values of the same order; b) the Raman-Nath theory was constructed for volume periodic structures, and for thin holograms, it is assumed that the hologram thickness is negligibly small. In addition, it turned out that the use of the Raman-Nath expressions for the amplitudes of waves diffracted by each layer in the thin-grating decomposition method [2] can lead to an energy imbalance [8]. To understand the above issues, in this study we consider the problem of light diffraction by a hologram of finite thickness with a sinusoidal modulation of the refractive index. A rigorous statement of the boundary-value problem is used, taking into account the transmitted and reflected diffraction orders [9, 10]. The wave equation is solved using perturbation theory [11-14]. 
We compare the expression for the first diffraction order obtained in the first approximation of perturbation theory with the corresponding expression obtained in the simplified consideration, as the hologram thickness (\(d\)) tends to zero. To achieve this, both expressions are expanded in a series in \(d\), keeping only the linear term. Both expressions are analyzed from the perspective of the fulfillment of the electrodynamic reciprocity theorem [15, 12]. For a particular case, the expressions are compared with the results of the two-wave approximation of coupled-wave theory [16]. Their use in the thin-grating decomposition method [2] is also discussed. ## 2 Simplified consideration of the passage of a plane wave through a thin hologram Let a thin hologram be limited by the planes z = 0 and z = d, and let the refractive index of the hologram have a sinusoidal modulation \(n=n_{0}+n_{1}\cos(\frac{2\pi x}{\Lambda})\), where \(\Lambda\) is the grating period (Fig. 1). Let a plane wave with a wave vector \(k\) lying in the \(xz\) plane fall on the hologram from the region \(z{<}0\) (Fig. 1), where \(k=k_{0}n_{0}=\frac{2\pi n_{0}}{\lambda}\), and \(\lambda\) is the wavelength of light. It is usually assumed that a wave passing through a thin phase sinusoidal hologram acquires sinusoidal phase modulation; that is, up to a constant factor, the field in the \(z{=}d\) plane is equal to (Section 8.5 in [1]): \[\exp[i\xi\cos(\frac{2\pi x}{\Lambda})], \tag{1}\] where \(\xi\) is a constant defined below. Next, we use the fact that the function \[\exp[ixu+iz(k^{2}-u^{2})^{1/2}], \tag{2}\] where \(u\) is arbitrary, is an exact solution of the wave equation [17] for \(z{>}d\) [18]. Owing to the linearity of the wave equation, its solution is also the sum \[\sum_{m=-\infty}^{\infty}i^{m}J_{m}(\xi)\exp\Biggl\{ix(k_{x}+\frac{2\pi m}{\Lambda})+iz[k^{2}-(k_{x}+\frac{2\pi m}{\Lambda})^{2}]^{1/2}\Biggr\}, \tag{3}\] where \(J_{m}\) is an \(m\)th-order Bessel function. Expression (3) exactly satisfies the unperturbed wave equation in the region \(z{>}d\), and coincides with (1) in the plane \(z=d\) (Section 7.2.4 [19]). To determine the specific values of the amplitudes of the diffraction orders, \(\xi\) must be related to the hologram parameters. It is generally believed [2-6] that \[\xi=\frac{k_{0}n_{1}d}{\cos\theta}\,, \tag{4}\] where \(\theta\) is the angle between the vector \(\mathbf{k}\) and the normal to the hologram. This expression is obtained if we assume that the incident wave passes through a thin hologram as through a slide, i.e. a plane wave propagates inside the hologram according to the laws of geometric optics, which gives a complete phase shift \(\frac{k_{0}d}{\cos\theta}\bigg[n_{0}+n_{1}\cos(\frac{2\pi x}{\Lambda})\bigg]\). Then from (3, 4) we can obtain the following expression for the amplitude of the \(m\)-th diffraction order: \[E_{m}=(i)^{m}J_{m}\bigg(\frac{2\pi dn_{1}}{\lambda\cos\theta}\bigg). \tag{5}\] Expression (5) coincides with the Raman-Nath expression for the amplitudes of diffraction orders [7]. The use of the Raman-Nath expressions here raises the following questions: a) the Raman-Nath expressions were obtained for the case when the grating period significantly exceeds the wavelength of light, whereas in holography these are usually values of the same order; b) the Raman-Nath theory is constructed for bulk periodic structures, whereas for a thin hologram the thickness is assumed to be negligible.
To answer these questions, the next section uses a rigorous formulation of the boundary value problem, which is solved using perturbation theory. Figure 1: The geometry of a phase grating with an incident plane wave and diffracted waves. The wave vectors are denoted as follows: \(\mathbf{k}\) is the wave vector of the incident wave; \(\mathbf{k}_{-1}\), \(\mathbf{k}_{0}\), \(\mathbf{k}_{1}\) are the wave vectors of the transmitted diffraction orders; \(\mathbf{k}_{-1}^{R}\), \(\mathbf{k}_{0}^{R}\), \(\mathbf{k}_{1}^{R}\) are the wave vectors of the reflected diffraction orders; \(\mathbf{Q}\) is the grating vector. ## 3 Solution of the wave equation in the first approximation of perturbation theory Consider a flat phase hologram of thickness \(d\) with sinusoidal modulation of refractive index \(n\) and grating vector \(Q\) in the \(xz\) plane (Fig. 1). We assume that the \(x\)-axis lies in the hologram plane, and that the \(z\)-axis coincides with the normal of the hologram. In this case, in the region \(0\leq z\leq d\), the refractive index is \[n=n_{0}+n_{1}\cos(Qr), \tag{6}\] where \(r\) denotes the radius vector. Let a linearly polarized plane wave with a wave vector \(k\) lying in the \(xz\) plane and an electric vector having only the \(y\)-component fall on the hologram from the region \(z{<}0\). In this case, the resulting electric field also has only the \(y\)-component \(E_{y}\), and the problem becomes scalar. \(E_{y}\) must satisfy the following wave equation [17]: \[\nabla^{2}E_{y}+n^{2}k_{0}^{2}E_{y}=0\,. \tag{7}\] In (7), the time factor exp(-\(i\omega t\)) is omitted. The local refractive index of the medium \(n\) in (7) is described by (6) within the hologram (\(0\leq z\leq d\)) and \(n=n_{0}\) outside (\(z{<}0\), \(z{>}d\)). The continuity conditions of the tangential components of the vectors of the electric and magnetic fields were used as boundary conditions at \(z{=}0\) and \(z=d\). It can be obtained from Maxwell's equations [17] that the tangential component of the magnetic field, \(H_{x}\), is proportional to \(\frac{\partial E_{y}}{\partial z}\). Therefore, in our case, the continuity conditions of \(E_{y}\) and \(\frac{\partial E_{y}}{\partial z}\) are used as the boundary conditions. We will solve this problem using perturbation theory [12], which represents the electric field of the wave in the region \(0\leq z\leq d\) as follows1: Footnote 1: We use expression (8), which is somewhat different from the similar expression in [12]. \[E_{y}=\exp(i\mathbf{kr})[E^{(0)}+E^{(1)}+E^{(2)}+...]. \tag{8}\] According to the usual practice of perturbation theory, it is assumed that \(E^{(0)}\gg E^{(1)}\gg E^{(2)}\gg\cdots\). In this case, \(E^{(0)}\exp(i\mathbf{kr})\) must satisfy the unperturbed (\(n{=}n_{0}\)) equation (7), and \(E^{(0)}\) is a constant assumed to be equal to unity. The equation for the first perturbation theory approximation follows from (6-8): \[\frac{\partial^{2}(E^{(1)})}{\partial x^{2}}+\frac{\partial^{2}(E^{(1)})}{\partial z^{2}}+2i\Bigg[k_{x}\,\frac{\partial(E^{(1)})}{\partial x}+k_{z}\,\frac{\partial(E^{(1)})}{\partial z}\Bigg]=-k_{0}^{2}n_{0}n_{1}\Big[\exp(iQr)+\exp(-iQr)\Big]. \tag{9}\] We will look for \(E^{(1)}(x,z)\) in the form: \[E^{(1)}(x,z)=E_{1}(z)\exp(iQ_{x}x)+E_{-1}(z)\exp(-iQ_{x}x).
\tag{10}\] From (9) and (10), we obtain an ordinary differential equation with constant coefficients for the wave amplitude \(E_{{}_{1}}(z)\) corresponding to the (\(+1\))th diffraction order as follows: \[\frac{d^{2}E_{{}_{1}}}{dz^{{}^{2}}}+2ik_{{}_{z}}\,\frac{dE_{{}_{1}}}{dz}-(Q_{{ }_{x}}^{2}+2k_{{}_{x}}Q_{{}_{x}})E_{{}_{1}}=-k_{{}_{0}}^{2}n_{{}_{0}}n_{{}_{1} }\exp(iQ_{{}_{z}}z), \tag{11}\] A general solution of (11) is: \[E_{1}=E^{(0)}G\exp(iQ_{z}z)+C_{1}\exp[iz(-k_{z}+k_{1z})]+C_{2}\exp[iz(-k_{z}-k_{1z })], \tag{12}\] where \[G=\frac{k_{0}^{2}n_{0}n_{1}}{Q^{2}+2(\mathbf{kQ})}. \tag{13}\] Here, the components of the wave vectors of the diffraction orders are determined by the expressions \[k_{1x}=k_{x}+Q_{x}\,.\] \[k_{1z}=[k^{2}-(k_{x}+Q_{x})^{2}]^{1/2}\,. \tag{14}\] Constants \(C_{1}\) and \(C_{2}\) in (12) must be determined from the boundary conditions. On the plane \(z=d\), the field \(E_{I}\)_(z)_ must be "matched" with the field of a plane wave in a homogeneous medium: \[E_{1}^{T}\exp[i(k_{1x}x+k_{1z}z)], \tag{15}\] corresponding to the 1st transmitted diffraction order. On the plane \(z\)=0, the field \(E_{I}\)_(z)_ must be "matched" with the field of the plane wave: \[E_{1}^{R}\exp[i(k_{1x}x-k_{1z}z)], \tag{16}\] corresponding to the 1st reflected diffraction order. "Matching" here means equating functions and their first derivatives with respect to z. This yields the following system of four equations. \[E_{1}^{R}=GE^{(0)}+C_{1}+C_{2}\,, \tag{17}\] \[-ik_{1z}E_{1}^{R}=i(k_{z}+Q_{z})GE^{(0)}+ik_{1z}C_{1}-ik_{1z}C_{2}\,,\] (18) \[E_{1}^{T}\exp(ik_{1z}d)=GE^{(0)}\exp[i(k_{z}+Q_{z})d]+C_{1}\exp( ik_{1z}d)+C_{2}\exp(-ik_{1z}d),\] (19) \[ik_{1z}E_{1}^{T}\exp(ik_{1z}d)=i(k_{z}+Q_{z})GE^{(0)}\exp[i(k_{z}+ Q_{z})d]+ik_{1z}C_{1}\exp(ik_{1z}d)-ik_{1z}C_{2}\exp(-ik_{1z}d), \tag{20}\] From the system of equations (17-20) constants \(C_{1}\), \(C_{2}\), \(E_{1}^{T}\), and \(E_{1}^{R}\) are found. In particular, the amplitude of the 1st transmitting diffraction order is equal to \[E_{1}^{T}=\frac{G(k_{1z}+k_{z}+Q_{z})\left\{\exp[i(-k_{1z}+k_{z}+Q_{z})d)]-1 \right\}}{2k_{1z}}. \tag{21}\] Comparison of expressions for the amplitude of the first diffraction order obtained using several theoretical approaches Let us compare, at small \(d\), the amplitudes of the 1st diffraction order, calculated by expression (5) at \(m\)=1, and by expression (21), obtained on the basis of perturbation theory. To do this, we expand both expressions to a series in \(d\), restricting ourselves to a linear term, denoting it as \({}^{RN}E_{1}^{T}\) and \({}^{PT}E_{1}^{T}\), respectively. \[{}^{RN}E_{1}^{T}= \frac{ik_{0}n_{\rm i}d}{2\cos\theta}\,, \tag{22}\] \[{}^{PT}E_{1}^{T}= \frac{ik_{0}n_{\rm i}d}{2\cos\theta_{1}}\,, \tag{23}\] where \(\cos\theta_{1}\) is the cosine of the slope of the wave vector of the 1st diffraction order to the \(z\)-axis. The denominator of (22) contains \(\cos\theta\), and the denominator (23) contains \(\cos\theta_{1}\). We now compare (22) and (23) with the expression obtained by Kogelnik [16] for a transmitting phase hologram under the exact fulfillment of the Bragg condition. It is known that when \(n_{1}\) is small, Kogelnik's formulas are also valid for small \(d\)[20]. Under these conditions, from Equation (42) in [16], we obtain the following expression for the amplitude of the diffracted wave. \[-i\Bigg{(}\frac{\cos\theta}{\cos\theta_{1}}\Bigg{)}^{1/2}\sin\Bigg{[}\frac{ \pi n_{\rm i}d}{\lambda(\cos\theta\cos\theta_{1})^{1/2}}\Bigg{]}. 
\tag{24}\] For a small \(d\), by replacing the sine function in (24) with its argument, we obtain (23) up to a phase factor. Let us check for expressions (22) and (23) the validity of the electrodynamic reciprocity theorem (Section 2.5 [15]), which, as applied to phase gratings, requires that when the wave vectors of the incident and diffracted waves are interchanged, and the sign of the grating vector is simultaneously changed, the diffraction efficiency does not change [12]. The diffraction efficiency \(\eta\) is given by [16] \[\eta=\frac{|\cos\theta_{1}|}{\cos\theta}E_{1}(d)E_{1}^{*}(d)\,. \tag{25}\] For expressions (22) and (23) we obtain, respectively: \[{}^{RN}\eta=\frac{k_{0}^{2}n_{1}^{2}d^{2}}{4\cos^{3}\theta}\,, \tag{26}\] \[{}^{PT}\eta=\frac{k_{0}^{2}n_{1}^{2}d^{2}}{4\cos\theta\cos\theta_{1}}\,. \tag{27}\] Obviously, expression (27) will not change when the angles \(\theta\) and \(\theta_{1}\) are interchanged, whereas the value of expression (26) will change. Thus, the expression obtained using perturbation theory satisfies the reciprocity theorem2, whereas the expression obtained based on the Raman-Nath expression does not. It is easy to see that, for expression (42) in [16], the reciprocity theorem is satisfied. Footnote 2: It is easy to show that the original expression (21) also satisfies the reciprocity theorem [12]. ## 5 Discussion A comparison of the expressions for the amplitude of the first diffraction order, obtained in the first approximation of the perturbation theory, in the two-wave approximation of the coupled-wave theory, and in the Raman-Nath theory, showed that for a small thickness of the hologram the first two expressions coincide with each other and differ from the third one. In addition, the reciprocity theorem is satisfied for the first two approaches but not for the third. We emphasize that the aforementioned difference in the expressions is preserved for \(d\rightarrow 0\) and \(n_{1}\rightarrow 0\). This difference becomes insignificant only when \(\Lambda/\lambda\) increases, that is, when the condition of applicability of the Raman-Nath expression is satisfied. For the applicability of perturbation theory, this condition is not required, but in this case the amplitudes of the diffracted waves must be significantly lower than the amplitude of the incident wave. This allowed us to conclude that considering a thin hologram as a slide can lead to noticeable errors if the cosines of the incident and diffracted waves differ significantly. This conclusion is important not only for thin holograms but also for volume holograms when using the thin-grating decomposition method. It turns out that when using Expression (22) in the thin-grating decomposition method for the amplitudes of waves diffracted by each layer [2], the values of the amplitudes of the diffraction orders will be overestimated or underestimated, depending on the ratio of the cosines of the corresponding angles, which in both cases will manifest itself as a noticeable energy imbalance. At the same time, when using Expression (23) in the calculations, the energy conservation law is fulfilled quite accurately [8]. Let us point to another interesting consequence of this consideration. Let the object field consist of several plane waves with amplitudes \({}^{j}E\) and wave vectors \(k_{j}\) lying in the \(xz\)-plane at different angles \(\theta_{j}\) to the \(z\)-axis.
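The comparison and the reciprocity check above are easy to reproduce numerically. The sketch below evaluates expressions (22), (23) and (24) together with the efficiency of Eq. (25) for a hypothetical set of parameters (the wavelength, modulation depth, thickness and angles are sample values, not taken from the paper); swapping \(\theta\) and \(\theta_{1}\) leaves the perturbation-theory and Kogelnik efficiencies unchanged but changes the Raman-Nath one.

```python
import numpy as np

lam, n1, d = 0.5, 0.01, 1.0        # wavelength, index modulation, thickness (same length units); sample values
k0 = 2*np.pi/lam

def E1_RN(th, th1):                # small-d Raman-Nath amplitude, Eq. (22)
    return 1j*k0*n1*d/(2*np.cos(th))

def E1_PT(th, th1):                # small-d perturbation-theory amplitude, Eq. (23)
    return 1j*k0*n1*d/(2*np.cos(th1))

def E1_Kogelnik(th, th1):          # two-wave coupled-wave result at Bragg incidence, Eq. (24)
    return -1j*np.sqrt(np.cos(th)/np.cos(th1))*np.sin(np.pi*n1*d/(lam*np.sqrt(np.cos(th)*np.cos(th1))))

def eta(E1, th, th1):              # diffraction efficiency, Eq. (25)
    return abs(np.cos(th1))/np.cos(th)*abs(E1)**2

th, th1 = np.deg2rad(20.0), np.deg2rad(50.0)
for name, f in [("Raman-Nath", E1_RN), ("perturbation", E1_PT), ("Kogelnik", E1_Kogelnik)]:
    print(f"{name:13s}  eta = {eta(f(th, th1), th, th1):.6e}"
          f"   angles swapped = {eta(f(th1, th), th1, th):.6e}")
```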
Then, with linear registration and ideal reconstruction, the components of the reconstructed field are proportional to \(\frac{1}{\cos\theta_{j}}\,{}^{j}E\) according to Expression (23). In other words, the object field restored in the first diffraction order is slightly distorted compared to the original one. This deduction can easily be generalized to the case of a continuous spatial-frequency spectrum. ## 6 Conclusion The simplest consideration of the passage of a plane light wave through a thin hologram, like the passage of a light beam through a slide, leads to expressions for the amplitudes of diffraction orders coinciding with the expressions obtained in the Raman-Nath theory. We compared these expressions with the results obtained using other approaches. We obtained a solution to the problem of light diffraction by a hologram of finite thickness with sinusoidal modulation of the refractive index in the first approximation of perturbation theory. A rigorous formulation of the boundary value problem was used, taking into account the transmitted and reflected diffraction orders. The resulting expression for the first transmitted diffraction order was compared with the similar expression in the Raman-Nath theory as the hologram thickness \(d\) tends to zero. To achieve this, the expressions were expanded in a series in \(d\). The first-order terms of the expansions in \(d\) differ: in the case of perturbation theory the denominator is \(\cos\theta_{1}\), whereas in the case of the Raman-Nath theory it is \(\cos\theta\). The difference between the expressions is preserved at \(d\rightarrow 0\) and \(n_{1}\rightarrow 0\), and disappears only as \(\Lambda/\lambda\) increases. It is shown that the expression obtained in the perturbation theory approximation, in contrast to the expression obtained from the Raman-Nath expression, first, for a small \(d\) coincides with the expression obtained by Kogelnik, and second, satisfies the electrodynamic reciprocity theorem. The use of the expressions obtained using perturbation theory in the thin-grating decomposition method makes it possible to avoid the energy imbalance. It is shown that when reconstructing an object field containing many plane waves with different inclination angles, the ratio of their amplitudes is somewhat distorted compared to the original one. The main conclusion is that considering a thin hologram as a slide can lead to noticeable errors if the cosines of the incident and diffracted waves differ significantly. ### Data Availability statement No data were generated or analyzed in the presented research. ## Funding This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. This research was conducted within the framework of the state task of Kotel'nikov Institute of Radio Engineering and Electronics of the Russian Academy of Sciences. ## Conflicts of interest The author has declared that no competing interests exist. ## Acknowledgment The author thanks P. A. Smolovich and A. P. Orlov for their help with the text editing.
2305.10775
Enhancing Speech Articulation Analysis using a Geometric Transformation of the X-ray Microbeam Dataset
Accurate analysis of speech articulation is crucial for speech analysis. However, X-Y coordinates of articulators strongly depend on the anatomy of the speakers and the variability of pellet placements, and existing methods for mapping anatomical landmarks in the X-ray Microbeam Dataset (XRMB) fail to capture the entire anatomy of the vocal tract. In this paper, we propose a new geometric transformation that improves the accuracy of these measurements. Our transformation maps anatomical landmarks' X-Y coordinates along the midsagittal plane onto six relative measures: Lip Aperture (LA), Lip Protrusion (LP), Tongue Body Constriction Location (TBCL) and Degree (TBCD), and Tongue Tip Constriction Location (TTCL) and Degree (TTCD). Our novel contribution is the extension of the palate trace towards the inferred anterior pharyngeal line, which improves measurements of tongue body constriction.
Ahmed Adel Attia, Mark Tiede, Carol Y. Espy-Wilson
2023-05-18T07:34:17Z
http://arxiv.org/abs/2305.10775v3
# Enhancing Speech Articulation Analysis Using A Geometric Transformation ###### Abstract Accurate analysis of speech articulation is crucial for speech analysis. However, X-Y coordinates of articulators strongly depend on the anatomy of the speakers and variability of pellet placements, and existing methods for mapping anatomical landmarks in the X-ray Microbeam Dataset (XRMB) fail to capture the entire anatomy of the vocal tract. In this paper, we propose a new geometric transformation that improves the accuracy of these measurements. Our transformation maps anatomical landmarks' X-Y coordinates along the midsagittal plane onto six relative measures: Lip Aperture (LA), Lip Protrusion (LP), Tongue Body Construction Location (TBCL), Degree (TBCD), Tongue Tip Construction Location (TTCL), and Degree (TTCD). Our novel contribution is the extension of the palate trace towards the inferred anterior pharyngeal line, which improves measurements of tongue body constriction. Ahmed Adel Attia\({}^{1}\), Mark Tiede \({}^{2}\), Carol Y. Espy-Wilson\({}^{1}\)\({}^{1}\)University of Maryland College Park, MD, \({}^{2}\) Haskins Laboratories, New Haven, CT [email protected], [email protected], [email protected] **Index Terms**: speech inversion, speech analysis, X-ray microbeam, geometric transformation, data engineering ## 1 Introduction Articulatory data provides valuable insights into speech production and analysis. By recording the movement and position of articulators such as the lips, tongue and jaw during speech, researchers can gain a better understanding of how speech sounds are produced. There are several methods for collecting articulatory data, including the University of Wisconsin X-ray Microbeam, Electromagnetic Articulometry (EMA), and real-time Magnetic Resonance Imaging (rt-MRI). However, accurately analyzing articulatory data can be challenging due to variability in speaker anatomy and pellet placement. The positioning of pellets in the X-Y plane is closely linked to the speaker's anatomy, leading to significant variability in pellet positions among speakers for the same sound. Additionally, small differences in pellet placement can result in considerable variation. Speech production involves creating constrictions at various locations along the vocal tract by shaping the vocal tract filter using the articulators. Furthermore, the absolute positions of the articulators depend on the speaker's vocal tract anatomy. Therefore, quantifying vocal tract shape is best achieved by measuring the location and degree of these constrictions, which are relative measures, rather than the absolute X-Y positions of the pellets. These measures are called Tract Variables (TVs). TVs specify the salient features of the vocal tract area function more directly than the pellet trajectories [1]. Geometric transformations can be used to derive Task Dynamics Variables (TVs) from the absolute X-Y pellet positions. These variables, introduced in [2], provide information on the location and degree of constrictions in the vocal tract without requiring knowledge of the absolute positions of the articulators. The TV Lip Aperture (LA), for example, quantifies the degree of constriction at the lips without differentiating the contributions of the jaw, upper lip, or lower lip [3]. In this paper, we focus on the XRMB dataset. [3] describes a geometric transformation to obtain TVs from the XRMB Teltel Trajectories (TVs). 
According to the Task Dynamic model of speech production (TADA) [4], the hard palate and tongue body were approximated as circles using curve fitting techniques. Specifically, the hard palate was approximated as a large circle using curve fitting through the palate trace, and the tongue body was approximated as a smaller circle within the larger circle that approximates the palate. The tongue tip was modeled separately using the segment T2-T1. This modeling approach enabled the conversion of the pellet X-Y positions to six TV trajectories at each time step. These trajectories included LA, Lip Protrusion (LP), Tongue Body Construction Location (TBCL), Tongue Body Constriction Degree (TBCD), Tongue Tip Constriction Location (TTCL), and Tongue Tip Constriction Degree (TTCD). This transformation has been proven to be effective in modeling articulations in the vocal tract and has been used in various publications [5, 6, 7, 8]. Recently, [9] outlined an Autoencoder model to reconstruct 3.28 out of 3.4 hours of corrupted XRMB data. However, they reconstructed the X-Y pallet positions and applying [3]'s transformation to their reconstructed data proved to be difficult as only a high-level description of their transformations is outlined and no formulae or code was available. The transformation model described [3] also has several limitations. One major drawback is that the palatal trace in the XRMB dataset only covers a limited area and does not include the soft palate or extend to the pharyngeal wall. As a result, the model cannot accurately account for the role of these structures in speech production, and cannot model narrowings in the pharyngeal region. Another limitation of the model is that the circular arc used to model the palatal trace may not accurately represent the actual shape of the palate. To overcome these limitations, our paper proposes a new approach that incorporates the soft palate and anterior pharyngeal wall traces into the model. This will improve the accuracy of the tongue body constriction calculation, especially in back vowels. Our approach also differs from the previous model in that it utilizes the palatal trace as is from the XRMB dataset, without the need to fit a circular arc through it. By addressing these limitations, our model will provide a more accurate representation of speech production. We also describe our transformation in more detail so that our work can be reproducible, and be applied to [9]'s reconstructed XRMB dataset. ## 2 Dataset Description The XRMB dataset consists of simultaneously recorded audio and articulatory data. During the recording process, eight gold pellets were placed on specific articulators in each speaker, including the upper lip (UL), lower lip (LL), tongue tip (T1), tongue blade (T2), tongue dorsum (T3), tongue root (T4), mandible incisor (MNI), and parasagitally placed mandible molar (MNM). Speakers were given various tasks, such as reading passages or word lists, while their articulatory movements were tracked and recorded as X-Y coordinates. Data was sampled at different rates. To ensure consistency, all pellet trajectories were resampled at a rate of 145 samples per second. Multi-sentence recordings were segmented into individual sentence recordings. In some cases, articulatory recordings were marked as mistracked in the database, reducing the dataset to 46 speakers (21 males and 25 females) with approximately 4 hours of speech data. 
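As an aside on the preprocessing step just mentioned, resampling the pellet trajectories to 145 samples per second can be sketched as follows; the original sampling rate used here (160 Hz) is only an assumed example, since the rates vary across pellets and recordings.

```python
import numpy as np
from fractions import Fraction
from scipy.signal import resample_poly

def resample_trajectory(xy, fs_in, fs_out=145):
    """Resample a pellet trajectory (n_samples x 2 array of X-Y positions) to fs_out Hz."""
    ratio = Fraction(fs_out, fs_in).limit_denominator(1000)
    return resample_poly(xy, ratio.numerator, ratio.denominator, axis=0)

# Hypothetical example: a 2-second trajectory originally recorded at 160 Hz.
fs_in = 160
t = np.arange(2*fs_in)/fs_in
xy = np.stack([np.cos(2*np.pi*3*t), np.sin(2*np.pi*3*t)], axis=1)  # stand-in pellet motion
print(resample_trajectory(xy, fs_in).shape)   # -> (290, 2), i.e. 145 samples/s over 2 s
```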
## 3 Geometric Transformation In this section, we will provide a comprehensive overview of the proposed method for transforming the XRMB PTs into TVs, providing a detailed explanation of each articulator. ### Lips To describe the process of lip constriction, we use two different TV values: LA and LP. LA represents the distance between the upper lip (UL) and lower lip (LL) pallets in Euclidean space, while LP represents the horizontal offset of the upper lip from the origin. It is important to note that the origin of the X-Y plane is located at the tip of the maxillary incisors, and the x-axis is defined as the maxillary occlusal plane. This means that any measurements or calculations made in relation to lip constriction are made with reference to this specific orientation. \[LA[n]=||UL[n]-LL[n]|| \tag{1}\] \[LP[n]=UL_{x}[n] \tag{2}\] ### Tongue Body The tongue body is represented by a circle that is fitted through the pellets T2, T3, and T4. The degree of constriction in the tongue body is determined based on the proximity of this circle to the pallet trace that is extended towards the anterior pharyngeal wall. This method allows for an accurate representation of tongue constriction during speech production. In the XRMB dataset, posterior pharyngeal wall traces are available for all speakers. By shifting the posterior pharyngeal wall to the right by the thickness of the low retropalatal orophynx, we can infer the trace of the anterior pharyngeal wall. The average low retropalatal oropharyngeal thickness has been estimated to be approximately 0.58 cm for women and 0.56 cm for men [10]. To estimate the position of the soft palate, we extend the line between the two posterior samples in the pallet trace in the XRMB dataset until it intersects with the inferred anterior pharyngeal wall. Figure 3 illustrates the original and extended palatal trace, as well as the anterior and posterior pharyngeal wall traces, providing a visual representation of the methodology used in this process. By using these techniques, we can create precise computer models of speech production that accurately represent the positions of the tongue and soft palate. The tongue body constriction is described by TBCD and TBCL. [11] defines TBCL (and TTCL) as polar angular distance measures for the tongue body and tongue tip constrictors with respect to a reference line originating at the floor of the mouth. In our approach, we follow a similar methodology, but we use the center of a circle that is fitted through the original palatal trace as the reference point instead. It is important to note that this circle is not used in any calculations; only its center is utilized as a reference point for angular calculations. Therefore, TBCD can be calculated as the minimum distance between the extended pallet trace and the tongue body circle while TBCL represents the angle between the center of the palatal circle and the point on the tongue body that is closest to the extended pallet trace. Notice in Figure 2 how TBCD is less noisy than [3]'s TBCD and describes a narrower constriction around 1.3 seconds, which corresponds to the \(\Lambda\) sound in the word "mirage". Our model was more capable of capturing this pharyngeal narrowing in the back vowel. \[TBCD=min_{p\in epal}[min_{x\in TB_{circle}}||p-x||] \tag{3}\] Figure 3: Extended Palletal Trace With the Anterior Pharyngeal Wall For Speaker JW33. Figure 2: Our novel TVs vs [3] TVs. 
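A minimal numerical sketch of Eqs. (1)-(3) and (5) is given below; the pellet coordinates and the extended palate trace are hypothetical stand-ins for a real XRMB frame. The angular TVs of Eqs. (4) and (6), defined next, additionally require the centre of the circle fitted to the original palate trace, which the same three-point circle routine (or a least-squares fit) can supply.

```python
import numpy as np

def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three 2-D points (used for the tongue-body fit)."""
    A = 2*np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(p1 - c)

# Hypothetical pellet positions (cm), origin at the maxillary incisors, x along the occlusal plane.
UL, LL = np.array([1.2, 0.6]), np.array([1.1, -0.4])
T1, T2, T3, T4 = map(np.array, ([0.3, -0.8], [-1.0, -0.5], [-2.4, -0.6], [-3.6, -1.2]))
epal = np.array([[-4.5, -1.5], [-3.5, 0.4], [-2.0, 0.9], [-0.5, 0.6], [0.5, 0.0]])  # extended palate trace (hypothetical)

LA = np.linalg.norm(UL - LL)                                  # Eq. (1)
LP = UL[0]                                                    # Eq. (2)
tb_c, tb_r = circle_through(T2, T3, T4)                       # tongue-body circle
TBCD = np.min(np.linalg.norm(epal - tb_c, axis=1) - tb_r)     # Eq. (3), assuming the palate lies outside the circle
TTCD = np.min(np.linalg.norm(epal - T1, axis=1))              # Eq. (5)
print(f"LA={LA:.2f}  LP={LP:.2f}  TBCD={TBCD:.2f}  TTCD={TTCD:.2f}")
```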
\[TBCL=\tan^{-1}\frac{TB[\operatorname{argmin}\,TBCD]_{x}-pc_{x}}{TB[\operatorname{argmin}\,TBCD]_{y}-pc_{y}}, \tag{4}\] where \(epal\) is the extended palate trace, \(TB[\operatorname{argmin}\,TBCD]\) is the point on the tongue body closest to the palate trace, and \(pc\) is the center of the palatal circle. ### Tongue Tip The tongue tip is represented by the T1 pellet, and its constriction can be modeled by TTCD and TTCL. Similar to the tongue body, TTCD is the minimum distance between T1 and the extended palate trace, while TTCL is the horizontal angle of the segment between the center of the palatal circle and T1. \[TTCD=\min_{p\in epal}[\|p-T1\|] \tag{5}\] \[TTCL=\tan^{-1}\frac{T1_{x}-pc_{x}}{T1_{y}-pc_{y}} \tag{6}\] ## 4 Experiments We conducted experiments to evaluate our novel geometric transformation by training a Speech Inversion (SI) system on two sets of TVs: our own and the ones used in [3]. The Bidirectional Gated Recurrent Neural Network (BiGRNN) SI system from [5] was used, along with Mel-Frequency Cepstral Coefficients (MFCCs) extracted from 2-second audio segments. Shorter segments were zero-padded. The dataset was divided into three sets: training (36 speakers), development (5 speakers), and testing (5 speakers, 3 males and 2 females), with no overlapping speakers to ensure "speaker-independent" training. The split ensured that the training set contained around 80% of the utterances, with the development and testing sets containing nearly equal shares of the remainder. All models were built with the TensorFlow-Keras machine learning framework and trained on an NVIDIA Titan Xp GPU using MAE loss with the Adam optimizer. We also used early stopping with a patience of 10 epochs: training stops once the validation loss has not improved for 10 consecutive epochs, which helps prevent the model from overfitting to the training data and performing poorly on new, unseen data. Both models were evaluated using the Pearson Product Moment Correlation (PPMC) score. Table 1 shows the PPMC scores for models trained with both sets of TVs. The SI system scores 6 points higher on TBCD, showing that the inferred soft palate and anterior pharyngeal wall yield a measure of the tongue body constriction that is more correlated with the data. On average, our TVs score 3 points higher than previous work. ## 5 Conclusions And Future Works We introduce a novel geometric transformation to derive the Tongue Body Constriction Location (TBCL) TVs from X-Y coordinates of articulatory trajectories in the XRMB dataset. Our approach involves incorporating an inferred outline of the soft palate and anterior pharyngeal wall, resulting in a more accurate measurement of tongue body constriction. However, we acknowledge that our current model, which represents the tongue body as a circular arc, has room for improvement and could benefit from further investigation.
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline **TVs** & **LA** & **LP** & **TBCL** & **TBCD** & **TTCL** & **TTCD** & **Average** \\ \hline **Ours** & **0.6587** & **0.6068** & 0.6455 & **0.6259** & **0.5621** & 0.6817 & **0.6301** \\ **[3]** & 0.6239 & 0.5430 & **0.6823** & 0.5616 & 0.5271 & **0.6940** & 0.6054 \\ \hline \end{tabular} \end{table} Table 1: PPMC scores for models trained with our TVs and [3]’s TVs. Figure 1: Transformation of XRMB PTs to TVs. Figure taken from [3]. We observed that our TBCL TV is not continuous, which could explain why the Speech Inversion (SI) system performed better on predicting the TBCL in [3]. In future research, we plan to address these limitations.
2304.07120
Resource Allocation and Passive Beamforming for IRS-assisted URLLC Systems
In this correspondence, we investigate an intelligent reflective surface (IRS) assisted downlink ultra-reliable and low-latency communication (URLLC) system, where an access point (AP) sends short packets to multiple devices with the help of an IRS. Specifically, a performance comparison between the frequency division multiple access (FDMA) and time division multiple access (TDMA) is conducted for the considered system, from the perspective of average age of information (AoI). Aiming to minimize the maximum average AoI among all devices by jointly optimizing the resource allocation and passive beamforming. However, the formulated problem is difficult to solve due to the non-convex objective function and coupled variables. Thus, we propose an alternating optimization based algorithm by dividing the original problem into two sub-problems which can be efficiently solved. Simulation results show that TDMA can achieve lower AoI by exploiting the time-selective passive beamforming of IRS for maximizing the signal to noise ratio (SNR) of each device consecutively. Moreover, it also shows that as the length of information bits becomes sufficiently large as compared to the available bandwidth, the proposed FDMA transmission scheme becomes more favorable instead, due to the more effective utilization of bandwidth.
Yangyi Zhang, Xinrong Guan, Qingqing Wu, Zhi Ji, Yueming Cai
2023-04-14T13:23:59Z
http://arxiv.org/abs/2304.07120v2
# Resource Allocation and Passive Beamforming for IRS-assisted URLLC Systems ###### Abstract In this correspondence, we investigate an intelligent reflective surface (IRS) assisted downlink ultra-reliable and low-latency communication (URLLC) system, where an access point (AP) sends short packets to multiple devices with the help of an IRS. Specifically, a performance comparison between the frequency division multiple access (FDMA) and time division multiple access (TDMA) is conducted for the considered system, from the perspective of average age of information (AoI). Aiming to minimize the maximum average AoI among all devices by jointly optimizing the resource allocation and passive beamforming. However, the formulated problem is difficult to solve due to the non-convex objective function and coupled variables. Thus, we propose an alternating optimization based algorithm by dividing the original problem into two sub-problems which can be efficiently solved. Simulation results show that TDMA can achieve lower AoI by exploiting the time-selective passive beamforming of IRS for maximizing the signal to noise ratio (SNR) of each device consecutively. Moreover, it also shows that as the length of information bits becomes sufficiently large as compared to the available bandwidth, the proposed FDMA transmission scheme becomes more favorable instead, due to the more effective utilization of bandwidth. IRS, URLLC, FDMA, TDMA, AoI. ## I Introduction The strengthened ultra-reliable and low-latency communication (URLLC) proposed by 6G has more stringent requirements for reliability and latency to support services like Internet of Things (IoT) industry automation, self-driving car and telemedicine. Many of them require timely command information, for example, in IoT industry automation, command center will give command information according to its perception of current environment, and devices need to obtain latest command information for completing their works. However, due to the transmission delay, packet error, and other factors, the command information arrives with a lag time [1]. To evaluate the lag time of command information, a new metric named age of information (AoI) is proposed. Specifically, AoI is the interval from the generation time of command information to the current moment, which reflects the freshness of the current command information and thus the effectiveness of the operation [2]. To reduce the transmission time, command information are usually encoded into short packets at the transmitter, which however renders increased packet error rate (PER) at the receiver due to the finite blocklength. To tackle the above challenge, [3] proposed to encode messages of different users into an enlarged length packet, [4] introduced the automatic repeat request (ARQ) into short packet communication system, while [5] proposed to exploit collaborative relay technology to increase the signal to noise ratio (SNR) via best relay selection. However, the combination of user messages [3] and retransmission of messages [4] both increase the transmission delay, while the scheme based on collaborative relay [5] renders additional power and hardware cost. Recently, intelligent reflective surface (IRS) has attracted wide attention due to its potential of smartly reconfiguring the wireless environment and thus achieving high spectrum-efficiency and energy-efficiency [6]. Specifically, IRS is a uniform planar metasurface composed of a large number of passive reflecting elements. 
By dynamically adjusting the reflection amplitude and/or phase of each element, IRS can achieve signal enhancement and interference suppression cost-effectively [7, 8, 9]. Thanks to such capability, IRS can also be applied to improve reliability and data rate of short packet communication systems without causing additional latency and at low cost [10, 11]. However, the above works only focus on IRS-aided single user case, and cannot be straightforwardly extended to the IRS-aided multiuser URLLC systems, wherein the multiple access problem becomes extremely important but still remains unsolved. Motivated by the above, we conduct a performance comparison between the IRS assisted frequency division multiple access (FDMA) and IRS assisted time division multiple access (TDMA) for the multiuser URLLC system. Specifically, aiming to minimize the maximum average AoI, the resource allocation and passive beamforming are jointly optimized for both schemes. Note that this work is different from that in [12], which was based on the conventional infinite blocklength assumption, only considered a two-user case and simply adopted an equal time/bandwidth allocation strategy. Simulation results show that the proposed TDMA design can achieve better AoI performance than the FDMA design via dynamically tuning the passive beamforming for maximizing the SNR of each device consecutively. However, it also shows that as the available bandwidth decreases or the length of information bits increases, FDMA may outperform TDMA because in this case the bandwidth becomes the bottleneck and thus optimizing bandwidth allocation is more effective. ## II System model and problem formulation ### _Signal Model_ As shown in Fig. 1, we consider an IRS-assisted URLLC system where an AP intends to transmit \(K\) short packets to a set of \(K\) devices, respectively, with the help of an IRS consisting of \(M\) reflecting elements. The baseband equivalent channels of the AP-device \(k\) link, AP-IRS link and IRS-device \(k\) link are denoted by \(h_{d,k}\), \(\mathbf{h}_{ar}^{H}\in\mathbf{C}^{1\times M}\) and \(\mathbf{h}_{rk}^{H}\in\mathbf{C}^{1\times M}\), respectively. Assume that all channels undergo quasi-static fading, i.e., the channel coefficient remains constant within each coherence block and varies independently among different coherence blocks. To characterize the performance limit, we assume that perfect channel state information (CSI) of all links is perfectly known at the AP, e.g., by applying some efficient channel estimation methods [9]. Assume that \(\boldsymbol{\Phi}=diag\left(v_{1},v_{2},\cdots\cdots,v_{M}\right)\) represents the diagonal phase shift matrix of the IRS, where \(v_{m}=e^{j\theta_{m}}\), \(\theta_{m}\in[0,2\pi)\) represents the phase shift of the \(m\)-th IRS reflecting element [6]. As such, the composite AP-IRS-device \(k\) channel is then modeled as a concatenation of three components, namely, the AP-IRS link, IRS's reflection with phase shifts, and IRS-device \(k\) link, i.e., \(\mathbf{h}_{rk}^{H}\boldsymbol{\Phi}\mathbf{h}_{ar}^{*}\). The total available bandwidth, the length of each channel coherence block, the length of information bits for each device and the power spectrum density of the noise are denoted by \(B\), \(T\), \(D\) and \(\sigma_{0}^{2}\), respectively. And the transmission power of AP is assumed to be fixed as \(P\). ### _FDMA Transmission Scheme_ In FDMA transmission scheme, the AP communicates with the \(K\) devices simultaneously in each channel coherence block. 
Denoting the bandwidth and the power allocated to device \(k\) by \(B_{k}\) and \(P_{Fk}\), the signal received by device \(k\) is expressed as \[y_{Fk}=\left(h_{d,k}+\mathbf{h}_{rk}^{H}\boldsymbol{\Phi}\mathbf{h}_{ar}^{*}\right)\sqrt{P_{Fk}}s_{k}+n_{Fk}, \tag{1}\] where \(s_{k}\sim CN\left(0,1\right)\) is the signal transmitted from the AP to device \(k\), and \(n_{Fk}\sim CN\left(0,B_{k}\sigma_{0}^{2}\right)\) is the complex additive white Gaussian noise (AWGN). By denoting \(\mathbf{h}_{ark}=diag\left(\mathbf{h}_{rk}^{H}\right)\mathbf{h}_{ar}^{*}\) and \(\mathbf{v}^{H}=\left[v_{1},v_{2},\cdots,v_{M}\right]\), we have \(\mathbf{h}_{rk}^{H}\boldsymbol{\Phi}\mathbf{h}_{ar}^{*}=\mathbf{v}^{H}\mathbf{h}_{ark}\). Thus, the SNR of device \(k\) is expressed as \[\gamma_{Fk}=\frac{P_{Fk}\big{|}h_{d,k}+\mathbf{v}^{H}\mathbf{h}_{ark}\big{|}^{2}}{B_{k}\sigma_{0}^{2}}. \tag{2}\] The \(D\) information bits are encoded into a short packet with blocklength \(B_{k}T\); thus, the packet error rate at device \(k\) can be expressed as [13] \[\varepsilon_{Fk}=Q\left[\frac{\ln\left(1+\gamma_{Fk}\right)-\ln 2\frac{D}{B_{k}T}}{\sqrt{1-\left(1+\gamma_{Fk}\right)^{-2}}/\sqrt{B_{k}T}}\right], \tag{3}\] where \(Q\left(x\right)=\int_{x}^{+\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}t^{2}\right)dt\) is the Gaussian Q-function, i.e., the complementary cumulative distribution function of the standard normal distribution. We consider the generate-at-will scheme for the generation of short packets for each device [14]. Assuming that the AP generates packets immediately at the beginning of each coherence block, AoI refers to the interval from the generation time of the latest valid packet to the present moment. As shown in Fig. 2(a), taking device 1 as an example, AoI increases linearly over time until the device successfully decodes a short packet. At this point, AoI is reset to \(T\), as the latest valid packet was generated one coherence block ago. Considering a long-term average of information freshness, the average AoI is stable and suitable as a performance metric. According to [15], the average AoI of device \(k\) in the FDMA scheme can be expressed as \[\overline{\Delta}_{Fk}=\frac{T}{2}\left(\frac{2}{1-\varepsilon_{Fk}}+1\right). \tag{4}\] Our goal is to minimize the maximum average AoI among all devices by jointly optimizing the power/bandwidth allocation and passive beamforming. Thus, the optimization problem can be formulated as \[\left(\mathbf{P}\mathbf{1}\right): \min_{\mathbf{B}_{k},\mathbf{P}_{Fk},\mathbf{v}}\max\left\{\bar{\Delta}_{Fk}\left|k=1,2,\cdots,K\right.\right\}, \tag{5a}\] \[\mathrm{s.t.} \sum_{k=1}^{K}B_{k}=B, \tag{5b}\] \[\sum_{k=1}^{K}P_{Fk}=P, \tag{5c}\] \[\left|v_{m}\right|=1,\hskip 14.226378pt\forall m. \tag{5d}\] ### _TDMA Transmission Scheme_ In the TDMA transmission scheme, the AP sends short packets to each device consecutively in each coherence block, with the time slot allocated to device \(k\) denoted by \(t_{k}\). Thus, the signal received by device \(k\) is given by \[y_{Tk}=\left(h_{d,k}+\mathbf{h}_{rk}^{H}\boldsymbol{\Phi}\mathbf{h}_{ar}^{*}\right)\sqrt{P}s_{k}+n_{Tk}, \tag{6}\] where \(n_{Tk}\sim CN\left(0,B\sigma_{0}^{2}\right)\). Correspondingly, the SNR and the packet error rate can be respectively expressed as \[\gamma_{Tk}=\frac{P\big{|}h_{d,k}+\mathbf{v}^{H}\mathbf{h}_{ark}\big{|}^{2}}{B\sigma_{0}^{2}}, \tag{7}\] \[\varepsilon_{Tk}=Q\left[\frac{\ln\left(1+\gamma_{Tk}\right)-\ln 2\frac{D}{Bt_{k}}}{\sqrt{1-\left(1+\gamma_{Tk}\right)^{-2}}/\sqrt{Bt_{k}}}\right]. \tag{8}\] AoI evolution in the TDMA scheme is as shown in Fig. 2(b).
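The finite-blocklength expressions (3) and (4) are straightforward to evaluate. The sketch below uses an equal bandwidth split and a few sample SNR values (not an optimised allocation) to show how sharply the packet error rate, and hence the average AoI, reacts to the SNR once the effective rate \(D/(B_{k}T)\) approaches the channel capacity.

```python
import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5*erfc(x/np.sqrt(2))

def per_finite_blocklength(gamma, L, D):
    """Eqs. (3)/(8): packet error rate for SNR gamma, blocklength L symbols, D information bits."""
    V = 1 - (1 + gamma)**-2                       # channel dispersion term
    return Q((np.log(1 + gamma) - np.log(2)*D/L)/np.sqrt(V/L))

def avg_aoi_fdma(eps, T):
    """Eq. (4): average AoI of a device under the FDMA scheme."""
    return T/2*(2/(1 - eps) + 1)

# Illustrative numbers: equal bandwidth split among the K devices, sample SNRs.
B, T, D, K = 1e6, 1e-3, 600, 5
Bk = B/K
for snr_db in (8, 9, 12):
    gamma = 10**(snr_db/10)
    eps = per_finite_blocklength(gamma, Bk*T, D)
    print(f"SNR {snr_db:2d} dB: PER = {eps:.3e},  average AoI = {avg_aoi_fdma(eps, T)*1e3:.3f} ms")
```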
By denoting \(T_{k}=\sum\limits_{j=1}^{k}t_{j}\), if device \(k\) successfully decodes a short packet, the corresponding AoI will be reset to \(T_{k}\), as a time interval of \(T_{k}\) has elapsed since the beginning of the latest coherence block. Similarly, the average AoI of device \(k\) in the TDMA scheme can be expressed as \[\overline{\Delta}_{Tk}=\frac{T}{2}\left(\frac{2}{1-\varepsilon_{Tk}}-1\right)+T_{k}. \tag{9}\] Figure 1: IRS-assisted URLLC system. As a result, the optimization problem is formulated as \[\left(\mathbf{P2}\right):\ \min_{\{t_{k}\},\{\mathbf{v}_{k}\}}\max\left\{\bar{\Delta}_{Tk}\left|k=1,2,\cdots,K\right.\right\}, \tag{10a}\] \[\mathrm{s.t.}\ \sum_{k=1}^{K}t_{k}=T, \tag{10b}\] \[\left|v_{km}\right|=1,\hskip 14.226378pt\forall k,m, \tag{10c}\] where \(\mathbf{v}_{k}\) denotes the IRS phase-shift vector used in the time slot of device \(k\). A lower-half or upper-half interval is then selected recursively, based on a CVX feasibility check, for further search until the interval length falls below a tolerance. #### Overall Algorithm for (P1) The overall algorithm for solving (P1) is given in Algorithm 1. As the objective value does not increase over the iterations and is bounded from below, Algorithm 1 always converges. If the obtained \(\mathbf{V}\) is not of rank-1, we can get an approximation of \(\mathbf{v}\) by Gaussian randomization as in [6]. ```
1: Initialization: Initialize \(i=0\) and input \(\{B_{k}\}^{(i)}\), \(\{P_{Fk}\}^{(i)}\), \(\mathbf{V}^{(i)}\).
2: Repeat
3:   For given \(\{P_{Fk}\}^{(i)}\) and \(\mathbf{V}^{(i)}\), optimize \(\{B_{k}\}^{(i+1)}\).
4:   For given \(\{B_{k}\}^{(i+1)}\) and \(\mathbf{V}^{(i)}\), optimize \(\{P_{Fk}\}^{(i+1)}\).
5:   For given \(\{B_{k}\}^{(i+1)}\) and \(\{P_{Fk}\}^{(i+1)}\), optimize \(\mathbf{V}^{(i+1)}\).
6:   Let \(i\gets i+1\).
7: Until the fractional decrease of \(\max\{\overline{\Delta}_{Fk}|\forall k\}\) is below a threshold.
``` **Algorithm 1** An alternating algorithm for solving (P1) ### _TDMA Optimization_ In the TDMA scheme, each channel coherence block is divided into \(K\) time slots for the devices, in each of which the AP transmits a short packet to the serving device with the help of the IRS.
Note that in this transmission mode, the reflecting coefficients of the IRS elements are dynamically tuned during each coherence block to maximize the SNR at each device consecutively (the so-called time-selective passive beamforming as in [12]), unlike in the FDMA case where they remain unchanged.

#### Optimizing \(\mathbf{v}\) for (P2)

It can be seen from (8) and (9) that \(\overline{\Delta}_{Tk}\) decreases as \(\gamma_{Tk}\) increases. To maximize \(\gamma_{Tk}\), the passive beamforming vector for device \(k\) is given by \[\mathbf{v}_{k}^{H}=\left[v_{k1},v_{k2},\cdots,v_{kM}\right], \tag{15}\] where \(v_{km}=e^{j\theta_{km}}\), \(\theta_{km}=\angle h_{d,k}-\angle h_{ark}^{(m)}\), \(\angle h_{d,k}\) is the phase of \(h_{d,k}\) and \(\angle h_{ark}^{(m)}\) is the phase of the \(m\)-th element of \(\mathbf{h}_{ark}\). As such, \[\gamma_{Tk}^{\max}=\frac{P\left(\left|h_{d,k}\right|+\sum_{m=1}^{M}\left|h_{ark}^{(m)}\right|\right)^{2}}{B\sigma_{0}^{2}}. \tag{16}\]

#### Optimizing \(\{t_{k}\}\) for (P2)

As (P2) is non-convex, it is intractable to solve the problem directly. Thus, we adopt a simple principle: take slot time from the best device to strengthen the weakest one. Specifically, by setting \(t_{1}=t_{2}=\cdots=t_{K}=T/K\), the initial \(\{\overline{\Delta}_{Tk}|\forall k\}\) are obtained. We assume that the \(i\)-th and the \(j\)-th devices achieve the maximum and minimum average AoI, respectively, i.e., \(\overline{\Delta}_{Ti}=\max\{\overline{\Delta}_{Tk}|\forall k\}\) and \(\overline{\Delta}_{Tj}=\min\{\overline{\Delta}_{Tk}|\forall k\}\). Then, we find a \(\Delta T\) such that, with \(t_{i}\leftarrow t_{i}+\Delta T\) and \(t_{j}\leftarrow t_{j}-\Delta T\), we have \(\overline{\Delta}_{Ti}=\overline{\Delta}_{Tj}\). The above procedure is repeated until the maximum average AoI can no longer be reduced. The detailed algorithm is summarized in Algorithm 2.

```
1: Initialization: Initialize \(l=0\) and input \(\{t_{k}\}^{(l)}\).
2: Repeat
3:   Calculate \(\{\overline{\Delta}_{Tk}|\forall k\}^{(l)}\), find the device \(i^{(l)}\) whose average AoI is \(\max\{\overline{\Delta}_{Tk}|\forall k\}^{(l)}\) and the device \(j^{(l)}\) whose average AoI is \(\min\{\overline{\Delta}_{Tk}|\forall k\}^{(l)}\).
4:   Find a \(\Delta T^{(l+1)}\) such that \(t_{i}^{(l+1)}=t_{i}^{(l)}+\Delta T^{(l+1)}\), \(t_{j}^{(l+1)}=t_{j}^{(l)}-\Delta T^{(l+1)}\), and thus \(\overline{\Delta}_{Ti}^{(l+1)}=\overline{\Delta}_{Tj}^{(l+1)}\).
5:   Let \(l\gets l+1\).
6: Until the fractional decrease of \(\max\{\bar{\Delta}_{Tk}|\forall k\}\) is below a threshold.
```
**Algorithm 2** Optimizing \(\{t_{k}\}\) for solving (P2)

## IV Simulation results

Unless otherwise specified, the parameters are set as follows: the AP and the IRS are located at (0, 0) and (120, 30) in meter (m), respectively, while the \(K=5\) devices are randomly distributed in a circle with centre (120, 0) and radius \(R=10\) m.
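To make the slot-balancing principle of Algorithm 2 above concrete, the following is a rough numerical sketch. It assumes fixed per-device SNRs (the channel gains are made up) and replaces the exact search for the equalizing \(\Delta T\) with repeated small transfers of slot time from the best device to the worst one; it illustrates the idea rather than reproducing the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

# Assumed setup: K devices with fixed SNRs as in Eq. (7).
K, P, B, T, D = 5, 1e-3, 1e6, 1e-3, 600
sigma0_sq = 10 ** (-168 / 10) * 1e-3
gains = np.array([1e-10, 3e-10, 1e-9, 3e-9, 1e-8])   # assumed composite channel power gains
snr = P * gains / (B * sigma0_sq)

def per(snr_k, t_k):
    """Finite-blocklength packet error rate, Eq. (8), with blocklength B*t_k."""
    n = B * t_k
    num = np.log1p(snr_k) - D * np.log(2) / n
    den = np.sqrt(1 - (1 + snr_k) ** (-2)) / np.sqrt(n)
    return norm.sf(num / den)

def avg_aoi(t):
    """Average AoI of every device, Eq. (9), for a slot allocation t summing to T."""
    T_cum = np.cumsum(t)
    eps = np.minimum(np.array([per(snr[k], t[k]) for k in range(K)]), 1 - 1e-12)
    return 0.5 * T * (2 / (1 - eps) - 1) + T_cum

# Greedy variant of Algorithm 2: move a small slice of slot time from the min-AoI device
# to the max-AoI device while the worst-case AoI keeps improving.
t = np.full(K, T / K)
worst = avg_aoi(t).max()
for _ in range(1000):
    a = avg_aoi(t)
    i, j = int(a.argmax()), int(a.argmin())
    t_try = t.copy()
    dT = 0.05 * t[j]
    t_try[i] += dT
    t_try[j] -= dT
    if avg_aoi(t_try).max() >= worst:
        break
    t, worst = t_try, avg_aoi(t_try).max()

print("slots:", t, "max average AoI:", worst)
```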
The channel between any two nodes is represented by \(h=\sqrt{L_{0}d^{-\alpha}}g\), where \(L_{0}=-30\) dB is the path loss at the reference distance \(d_{0}=1\) m, \(d\) denotes the distance, \(\alpha\) denotes the corresponding path loss exponent and \(g\) is the small-scale fading component. Considering that the IRS can be flexibly deployed at a location with a less rich scattering environment, we set \(\alpha_{1}=3.5\) and \(\alpha_{2}=\alpha_{3}=2.5\) for the path loss exponents of the AP-device link, the AP-IRS link and the IRS-device link, respectively. We assume that the transmission power of the AP is \(P=0\) dBm, the number of reflecting elements in the IRS is \(M=80\), while the total available bandwidth, the length of each channel coherence block, the length of information bits for each device and the power spectral density of the noise are set as \(B=1\) MHz, \(T=1\) ms, \(D=600\) bits and \(\sigma_{0}^{2}=-168\) dBm/Hz, respectively. In addition to the two proposed designs (FDMA, Proposed) and (TDMA, Proposed), the equal time/bandwidth allocation strategies in [12], (FDMA, Equal [12]) and (TDMA, Equal [12]), are also adopted for performance comparison. Fig. 3(a) plots the maximal average AoI versus the number of reflecting elements, \(M\). It is observed that, by optimizing the resource allocation, the proposed designs outperform the benchmarks in [12], wherein the time/frequency resources are equally allocated to the users. Moreover, it shows that the TDMA transmission scheme achieves a lower AoI than the FDMA scheme. This is as expected because in the former case the SNR of the currently scheduled device can be maximized via dynamically adjusted IRS passive beamforming, which thus achieves a lower PER for each device. It should also be noted that as the number of IRS elements increases, the performance gap between our proposed design and the benchmarks becomes smaller and then approaches zero. This is because as \(M\) increases, the passive beamforming gain and thus the SNR of each packet is sufficiently large for approximately achieving error-free transmission, which renders the optimization of time/bandwidth allocation unnecessary. Fig. 3(b) shows the maximum average AoI versus the total bandwidth, \(B\). Similarly, it is observed that the proposed designs achieve better AoI performance than the equal time/bandwidth allocation schemes in [12]. More interestingly, it shows that when the total bandwidth is small, the proposed FDMA transmission scheme even outperforms the proposed TDMA one, i.e., the latter is not always more favourable, as it was in Fig. 3(a). This is because when the frequency resource is extremely limited, optimizing the bandwidth allocation is more efficient than dynamically adjusting the passive beamforming for improving the performance of the worst user. However, as the bandwidth increases, such a performance gain becomes marginal, while the time-varying passive beamforming associated with each scheduled user is more useful, and thus the proposed TDMA transmission scheme outperforms the FDMA one. Moreover, when \(B\) becomes sufficiently large (e.g., \(1000\) kHz in the considered setup), both benchmark schemes achieve almost the same AoI performance as our proposed designs. This is because in this case the AP has sufficient bandwidth to encode each device's packet with a sufficiently long blocklength, and thus the PER approaches 0 even without optimizing the time/bandwidth allocation.
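As a sanity check of this simulation setup, the composite AP–IRS–device channel and the co-phased SNR in (16) can be sketched as follows. Rayleigh small-scale fading and a particular device position are assumed here, since the correspondence fixes only the path loss model and the node geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, B, sigma0_sq = 80, 1e-3, 1e6, 10 ** (-168 / 10) * 1e-3
L0 = 10 ** (-30 / 10)                                   # -30 dB reference path loss

def pathloss(d, alpha):
    return L0 * d ** (-alpha)

# Geometry from the setup: AP (0,0), IRS (120,30), a device assumed near (120,0).
d_ad, d_ai, d_id = 120.4, np.hypot(120, 30), 30.0
h_d  = np.sqrt(pathloss(d_ad, 3.5)) * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
h_ar = np.sqrt(pathloss(d_ai, 2.5)) * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
h_rd = np.sqrt(pathloss(d_id, 2.5)) * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

h_ark = np.conj(h_rd) * np.conj(h_ar)                   # cascaded IRS channel, diag(h_rk^H) h_ar^*
# Co-phasing as in Eq. (15): v_m = exp(j(angle(h_d) - angle(h_ark^(m)))).
vH = np.exp(1j * (np.angle(h_d) - np.angle(h_ark)))
gain_opt    = np.abs(h_d + vH @ h_ark) ** 2             # aligned composite gain
gain_closed = (np.abs(h_d) + np.abs(h_ark).sum()) ** 2  # closed form in Eq. (16); should match
print(P * gain_opt / (B * sigma0_sq), P * gain_closed / (B * sigma0_sq))
```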
Fig. 3(c) shows the maximum average AoI versus the length of information bits for each device, \(D\). It is first observed that when the number of information bits is small, all four considered schemes achieve almost the same maximum average AoI. The reason is that the blocklength in this case is sufficiently large compared to the limited number of information bits, and thus an almost zero PER is achieved. However, as the number of information bits increases, owing to the optimized resource allocation, our proposed schemes outperform the benchmark ones and the performance gap widens. Moreover, as \(D\) increases beyond 850 bits, the proposed FDMA design outperforms the proposed TDMA design. This reveals that when bandwidth becomes a bottleneck for encoding such a large number of information bits, optimizing the bandwidth allocation is more effective than dynamically adjusting the passive beamforming for decreasing the PER of the worst device. ## V Conclusions In this correspondence, we investigated the downlink multi-device URLLC system assisted by an IRS, wherein the FDMA and TDMA transmission schemes are compared from the perspective of the maximum average AoI. Specifically, an alternating-optimization-based algorithm was proposed to minimize the maximal average AoI by jointly optimizing the resource allocation and reflective beamforming. In future work, a more comprehensive comparison with other typical multiple access schemes will be conducted.
2308.10951
QCD Reference Frames and False Jet Individualism
In collider physics, the properties of hadronic jets are often measured as a function of their lab-frame momenta. However, jet fragmentation must occur in the particular rest frame of all color-connected particles. Since this frame need not be the lab frame, the fragmentation of a jet depends on the properties of its sibling objects. This non-factorizability of jets has consequences for jet techniques such as jet tagging, boosted boson measurements, and searches for physics Beyond the Standard Model. In this paper, we will describe the effect and show its impact as predicted by simulation.
Lawrence Lee, Charles Bell, John Lawless, Emery Nibigira
2023-08-21T18:00:11Z
http://arxiv.org/abs/2308.10951v1
# QCD Reference Frames and False Jet Individualism ###### Abstract In collider physics, the properties of hadronic jets are often measured as a function of their lab-frame momenta. However, jet fragmentation must occur in the particular rest frame of all color-connected particles. Since this frame need not be the lab frame, the fragmentation of a jet depends on the properties of its sibling objects. This non-factorizability of jets has consequences for jet techniques such as jet tagging, boosted boson measurements, and searches for physics Beyond the Standard Model. In this paper, we will describe the effect and show its impact as predicted by simulation. At particle colliders, collimated sprays of hadrons known as jets are commonly produced. Roughly, jet activity is the collider signature for parton production in scattering processes. QCD confinement forbids free particles from carrying color charge such that a fragmentation and hadronization process yields an observable jet of color-singlet hadrons. This process results in small angle particle production, which leads to the observed collimation of fragmentation products. Experiments measure the momenta and properties of these hadrons and cluster the reconstructed objects into jets using various algorithms [1; 2]. The properties of the aggregate jet objects are roughly used as a proxy for the properties of the initiating partons in some leading-order approximation [3]. In this approximation, an initiating parton will fragment into additional partons with energies as dictated by non-perturbative fragmentation functions. The overall particle multiplicity and the distribution of energy across these particles is stochastically determined by these fragmentation functions. Beyond the measurement of a four-momentum, modern collider experiments frequently measure the internal structure of a jet, usually to determine the "origin" of the jet. Classical methods to determine if a jet originated from heavy-flavor quarks or \(\tau\) leptons long predate the Large Hadron Collider (LHC) [4]. Within the LHC era, additional tools such as quark vs. gluon (\(q/g\)) tagging and the industry of jet substructure continue to use the distribution of jet constituents as a discriminating tool [5; 6; 7; 3]. Both angular and longitudinal distributions of energy flow can carry crucial information about the origin of a jet [8]. In the jet physics industry, it is common to consider a jet to be defined by its progenitor particle species and its momentum; _e.g._, a light quark jet of a particular momentum should have a set of properties drawn from the same distributions as every other light quark jet of that momentum scale. Under this jet individualism view, each jet is assumed to have a largely independent fragmentation process, modulo small-but-measurable colorflow effects [9; 10; 11]. We may obtain independent samples of jets in a particular momentum value from other physical processes in data to develop data-driven background estimates, building flavor- and momentum-dependent templates for jet structure distributions [12; 13; 14; 15]. In flavor-tagging, it is often stated that an experiment has a particular tagging performance for \(b\)-jets with a given momentum. In this paper, we will present an argument that, in retrospect, seems obvious: The ubiquitous experimental assumption that jets of a given flavor and momentum can be thought of as independent, identically-distributed objects is invalidated by a simple, first-principles consideration. 
We describe a hadronic jet's color rest frame dependence, example processes where this effect can be observed, and how different event generators model these effects. We then identify techniques where the effect is under-appreciated at modern experiments and explain a potential strategy to use this effect for process discrimination. Jet properties are often discussed as a function of the jet's momentum or energy as seen in the detector. We think about the fragmentation properties of an \(X\) GeV jet as a function of its kinematics. In collider data analysis, it is common to find "standard candles" of kinematically similar jets from well-understood processes for calibration. For example, we can calibrate jet measurements in well-balanced dijet events and use that understanding in searches for a new particle decaying to jets with similar kinematics [16; 17]. This way of thinking about jets relies on the assumption that kinematically similar jets of similar origin are all produced from the same underlying physics distributions. The angular distribution of particles, the energy sharing between them, and the total number of final state particles in a jet are assumed to be entirely dictated by the kinematics and origin of the jet. However, this jet individualism assumption and our ordinary language in jet physics use the lab frame momentum to discuss these properties. This assumption fails once one requires that observers in all frames must have a Lorentz-consistent view of these jets. Since one can boost into a frame where that lab-frame \(X\) GeV jet is now \(Y\) GeV, not all the properties of a jet can be solely defined by its lab-frame momentum. There must be a special frame in which a parton's fragmentation occurs. The only well-defined special frame for QCD fragmentation is the rest frame of all color-connected objects, or to leading order, the rest frame of the relevant color dipoles. The clearest example of this effect is in a popular jet observable - the particle multiplicity [18; 19; 14; 20; 15]. While measuring this by counting the number of charged particle tracks within the jet is infrared and collinear (IRC) unsafe, it's a common observable for jet discrimination in practice. It's well established that the average \(n_{trk}\) of these jets is strongly dependent on momentum scale; jets at larger momentum contain more particles [19]. However, since observers in all Lorentz frames must agree on the _number_ of particles produced in the fragmentation process, there must be a preferred frame in which the fragmentation function is valid. To understand the particle multiplicity of a jet, not only is the jet's momentum needed, but all jets color-connected to its "initiating" partons need to be considered, as it is the jet's momentum in the _"color rest frame"_ (or maybe _"center of color"_ frame) that dictates the fragmentation behavior. Instead of considering individual jets in the lab frame, jet fragmentation must be considered in the rest frame of all color-sibling particles. This Frame of Fragmentation and Showering (FFS) effect is at odds with many of the assumptions in modern experimental jet physics at the LHC, where lab-frame fragmentation is assumed and process- and color-flow-dependent effects are usually ignored [21]. This effect can lead to large differences in fragmentation patterns in very common final states. A simple illustration can be seen in the collision of two color-singlet fundamental particles, like those at lepton colliders.
In multijet final states at symmetric \(e^{+}e^{-}\) colliders from pure QCD processes, the color rest frame is the same as the lab frame, since all outgoing partons are color connected and there is no net momentum in the lab frame. However, if two jets are produced in the decay of a \(Z\) boson, the color rest frame is the rest frame of the \(Z\), which may be significantly boosted with respect to the lab frame. The fragmentation of these jets is set by the energy scale \(m_{Z}/2\) and has no dependence on the energy measured by the detector or the boost of the \(Z\) [22]. The same is true for any hadronically-decaying color singlet. In analyses looking for boosted vector bosons \(V\) decaying hadronically, the signal jet fragmentation is set by \(m_{V}/2\), whereas multijet backgrounds most likely have complex color connections with the whole final state (and the beam remnant at hadron colliders), which leads to very different underlying fragmentation distributions. In the case of the "Mercedes" star topology that led to the discovery of the gluon, two quarks and one gluon are produced. The color octet gluon is color connected to both of the color triplet quarks. In such cases, the fragmentation frame is the rest frame of all three jet objects, which, at a lepton collider, will be the same as the lab frame. The difference between these color topologies is illustrated in Figure 1.

Figure 1: Illustration of some example color-flow topologies at a lepton collider. (Left) The collision of two leptons (purple) producing two color singlets, each decaying to a \(q\bar{q}\) pair. The color rest frames are the frames where each of the color-connected \(q\bar{q}\) systems is at rest. (Right) Another collision of leptons producing a “Mercedes star” triplet event where the \(q\bar{q}\) system is recoiling off of a high momentum gluon. In this case, all three initial partons are color connected, and the color frame is the same as the lab frame. Cones schematically represent the fragmentation and hadronization process that gives a jet.

The effect may be even larger at hadron colliders like the LHC. In QCD multijet processes at hadron colliders, in addition to potential color connections between final state jets, color connection to high-energy beam remnants can lead to very highly boosted color rest frames. To investigate how the color rest frame dependence of jet fragmentation is handled by Monte Carlo generators, samples were produced with various parton shower models. MadGraph5_aMC@NLO 3.3.1 [23] was used to produce 50,000 parton-level events for \(e^{+}e^{-}\to jjj\) and \(e^{+}e^{-}\to ZZ\to jjjj\), both with a collision energy of \(\sqrt{s}=1\) TeV. In the first process, the color rest frame is coincident with the lab frame, such that jet observables show an expected lab-frame momentum dependence. In the second, the color rest frames are always the rest frames of the \(Z\) bosons such that some jet observables will be set by \(m_{Z}/2\) and not scale with the lab-frame momentum. These parton-level events were then showered using three different models: Pythia 8.306 [24; 25] using "simple showers," Vincia [26], and Herwig 7.2.2 [27; 28]. Jet clustering was performed at particle level using FastJet [29] as interfaced in the Delphes 3.5.1 [30] framework. These particle-level truth jets were clustered using the anti-kt algorithm with an \(R=0.4\) radius parameter [2].
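To make the frame dependence concrete, the sketch below (plain NumPy, with made-up four-momenta rather than generated events) shows (i) how a jet four-momentum can be boosted into the rest frame of its color-connected system, and (ii) how an \(n_{90}\)-type constituent count, as used below, can be computed. The specific momenta are illustrative assumptions.

```python
import numpy as np

def boost_to_rest_frame(p, p_system):
    """Boost four-momentum p = (E, px, py, pz) into the rest frame of p_system (assumed moving)."""
    E, mom = p_system[0], p_system[1:]
    m = np.sqrt(max(E**2 - mom @ mom, 0.0))
    beta, gamma = mom / E, E / m
    bp = beta @ p[1:]
    E_new = gamma * (p[0] - bp)
    mom_new = p[1:] + ((gamma - 1.0) * bp / (beta @ beta) - gamma * p[0]) * beta
    return np.concatenate(([E_new], mom_new))

def n_x(constituent_momenta, x=0.90):
    """Minimal number of constituents whose summed momentum reaches a fraction x of the jet total."""
    p = np.sort(np.asarray(constituent_momenta))[::-1]
    frac = np.cumsum(p) / p.sum()
    return int(np.searchsorted(frac, x) + 1)

# Example: a 200 GeV (lab-frame) jet and an assumed color-connected sibling jet.
jet     = np.array([200.0, 0.0, 0.0, 200.0])
sibling = np.array([210.0, 0.0, 45.6, 180.0])
print(boost_to_rest_frame(jet, jet + sibling))    # jet momentum in the color rest frame
print(n_x([60, 45, 30, 25, 15, 10, 8, 4, 2, 1]))  # n90 for an assumed set of constituent momenta
```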
To decouple the FFS effect from the fragmentation differences between quarks and gluons, gluon-initiated jets are not considered in these studies by requiring that, within the jet cone, the highest momentum parton in the particle record is not a gluon. For similar reasons, \(b\)-jets are also excluded. We define \(n_{x}\) as the minimum number of particles in a jet that give \(x\%\) of the jet's total momentum. Figure 2 shows \(n_{90}\), the minimum number of particles that could account for 90% of the jet's energy as a function of the lab-frame momentum of the jet. While \(n_{90}\) varies with the lab-frame momentum for \(e^{+}e^{-}\to jjj\), this dependence is significantly weaker in the \(e^{+}e^{-}\to ZZ\to jjjj\) case. Figure 2 also shows the lab-frame momentum dependence of the average multiplicity \(\langle n_{90}\rangle\), comparing the three shower models considered. Since each sample uses the same matrix element level events, statistical uncertainties can be ignored up to the parton shower level. All considered models demonstrate the FFS effect in the process dependence of the mean at high momentum. While small differences exist, the qualitative behaviors are similar. For jets with a measured momentum of 200 GeV, differences in \(\langle n_{90}\rangle\) as large as roughly 50% are predicted for different color topologies; this FFS effect can be sizable at typical collider energy scales. Despite the fact that this effect is modeled in modern simulations, the typical practicing collider experimentalist ignores it when designing experiments and analyses. The assumption that a light jet of 100 GeV is a light jet of 100 GeV will break down from a color-connection dependence effect. This has implications on many of the jet techniques used in modern collider physics. In particular, this FFS effect may have significant implications on how jet tagging is done. Jet tagging algorithms try to gain insight into the origin of a jet using the observable properties of its shower. In the design, training, calibration, and validation of these taggers, the jet individualism assumption is heavily used. The tagging method with the largest tradition in hadron collider physics is \(b\)-tagging. Simple \(b\)-taggers use the large lifetime of B hadrons to identify displaced decay products within a jet, by observing large impact parameter tracks or displaced track vertices. Modern \(b\)-taggers use much more information about the jet, often employing deep neural networks with a large amount of low-level information about the jet, such that these taggers may be sensitive to FFS effects [31; 32]. Training \(b\)-taggers at high \(p_{T}\) without controlling for the color rest frame may lead to incorrect interpretation of selection efficiencies or non-optimal performance. In Ref. [31], the ATLAS experiment describes their \(b\)-tagging training procedure: "... the new Run 2 \(b\)-tagging algorithm training strategy is based on the use of a hybrid sample composed of both the baseline \(t\bar{t}\) event sample and a dedicated sample of \(Z^{\prime}\) decaying into hadronic jet pairs." This \(Z^{\prime}\), referring to a heavy, color singlet, \(Z\)-like resonance, decays to \(b\bar{b}\) pairs in these samples. The \(Z^{\prime}\) mass is set to 4 TeV and a large range of jet momenta are populated. For nonzero \(Z^{\prime}\) momentum, the lab frame momentum of the jets and the color rest frame momentum will differ. 
The fragmentation will be pinned to the \(m_{Z^{\prime}}/2\) scale and may not represent \(b\)-jets from other processes at similar lab-frame momentum scales. Especially as modern taggers use more information about the jets in advanced machine learning techniques, these tools may become more sensitive to the FFS effect in ways that are difficult to control.

Figure 2: (Top) Split violin plots showing the normalized \(n_{90}\) distributions, as modeled in Pythia using the “simple shower” model, for two \(ee\) collider processes as a function of lab-frame momentum. In \(ee\to jjj\) (blue, left), the color rest frame and lab frame are coincident, and the particle multiplicity is a function of the momentum. However, in \(ee\to ZZ\to jjjj\) (orange, right), in which jets obtain large momentum from boosted color rest frames, the dependence is significantly weaker. The means of each distribution are shown as markers. (Bottom) The mean \(\langle n_{90}\rangle\) is shown as a function of lab-frame jet momentum for multiple shower models. Herwig, Pythia, and Vincia show similar behaviors. All samples use the same matrix element events such that they are statistically identical up to the parton shower model.

Distinguishing quark-initiated and gluon-initiated jets is a popular technique at modern colliders [5; 14; 15; 33]. Early in the LHC era, these \(q/g\) tagging techniques used the track multiplicity, jet width, and other characteristics to differentiate these jets, taking advantage of the gluon's larger color factor. Modern approaches use lower-level information with machine learning techniques. LHC experiments build templates of these variables in dedicated quark- and gluon-enriched control regions to inform taggers used in data analyses, particularly those looking for quark-dominated processes [34]. Despite some structural differences between quark and gluon jets, the differences are subtle and vary greatly with momentum [14; 15]. Since the fragmentation will take place in the color rest frame, jets from the decay of a boosted color singlet may be easily misidentified, particularly in decays involving gluons. These \(q/g\) discrimination efforts have assumed jet individualism, without regard to sibling jets. Community-wide \(q/g\) discrimination challenges have published shared ntuple datasets at the jet level and not the event level [35]. A host of ML techniques have been engineered to take the properties of individual jets as inputs. A recent experimental review of jet substructure at the LHC, Ref. [7], when discussing the jet inputs used for a \(q/g\) discriminator, uses the phrase "Since the distributions of these variables depend on \(\eta\), \(p_{T}\),...", which emphasizes the lab-frame, jet individualistic assumptions throughout the field [36]. As put clearly in Ref. [37], "... there are well-known caveats to this picture of jet generation, which go under the name of 'sample dependence'." Such event-level effects are then argued to be small [14; 37] and subsequently ignored. "Here, we assume that sample-dependent effects can either be quantified or mitigated..." [37]. We argue that there exist sizable sample-dependent effects that are neither simple to quantify nor to mitigate. A particularly extreme example of how the FFS effect may affect the goals of \(q/g\) tagging efforts is in the tagging of jets from vector boson fusion (VBF) and scattering (VBS) processes.
At hadron colliders, VBF/VBS processes give rise to two quark jets that are, at leading order, color connected to only the beam remnants, and to nothing else in the event. Backgrounds for analyses looking to measure these processes often give pure QCD (_i.e._, gluon enriched) jets such that \(q/g\) discrimination methods are attractive and often used [38; 39; 40]. However, each quark jet that emerges from the VBF/VBS process will fragment in the color rest frame of the highly boosted color dipole formed with the relevant beam remnant. At the LHC, these frames are highly boosted with respect to the lab frame, and in detailed fragmentation observables, these jets will look nothing like the typical jets used to train and characterize such taggers. To successfully \(q/g\)-tag VBF/VBS processes, dedicated template distributions would need to be derived or existing templates would need to be altered to account for the FFS effect and somehow control for the beam remnant momentum. None of which may be possible. On the other hand, the FFS effect could be exploited for process discrimination. In searches for boosted color singlets decaying to jets, the details of the fragmentation would look very different from a lab-frame fragmentation of the QCD background. This information could be exploited as an additional handle in such searches for Beyond the Standard Model (BSM) physics. Fragmentation differences have been exploited in the past without explicitly defining this source of the effect. Boosted \(V\) searches occasionally use track multiplicity as a discriminator between merged \(V\) jets and background QCD jets [41], which takes advantage of the difference between the \(m_{V}/2\) momentum scale dictating the \(V\)-jet fragmentation and the TeV momentum scale of the background. In boosted \(V\) decays, well-isolated quark jets of a few hundred GeV can be easily produced. As discussed above, such jets can have very different fragmentation profiles compared to those from background QCD processes. Using this difference can be an important additional handle in further enriching samples in interesting signal processes. The assumption that a hadronic jet can be described by its momentum and species is overly simplistic. Since the fragmentation occurs in a particular frame that need not be the lab frame, it's not meaningful to talk about the properties of jets individually as separate physics objects. Especially when studying the detailed substructure of jets, sample dependence effects can be significant, as shown here in simulation. While well understood to those that consider the theory of QCD, this effect removes the foundation of jet individualism used by many experimental tools in collider physics. The training of jet-by-jet taggers should consider the effect of boosted color rest frames, and the language around jet physics should be made more precise. This effect also represents an under-explored opportunity for discriminating jets from boosted color singlet decays, especially in BSM searches. We thank Tova Holmes, Karri Folan DiPetrillo, Jessie Shelton, Brian Shuve, Jesse Thaler, Zach Marshall, Nishita Desai, Kate Pachal, Max Swiatlowski, Rikab Gambhir, and Ben Thornberry for incredibly useful discussions and suggestions. We also thank Patrick Kirchgaesser, Giordon Stark and the mario-mapyde project, and Matthew Feickert and the scalifin project for very useful Docker images used throughout this study. This work has been supported by the Department of Energy, Office of Science, under Grant No. 
DE-SC0023321 and the National Science Foundation, under Award No. 2235028.
2305.18329
Perturbative Page Curve Induced by External Impulse
In this work, we extend the recent study of entropy dynamics induced by an external impulse in open quantum systems, where the entropy response follows the Page curve. For small system-bath coupling $\kappa$, we expect that the entropy first increases exponentially $\kappa^2 e^{\varkappa t}$ in the early-time regime $t\lesssim |\log \kappa|$ due to quantum many-body chaos, and then decreases as $~e^{-\lambda_0 t}$ with $\lambda_0 \propto \kappa^2$ due to the energy relaxation. These results are confirmed through explicit calculations using two methods: 1) generalized Boltzmann equation for systems with quasi-particles; 2) scramblon effective theory in the early-time regime and perturbation theory in the late-time regime for 0+1-d systems. We also prove that in the second stage, the entropy of the system is equal to the coarse-grained entropy.
Pengfei Zhang
2023-05-24T07:08:12Z
http://arxiv.org/abs/2305.18329v1
# Perturbative Page Curve Induced by External Impulse ###### Abstract In this work, we extend the recent study of entropy dynamics induced by an external impulse in open quantum systems, where the entropy response follows the Page curve. For small system-bath coupling \(\kappa\), we expect that the entropy first increases exponentially \(\kappa^{2}e^{\varkappa t}\) in the early-time regime \(t\lesssim|\log\kappa|\) due to quantum many-body chaos, and then decreases as \(\ e^{-\lambda_{0}t}\) with \(\lambda_{0}\propto\kappa^{2}\) due to the energy relaxation. These results are confirmed through explicit calculations using two methods: 1). generalized Boltzmann equation for systems with quasi-particles; 2). scramblon effective theory in the early-time regime and perturbation theory in the late-time regime for 0+1-d systems. We also prove that in the second stage, the entropy of the system is equal to the coarse-grained entropy. ###### Contents * 1 Introduction * 2 Entropy After an External Impulse * 2.1 The set-up * 2.2 Physical expectation and Green's functions * 3 Boltzmann Equation for Renyi Entropies * 3.1 The derivation * 3.2 Example: non-local fermionic systems * 4 Results for Large-N Models in 0+1-d * 4.1 Early-time scramblon effective theory * 4.2 Late-time perturbation theory * 5 Discussions Introduction Understanding the entropy of quantum many-body systems is a crucial topic in both high energy and condensed matter physics. Recent studies have highlighted the significance of the Page curve [1] in entropy dynamics. The Page curve describes the entanglement entropy of subsystem \(A\), averaged over random pure states. Mathematically, the result can be expressed as \(S_{A}^{\rm Page}=\min\{L_{A},L-L_{A}\}\ln d+O(d^{-|2L_{A}-L|})\), where \(L_{A}\) represents the subsystem size, \(L\) denotes the total system size, and \(d\) represents the local Hilbert space dimension. In high energy physics, the Page curve is connected to the evaporation process of pure-state black holes. As time progresses, the number of qubits in the black hole diminishes, and time \(t\) can be seen as analogous to the subsystem size \(L_{A}\). Consequently, one would expect the entropy of the black hole to initially increase and then decrease. However, Hawking's calculation yields a result that exhibits monotonic entropy growth. This discrepancy is known as the information paradox [2]. Recently, the information paradox has been resolved through the quantum generalization of the Ryu-Takayanagi formula [3, 4, 5, 6, 7, 8], incorporating the contributions of islands due to replica wormholes [9, 10, 11]. On the other hand, recent studies in condensed matter systems have focused on the entropy dynamics of initial states with short-range entanglement at infinite temperature [12, 13, 14, 15]. Under unitary evolutions generated by chaotic Hamiltonians [16], the steady state can be approximated as a random pure state, exhibiting a volume-law entanglement entropy in accordance with the Page curve. For states with finite energy density, quantum thermalization predicts that the slope of the curve should be replaced with the thermal entropy density [17, 18]. Subsequent studies have revealed that repeated measurements can induce transitions in the entanglement entropy of the steady state [19, 20, 21, 22, 23, 24, 25, 26, 27]. 
Unlike in holographic systems, where entropy can be geometrically understood, most of these works rely on numerical simulations of circuit models, while the available analytical techniques remain quite limited. This limitation is partly attributed to the nonlinear nature of both the Von Neumann and the Renyi entropies. Response theory is an important route to uncover the underlying physics in quantum many-body systems. This includes both the conventional linear response theory [28] and the more recent development of non-Hermitian linear response for open quantum systems [29]. Consequently, there is a growing interest in developing entropy response theory for general quantum systems, with the potential for analytical solutions. Pioneering works in this direction include [30, 31]. In particular, in [30], the authors investigate the impact of an external impulse on the entropy of open systems prepared in the thermal ensemble. This initial perturbation gives rise to excitations that are progressively absorbed by the bath, analogous to black hole evaporation. As a result, the entropy response exhibits characteristics similar to the Page curve, leading to the notion of a "perturbative Page curve". In their study, the authors compute the early-time exponential growth of the von Neumann entropy by establishing a connection with "branched" out-of-time-order correlators. In this work, we further extend the study beyond the regime of exponential growth by employing a general non-Markovian model of the bath. The paper is organized as follows: In Section 2, we begin by describing the setup of our problem, where we apply an external impulse to open quantum systems in thermal equilibrium. We then elucidate the two stages of entropy dynamics by establishing their connection to two-point functions on the entropy contour. In Section 3, we develop an approach based on the Boltzmann equation to compute the corresponding two-point functions. This method is applicable to systems with quasi-particles. Additionally, we present numerical results for a specific example in the context of 0+1-dimensional systems. Subsequently, in Section 4, we extend our study to more general 0+1-dimensional systems, which may describe non-Fermi liquids lacking quasi-particles. Finally, in Section 5, we conclude the paper with discussions on future directions for research. ## 2 Entropy After an External Impulse In this section, we describe the protocol considered in this paper, which contains applying an external impulse to open quantum systems in thermal equilibrium. We relate the presence of a perturbative Page curve to two-point functions on the entropy contour, which is dominated by quantum many-body chaos in the early-time regime, and by energy relaxation in the late-time regime. ### The set-up We first describe the set-up proposed in [30] which exhibits the perturbative Page curve. We consider a quantum system described by some Hamiltonian \(H_{s}\). The system is coupled to a large external heat bath \(b\) with Hamiltonian \(H_{b}\) through a coupling term \(H_{sb}\): \[H_{sb}=\kappa\sum_{i=1}^{N}O_{s}^{i}O_{b}^{i}. \tag{1}\] Here \(O_{s}^{i}/O_{b}^{i}\) is an operator on system/bath. The total Hamiltonian \(H\) reads \(H=H_{s}+H_{b}+H_{sb}\). We consider preparing the total system in a thermal ensemble at finite inverse temperature \(\beta\), decribed by the density matrix \(\rho_{0}=Z^{-1}e^{-\beta H}\) with \(Z=\mbox{tr }e^{-\beta H}\). 
This can be equivalently understood as the late-time phase of an initial thermofield double (TFD) state, in which the entropy of system \(s\) saturates [30]. We are interested in the effect of an external impulse on the entropy \(S\) of system \(s\). We model the impulse by introducing a perturbation \[\rho(t=0,\epsilon)=e^{-i\epsilon X}\rho_{0}e^{i\epsilon X}, \tag{2}\] where \(X\) is a Hermitian operator acting on the system \(s\). The total system then evolves as \[\rho(t,\epsilon)=e^{-iHt}e^{-i\epsilon X}\rho_{0}e^{i\epsilon X}e^{iHt}. \tag{3}\] The \(n\)-th Renyi entropy of system \(s\) can be computed as \[S^{(n)}(t,\epsilon)=\frac{1}{1-n}\log\big{[}\mbox{tr}_{s}\ \rho_{s}(t,\epsilon)^{n}\big{]},\qquad\rho_{s}(t,\epsilon)=\mbox{tr}_{b}\ \rho(t,\epsilon). \tag{4}\] In particular, the Von Neumann entropy \(S(t,\epsilon)\) is determined by taking the limit of \(n\to 1\). Without the perturbation, the entropy \(S^{(n)}(t,0)=S_{0}^{(n)}\) is the thermal subsystem entropy, which includes both the thermal entropy of system \(s\) and the entanglement entropy between \(s\) and \(b\). We are interested in the effect of a small perturbation \(\epsilon\ll 1\) on the entropy \(S^{(n)}(t,\epsilon)\). Similar to [30], we focus on the second order derivative \[\delta S^{(n)}(t)=\left.\frac{1}{2}\frac{\partial^{2}}{\partial\epsilon^{2}}S^{(n)}(t,\epsilon)\right|_{\epsilon=0}. \tag{5}\] ### Physical expectation and Green's functions As described in [30], the evolution of \(\delta S^{(n)}(t)\) contains two stages. After applying the unitary operator \(e^{-i\epsilon X}\), the entropy begins to increase smoothly, since the perturbation creates excitations that entangle the system and bath through the system-bath coupling \(H_{sb}\). Studies relate such entropy dynamics to OTOCs [32, 33], which leads to an exponential growth \(\sim\kappa^{2}e^{\varkappa t}\) at early time [30]. Here \(\varkappa\) is the quantum Lyapunov exponent. The saturation occurs when \(\varkappa t\sim-\log\kappa\). Afterwards, the dynamics is dominated by the relaxation of the energy, and the entropy is equal to the coarse-grained entropy. In the long-time limit, the total system reaches thermal equilibrium again at inverse temperature \(\beta\). As a result, \(S^{(n)}(\infty,\epsilon)=S^{(n)}(0,\epsilon)=S^{(n)}_{0}\). In this subsection, we hope to understand how these two stages appear mathematically for \(\delta S^{(n)}(t)\). For conciseness, we focus on the second Renyi entropy, which corresponds to \(n=2\). The generalization to arbitrary \(n\) is straightforward. We first consider utilizing the path-integral approach for the unperturbed entropy \(S^{(2)}(t,0)\). A pictorial representation reads \[\exp\left(-S^{(2)}(t,0)\right)=\mathrm{tr}_{s}\big{[}\mathrm{tr}_{b}\ e^{-iHt}\rho_{0}e^{iHt}\big{]}^{2}, \tag{6}\] where the right-hand side is evaluated as a path integral on the entropy contour: a two-replica contour with a forward and a backward real-time branch in each replica.
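As a concrete, if crude, illustration of the protocol in (2)–(5), \(\delta S^{(2)}(t)\) can be evaluated by exact diagonalization for a small toy model. The sketch below uses an assumed random-matrix Hamiltonian for a few-qubit system weakly coupled to a few-qubit bath; it is meant only to show how (2)–(5) translate into a calculation, not to reproduce the models studied in this paper.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
ns, nb = 3, 4                                   # toy sizes: 3 system qubits, 4 bath qubits
ds, db = 2 ** ns, 2 ** nb

def rand_herm(d):                               # assumed toy Hamiltonian piece: random Hermitian matrix
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (a + a.conj().T) / (2 * np.sqrt(d))

H = (np.kron(rand_herm(ds), np.eye(db))
     + np.kron(np.eye(ds), rand_herm(db))
     + 0.1 * np.kron(rand_herm(ds), rand_herm(db)))     # weak system-bath coupling

beta = 1.0
rho0 = expm(-beta * H)
rho0 /= np.trace(rho0).real

X = np.kron(rand_herm(ds), np.eye(db))          # Hermitian impulse acting on system s only

def S2(t, eps):
    """Second Renyi entropy of s after the impulse, Eqs. (3)-(4)."""
    rho = expm(-1j * t * H) @ expm(-1j * eps * X) @ rho0 @ expm(1j * eps * X) @ expm(1j * t * H)
    rho_s = rho.reshape(ds, db, ds, db).trace(axis1=1, axis2=3)   # partial trace over the bath
    return -np.log(np.trace(rho_s @ rho_s).real)

def dS2(t, eps=0.05):
    """Finite-difference estimate of Eq. (5): (1/2) d^2 S^(2)/d eps^2 at eps = 0."""
    return 0.5 * (S2(t, eps) - 2 * S2(t, 0.0) + S2(t, -eps)) / eps ** 2

for t in [0.0, 1.0, 2.0, 5.0, 10.0]:
    print(t, dS2(t))
```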
One can understand (9) by viewing the contour (6) as an evolution from right to the left: Initially, the system is described by some initial state specified by correlation functions near time \(t\). The system is then evolved backward to time \(0\), with a coupling to bath. Physically, we expect the coupling to bath thermalizes the system, which leads to the equality (9). Such a picture will be refined in later sections to derive a semi-classical Boltzmann for two-point functions on the entropy contour, which is valid for weakly interacting systems. (9) suggests that at \(t\to\infty\) we have (10) This leads to \(\delta S^{(2)}(\infty)=0\), consistent with the previous expectation. The perturbative Page curve appears when two-point functions evolve from (8) to (10). The decay of,, is a consequence of quantum many-body chaos. As an example, to the \(\kappa^{2}\) order, receives the contribution from (11) which contains OTOC of system \(s\). We thus expect: (12) Figure 1: A sketch of the perturbative Page curve as a function of evolution time \(t\) for small system bath coupling \(\kappa\) and arbitrary \(n\). Here we choose some typical energy scales of system \(s\) as the unit. For \(t\ll-\log\kappa\), the curve is dominated by the early-time OTOC [30], which leads to an exponential growth \(\sim\kappa^{2}e^{\varkappa t}\) with quantum Lyapunov exponent \(\varkappa\). In the long-time limit, it is governed by the energy relaxation \(\sim e^{-\kappa^{2}t}\). As a result, we have \(\delta S^{(2)}\propto\kappa^{2}e^{\varkappa t}\) for \(\varkappa t\ll-\log\kappa\). Similar contributions have been calculated using toy models in [30]. When \(\kappa^{2}e^{\varkappa t}\sim O(1)\), we should sum up terms with multiple scramblons as in [35, 36, 37], which leads to the decay of,,. On the other hand, the evolution of,, is dominated by the thermalization process. We expect the energy relaxation has the smallest relaxation rate \(\lambda_{0}\sim\kappa^{2}\). As a result, we expect that (13) This leads to the exponential decay of \(\delta S^{(2)}(t)\) in the second stage. In the following sections of this paper, we will try to give a more quantitative description of above discussions using generalized Boltzmann equations for systems with quasi-particles, or perturbative calculations for 0+1-d large-\(N\) systems. ## 3 Boltzmann Equation for Renyi Entropies For systems with quasi-particles, the thermalization process is usually studied by semi-classical Boltzmann equations. In particular, hydrodynamical parameters, such as diffusion constant, heat conductivity, and viscosity, can be computed under the Boltzmann equation approximation [38]. In quantum systems, Boltzmann equations can be derived using the Schwinger-Dyson equation on the Keldysh contour with the assumption that the quantum distribution function is slow varying [39]. Later, it is realized that OTO-correlations can also be studied using generalized Boltzmann equations on doubled Keldysh contour [40], which is a non-linear version of the kinetic equation at weak coupling [41, 42]. In this section, motivated by the backward evolution picture, we derive a set of new Boltzmann equations on the entropy contour (6). These equations naturally capture both quantum many-body chaos and quantum thermalization, which gives a complete description of the perturbative Page curve for Renyi entropies in systems with quasi-particles. We will mainly focus on \(n=2\). 
Generalizations to \(n\geqslant 3\) are straightforward, while we find no simple approach to take the limit of \(n\to 1\). ### The derivation Our derivation below is generally valid for systems at weak coupling, with arbitrary flavors of particles and minor differences between bosons and fermions. Nevertheless, a concrete model can be helpful. We consider a \(D\)-dimensional fermionic system with an SYK-like interaction \[H_{s}=\sum_{k,i}\epsilon_{s}(k)c^{i}_{s}(k)^{\dagger}c^{i}_{s}(k)+\frac{1}{4V} \sum_{ijkl}\sum_{\{k_{a}\}}J_{ijkl}c^{i}_{s}(k_{4})^{\dagger}c^{j}_{s}(k_{3})^ {\dagger}c^{k}_{s}(k_{2})c^{l}_{s}(k_{1}). \tag{14}\] Here \(i,j,k,l=1,2,...,N\), \(k_{4}=-k_{1}-k_{2}-k_{3}\) due to the momentum conservation, and random interaction strength \(J_{ijkl}\) are independent random Gaussian variables with \(\overline{|J_{ijkl}|^{2}}=2J^{2}/N^{3}\). Similarly, we model the bath \(b\) as free fermions with SYK-like couplings to the system \(s\) \[H_{b}+H_{sb}=\sum_{k,m}\epsilon_{b}(k)c^{m}_{b}(k)^{\dagger}c^{m}_{b}(k)+\frac {1}{2V}\sum_{impq}\sum_{\{k_{a}\}}\Big{[}\kappa_{impq}c^{i}_{s}(k_{4})^{ \dagger}c^{m}_{b}(k_{3})^{\dagger}c^{p}_{b}(k_{2})c^{q}_{b}(k_{1})+\text{H.C.} \Big{]}. \tag{15}\] Here \(m,p,q=1,2,...,M\), and the random system-bath interactions satisfy \(\overline{|\kappa_{imnp}|^{2}}=2\kappa^{2}/M^{3}\). We assume \(M\gg N\) and the bath correlation functions receive no correction from finite system-bath coupling \(\kappa\). We choose the initial perturbation as \(c_{s}^{1}(k)+c_{s}^{1}(k)^{\dagger}\), and (7) becomes a summation over fermion two-point functions. We begin with a useful identity that shows the causality of the Renyi entropy contour, implying the possibility of a differential equation description. Diagrammatically, the identity states that \[\raisebox{-14.226378pt}{\includegraphics[width=14.226378pt]{figs.eps}}\quad= \quad\raisebox{-14.226378pt}{\includegraphics[width=14.226378pt]{figs.eps}}. \tag{16}\] This is because the evolution commutes with the initial thermal density matrix, and is an analog of the cancellation between forward and backward evolutions on the Keldysh contour. Consequently, we only need to study two-point functions on a contour with a long evolution time and focus on branches with real-time evolutions. We label these branches by forward/backward index \(\eta=u/d\) and replica index \(n=1,2\), which is combined into \(\alpha=\eta_{n}\) for conciseness. This is illustrated in (16). Motivated by the ordering of the contour, we take the convention to represent fields as a vector with the ordering \((d_{1},u_{1},d_{2},u_{2})\). The Green's function \(G_{\alpha\beta}(k;t_{1},t_{2})=\langle\psi^{i}_{s,\alpha}(k,t_{1})\overline{ \psi}^{i}_{s,\beta}(k,t_{2})\rangle\) is introduced with fermionic fields \(\psi^{i}_{\eta,\alpha}\) of the system \(s\). For spatially homogeneous initial conditions, the Schwinger-Dyson equation in the momentum space reads \[\begin{split}(-1)^{\alpha}(\partial_{t_{1}}+i\epsilon_{s}(k))G_ {\alpha\beta}-\Sigma_{\alpha\gamma}\circ G_{\gamma\beta}&=I_{ \alpha\beta},\\ (-1)^{\beta}(-\partial_{t_{2}}+i\epsilon_{s}(k))G_{\alpha\beta}-G_ {\alpha\gamma}\circ\Sigma_{\gamma\beta}&=I_{\alpha\beta}.\end{split} \tag{17}\] Here we have introduced \((-1)^{\alpha}=1\) for \(u_{n}\) branches and \((-1)^{\alpha}=-1\) for \(d_{n}\) branches. 
The self-energy is given by melon diagrams \[\begin{split}\Sigma_{\alpha\beta}(k,t_{1},t_{2})=-\mathcal{P}_{ \alpha\beta}\int\frac{dk_{1}^{D}}{(2\pi)^{D}}\frac{dk_{2}^{D}}{(2\pi)^{D}}& \Big{[}J^{2}G_{\alpha\beta}(k_{2};t_{1},t_{2})G_{\alpha\beta}(k_ {3};t_{1},t_{2})G_{\beta\alpha}(k_{1};t_{2},t_{1})\\ &\qquad+\kappa^{2}\widetilde{G}_{\alpha\beta}(k_{2};t_{1},t_{2}) \widetilde{G}_{\alpha\beta}(k_{3};t_{1},t_{2})\widetilde{G}_{\beta\alpha}(k_ {1};t_{2},t_{1})\Big{]}.\end{split} \tag{18}\] Here \(k_{3}=k_{1}+k-k_{2}\) by momentum conservation. \(\mathcal{P}_{\alpha\beta}=-1\) if both \(\alpha\) and \(\beta\) are \(u\) or \(d\) contours. Otherwise, we have \(\mathcal{P}_{\alpha\beta}=1\). \(\widetilde{G}_{\alpha\beta}(k;t_{1},t_{2})\) is the bath Green's function, introduced similarly to \(G_{\alpha\beta}(k;t_{1},t_{2})\), and is diagonal in the replica space. The Boltzmann equation is an approximation of the Schwinger-Dyson equation [39]. The first step is to parametrize Green's functions \(G_{\alpha\beta}\) by introducing quantum distribution functions. We use the motivation from the early-time (8) and the late-time limit (10). In the early-time regime, we can show that \[G_{\alpha\beta}=e^{-i\epsilon_{s}(k)t_{12}}\begin{pmatrix}\theta(t_{21})-n_{2 \beta}(k)&-w_{2\beta}(k)&-w_{2\beta}(k)&1-n_{2\beta}(k)\\ w_{2\beta}(k)&\theta(t_{12})-n_{2\beta}(k)&n_{2\beta}(k)&-w_{2\beta}(k)\\ w_{2\beta}(k)&1-n_{2\beta}(k)&\theta(t_{21})-n_{2\beta}(k)&-w_{2\beta}(k)\\ n_{2\beta}(k)&w_{2\beta}(k)&w_{2\beta}(k)&\theta(t_{12})-n_{2\beta}(k)\\ \end{pmatrix}. \tag{19}\] Here \(n_{2\beta}(k)=[e^{2\beta(\epsilon_{s}(k)-\mu)}+1]^{-1}\) is the Fermi-Dirac distribution with inverse temperature \(2\beta\) and \(w_{2\beta}=[2\cosh(\beta(\epsilon_{s}(k)-\mu))]^{-1}\). On the other hand, in the late-time limit, the relation (9) gives \[G_{\alpha\beta}=e^{-i\epsilon_{s}(k)t_{12}}\begin{pmatrix}\theta(t_{21})-n_{ \beta}(k)&-1+n_{\beta}(k)&0&0\\ n_{\beta}(k)&\theta(t_{12})-n_{\beta}(k)&0&0\\ 0&0&\theta(t_{21})-n_{\beta}(k)&-1+n_{\beta}(k)\\ 0&0&n_{\beta}(k)&\theta(t_{12})-n_{\beta}(k)\end{pmatrix}. \tag{20}\] Furthermore, the bath Green's function \(\widetilde{G}_{\alpha\beta}\) can be obtained by replacing \(\epsilon_{s}(k)\) with \(\epsilon_{b}(k)\) in (20). Comparing Green's functions in two limits, we find the evolution of Green's functions can be viewed as the evolution of \(n_{2\beta}(k)\) and \(w_{2\beta}(k)\). After reducing the number of variables using the symmetry between branches, we parametrize the Green's function at finite center-of-mass time \(T=\frac{t_{1}+t_{2}}{2}\) as \[G_{\alpha\beta}=e^{-i\epsilon_{s}(k)t_{12}}\begin{pmatrix}\theta(t_{21})-f_{0 }(k,T)&-f_{2}(k,T)&-f_{3}(k,T)&f_{5}(k,T)\\ f_{1}(k,T)&\theta(t_{12})-f_{0}(k,T)&f_{4}(k,T)&-f_{3}(k,T)\\ f_{3}(k,T)&f_{5}(k,T)&\theta(t_{21})-f_{0}(k,T)&-f_{2}(k,T)\\ f_{4}(k,T)&f_{3}(k,T)&f_{1}(k,T)&\theta(t_{12})-f_{0}(k,T)\end{pmatrix}. \tag{21}\] Compared to distribution functions on the (doubled) Keldysh contour, there are three different distribution functions even if we focus on a single contour, which reflects the non-trivial twist operator on the system \(s\). To derive the Boltzmann equation for \(f_{z}(k,t)\) (\(z=0,1,...,5\)), we start by taking the superposition of (17), which gives \[\partial_{T}G_{\alpha\beta}(k,t_{1},t_{2})=\int_{3}\;(-1)^{\alpha}\Sigma_{ \alpha\gamma}(k,t_{1},t_{3})G_{\gamma\beta}(k,t_{3},t_{2})-(-1)^{\beta}G_{ \alpha\gamma}(k,t_{1},t_{3})\Sigma_{\gamma\beta}(k,t_{3},t_{2}). \tag{22}\] We further set \(t_{1}=t_{2}\). 
When \(f_{z}\) are slow varying, we make approximations to the integral over \(t_{3}\) by completing the integral with \(f_{z}(k,\frac{t_{1}+t_{3}}{2})\approx f_{z}(k,\frac{t_{1}+t_{2}}{2})\). This leads to \[\begin{split}\partial_{t}G_{\alpha\beta}(k,t,t)=\int_{-\infty}^{ \infty}dt_{r}&\Bigg{[}(-1)^{\alpha}\Sigma_{\alpha\gamma}\left(k,t+ \frac{t_{r}}{2},t-\frac{t_{r}}{2}\right)G_{\gamma\beta}\left(k,t-\frac{t_{r}}{ 2},t+\frac{t_{r}}{2}\right)\\ &-(-1)^{\beta}G_{\alpha\gamma}\left(k,t+\frac{t_{r}}{2},t-\frac{t _{r}}{2}\right)\Sigma_{\gamma\beta}\left(k,t-\frac{t_{r}}{2},t+\frac{t_{r}}{2} \right)\Bigg{]}.\end{split} \tag{23}\] The L.H.S. is just the time derivative of distribution functions \(f_{z}\). Now using (18) and (21), it is straightforward to show that the integral over time \(t\) only imposes the energy conservation of the scattering process1, which appears in Boltzmann equations [39, 40]. After tedious by straightforward calculations, we find the result read Footnote 1: One may worry about the additional \(\theta(t)\) function appearing in the diagonal elements of \(G_{\alpha\beta}\). Fortunately, their contributions of \(G_{u_{n}u_{n}}\) and \(G_{d_{n}d_{n}}\) always cancel out. \[\partial_{t}f_{z}(k,t)=J^{2}\int d\mathcal{M}_{s}\;\mathrm{St}_{z}[f_{z^{\prime }}(k,t)]+\kappa^{2}\int d\mathcal{M}_{b}\;\widetilde{\mathrm{St}}_{z}[f_{z^{ \prime}}(k,t)]. \tag{24}\] Here \(d\mathcal{M}_{\xi}\) is the phase space integral with momentum and energy conservation \[d\mathcal{M}_{\xi}\equiv\prod_{i=1}^{3}\left[\frac{dk_{i}^{D}}{(2\pi)^{D}}\right] \ (2\pi)^{D+1}\delta^{(D)}(k+k_{1}-k_{2}-k_{3})\delta(\epsilon_{s}(k)+\epsilon_{ \xi}(k_{1})-\epsilon_{\xi}(k_{2})-\epsilon_{\xi}(k_{3})). \tag{25}\] \(\mathrm{St}_{z}[f]\) are combinations of distribution functions \[\mathrm{St}_{0}[f]= f_{1}(k_{2})f_{1}(k_{3})f_{2}(k)f_{2}(k_{1})-f_{1}(k)f_{1}(k_{1 })f_{2}(k_{2})f_{2}(k_{3}) \tag{26}\] \[+f_{4}(k_{2})f_{4}(k_{3})f_{5}(k)f_{5}(k_{1})-f_{4}(k)f_{4}(k_{1}) f_{5}(k_{2})f_{5}(k_{3}),\] \[\mathrm{St}_{1}[f]= f_{1}(k)(f_{0}(k_{1})(f_{0}(k_{2})(1-2f_{0}(k_{3}))+f_{0}(k_{3})- 1)+f_{0}(k_{2})f_{0}(k_{3}))\] \[+(1-2f_{0}(k))f_{1}(k_{2})f_{1}(k_{3})f_{2}(k_{1})-2f_{3}(k)f_{4}( k_{2})f_{4}(k_{3})f_{5}(k_{1})\] \[+2f_{3}(k_{1})f_{3}(k_{2})f_{3}(k_{3})f_{4}(k),\] \[\mathrm{St}_{2}[f]= f_{2}(k)(-(f_{0}(k_{1})(f_{0}(k_{2})(1-2f_{0}(k_{3}))+f_{0}(k_{3})- 1)+f_{0}(k_{2})f_{0}(k_{3})))\] \[+(2f_{0}(k)-1)f_{1}(k_{1})f_{2}(k_{2})f_{2}(k_{3})-2f_{3}(k)f_{4} (k_{1})f_{5}(k_{2})f_{5}(k_{3})\] \[+2f_{3}(k_{1})f_{3}(k_{2})f_{3}(k_{3})f_{5}(k),\] \[\mathrm{St}_{3}[f]= f_{1}(k_{1})f_{2}(k_{2})f_{2}(k_{3})f_{4}(k)+f_{1}(k_{2})f_{1}(k_{ 3})f_{2}(k_{1})f_{5}(k)\] \[-f_{1}(k)f_{4}(k_{1})f_{5}(k_{2})f_{5}(k_{3})-f_{2}(k)f_{4}(k_{2} )f_{4}(k_{3})f_{5}(k_{1}),\] \[\mathrm{St}_{4}[f]= f_{4}(k)(f_{0}(k_{1})(-2f_{0}(k_{2})f_{0}(k_{3})+f_{0}(k_{2})+f_{ 0}(k_{3})-1)+f_{0}(k_{2})f_{0}(k_{3}))\] \[+(1-2f_{0}(k))f_{4}(k_{2})f_{4}(k_{3})f_{5}(k_{1})+2f_{1}(k_{2})f _{1}(k_{3})f_{2}(k_{1})f_{3}(k)\] \[-2f_{1}(k)f_{3}(k_{1})f_{3}(k_{2})f_{3}(k_{3}),\] \[\mathrm{St}_{5}[f]= f_{5}(k)(f_{0}(k_{1})((2f_{0}(k_{2})-1)f_{0}(k_{3})-f_{0}(k_{2})+ 1)-f_{0}(k_{2})f_{0}(k_{3}))\] \[+(2f_{0}(k)-1)f_{4}(k_{1})f_{5}(k_{2})f_{5}(k_{3})+2f_{1}(k_{1})f _{2}(k_{2})f_{2}(k_{3})f_{3}(k)\] \[-2f_{2}(k)f_{3}(k_{1})f_{3}(k_{2})f_{3}(k_{3}).\] Here we keep the \(t\) dependence implicit for conciseness. 
\(\widetilde{\mathrm{St}}_{z}[f]\) can be obtained by replacing \(f_{3}(k_{i})=f_{4}(k_{i})=f_{5}(k_{i})=0\) and \(f_{0}(k_{i})=f_{1}(k_{i})=1-f_{2}(k_{i})=n_{\beta}(k_{i})\) with dispersion \(\epsilon_{b}(k)\). This is the main result of this section. The equation (24) should be evolved backward in time, with initial conditions determined by comparing (19) and (21). By using (7) and (16), the perturbation of the second Renyi entropy is given by \[\delta S^{(2)}(t)=2-2(f_{1}(k,-t)+f_{2}(k,-t))-2f_{4}(k,-t)-2f_{5}(k,-t)+4f_{3}(k,-t). \tag{27}\] Here we relabel the time \(t\) such that the twist operator is inserted at \(t=0\). This is illustrated in (9). Here are some comments:

* It is straightforward to check that when \(f_{3}=f_{4}=f_{5}=0\) and \(f_{0}=f_{1}=1-f_{2}=f(k,t)\), this set of equations reduces to the traditional Boltzmann equation, with \[\mathrm{St}_{0}=\mathrm{St}_{1}=-\mathrm{St}_{2}=f(k_{2})f(k_{3})(1-f(k_{1}))(1-f(k))-(k,k_{1})\leftrightarrow(k_{2},k_{3}). \tag{28}\] In particular, \(f(k)=n_{\beta}(k)\) is the steady-state solution. This is consistent with the expectation of the late-time solution (20).
* Without the coupling to the bath (\(\kappa=0\)), the contour (16) is equivalent to the doubled Keldysh contour [40]. As a result, the initial value (19) is an unstable steady state at \(\kappa=0\). A finite \(\kappa\) serves as an initial perturbation that induces the OTO-correlations. This is consistent with the perturbative analysis (11).
* The contribution of \(\widetilde{\mathrm{St}}_{z}[f]\) is linear in \(f_{z}[k,t]\). This is a consequence of the choice of our system-bath coupling (15), which contains a single operator in system \(s\). More general couplings lead to a non-linear system-bath scattering \(\widetilde{\mathrm{St}}_{z}[f]\).

### Example: non-local fermionic systems

In general, such non-linear integro-differential equations should be studied numerically to determine the perturbative Page curve. In this subsection, we focus on the simplest \(D=0\) case as an example, where all momentum indices drop out. We further choose \(\epsilon_{s}=\epsilon_{b}=\epsilon\) to ensure the energy conservation in the scattering process. A concrete example is the SYK model at high temperature. As observed in [43], the energy conservation contains a divergence for \(D=0\), which should be regularized by the quasi-particle lifetime \(\Gamma^{-1}\). Without diving into the details, here we simply introduce \[\widetilde{J}^{2}\equiv J^{2}\int d\mathcal{M}_{s}\sim J^{2}/\Gamma,\qquad\widetilde{\kappa}^{2}\equiv\kappa^{2}\int d\mathcal{M}_{b}\sim\kappa^{2}/\Gamma. \tag{29}\] We first consider the \(J=0\) case. With our system-bath coupling (15), it is known that the system \(s\) is not many-body chaotic [44]. As a result, the evolution of \(f_{z}(t)\) is only driven by relaxation due to the bath \(b\). Thanks to the linearity of the Boltzmann equation, analytical solutions can be obtained in closed form: \[\begin{split} f_{0}(t)&=\frac{-(w-1)e^{\lambda_{0}t}+w+w^{-1}}{\left(w+1\right)\left(w+w^{-1}\right)},\qquad f_{1}(t)=\frac{(1-w^{-1})e^{\lambda_{0}t}+w+w^{-1}}{\left(w+1\right)\left(w+w^{-1}\right)},\\ f_{2}(t)&=wf_{0}(t),\qquad\qquad f_{3}(t)=wf_{4}(t)=w^{-1}f_{5}(t)=\frac{e^{\lambda_{0}t}}{w+w^{-1}}.\end{split} \tag{30}\] Here we have defined \(w=e^{\beta\epsilon}\) and \(\lambda_{0}=\widetilde{\kappa}^{2}/(w+w^{-1}+2)\).

Figure 2: Results for the perturbative Page curve in the \(D=0\) example with \(\beta\epsilon=1\): (a) the analytical solution of the Boltzmann equation \(f_{z}(t)\) for \(\widetilde{J}/\widetilde{\kappa}=0\); (b) the numerical solution of the Boltzmann equation \(f_{z}(t)\) for \(\widetilde{J}/\widetilde{\kappa}=2\); (c) the entropy \(\delta S^{(2)}(t)\) as a function of time \(t\) for different \(\widetilde{J}/\widetilde{\kappa}\). The result shows the separation of time scales for \(\widetilde{\kappa}\ll\widetilde{J}\), as in Figure 1.
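As a quick numerical cross-check of (27) against the closed-form relaxation solution (30), one can verify that the \(J=0\) distribution functions give a vanishing entropy response, anticipating the statement below. A minimal sketch, with \(\beta\epsilon\) and \(\widetilde{\kappa}\) chosen arbitrarily:

```python
import numpy as np

beta_eps, kappa_t = 1.0, 0.1               # assumed values of beta*epsilon and tilde-kappa
w = np.exp(beta_eps)
lam0 = kappa_t ** 2 / (w + 1 / w + 2)      # relaxation rate defined below Eq. (30)

def f(t):
    """Closed-form J = 0 solution, Eq. (30); returns (f0, f1, f2, f3, f4, f5)."""
    e = np.exp(lam0 * t)
    denom = (w + 1) * (w + 1 / w)
    f0 = (-(w - 1) * e + w + 1 / w) / denom
    f1 = ((1 - 1 / w) * e + w + 1 / w) / denom
    f3 = e / (w + 1 / w)
    return f0, f1, w * f0, f3, f3 / w, w * f3

def delta_S2(t):
    """Eq. (27) with the distribution functions evaluated at -t."""
    f0, f1, f2, f3, f4, f5 = f(-t)
    return 2 - 2 * (f1 + f2) - 2 * f4 - 2 * f5 + 4 * f3

print([delta_S2(t) for t in (0.0, 5.0, 20.0, 100.0)])   # ~1e-16: the response vanishes identically
```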
Here we have defined \(w=e^{\beta\epsilon}\) and \(\lambda_{0}=\widetilde{\kappa}^{2}/(w+w^{-1}+2)\). The result shows that \(f_{z}(t)\) decays exponentially for negative \(t\) to the thermalized values (20). However, a direct calculation shows that \(\delta S^{(2)}(t)=0\). This is consistent with the fact that the initial increase of \(\delta S^{(2)}(t)\) is driven by many-body chaos, which is absent for \(J=0\). Now we consider finite \(\widetilde{J}/\widetilde{\kappa}\) by performing a numerical study of the Boltzmann equation (24). For \(\widetilde{J}\gg\widetilde{\kappa}\), the functions \(f_{3}(t),f_{4}(t),f_{5}(t)\), which are off-diagonal in the replica space, now decay much more rapidly than the relaxation induced by the coupling to the bath \(b\), as shown in Figure 2(b). As a result, the entropy shows a rapid increase, followed by slow relaxation, consistent with the sketch in Figure 1.
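As a sanity check of the \(J=0\) analysis, the closed-form solution (30) can be inserted directly into the entropy perturbation (27). The following minimal numerical sketch (in Python) verifies that the combination indeed vanishes identically; the values \(\beta\epsilon=1\) and \(\widetilde{\kappa}=1\) are illustrative assumptions rather than values fixed by the text.

```
import numpy as np

# Minimal check of the closed-form J=0 solution (30) against the entropy
# perturbation (27) in the D=0 example. The parameter choices
# beta*epsilon = 1 and kappa_tilde = 1 are illustrative assumptions.
beta_eps, kappa_t = 1.0, 1.0
w = np.exp(beta_eps)                      # w = e^{beta epsilon}
lam0 = kappa_t**2 / (w + 1.0 / w + 2.0)   # lambda_0 = kappa_t^2 / (w + w^{-1} + 2)

def f_closed_form(t):
    """Distribution functions f_0 ... f_5 of Eq. (30)."""
    e = np.exp(lam0 * t)
    denom = (w + 1.0) * (w + 1.0 / w)
    f0 = (-(w - 1.0) * e + w + 1.0 / w) / denom
    f1 = ((1.0 - 1.0 / w) * e + w + 1.0 / w) / denom
    f2 = w * f0
    f3 = e / (w + 1.0 / w)
    return f0, f1, f2, f3, f3 / w, w * f3   # f4 = f3/w, f5 = w*f3

def delta_S2(t):
    """Second Renyi entropy perturbation of Eq. (27), with the relabeling t -> -t."""
    f0, f1, f2, f3, f4, f5 = f_closed_form(-t)
    return 2.0 - 2.0 * (f1 + f2) - 2.0 * f4 - 2.0 * f5 + 4.0 * f3

ts = np.linspace(0.0, 5.0, 11)
print(np.max(np.abs(delta_S2(ts))))       # vanishes up to floating-point error
```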
## 4 Results for Large-N Models in 0+1-d

In the previous section, we analyzed \(\delta S^{(n)}(t)\) by deriving Boltzmann equations for systems with quasi-particles. In this section, we instead consider general systems with all-to-all interactions, which may describe non-Fermi liquids without quasi-particles [45]. A concrete example of such a system \(s\) is the celebrated SYK model at low temperature [46, 47, 48]2. We also keep the choice of operators \(X\), \(O_{s}^{i}\) general. In the early-time regime, we derive general results for arbitrary \(n\) including both the exponential growth and the saturation of \(\delta S^{(n)}(t)\). However, the analytical continuation to \(n=1\) turns out to be hard. In the late-time regime, we present results for both Renyi entropies and the Von Neumann entropy. Footnote 2: The entropy in SYK-like models has also been studied in [49, 50, 51, 52, 53, 54, 55, 56, 57].

### Early-time scramblon effective theory

We first consider the early-time regime, which is dominated by the decay of the inter-replica two-point functions due to many-body chaos. For sufficiently small \(\kappa\), we can safely assume all intra-replica Green's functions are the same as at \(\kappa=0\), and only interactions induced by exchanging scramblons are important, which allows us to compute inter-replica two-point functions using the scramblon effective theory developed in [35, 36]. For simplicity, we will perform the calculation for a bosonic system, but the final result also works for fermionic systems. We first consider integrating out the bath on the entropy contour with \(n\) replicas. This leads to a pictorial representation: \[\exp\left[-(n-1)S^{(n)}(t,0)\right]\quad=\quad\raisebox{-28.452756pt}{\includegraphics[width=14.226378pt]{fig/2.eps}}\quad. \tag{31}\] Here we take \(n=4\) as an example. The Lorentzian time is directed towards the center of the diagram. The dashed lines represent the interaction induced by integrating out the bath \(b\) (we neglect terms that are not relevant to OTO-correlations, as in [30]), which reads \[S_{\rm int}=\kappa^{2}\sum_{i=1}^{N}\sum_{k=0}^{n-1}\int dt_{1}dt_{2}\ O_{s}^{i}(t_{1}-i(k+1)\beta^{-})O_{s}^{i}(t_{2}-ik\beta^{+})\widetilde{G}_{ud}(t_{12}). \tag{32}\] Here \(\beta^{\pm}=\beta\pm 0^{+}\). We begin with the computation of the inter-replica two-point function containing two \(X\) operators in different replicas. Generalizing to general \(n\), we find the diagrammatic expression (33). The corresponding contribution to the entropy reads \(\delta S_{\oplus}^{(n)}=-\frac{n}{n-1}\). (33) can be computed using the scramblon effective theory [35]: the initial insertion of two \(X\) operators in the past creates a perturbation propagating forward in time, with a distribution of the perturbation strength \(y\) described by \(h_{X}^{\rm A}(y,-i\tau)_{n\beta}\). Such a perturbation is observed by pairs of \(O_{s}\) operators in the future, which changes their two-point functions from \(G_{O_{s}}(t_{12})_{n\beta}\) to \(f_{O_{s}}^{\rm R}\left(C^{-1}e^{i\delta}e^{\varkappa\frac{t_{1}+t_{2}}{2}}y,t_{12}\right)_{n\beta}\). Here \(C\propto N\) is the coefficient of the OTOC and \(\delta\) is fixed by the imaginary-time configuration [58, 59]3. This leads to the result Footnote 3: To be precise, here \(\varkappa=\varkappa(n\beta)\) and \(C=C(n\beta)\) are defined at inverse temperature \(n\beta\). \[\begin{split}\textcircled{A}&=\sum_{k=1}^{n-1}\int_{0}^{\infty}dy\ h_{X}^{\rm A}(y,-i(k+1)\beta)_{n\beta}\exp(-S_{\oplus}^{k}),\\ &\frac{S_{\oplus}^{k}}{\kappa^{2}N}=\int_{12}\widetilde{G}_{ud}(t_{12})\left[2G_{O_{s}}(t_{12}-i\beta)_{n\beta}-f_{O_{s}}^{\rm R}\left(\frac{e^{i\delta_{4}}e^{\varkappa\frac{t_{1}+t_{2}}{2}}y}{C},t_{12}-i\beta\right)_{n\beta}-(\delta_{4}\to-\delta_{4})\right].\end{split} \tag{34}\] Here we have introduced \(\delta_{4}=\frac{\varkappa\beta}{2}(\frac{n}{2}-k)\). To take the limit of \(N\to\infty\), we expand \(f^{\rm R/A}(z,t)=G(t)-z\Upsilon^{\rm R/A}(t)+O(z^{2})\) [35]. The result reads \[\frac{S_{\oplus}^{k}}{\kappa^{2}}=y\frac{2N\cos\delta_{4}(k)}{C}\int_{0}^{t}dt_{1}\int_{0}^{t}dt_{2}\ \widetilde{G}_{ud}(t_{12})e^{\varkappa\frac{t_{1}+t_{2}}{2}}\Upsilon_{O_{s}}^{\rm R}(t_{12}-i\beta)_{n\beta}\equiv y\cos\delta_{4}(k)\Lambda(t). \tag{35}\] Here we can estimate \(\Lambda(t)\) as \[\begin{split}\Lambda(t)&\approx\frac{2N}{C}\int_{-\infty}^{\infty}dt_{12}\int_{0}^{t}dT\ e^{\varkappa T}\widetilde{G}_{ud}(t_{12})\Upsilon_{O_{s}}^{\rm R}(t_{12}-i\beta)_{n\beta}\\ &=\frac{2N}{\varkappa C}(e^{\varkappa t}-1)\int_{-\infty}^{\infty}dt_{12}\ \widetilde{G}_{ud}(t_{12})\Upsilon_{O_{s}}^{\rm R}(t_{12}-i\beta)_{n\beta},\end{split} \tag{36}\] which grows exponentially in time with exponent \(\varkappa\). The integral over \(y\) becomes a Laplace transform of \(h_{X}^{\rm A}\). Using the definition \(f^{\rm R/A}(z,t)=\int_{0}^{\infty}dy\ h^{\rm R/A}(y,t)e^{-zy}\) [35], we find \[\textcircled{A}=\sum_{k=1}^{n-1}f_{X}^{\rm A}(\kappa^{2}\cos\delta_{4}(k)\Lambda(t),-i(k+1)\beta)_{n\beta}. \tag{37}\] The generalizations to the other inter-replica two-point functions can be computed similarly and contribute to the entropy in the same way.

Nevertheless, we should keep in mind that the initial value is \(G_{ud}=\langle X(-i\beta)X(0)\rangle_{n\beta}\), due to the presence of other replicas. This is essential for the full late-time entropy to match the coarse-grained entropy: \[\left.\partial_{n}\circled{3}\right|_{n=1}=\left.\partial_{n}\langle Xe^{iHt}e^{-(n-1)\beta H_{s}}e^{-iHt}X\rangle_{\beta}\right|_{n=1}=\beta\langle XH_{s}(t)X\rangle. \tag{43}\] Adding the similar contributions from \(\circled{1}\) and \(\circled{2}\), we find \(\delta S(t)=\beta\delta E(t)\). In the remaining part of this subsection, we will study the perturbation of \(G_{ud}\) due to the system-bath coupling to the \(\kappa^{2}\) order. The result validates (13) and fixes the energy relaxation rate \(\lambda_{0}\). To the order of \(\kappa^{2}\), we have \[\delta\circled{3}=\kappa^{2}\sum_{i}\sum_{\alpha\beta}{\cal P}_{\alpha\beta}\int_{0}^{t}dt_{1}\int_{0}^{t}dt_{2}\ \langle X_{u}(0)X_{d}(0)O^{i}_{s,\alpha}(t_{1})O^{i}_{s,\beta}(t_{2})\rangle_{c,n\beta}\langle O^{i}_{b,\alpha}(t_{1})O^{i}_{b,\beta}(t_{2})\rangle_{n\beta} \tag{44}\] Here \(\alpha,\beta=u/d\). For different \((\alpha,\beta)\), the real-time connected four-point functions in (44) can be expressed through the imaginary-time ordered four-point function \({\cal F}(t_{1},t_{2};t_{3},t_{4})_{n\beta}\) as \[(u,u)=\theta(t_{12}){\cal F}(t_{1}-i\beta^{+},t_{2}-i\beta;-i\beta^{-},0)_{n\beta}+\theta(t_{21}){\cal F}(t_{2}-i\beta^{+},t_{1}-i\beta;-i\beta^{-},0)_{n\beta}, \tag{45}\] \[(d,d)=\theta(t_{12}){\cal F}(t_{2}-in\beta,t_{1}-in\beta^{-};-i\beta,0)_{n\beta}+\theta(t_{21}){\cal F}(t_{1}-in\beta,t_{2}-in\beta^{-};-i\beta,0)_{n\beta},\] \[(u,d)={\cal F}(t_{2}-in\beta,t_{1}-i\beta;-i\beta^{-},0)_{n\beta},\qquad(d,u)={\cal F}(t_{1}-in\beta,t_{2}-i\beta;-i\beta^{-},0)_{n\beta}.\] Summing up all these contributions, the result reads \[\delta\circled{3}=2\kappa^{2}\int_{t_{2}>t_{1}}\widetilde{G}_{ud}(t_{12})\left({\cal F}(t_{2}-in\beta,t_{1}-i\beta;-i\beta^{-},0)_{n\beta}-{\cal F}(t_{2}-i\beta^{+},t_{1}-i\beta;-i\beta^{-},0)_{n\beta}\right) \tag{46}\] \[+2\kappa^{2}\int_{t_{1}>t_{2}}\widetilde{G}_{ud}(t_{12})\left({\cal F}(t_{2}-in\beta,t_{1}-i\beta;-i\beta^{-},0)_{n\beta}-{\cal F}(t_{2}-in\beta^{+},t_{1}-in\beta;-i\beta^{-},0)_{n\beta}\right).\] This result vanishes in the limit of \(n\to 1\), which leads to a finite contribution to the Von Neumann entropy. Expanding near \(n=1\), we find \[\frac{\delta\circled{3}}{n-1}=2\kappa^{2}(-i\beta)\int dt_{1}dt_{2}\ \partial_{t_{1}}\widetilde{G}_{ud}(t_{12}){\cal F}(t_{2}-i\beta^{+},t_{1}-i\beta;-i\beta^{-},0)_{\beta}. \tag{47}\]
To proceed, we assume that the energy fluctuation contributes to the connected four-point function as \[{\cal F}(t_{1},t_{2};t_{3},t_{4})_{\beta}\approx C_{\beta}\partial_{\beta}G_{O_{s}}(t_{12})\partial_{\beta}G_{X}(t_{34}),\qquad\mbox{for}\quad t_{1},t_{2}\gg t_{3},t_{4}. \tag{48}\] Physically, the insertion of \(X\) operators creates an energy increase, which is probed by the \(O_{s}\) operators. It has been shown that (48) is valid in the conformal limit of the SYK model. This leads to \[\frac{\delta\circled{3}}{n-1}\approx 2\kappa^{2}(-i\beta)t\int_{-\infty}^{\infty}dt_{12}\ \partial_{t_{1}}\widetilde{G}_{ud}(t_{12})C_{\beta}\partial_{\beta}G_{O_{s}}(t_{21})\left.\partial_{\beta}G_{X}(t_{34})\right|_{t_{34}\to-i\beta} \tag{49}\] \[\equiv-\lambda_{0}t\beta\ \partial_{\beta}G_{X}(t_{34})\big{|}_{t_{34}\to-i\beta}.\] Noticing that \[\langle X(-i\beta)X(0)\rangle_{n\beta}-\langle X^{2}\rangle_{\beta}=-(n-1)\beta\langle HX(-i\beta)X(0)\rangle_{\beta}=(n-1)\beta\left.\partial_{\beta}G_{X}(t_{34})\right|_{t_{34}\to-i\beta},\] the result (49) matches the expansion of (13) for \(\lambda_{0}t\ll 1\). We thus identify \[\lambda_{0}=2\kappa^{2}i\int_{-\infty}^{\infty}dt_{12}\ \partial_{t_{1}}\widetilde{G}_{ud}(t_{12})C_{\beta}\partial_{\beta}G_{O_{s}}(t_{21}). \tag{50}\]

## 5 Discussions

In this work, we study the perturbative Page curve in open quantum systems induced by an external impulse. The entropy dynamics exhibit a separation of time scales for small system-bath coupling, with an early-time exponential growth due to quantum many-body chaos and a late-time relaxation due to the system-bath coupling. To provide a quantitative description, we first derive the generalized Boltzmann equation for the perturbation of Renyi entropies, which is valid for systems with quasi-particles. The effects of many-body chaos and energy relaxation are naturally encoded in distinct quantum distribution functions. To understand strongly correlated systems, we further study general large-\(N\) systems in 0+1-d, where quasi-particles may be absent. In the early-time regime, we use the scramblon effective theory to derive the increase of Renyi entropies. In the late-time regime, we perform a perturbative analysis to determine the relaxation rate and demonstrate that the entropy tracks the coarse-grained entropy. There are several potential extensions of the current work. Firstly, the stability of the Boltzmann equation is of vital importance; it would be worthwhile to investigate whether some analog of the \(H\)-theorem can be derived for generalized Boltzmann equations on the entropy contour. Another intriguing question is whether analogous Boltzmann equations exist for perturbations of Von Neumann entropies, which necessitates considering the limit as \(n\to 1\). Exploring the evolution of generalized Boltzmann equations in systems where Renyi entropies can be experimentally measured, such as the Bose-Hubbard model [61], could yield interesting insights. Finally, applying generalized Boltzmann equations to systems with repeated measurements may lead to a new perspective on measurement-induced phase transitions.

## Acknowledgment

We thank Yu Chen, Pouria Dadras, Ruihua Fan, and Alexei Kitaev for helpful discussions.
_Note Added._ When completing this work, we became aware of a related study on the entanglement dynamics, independently conducted by Yu Chen [62].
2306.07981
Feature Engineering-Based Detection of Buffer Overflow Vulnerability in Source Code Using Neural Networks
One of the most significant challenges in the field of software code auditing is the presence of vulnerabilities in software source code. Every year, more and more software flaws are discovered, either internally in proprietary code or publicly disclosed. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. To create a large-scale machine learning system for function level vulnerability identification, we utilized a sizable dataset of C and C++ open-source code containing millions of functions with potential buffer overflow exploits. We have developed an efficient and scalable vulnerability detection method based on neural network models that learn features extracted from the source codes. The source code is first converted into an intermediate representation to remove unnecessary components and shorten dependencies. We maintain the semantic and syntactic information using state of the art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into neural networks such as LSTM, BiLSTM, LSTM Autoencoder, word2vec, BERT, and GPT2 to classify the possible vulnerabilities. Furthermore, we have proposed a neural network model that can overcome issues associated with traditional neural networks. We have used evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time to measure the performance. We have conducted a comparative analysis between results derived from features containing a minimal text representation and semantic and syntactic information.
Mst Shapna Akter, Hossain Shahriar, Juan Rodriguez Cardenas, Sheikh Iqbal Ahamed, Alfredo Cuzzocrea
2023-06-01T01:44:49Z
http://arxiv.org/abs/2306.07981v1
Feature Engineering-Based Detection of Buffer Overflow Vulnerability in Source Code Using Neural Networks ###### Abstract One of the most significant challenges in the field of software code auditing is the presence of vulnerabilities in software source code. Every year, more and more software flaws are discovered, either internally in proprietary code or publicly disclosed. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. To create a large-scale machine learning system for function-level vulnerability identification, we utilized a sizable dataset of C and C++ open-source code containing millions of functions with potential buffer overflow exploits. We have developed an efficient and scalable vulnerability detection method based on neural network models that learn features extracted from the source codes. The source code is first converted into an intermediate representation to remove unnecessary components and shorten dependencies. We maintain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into neural networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we have proposed a neural network model that can overcome issues associated with traditional neural networks. We have used evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time to measure the performance. We have conducted a comparative analysis between results derived from features containing a minimal text representation and semantic and syntactic information. We have found that all neural network models provide higher accuracy when we use semantic and syntactic information as features. However, this approach requires more execution time due to the added complexity of the word embedding algorithm. Moreover, our proposed model provides higher accuracy than LSTM, BiLSTM, LSTM-Autoencoder, word2vec and BERT models, and the same accuracy as the GPT-2 model with greater efficiency. **Keywords**: Cyber Security; Vulnerability Detection; Neural Networks; Feature Extraction; ## I Introduction Security in the digital realm is becoming increasingly important, but there is a significant threat to cyberspace from invasion. Attackers can breach systems and applications due to security vulnerabilities caused by hidden software defects. Internally, proprietary programming contains thousands of these flaws each year [1]. For example, the ransomware Wannacry swept the globe by using a flaw in the Windows server message block protocol [2]. According to the Microsoft Security Response Center, there was an industry-wide surge in high-severity vulnerabilities of 41.7% in the first half of 2015. This represents the greatest proportion of software vulnerabilities in at least three years, accounting for 41.8% [3]. Furthermore, according to a Frost and Sullivan analysis released in 2018, severe and high severity vulnerabilities increased from 693 in 2016 to 929 in 2017, with Google Project Zero coming in second place in terms of disclosing such flaws. On August 14, 2019, Intel issued a warning about a high-severity vulnerability in the software it uses to identify the specifications of Intel processors in Windows PCs [4]. The paper claims that these defects, including information leaking and denial of service assaults, might substantially affect software systems. 
Although the company issued an update to remedy the problems, attackers can still use these vulnerabilities to escalate their privileges on a machine that has already been compromised. In June 2021, a vulnerability in the Windows Print Spooler service was discovered that allowed attackers to execute code remotely. The vulnerability, known as PrintNightmare, was caused by a buffer overflow and affected multiple versions of Windows in 2021 [5]. Microsoft released a patch to address the issue, but reports later emerged that the patch was incomplete and still left systems vulnerable. To reduce losses, early vulnerability detection is a good technique. The proliferation of open-source software and code reuse makes these vulnerabilities susceptible to rapid propagation. Source code analysis tools are already available; however, they often only identify a small subset of potential problems based on pre-established rules. Software vulnerabilities can be found using a technique called vulnerability detection. Conventional vulnerability detection employs static and dynamic techniques [6]. Static approaches evaluate source code or executable code without launching any programs, such as data flow analysis, symbol execution [7], and theorem proving [8]. Static approaches can be used early in software development and have excellent coverage rates, but they have a significant false positive rate. By executing the program, dynamic approaches like fuzzy testing and dynamic symbol execution can confirm or ascertain the nature of the software. Dynamic methods depend on the coverage of test cases, which results in a low recall despite their low false positive rate and ease of implementation. The advancement of machine learning technology incorporates new approaches to address the limitations of conventional approaches. One of the key research directions is to develop intelligent source code-based vulnerability detection systems. It can be divided into three categories: using software engineering metrics, anomaly detection, and weak pattern learning [9]. Initially, software engineering measures, including software complexity [10], developer activity [11], and code commits [12] were investigated to train a machine learning model. This strategy was motivated by the idea that software becomes more susceptible as it becomes more complicated, but accuracy and recall need to be improved. Allamanis et al. [13] have shown that the syntactic and semantic information in the code increases the detection accuracy in anomaly detection. Moreover, one work has shown the detection of the anomaly using fully-fledged codes [14]. It reveals previously unidentified weaknesses, but false positive and false negative rates are high. Another work has shown an approach with clean and vulnerable samples to learn vulnerable patterns [15]. This method performs very well but relies on the quality of the dataset. In our work, we propose a solution for detecting software buffer overflow vulnerability using neural networks such as Simple RNN, LSTM, BiLSTM, word2vec, BERT, GPT2, and LSTM-Autoencoder. We first transform source code samples into the minimum intermediate representations through a tokenizer provided by the Keras library. Later, we extract semantic features using word embedding algorithms such as GloVe and fastText. After finishing the data preprocessing stage, we feed the input representation to the neural networks for classification. Moreover, we develop a neural network that works best among all the models. 
All the models have been evaluated using evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time. The following is a summary of our contributions:

1. Extracting semantic and syntactic features using GloVe and fastText.
2. Vulnerability detection in source code using LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 with a minimal intermediate feature representation of the texts.
3. Vulnerability detection in source code using LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 with semantic and syntactic features.
4. Proposal of a neural network that outperforms the results derived from existing models.
5. Comparison between results derived from neural networks trained with a minimal intermediate feature representation of the texts and with semantic and syntactic features.

The rest of the paper is organized as follows: we provide a brief background study on software vulnerability detection in Section 2. Then we explain the methods we followed for our experimental research in Section 3. The results derived from the experiment are demonstrated in Section 4. Finally, Section 5 concludes the paper.

## II Literature Review

Researchers are interested in the recently developed machine learning strategy for identifying and preventing software and cybersecurity vulnerabilities in order to address the shortcomings of conventional static and dynamic code analysis techniques. Various machine learning techniques, including naive Bayes, logistic regression, recurrent neural networks (RNN), decision trees, and support vector machines, are successfully used for classifying software security activities like malware, ransomware, and network intrusion detection. We have examined machine learning-related papers that have been applied to the software security domain. Previously, Zeng et al. [16] reviewed software vulnerability analysis and discovery using deep learning techniques. They found four game-changing methods that contributed most to software vulnerability detection using deep learning techniques. These concepts are automatic semantic feature extraction using deep learning models, end-to-end solutions for detecting buffer overflow vulnerabilities, applying a bidirectional Long Short-Term Memory (BiLSTM) model for vulnerability detection, and deep learning-based vulnerability detectors for binary code. Zhou et al. [17] proposed a method called graph neural network for vulnerability identification with function-level granularity to address the issue of information loss during the representation learning process. They transformed the samples into a code property graph format. Then, a graph neural network made up of a convolutional layer and a gated graph recurrent layer learned the vulnerable programming pattern. This method improves the detection of intra-procedural vulnerabilities. However, they did not address inter-procedural vulnerabilities. Iorga et al. [18] demonstrated a process for early detection of cyber vulnerabilities from Twitter, building a corpus of 650 annotated tweets related to cybersecurity articles. They used the BERT model and transfer learning model for identifying cyber vulnerabilities from the articles. The BERT model shows 91% accuracy, which they found adequate for identifying relevant posts or news articles. Sauerwein et al. [19] presented an approach for automated classification of attackers' TTPs by combining NLP with ML techniques. They extracted the attackers' TTPs from unstructured text.
To extract the TTPs, they used a combination of NLP with ML techniques. They assessed all potential combinations of the specified NLP and ML approaches with 156 processing pipelines and an automatically generated training set. They found that tokenization, POS tagging, IoC replacement, lemmatization, one-hot encoding, binary relevance, and support vector machine performed best for the classification of techniques and tactics. Harer et al. [20] created a dataset composed of millions of open-source functions annotated with results from static analysis. The performance of source-based models is then compared against approaches applied to artifacts extracted from the build process, with source-based methods coming out on top. The best performance is found when combining characteristics learned by deep models with tree-based models. They evaluated the use of deep neural network models alongside more conventional models like random forests. Finally, their best model achieved an area under the ROC curve of 0.87 and an area under the precision-recall curve of 0.49. Pistoia et al. [21] surveyed static analysis methods for identifying security vulnerabilities in software systems. They discussed three topics that have been linked to security vulnerability sources: application programming interface conformance, information flow, and access control. They addressed static analysis methods for stack-based access control and role-based access control separately since access control systems can be divided into these two main types. They reviewed some effective static analysis techniques, including the Mandatory Access Rights Certification of Objects (MARCO) algorithm, the Enterprise Security Policy Evaluation (ESPE) algorithm, the Static Analysis for Validation of Enterprise Security (SAVES) algorithm, and Hammer, Krinke, and Snelting's algorithm. However, static analysis produces false positive results and relies on predefined rules. For new errors, the static analysis method is unsuitable, as it cannot recognize and detect them. ## III Methodology From the standpoint of source code, the majority of flaws originate in critical processes that pose security risks, such as functions, assignments, or control statements. Adversaries can directly or indirectly affect these crucial operations by manipulating factors or circumstances. To successfully understand patterns of security vulnerabilities from code, neural network models must be trained on a large number of instances. In this study, we analyze the lowest level of codes in software package functions, capable of capturing vulnerable flows. We utilized a sizable dataset containing millions of function-level examples of C and C++ code from the SATE IV Juliet Test Suite, the Debian Linux distribution, and open-source Git repositories on GitHub, as mentioned in Russell's work [22]. Our project employs the CWE-119 vulnerability feature, which indicates issues related to buffer overflow vulnerability. Buffer overflow occurs when data written to a buffer exceeds its length, overwriting storage units outside the buffer. According to a 2019 Common Weakness Enumeration report, buffer overflow vulnerability has become the most adversely affected issue. Although we focus on buffer overflow, our method can identify other vulnerabilities. Figure 1 illustrates an intra-procedural buffer overflow vulnerability. 
Our dataset is divided into three subfolders--train, validation, and test--each containing a CSV file with 100,000, 20,000, and 10,000 data instances, respectively. The CSV files store text data and corresponding labels, allowing systematic evaluation of the model's performance and adaptability throughout the learning process. We analyzed the dataset and found some common words (shown in Table 1) with their corresponding counts. The visualization of common words in the dataset provides a preliminary understanding of what kind of important features the dataset might have.

\begin{table} \begin{tabular}{|l|l|l|} \hline index & Common\_words & Count \\ \hline 0 & = & 505570 \\ \hline 1 & if & 151663 \\ \hline 2 & \{\}\(\backslash\)n & 113301 \\ \hline 3 & \(==\) & 92654 \\ \hline 4 & return & 77438 \\ \hline 5 & * & 71897 \\ \hline 6 & the & 71595 \\ \hline 7 & \(\backslash\)n & 63182 \\ \hline 9 & int & 53673 \\ \hline 10 & /* & 51910 \\ \hline 11 & i & 43703 \\ \hline 12 & */\(\backslash\)n & 43591 \\ \hline 13 & + & 41855 \\ \hline 14 & to & 39072 \\ \hline 15 & \& & 36180 \\ \hline 16 & for & 35849 \\ \hline 17 & \(\backslash\)n\(\backslash\)n & 34017 \\ \hline 18 & char & 33334 \\ \hline 19 & else & 31358 \\ \hline \end{tabular} \end{table} TABLE I: Most common words and their frequencies

Fig. 1: An example of buffer overflow vulnerability.

#### Iii-1 Data Preprocessing

In this study, we conducted a series of data preprocessing techniques to prepare our dataset for the neural networks. The data preprocessing steps we employed include tokenization, stop word removal, stemming, lemmatization, and the use of pre-trained embeddings. Initially, we performed tokenization, which is the process of breaking down the source code into smaller units called tokens. Tokens represent the basic units of analysis for computational purposes in natural language processing tasks. For this process, we utilized the Keras tokenizer, which provides methods such as tokenize() and detokenize() to process plain text and separate words [23]. Following tokenization, we applied stop word removal, stemming, and lemmatization techniques to further preprocess the tokens. Stop word removal eliminates common words that do not provide significant information, while stemming and lemmatization normalize the tokens by reducing them to their root form. These techniques help in reducing noise and improving the efficiency of the neural networks. We first converted the tokens into numerical representations using minimal intermediate representation with the Keras tokenizer. The Keras tokenizer assigns a unique integer index to each token in the vocabulary and represents the source code as a sequence of these integer indices. This representation is more efficient than one-hot encoding, as it does not involve creating large, sparse vectors. However, it still lacks semantic information about the tokens. To further enhance the representation of the source code tokens and better capture semantic and syntactic information, we utilized pre-trained embeddings, namely GloVe and fastText. We stacked GloVe and fastText embeddings together for extracting the semantic and syntactic information from the source code. Both of these embeddings have demonstrated strong performance in various NLP tasks and can effectively capture the relationships between words in the source code. GloVe is an unsupervised learning algorithm that generates vector representations of words based on global word-word co-occurrence statistics from a corpus [24].
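To make the preprocessing pipeline concrete, a minimal sketch of the Keras tokenization and the construction of a GloVe embedding matrix is given below. The file name, embedding dimension, sequence length, and sample text are assumptions for illustration only; a fastText matrix (discussed next) can be built and stacked in the same way.

```
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical function-level code samples; in practice these come from the CSV files.
train_texts = ["int main ( ) { char buf [ 8 ] ; strcpy ( buf , argv [ 1 ] ) ; }"]

tokenizer = Tokenizer()                       # minimal intermediate representation
tokenizer.fit_on_texts(train_texts)
sequences = tokenizer.texts_to_sequences(train_texts)
X_train = pad_sequences(sequences, maxlen=500)  # assumed maximum sequence length

# Build an embedding matrix from pre-trained GloVe vectors (file name and 100-d
# size are assumptions); unknown tokens keep zero vectors.
embedding_dim = 100
glove = {}
with open("glove.6B.100d.txt", encoding="utf-8") as fh:
    for line in fh:
        parts = line.split()
        glove[parts[0]] = np.asarray(parts[1:], dtype="float32")

vocab_size = len(tokenizer.word_index) + 1
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, idx in tokenizer.word_index.items():
    if word in glove:
        embedding_matrix[idx] = glove[word]
```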
FastText, an extension of the skip-gram method, generates character n-grams of varying lengths for each word and learns weights for each n-gram, as well as the entire word token, allowing the model to capture the meaning of suffixes, prefixes, and short words [25]. We separately fed the minimal intermediate representation with Keras tokenizer and the semantic and syntactic representations derived from GloVe and fastText into our neural network models. This approach allowed us to compare the performance of the models when using different input representations, helping us identify the most effective method for detecting security vulnerabilities in the source code. ### _Classification Models_ In this section, we discuss various classification models that were utilized in our study. These models include Simple RNN, LSTM, BiLSTM, LSTM-Autoencoder, Word2vec, BERT, and GPT-2. These models are designed to work with different types of data, such as text, time series, and sequences, and have been widely employed in natural language processing and other related tasks. ### _Simple Recurrent Neural Network (RNN)_ The Simple Recurrent Neural Network (RNN) is a type of artificial neural network that can model sequential data by utilizing a directed graph and temporally dynamic behavior. RNNs consist of an input layer, a hidden layer, and an output layer [26]. These networks have a memory state added to each neuron, allowing them to capture temporal dependencies in the data. The dimensionality of the input layer in our Simple Recurrent Neural Network (RNN) model is determined based on the input data features. The hidden layer consists of 256 units, which use memory states to capture temporal dependencies in the data. We use the hyperbolic tangent (tanh) activation function in the hidden layer to introduce non-linearity into the model. We chose this activation function due to its ability to handle vanishing gradients more effectively compared to other activation functions like sigmoid. The output layer of the Simple RNN model is designed to generate predictions based on the processed input data. The number of units in the output layer corresponds to the number of classes, which is two. We use an appropriate activation function, such as sigmoid for binary classification, in the output layer to generate probability scores for each class. To optimize the model, we choose the Binary Cross entropy loss function and employ the Adam optimization algorithm. We set hyperparameters such as learning rate to 0.001, batch size to 32, and the number of training epochs to 50. ### _Long short-term memory (LSTM)_ The Long Short-Term Memory (LSTM) is a type of recurrent neural network designed to solve the vanishing and exploding gradient problem of traditional RNNs.It was first proposed by Hochreiter and Schmidhuber [27]. Using this model for sequential datasets is effective, as it can handle single data points. It follows the Simple RNN model's design and is an extended version of that model [28, 29]. Our LSTM model consists of an input layer that determines the dimensionality of the input data features. We incorporated three hidden layers, each containing 128 memory cells that can capture long-term dependencies in the input sequence. The output of each LSTM layer is fed into a dropout layer with a dropout rate of 0.2 to prevent overfitting. The final output of the last LSTM layer is fed into a dense layer with two units and a sigmoid activation function to produce the final binary classification output. 
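A minimal Keras sketch of the LSTM classifier just described (three 128-unit LSTM layers with 0.2 dropout, a learning rate of 0.001, batch size 32, and 50 epochs) might look as follows. It reuses the assumed vocab_size, embedding_dim, and embedding_matrix names from the preprocessing sketch, and it uses a single sigmoid output unit, which is equivalent under binary cross-entropy to the two-unit sigmoid head described in the text.

```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([
    # Reuses the (assumed) vocab_size / embedding_matrix from the preprocessing sketch.
    Embedding(vocab_size, embedding_dim, weights=[embedding_matrix],
              input_length=500, trainable=False),
    LSTM(128, return_sequences=True), Dropout(0.2),
    LSTM(128, return_sequences=True), Dropout(0.2),
    LSTM(128), Dropout(0.2),
    # Single sigmoid unit for the binary label; the text describes a two-unit
    # sigmoid head, which is equivalent for binary cross-entropy.
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer=Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), batch_size=32, epochs=50)
```

The bidirectional variant described next can be obtained by wrapping each LSTM layer in Keras' Bidirectional wrapper.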
The LSTM cell comprises three gates: the input gate, forget gate, and output gate, which regulate the flow of information into and out of the cell. To introduce non-linearity into the model, we use the hyperbolic tangent (tanh) activation function in the LSTM cell. Furthermore, we utilize the Rectified Linear Unit (ReLU) activation function in the output layer to generate non-negative predictions. We optimize the LSTM model using the Binary Cross-Entropy loss function and Adam optimization algorithm. The model's hyperparameters include a learning rate of 0.001, batch size of 32, and 50 training epochs. ### _Bidirectional Long short-term memory (BiLSTM)_ The Bidirectional Long Short-Term Memory (BiLSTM) is a type of recurrent neural network that enhances the capabilities of the traditional LSTM by introducing bidirectional processing of the input sequence. It was first proposed by Graves [30]. This idea sets it apart from the LSTM model, which can learn patterns from the past to the future [31].Our BiLSTM model comprises an input layer that determines the dimensionality of the input data features. We have incorporated three hidden layers, each containing 128 memory cells that can capture long-term dependencies in the input sequence. The output of each BiLSTM layer is fed into a dropout layer with a dropout rate of 0.2 to prevent overfitting. The final output of the last BiLSTM layer is fed into a dense layer with two units and a sigmoid activation function to produce the final binary classification output. The BiLSTM cell has two sets of three gates, namely the input gate, forget gate, and output gate, one set that processes the input sequence in the forward direction and another set that processes the input sequence in the backward direction. This bidirectional processing allows the model to capture dependencies in both the past and future context of the input sequence. To introduce non-linearity into the model, we use the hyperbolic tangent (tanh) activation function in the BiLSTM cell. Furthermore, we utilize the Rectified Linear Unit (ReLU) activation function in the output layer to generate non-negative predictions. We optimize the BiLSTM model using the Binary Cross-Entropy loss function and Adam optimization algorithm. The model's hyperparameters include a learning rate of 0.001, batch size of 32, and 50 training epochs. ### _LSTM-Autoencoder_ The LSTM-Autoencoder is a variant of the Long Short-Term Memory (LSTM) model that utilizes an autoencoder architecture. The LSTM-Autoencoder is designed to read input sequences, encode sequences, decode sequences, and reconstruct sequences for a given sequential dataset, referred to as encoder-decoder [32]. Its performance is estimated based on how well the model can recreate the sequence. LSTM autoencoder can be used on video, text, audio, and time-series sequence data. The model accepts a series of various lengths of inputs and outputs for various purposes, such as translating from one language to another. The series is transformed into a vector representation by the encoder, and the vector is transformed back into a sequence of outputs or texts by the decoder. The meaning of the outputs is maintained in the vector representation. In this model, we have an input layer that determines the dimensionality of the input data features. The LSTM encoder layer contains 128 memory cells that can capture long-term dependencies in the input sequence. 
The LSTM decoder layer has the same number of memory cells as the encoder layer, which allows the model to reconstruct the input sequence. To introduce non-linearity into the model, we use the hyperbolic tangent (tanh) activation function in the LSTM cells. Additionally, we utilize the Mean Squared Error (MSE) loss function to calculate the reconstruction loss of the autoencoder. The model's hyperparameters include a learning rate of 0.001, batch size of 32, and 50 training epochs. To evaluate the performance of the LSTM-Autoencoder, we calculate the reconstruction error between the input and reconstructed sequence. The lower the reconstruction error, the better the model's ability to capture the input sequence's structure. ### _Word2vec_ Word2vec is a word embedding model specifically designed for working with textual data. Word embedding is a technique for representing words that allows computer programs to understand words with similar meanings. By employing a neural network model to map words into vectors of real numbers, word2vec is capable of capturing significant accurate syntactic and semantic word relationships. After training, the two-layer neural network can recognize synonymous terms and suggest new words for incomplete phrases [33]. Our Word2vec model comprises an input layer that takes in the one-hot encoded words and a single hidden layer containing a specified number of neurons, which represent the latent dimensions of the word embeddings. We utilize the Skip-gram architecture with negative sampling to train the Word2vec model. In this architecture, the model predicts the surrounding words given a target word or predicts the target word given surrounding words. The negative sampling technique helps to efficiently train the model by reducing the computation required to update the weights of the model. The output layer is not used in the Word2vec model, and the trained weights of the hidden layer represent the learned word embeddings. These embeddings can be used in various downstream NLP tasks such as text classification, sentiment analysis, and machine translation. To optimize the model, we use the Stochastic Gradient Descent (SGD) optimization algorithm with an initial learning rate of 0.025 and decrease the learning rate linearly over time to 0.001. We set the batch size to 128 and the number of training epochs to 5. ### _Bert_ BERT (Bidirectional Encoder Representations from Transformers) is a state-of-the-art pre-trained language model developed by Google. BERT is a bidirectional transformer-based architecture that can capture the context of a word in a sentence by looking at the surrounding words [34]. The BERT model consists of 12 transformer blocks for the base version and 24 transformer blocks for the large version. Each transformer block has a multi-head attention mechanism and a feed-forward neural network, making it capable of modeling long-term dependencies in the input sequence. In our implementation of BERT, we utilized the pre-trained BERT model and fine-tuned it on our specific NLP task. We utilized the pre-trained BERT model with 12 transformer blocks, 12 attention heads, and 110 million parameters. We added a dense layer with 2 units and a sigmoid activation function to perform binary classification. We utilized the Binary Cross-Entropy loss function and Adam optimization algorithm to optimize the model. We set the learning rate to 2e-5 and the batch size to 32. 
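A minimal sketch of this BERT fine-tuning setup is shown below, using the Hugging Face transformers library as one possible implementation (the text does not name a specific library). The checkpoint name, maximum sequence length, and the two-logit softmax head are assumptions; the text describes a sigmoid head with binary cross-entropy.

```
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Assumed checkpoint and sequence length; the text only fixes the 12-layer,
# 110M-parameter base model, a learning rate of 2e-5, and a batch size of 32.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                             num_labels=2)

enc = tok(train_texts, padding="max_length", truncation=True,
          max_length=256, return_tensors="tf")
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    # Two-logit softmax head with integer labels, standing in for the sigmoid head.
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(dict(enc), y_train, batch_size=32, epochs=3)
```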
To fine-tune the pre-trained BERT model, we trained it on our specific NLP task using a training set of 100,000 instances and a validation set of 20,000 instances. We trained the model for 3 epochs and evaluated its performance on a separate test set, which constists of 10,000 instances. ### _Gpt-2_ GPT-2 (Generative Pre-trained Transformer 2) is a state-of-the-art language model developed by OpenAI. It is a transformer-based language model that can generate coherent and fluent text in a wide range of styles and topics [35]. GPT-2 has a large number of parameters, with the base version having 117 million parameters, and the largest version having 1.5 billion parameters. In our implementation of GPT-2, we utilized the pre-trained GPT-2 model to generate text for our specific NLP task. We fine-tuned the pre-trained GPT-2 model on a large corpus of text relevant to our task to improve its performance. We used the GPT-2 model with 117 million parameters for our task. To fine-tune the pre-trained GPT-2 model, we used a training set of 100,000 instances and a validation set of 20,000 instances. We fine-tuned the model for 3 epochs and evaluated its performance on a separate test set. We used the perplexity metric to evaluate the performance of the model. We utilized the Adam optimization algorithm with a learning rate of 1e-5 and a batch size of 32 to optimize the model. ### _Proposed Model_ We propose a stacking ensemble learning approach to improve the performance of our system. Stacking ensemble is an advanced machine learning technique that combines multiple heterogeneous weak learners (base models) to form a single stronger model (meta-learner). In this approach, the base models' predictions are used as input to the meta-learner, which ultimately makes the final prediction. The meta-learner used in this case is a logistic regression model, while the base models consist of Simple RNN, LSTM, BiLSTM, word2vec, and LSTM-Autoencoder. These models are trained with one-dimensional data as input. Since the predicted dataset from Level 0 already contains the expected values' probabilities, the meta-learner can provide accurate probabilities from Level 0. To mitigate overfitting, the meta-learner is trained using both the validation dataset and the outputs. The final result is the level-1 prediction. The architecture is divided into two levels, Level 0 and Level 1, as shown in Figure 2. Level 0 consists of Simple RNN, LSTM, BiLSTM, word2vec, and LSTM-Autoencoder models. After learning the data patterns, each of the base models generates predictions simultaneously. All models in Level 0 contribute equally to the overall model performance. Level 1, also referred to as the meta-learner, is built using logistic regression. The meta-learner at Level 1 is fed the Level 0 predicted outputs as input. Based on the Level 0 predictions, the meta-learner calculates the best weighted outputs. A "meta-learner" is a model that can quickly learn a pattern or adapt to different datasets with a small amount of training data. It learns patterns from the outputs generated by the five base models. As a result, the model can effectively learn completely new data and produce acceptable output. The meta-learner's parameters are a combination of the parameters of the five neural networks in the base models. 
Mathematically, the stacking ensemble learning approach can be represented as follows. Let \(M\) be the number of base models, \(p_{i}^{m}\) be the probability of the positive class for the \(i\)-th sample predicted by the \(m\)-th base model, and \(w_{m}\) be the weight assigned to the \(m\)-th base model. The weighted probability \(p_{i}^{weighted}\) for the \(i\)-th sample can be computed as: \[p_{i}^{weighted}=\sum_{m=1}^{M}w_{m}\cdot p_{i}^{m}\] The weights \(w_{m}\) are determined by the meta-learner using the Level 0 predictions and the validation data. The final prediction \(y_{i}^{final}\) for the \(i\)-th sample can be computed using the logistic function: \[y_{i}^{final}=\frac{1}{1+e^{-p_{i}^{weighted}}}\] By using a diverse set of base models, we can mitigate the limitations of traditional stacking ensemble approaches that employ similar base models, leading to similar predictions. If a single base model performs poorly on the dataset, there is a high likelihood that the final output will also be inferior. Conversely, with a diverse set of base models, the strengths and weaknesses of individual models complement each other, which results in a more robust and accurate overall model. This is because each base model is able to capture different aspects or patterns in the data, thereby reducing the reliance on any single model's performance. Additionally, the meta-learner can effectively combine these diverse predictions to generate a more accurate and stable final prediction, minimizing the impact of individual model biases or errors. In conclusion, the utilization of heterogeneous base models in a stacking ensemble approach provides a more resilient and powerful predictive model, capable of handling various types of data and delivering superior performance compared to traditional ensemble methods.

Fig. 2: Proposed stacking ensemble learning architecture.

```
Function stacking_ensemble(data, train_ratio, val_ratio, test_ratio)
    // Initialize Level 0 base models
    simple_rnn       <- SimpleRNN()
    lstm             <- LSTM()
    bi_lstm          <- BiLSTM()
    lstm_autoencoder <- LSTMAutoencoder()
    word2vec_model   <- Word2Vec()
    models <- [simple_rnn, lstm, bi_lstm, lstm_autoencoder, word2vec_model]

    // Initialize Level 1 meta-learner
    meta_learner <- LogisticRegression()

    // Split the data into training, validation, and testing sets
    X_train, X_val, X_test, y_train, y_val, y_test <- data_split(data, train_ratio, val_ratio, test_ratio)

    // Train Level 0 base models
    for each model in models do
        model.fit(X_train, y_train)

    // Make predictions with Level 0 base models
    Level0_outputs <- list()
    for each model in models do
        pred <- model.predict(X_val)
        Level0_outputs.append(pred)

    // Concatenate Level 0 outputs and train the Level 1 meta-learner
    Level0_outputs_combined <- concatenate(Level0_outputs)
    meta_learner.fit(Level0_outputs_combined, y_val)

    // Make predictions on the test set with the Level 0 base models
    Level0_test_outputs <- list()
    for each model in models do
        test_pred <- model.predict(X_test)
        Level0_test_outputs.append(test_pred)
    Level0_test_outputs_combined <- concatenate(Level0_test_outputs)

    // Generate Level 1 final predictions
    final_predictions <- meta_learner.predict(Level0_test_outputs_combined)
    return final_predictions
```
**Algorithm 1** Proposed Stacking Ensemble Learning Algorithm.
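For concreteness, the Level-0/Level-1 procedure of Algorithm 1 can be sketched in Python with scikit-learn's LogisticRegression as the meta-learner. The names base_models, X_val, X_test, and y_val are assumed placeholders standing in for the five trained Level-0 networks and the data splits described above, not part of the original implementation.

```
import numpy as np
from sklearn.linear_model import LogisticRegression

def stack_probabilities(models, X):
    """Collect each base model's positive-class probability as one column (Level 0)."""
    return np.column_stack([m.predict(X).ravel() for m in models])

# `base_models` is assumed to hold the five trained Level-0 models (Simple RNN,
# LSTM, BiLSTM, word2vec-based classifier, LSTM-Autoencoder).
level0_val = stack_probabilities(base_models, X_val)
meta_learner = LogisticRegression()            # Level 1
meta_learner.fit(level0_val, y_val)            # learns the weights w_m

level0_test = stack_probabilities(base_models, X_test)
final_pred = meta_learner.predict(level0_test)               # hard labels
final_prob = meta_learner.predict_proba(level0_test)[:, 1]   # logistic of the weighted sum
```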
## IV Evaluation metrics In order to assess the performance of the Neural Networks and our proposed stacking ensemble model, we have employed a range of evaluation metrics that provide insight into various aspects of model performance. These metrics include precision, recall, F1-score, accuracy, and execution time. Each of these metrics contributes to a comprehensive understanding of the model's effectiveness, generalization, and efficiency [36, 37, 38]. Below, we provide a brief description of each evaluation metric: ### _Precision_ Precision is a measure of the accuracy of the positive predictions made by the model. It is calculated as the ratio of true positive predictions to the sum of true positive and false positive predictions. In other words, it quantifies the proportion of correct positive predictions among all the instances predicted as positive. A higher precision value indicates that the model is better at identifying relevant instances and minimizing false positive predictions. \[\text{Precision}=\frac{\text{True\ Positives}}{\text{True\ Positives}+\text{False\ Positives}} \tag{1}\] ### _Recall_ Recall, also known as sensitivity or true positive rate, measures the proportion of actual positive instances that are correctly identified by the model. It is calculated as the ratio of true positive predictions to the sum of true positive and false negative predictions. A higher recall value indicates that the model is better at detecting positive instances and minimizing false negative predictions. \[\text{Recall}=\frac{\text{True\ Positives}}{\text{True\ Positives}+\text{False\ Negatives}} \tag{2}\] ### _F1-score_ F1-score is the harmonic mean of precision and recall, and it provides a balanced measure of both metrics. It is particularly useful when dealing with imbalanced datasets, where one class is significantly more prevalent than the other. The F1-score ranges from 0 to 1, with a higher value indicating better overall performance of the model in terms of both precision and recall. \[\text{F1-score}=2\cdot\frac{\text{Precision}\cdot\text{Recall}}{\text{ Precision}+\text{Recall}} \tag{3}\] ### _Accuracy_ Accuracy is a widely-used metric that quantifies the proportion of correct predictions made by the model, both positive and negative, relative to the total number of instances. It provides an overall indication of the model's performance, but it may not be a reliable metric when dealing with imbalanced datasets, as it can be biased towards the majority class. \[\text{Accuracy}=\frac{\text{True\ Positives}+\text{True\ Negatives}}{\text{Total\ Instances}} \tag{4}\] ### _Execution Time_ Execution time is a measure of the computational efficiency of the model. It refers to the amount of time required to train the model and make predictions. A shorter execution time indicates that the model is more efficient, which can be particularly important in real-world applications where time constraints are critical. By evaluating the execution time, we can assess the trade-offs between model performance and computational resources. These evaluation metrics provide a comprehensive and robust assessment of our neural network and proposed model's performance. By considering multiple aspects of performance, we can ensure that our model is not only accurate but also efficient, generalizable, and reliable across various datasets and application scenarios. 
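As an illustration of how these metrics might be computed for one of the trained models, consider the following minimal sketch using scikit-learn. Here model, X_test, and y_test are assumed placeholders from the training pipeline sketched earlier, and only prediction time is measured for brevity (total execution time in our experiments also includes training).

```
import time
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

# Evaluate a trained classifier on the held-out test split.
start = time.perf_counter()
y_pred = (model.predict(X_test).ravel() >= 0.5).astype(int)  # threshold sigmoid outputs
elapsed = time.perf_counter() - start

print("Precision:", precision_score(y_test, y_pred))
print("Recall:   ", recall_score(y_test, y_pred))
print("F1-score: ", f1_score(y_test, y_pred))
print("Accuracy: ", accuracy_score(y_test, y_pred))
print("Prediction time (s):", elapsed)
```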
## V Result and Discussion In this study, we investigated the role of semantic and syntactic features in vulnerability prediction for CWE-119, focusing on buffer overflow vulnerabilities. We began by converting the text dataset into a minimal intermediate representation using a tokenizer provided by the Keras library. This basic representation assigns a numerical value to each word without considering semantic information. Since the meaning of code is often better captured by considering the context of multiple words, we employed state-of-the-art word embedding algorithms--GloVe and fastText--to extract semantic features from function-level codes. These features were then fed into neural network models for vulnerability prediction. We used 100,000 instances for training, 20,000 for validation, and 10,000 for testing. Our evaluation metrics included accuracy, precision, recall, and F1 score, with a focus on minimizing false positives and false negatives. We trained seven neural network models (Simple RNN, LSTM, BiLSTM, word2vec, BERT, GPT-2, and LSTM-Autoencoder) and our proposed stacking ensemble neural network model. Our ensemble learning model outperformed single models, achieving the highest accuracy in vulnerability prediction. Table 2 presents the results of vulnerable source code classification using different neural network models without word embedding algorithms. The Simple RNN model achieves an accuracy of 0.89, precision of 0.88, recall of 0.88, and F1 score of 0.92, with an execution time of 42 minutes and 8 seconds. The LSTM model has slightly better performance with an accuracy of 0.90, precision of 0.90, recall of 0.90, and F1 score of 0.92, and takes 29 minutes and 48 seconds to run. The BiLSTM model shows further improvement, obtaining an accuracy of 0.91, precision of 0.93, recall of 0.90, and F1 score of 0.87, but requires 2 hours and 5 minutes for execution. The Word2vec model yields an accuracy of 0.89, precision of 0.92, recall of 0.95, and F1 score of 0.93, with a runtime of 40 minutes and 2 seconds. The LSTM Autoencoder model has an accuracy of 0.91, precision of 0.93, recall of 0.94, and F1 score of 0.94, taking 53 minutes and 13 seconds for execution. The BERT model performs better with an accuracy of 0.92, precision of 0.93, recall of 0.93, and F1 score of 0.95, but requires 2 hours and 38 minutes to run. The GPT-2 model has an accuracy of 0.92, precision of 0.97, recall of 0.98, and F1 score of 0.97, with a considerably longer execution time of 7 hours and 48 minutes. Lastly, the proposed model outperforms the other models with an accuracy of 0.94, precision of 0.99, recall of 0.98, and F1 score of 0.99, and takes 2 hours and 31 minutes to execute. Table 3 shows the results when using GloVe and FastText embeddings. In general, the performance of all models improved when using these embeddings. The Simple RNN, LSTM, BiLSTM, and Word2vec models show a similar trend in improvement, with their respective accuracies increasing to 0.92, 0.92, 0.93, and 0.94. The LSTM Autoencoder model's performance slightly decreased with an accuracy of 0.90. The BERT, GPT-2, and proposed models maintain their superior performance, with accuracies of 0.94, 0.95, and 0.95, respectively. The execution times for all models vary, with the proposed model having a runtime of 2 hours and 46 minutes. Figure 3 shows the performance metrics for different neural network models on vulnerable source code without using any word embedding algorithms. 
The models considered are Simple RNN, LSTM, BiLSTM, Word2vec, LSTM-Autoencoder, BERT, GPT-2, and the proposed model. The metrics considered are accuracy, precision, recall, and F1 score. The results demonstrate that the proposed model outperforms all other models in terms of accuracy and F1 score, achieving an accuracy of 0.94 and an F1 score of 0.99. The execution time of the proposed model is also relatively fast compared to other models, taking only 2 hours and 31 minutes. Figure 4 presents the classification results of the same neural network models on vulnerable source code using GloVe and fastText word embedding algorithms. The results demonstrate that all models achieved higher accuracy and F1 score compared to the results in Figure 3. The proposed model continues to perform the best with an accuracy of 0.95 and an F1 score of 0.99. However, the execution time of the proposed model is longer compared to Figure 3, taking 2 hours and 46 minutes. These figures provide a clear comparison of the performance of different neural network models and highlight the effectiveness of using word embedding algorithms for improving the classification results of vulnerable source code. The proposed model performs well in both scenarios, showing its potential as a reliable classification model. In Table 4, we present a comparative analysis between our proposed model and previous works in the domain of vulnerability detection. The table highlights the differences in terms of the purpose of each study, the data used, whether semantic or syntactic feature extraction was performed, the highest performance achieved, and whether efficiency measurements were conducted. Lorga et al. [18] aimed at vulnerability detection using Twitter text data, but they did not perform semantic or syntactic feature extraction. Their model achieved an accuracy of 94.96%, and they did not provide any efficiency measurements. Similarly, Foret et al. [39] worked on vulnerability detection using news articles without incorporating semantic or syntactic features, resulting in an 87% accuracy. No efficiency measurement analysis was conducted in their work either. Harer et al. [20] and Russell et al. [22] both focused on vulnerability detection in source code but did not consider semantic or syntactic feature extraction. Their models achieved F1-scores of 49.99% and 56.6%, respectively, without any efficiency measurement analysis. Behzadan et al. [40] also worked on vulnerability detection in source code without extracting semantic or syntactic features. They reported an accuracy of 94.72%, but no efficiency measurement analysis was performed. Our proposed model targets vulnerability detection in source code and incorporates semantic and syntactic feature extraction using GloVe and fastText embeddings. As a result, our model achieves the highest accuracy of 95% compared to the previous works. Moreover, we contribute an efficiency measurement analysis and perform an in-depth analysis of features that were not considered in previous studies. This comprehensive approach allows us to better understand the factors influencing the performance of vulnerability detection models and develop more effective methods for detecting security vulnerabilities in source code. ## VI Conclusion Our research aims to detect implementation vulnerabilities early in the development cycle by leveraging the power of neural networks.
We have collected a large dataset of open-source C and C++ code and developed a scalable and efficient vulnerability detection method based on various neural network models. We compared the performance of different models, including Simple RNN, LSTM, BiLSTM, LSTM-Autoencoder, \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Models & Accuracy & Precision & Recall & F1 Score & Execution Time \\ \hline Simple RNN & 0.89 & 0.88 & 0.88 & 0.92 & 42min 8s \\ \hline LSTM & 0.90 & 0.90 & 0.90 & 0.92 & 29min 48s \\ \hline BiLSTM & 0.91 & 0.93 & 0.90 & 0.87 & 2h 5min \\ \hline Word2vec & 0.89 & 0.92 & 0.95 & 0.93 & 40min 2s \\ \hline LSTM Autoencoder & 0.91 & 0.93 & 0.94 & 0.94 & 53min 13s \\ \hline BERT & 0.92 & 0.93 & 0.93 & 0.95 & 2h 38min \\ \hline GPT-2 & 0.92 & 0.97 & 0.98 & 0.97 & 7h 48min \\ \hline Proposed Model & 0.94 & 0.99 & 0.98 & 0.99 & 2h 31min \\ \hline \end{tabular} \end{table} TABLE II: Vulnerable source code classification results using different neural network models with no word embedding algorithms \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Previous authors & Purpose & Data & Semantic or Syntactic feature extraction? & Highest percentage & Efficiency Measurement? \\ \hline Lorga et al. [18] & Vulnerability detection & Twitter text data & No & 94.96\% (Accuracy) & No \\ \hline Foret et al. [39] & Vulnerability detection & News Articles & No & 87\% (Accuracy) & No \\ \hline Harer et al. [20] & Vulnerability detection & Source code & No & 49.99\% (F1-score) & No \\ \hline Russell et al. [22] & Vulnerability detection & Source code & No & 56.6\% (F1-score) & No \\ \hline Behzadan et al. [40] & Vulnerability detection & Source code & No & 94.72\% (Accuracy) & No \\ \hline **Our Proposed Model** & **Vulnerability detection** & **Source code** & **Yes** & **95\%** (Accuracy) & **Yes** \\ \hline \end{tabular} \end{table} TABLE IV: Comparative analysis with previous work \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Models & Accuracy & Precision & Recall & F1 Score & Execution time \\ \hline Simple RNN & 0.92 & 0.93 & 0.93 & 0.97 & 42min 8s \\ \hline LSTM & 0.92 & 0.93 & 0.95 & 0.97 & 33min 13s \\ \hline BiLSTM & 0.93 & 0.96 & 0.96 & 0.99 & 45min 3s \\ \hline Word2vec & 0.94 & 1.00 & 0.98 & 0.99 & 42min 56s \\ \hline LSTM Autoencoder & 0.90 & 0.93 & 0.94 & 0.95 & 59min 53s \\ \hline BERT & 0.94 & 0.95 & 0.95 & 0.99 & 5h 16min \\ \hline GPT-2 & 0.95 & 0.97 & 0.98 & 0.99 & 8h 33min \\ \hline Proposed Model & 0.95 & 0.97 & 0.98 & 0.99 & 2h 46min \\ \hline \end{tabular} \end{table} TABLE III: Vulnerable source code classification results using different neural network models with embedding algorithms GloVe + fastText Word2Vec, BERT, and GPT-2, and found that models with semantic and syntactic information extracted using state-of-the-art word embedding algorithms such as GloVe and fastText outperform those with a minimal text representation. Our proposed neural network model has been shown to provide higher accuracy with greater efficiency than the other models evaluated. We have also analyzed the execution time of the models and discussed the trade-off between accuracy and efficiency. Overall, our research contributes to the development of large-scale machine learning systems for function-level vulnerability identification in source code auditing.
## Acknowledgement This work is supported by the National Science Foundation under NSF Awards #2209638, #2100115, #2209637, #2100134, and #1663350. Any opinions, findings, and recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2308.11497
The Global Network of Cavities to Search for Gravitational Waves (GravNet): A novel scheme to hunt gravitational waves signatures from the early universe
The idea of searching for gravitational waves using cavities in strong magnetic fields has recently received significant attention. Specifically, discussions focus on cavities with relatively small volumes, which are currently employed in the search for axions. In this context, we propose a novel experimental scheme that enables the detection of gravitational waves in the GHz regime, which could originate, for example, from primordial black hole mergers. The scheme is based on synchronous measurements of cavity signals from multiple devices operating in magnetic fields at distant locations. While gravitational wave signatures might be detectable in individual cavities, distinguishing them from noise is highly challenging. By analyzing the correlation among signals from several, possibly geographically separated cavities, it is not only possible to significantly enhance the signal-to-noise ratio but also to investigate the source of those gravitational wave signatures. In the context of this proposal, a first demonstration experiment with one superconducting cavity is currently conducted, which is the basis of the proposed data-analysis approaches. On this basis the prospects of GravNet (Global Network of Cavities to Search for Gravitational Waves) are outlined in the paper.
Kristof Schmieden, Matthias Schott
2023-08-22T15:19:16Z
http://arxiv.org/abs/2308.11497v1
# The Global Network of Cavities to Search for Gravitational Waves ###### Abstract The idea of searching for gravitational waves using cavities in strong magnetic fields has recently received significant attention. Specifically, discussions focus on cavities with relatively small volumes, which are currently employed in the search for axions. In this context, we propose a novel experimental scheme that enables the detection of gravitational waves in the GHz regime, which could originate, for example, from primordial black hole mergers. The scheme is based on synchronous measurements of cavity signals from multiple devices operating in magnetic fields at distant locations. While gravitational wave signatures might be detectable in individual cavities, distinguishing them from noise is highly challenging. By analyzing the correlation among signals from several, possibly geographically separated cavities, it is not only possible to significantly enhance the signal-to-noise ratio but also to investigate the source of those gravitational wave signatures. In the context of this proposal, a first demonstration experiment with one superconducting cavity is currently being conducted, which is the basis of the proposed data-analysis approaches. On this basis the prospects of GravNet (Global Network of Cavities to Search for Gravitational Waves) are outlined in the paper. ###### Contents * 1 Introduction * 2 Sources of high frequency gravitational waves * 3 Basic Considerations on Cavity Designs * 4 GravNet as Resonant Cavity Experiment (GravNet-1) * 5 GravNet as Photon Counting Experiment (GravNet-2) * 6 Conclusion ## 1 Introduction The detection of gravitational waves (GWs) by the LIGO and Virgo interferometers [1] marked the beginning of a new era in astronomy. Gravitational waves, with frequencies spanning from supermassive binary black hole systems in the nHz regime to kHz for compact binary objects and up to GHz for GWs from the cosmic gravitational wave background [2], are an essential part of our understanding of the universe. Interferometers, like LIGO and Virgo, have proven to be highly successful in detecting GWs, and future generations, such as the Einstein Telescope [3], are in the design phase. An alternative concept for GW detection exploits their coupling to the electromagnetic field, using radio frequency cavities, either pumped or placed in a static magnetic field. Recently, the latter approach has been discussed in more detail [4; 5; 6], especially in the context of searches for axion-like particles [7; 8; 9]. The basic principle behind the cavity-based experiments is simple: a gravitational wave distorts the cavity's shape, altering the magnetic flux through the cavity and generating an electric signal that can be detected. Additionally, the GW couples directly to the EM field via the inverse Gertsenshtein effect. Hence, a gravitational wave passing through a cavity within a static magnetic field creates an effective current in Maxwell's equations, leading to an electromagnetic field that oscillates at the same frequency as the gravitational wave. The induced electromagnetic field can be resonantly enhanced using microwave cavities and the generated radio frequency power detected. The sensitivity of such experiments depends on the GW frequency, incoming direction, the cavity's resonance frequencies, and the external magnetic field strength.
The sensitivity to gravitational waves using a cavity-based experiment has been derived in [4] and can be summarised by the signal power \[P_{sig}=\frac{1}{2}Q\omega_{g}^{3}V^{5/3}(\eta_{n}h_{0}B_{0})^{2}\frac{1}{\mu_{0}c^{2}}, \tag{1}\] with \(\omega_{g}\) denoting the GW frequency and \(h_{0}\) the magnitude of the GW strain. The cavity is described by its volume \(V\), its quality factor \(Q\) as well as the external magnetic field \(B_{0}\). The dimensionless coupling constant \(\eta_{n}\) is given by the overlap of the induced current with the excited cavity mode: \[\eta_{n}=\frac{|\int_{V}d^{3}xE_{n}^{*}\cdot\hat{j}_{+,\times}|}{V^{1/2}(\int_{V}d^{3}x|E_{n}|^{2})^{1/2}}, \tag{2}\] where \(E_{n}\) denotes a resonant mode of the cavity and \(\hat{j}_{+,\times}\) describe spatially-dependent dimensionless functions for the spatial profile and polarization of the GW signal. We refer to [4] for further details. It is important to note that the sensitivity increases quadratically with the magnetic field strength \(B_{0}\) and to the power of 5/3 with the cavity volume \(V\). Unlike axion searches, where each axion mass corresponds to a single resonance frequency, GW signals may be broadband and not localized at a specific frequency. Even more importantly, GWs are expected to be coherent over large distances, in contrast to axion halo signatures. This opens up a completely new approach to search for GWs: instead of constructing one dedicated cavity-based experiment, a global network of several cavity experiments all over the world could coherently combine their measurements and significantly boost the sensitivity. In particular, it is sufficient to use off-the-shelf high-field magnet systems, which can also be operated at smaller laboratories. We present in the following the basic idea of a Global Network of Cavities to Search for Gravitational Waves (GravNet). After a brief discussion of the potential sources of high frequency gravitational waves (Section 2), we briefly discuss the baseline cavity experiment SUPAX (Section 3), which will be used to estimate the overall baseline sensitivity of GravNet. The resonant-cavity and photon-counting realisations of GravNet are then discussed in Sections 4 and 5. ## 2 Sources of high frequency gravitational waves Various sources have the potential to produce high-frequency gravitational waves (GWs), and they have been studied extensively [4]. Notably, mergers involving sub-solar mass objects, including primordial black holes (PBHs) and other exotic compact objects, are typical examples. Additionally, GWs can be generated by boson clouds via black hole superradiance. Also exotic compact objects, such as boson and fermion stars, gravitino stars, gravistars, and dark matter blobs, can have masses significantly below a solar mass, enabling them to emit GWs at high frequencies. During the inspiral phase of a binary merger, the frequency of the emitted GWs increases as the compact objects approach each other. There exists an upper bound on the GW frequency emitted by a binary in a quasi-circular orbit, which corresponds to the frequency at the innermost stable circular orbit (ISCO). For a binary with equal-mass compact objects (\(M_{b}\)) during the early stages of the merger, the GW frequency evolves as: \[\omega_{g}\approx 14\,\mathrm{GHz}\times\left(\frac{10^{-6}M_{\odot}}{M_{b}}\right)\left(\frac{r_{\mathrm{ISCO}}}{r_{b}}\right)^{3/2},\] where \(M_{\odot}\) denotes the solar mass.
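As a rough numerical illustration (our own back-of-the-envelope check, not taken from [4]), the relation above can be inverted to estimate which binary masses radiate near the 5 GHz frequencies targeted by the cavities discussed later, assuming emission at the ISCO, i.e. \(r_{b}=r_{\mathrm{ISCO}}\):
```python
# Illustrative evaluation of the ISCO frequency relation quoted above,
# assuming an equal-mass binary observed at r_b = r_ISCO (last bracket equal to one).
def f_isco_ghz(m_b_solar):
    """GW frequency in GHz at the ISCO for component mass m_b in solar masses."""
    return 14.0 * (1e-6 / m_b_solar)

def mass_for_frequency(f_ghz):
    """Component mass in solar masses whose ISCO emission falls at f_ghz."""
    return 1e-6 * 14.0 / f_ghz

print(f_isco_ghz(1e-6))         # 14 GHz for a 1e-6 solar-mass binary, as quoted above
print(mass_for_frequency(5.0))  # ~2.8e-6 solar masses radiate near 5 GHz at the ISCO
```
Since one Earth mass corresponds to about \(3\times 10^{-6}\,M_{\odot}\), binaries radiating in the few-GHz band at or before the ISCO are indeed sub-Earth mass objects, as discussed next.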
Therefore, only very light binaries, such as sub-Earth mass PBHs, can generate GW signals in the GHz regime well before reaching the ISCO. However, GW signals from such light binaries become highly transient near the ISCO, as the emitted GW frequency evolves over time according to: \[\frac{d\omega_{g}}{dt}\approx\left(\frac{M_{b}}{r_{b}}\right)^{11/6}.\] This time dependence limits the sensitivity of resonant experiments. The number of orbital cycles (\(N_{\mathrm{cyc}}\)) a binary spends emitting GWs within the resonator bandwidth \(\omega_{g}/Q\) for frequencies in the GHz regime can be estimated as: \[N_{\mathrm{cyc}}\approx 10^{-3}\times\left(\frac{10^{-6}M_{\odot}}{M_{b}}\right)^{5/3}\left(\frac{10^{5}}{Q}\right).\] Requiring at least \(N_{\mathrm{cyc}}>10^{5}\) for the detection of a GW signal implies \(M_{b}<10^{-11}M_{\odot}\). Remarkably, this is the regime where PBHs could constitute a significant fraction of the cosmological dark matter abundance [4]. The dependence of the time the PBH merger signal spends within a cavity bandwidth on the PBH mass is shown in Figure 1. However, other sources such as GWs from superradiance of boson clouds in the vicinity of PBHs and other more exotic objects are non-transient and remain at constant frequencies over long time intervals, of the order of months, allowing for long measurement times [10]. The typical distance to sources of high frequency gravitational waves is of the order of 1 pc or more. Hence the GW wavefront can be assumed to be coherent across terrestrial distances, in particular when the origin of the wave can be determined, as the phase differences between different experimental locations on earth can then be calculated. ## 3 Basic Considerations on Cavity Designs In order to estimate the expected signal-to-noise ratio for a typical cavity of the GravNet proposal, we use the existing numbers of the SUPAX experiment, which is currently in operation at the University of Mainz [11]. The SUPAX cavity and its readout will be discussed in section 3.1. We then discuss the advantages of combining several cavities in a coherent manner in section 3.2, before we discuss single photon detection approaches in cavities in section 3.3. ### The SUPAX Cavity Design SUPAX is an experiment which aims to search for dark photons and axion-like particles with masses around \(34\,\mathrm{\mu eV}\) using a cavity in a high magnetic field [11]. SUPAX uses a rectangular cavity which is suspended from the top of a liquid helium (LHe) dewar and immersed in the LHe bath, as illustrated in Figure 3. The dimensions of the cavity are carefully chosen to maximize its effective volume while ensuring that the final assembly fits within the designated cryostat, which has a diameter of \(52\,\mathrm{mm}\). Additionally, the mode next to the investigated \(\mathrm{TM}_{010}\) mode is kept at least \(>100\,\mathrm{MHz}\) away. This leads to inner dimensions of the cavity of \(150\times 22.8\times 30\,\mathrm{mm}^{3}\). These dimensions yield a resonance frequency of: \[f=8.47\,\mathrm{GHz}\ \ \Rightarrow\ \ m_{A^{\prime}}=2\pi\hbar\cdot f\approx 35\,\mathrm{\mu eV}.\] at room temperature. The cavity, depicted in Figure 2, was milled from a solid copper block. The unloaded quality factor is measured to be \(Q_{0}=40726\pm 4250\), in good agreement with the simulation result of \(Q_{0}^{\mathrm{sim}}=40059\pm 128\).
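For reference, the frequency-to-mass conversion used above is easy to verify numerically; the following one-liner is our own convenience check, not part of the SUPAX analysis chain.
```python
# Check of m_A' = 2*pi*hbar*f = h*f for a given cavity resonance frequency.
H_EV_S = 4.135667696e-15  # Planck constant in eV*s

def freq_to_mass_ueV(f_hz):
    """Photon energy (probed particle mass) in micro-eV for a resonance at f_hz."""
    return H_EV_S * f_hz * 1e6

print(freq_to_mass_ueV(8.47e9))  # ~35.0 micro-eV, as quoted above for SUPAX
print(freq_to_mass_ueV(5.0e9))   # ~20.7 micro-eV for a 5 GHz cavity
```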
The readout is based on a real-time spectrum analyser which is capable of streaming IQ time-series data with a bandwidth of \(40\,\mathrm{MHz}\) around the \(8\) GHz centre frequency to a readout computer. For the purpose of the axion (dark photon) search the data is immediately converted into the frequency domain and the resulting spectra are stored for offline analysis. To study the offline combination of multiple cavity signals, the IQ data is stored. The next generation cavity of the SUPAX experiment is coated with a niobium-nitride superconductor. This will lead to an expected improvement of the quality factor by a factor of \(10\) to \(100\). The RF behaviour of the coating in the magnetic field is currently under investigation. Figure 1: Left: Maximum feasible integration time given by the duration the frequency change is less than \(f_{0}/Q\), assuming \(Q=10^{6}\), for a PBH merger. Right: typical strains of PBH mergers in dependence of their mass for distances of \(1\,\mathrm{pc}\) and \(10\,\mathrm{pc}\) from the earth. Figure 3: Schematic overview of the SUPAX experimental setup. Figure 2: Cavity of the SUPAX experiment with a length of \(16\,\mathrm{cm}\). The magnet to be used is a fast-ramping superconducting solenoid from CRYOGENIC LTD with a central field of up to 14 T over a length of 20 cm1. The room temperature bore has a diameter of 89 mm which limits the diameter of the used cryostat. Footnote 1: The magnet is currently installed at the Helmholtz Institute at Mainz by the group of Prof. Budker ### Combination of several cavities with a coherent signal Typical distances of sources of high frequency gravitational waves are 1 pc or more, hence these sources can be treated as point-like at earth, yielding coherent GW wave-fronts. Before discussing the combination of locally separated cavities, we first discuss the combination of N cavity signals recorded at the same location. This gives a factor of N improvement in the SNR over the individual cavity, and not, as one might naively assume, an improvement of only \(\sqrt{N}\). To understand this feature, we follow the discussion in [12] and consider an N-port power combiner/divider as in Figure 4. The middle picture shows how N voltages with frequency \(\omega\) and phase differences \(\phi\) combine. Assuming the same phase and amplitude at each input, the output voltage behaves as \(V_{out}=\sqrt{N}V_{in}\), hence the power scales linearly with \(N\). Having \(N\) cavities distributed across earth, one needs at least three cavities which define the incoming direction of the gravitational wave. This defines the phase-difference at any further cavity, thus yielding an increase in the SNR by \(N-3\) for \(N\) similar cavities that are distributed across earth. As one single setup will not be sensitive enough to reliably detect a GW, the direction of the incoming GW, and hence the phase alignment of the various setups, will be stepped through all possible directions in the offline data analysis. ### Single Photon Counting in Cavities Single photon counting in cavities is a cutting-edge technique that has revolutionized the field of quantum optics and precision measurements. It involves detecting and quantifying individual photons that are confined within cavities. For RF photons the problem is much more difficult than for optical wavelengths as the energy per photon is lower. However, sensitivity to single RF photons was recently shown experimentally, and several further techniques are promising to reach single RF photon sensitivity soon.
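As a quick numerical sanity check of the combination argument in the previous subsection (our own illustration, independent of [12]), one can verify that summing N phase-aligned channels with independent noise improves the power signal-to-noise ratio by a factor of N rather than \(\sqrt{N}\):
```python
# Monte-Carlo check: coherent summation of N identical signals with independent noise.
# The summed signal power grows as N^2 while the noise power grows as N,
# so the power SNR improves by a factor of N.
import numpy as np

rng = np.random.default_rng(0)
N, n_samples = 10, 200_000
t = np.arange(n_samples)
signal = 0.1 * np.sin(2 * np.pi * 0.01 * t)               # common, phase-aligned signal
channels = signal + rng.normal(0.0, 1.0, (N, n_samples))  # independent unit-variance noise

snr_single = np.mean(signal**2) / 1.0                     # power SNR of one channel
combined = channels.sum(axis=0)                           # coherent sum of all channels
snr_combined = np.mean((N * signal)**2) / np.var(combined - N * signal)

print(snr_combined / snr_single)                          # close to N (here ~10)
```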
Among the currently discussed detection methods are current-biased Josephson junctions (CBJJ) [13], Kerr Josephson parametric amplifiers [14] and transmon qubits [15]. While CBJJs do not yet reach single photon sensitivity, the latter two approaches do, with the transmon qubit showing a detection efficiency of 43% for 7 GHz photons at a dark count rate below \(90\,Hz\). Similar to the resonant cavity readout, one would also expect a huge advantage when combining several cavities which are distributed across the earth. Assuming a single photon detection efficiency of 50% and an improved background rate of 10 Hz for one cavity, one would get \(3\times 10^{8}\) background events per year, while a signal is expected only once per year. Since the background is uncorrelated between independent cavities, one can require a coincidence signal across N cavities. Assuming a timing resolution of 0.2 ms, the background is suppressed by a factor of \((0.2\,\mathrm{ms}\times 10\,\mathrm{Hz})^{-(N-1)}=500^{N-1}\). Requiring at least N=5 cavities to record a coincident signal therefore suppresses the background by \(\approx 6\times 10^{10}\), i.e. yields a background-free search. The problem is that 5 coincident cavities also have a resulting signal efficiency of \(0.5^{5}\approx 0.03\), corresponding to an average time for one recorded signal event of 32 years. However, if one combines e.g. \(N>5\) cavities, the likelihood of recording a signal in at least 5 cavities increases according to a binomial distribution. Clearly, at least three cavities are required to define the direction of the incoming GW, i.e. to define the coincidence window for any other cavity. Assuming \(N=20\), of which \(N=17\) are actually available for the analysis, and a single signal efficiency of \(p=0.5\), a total signal efficiency of \(>90\%\) is expected. ## 4 GravNet as Resonant Cavity Experiment (GravNet-1) The first implementation of GravNet will most likely be based on the resonant cavity approach, as the underlying technologies are all well tested and understood. ### Setup As discussed in the previous section, typical commercially available (and therefore cost-efficient) high-field magnets have a cylindrical volume of a high constant magnetic field. Typical dimensions range between radii of 2 to 5 cm with a height of 10 to 40 cm. The coupling constant of GW signals to the cavity is of the order of \(\approx 0.1\) for cylindrical cavities, but 1 for spherical cavities. Assuming a constant magnetic field in a cylindrical volume of \(r=4\) cm and \(h=24\) cm, one can either fit one cylindrical cavity with those dimensions or three spherical cavities of \(r=4\) cm. Comparing the effect on \(P_{sig}\) between one cylindrical and one spherical cavity, this implies a factor of \(\approx 12\) in \(P_{sig}^{\mathrm{cylinder}}/P_{sig}^{\mathrm{spherical}}\) due to the larger volume of the cylinder, but a factor of \(\eta^{2}=0.01\) due to the coupling. The overall effect on \(P_{sig}\) is roughly up to a factor 10 in favor of a spherical cavity. Using three coupled cavities instead will increase the effective volume threefold and the SNR by a factor of 6, as can be seen from eq. 1. According to those considerations, we foresee three cavities for one experimental setup of GravNet, each with a radius of 4 cm, placed in a constant magnetic field of 14 T with a height of 24 cm.
Finally, we assume that the readout system for GravNet will be based on Josephson Parametric Amplifiers (JPAs) or similar low noise amplifiers with system temperatures of 0.1 K. An overview of all critical parameters of one experimental GravNet setup is given in Table 1. ### Sensitivities We think it is realistic to assume that the final GravNet experiment will combine \(N=10\) individual experimental locations across the globe, each hosting three cavities as detailed in the previous section. Assuming 1 s integration time and the sensitivity for each setup as listed in Table 1, the sensitivity on \(h_{0}\) will improve by a factor \(\sqrt{N}\) to \(h_{0}<1.7\cdot 10^{-23}\). This requires a phase-aligned combination of the time-series data of each of the setups, yielding a linear increase in the SNR with increasing number of setups. To this end the direction of the incoming GW has to be known, to be able to calculate the relative phase differences between the setups depending on their geographic location. This can be achieved by assuming a direction of the GW, combining the data and searching for a signal, and then scanning through both angles defining the direction of the GW. The sensitivity can be trivially increased by increasing the integration time of the cavity signals. While integration times over hours are feasible, this requires a GW source with stable frequency over at least the intended integration time. Bosonic clouds exhibiting gravitational superradiance are a superb candidate. Integrating for two hours instead of 1 second improves the sensitivity on the GW strain by one order of magnitude, reaching \(h_{0}<5.6\cdot 10^{-24}\) for a single setup and consequently about \(1.7\cdot 10^{-24}\) for ten setups. The dependence of the sensitivity on \(h_{0}\) on the integration time is shown in Figure 5. ### Possible Extensions It should be noted that one could in principle also combine different cavity layouts and magnet systems into GravNet, which would then probe different resonance frequencies at different sensitivities. Since such a setup \begin{table} \begin{tabular}{c|c|c} Setup & Supax & GravNet \\ \hline Shape & cyl. & spher. \\ \(f_{0}\) [GHz] & 8.3 & 5.0 \\ Volume [l] & 0.128 & 0.21 \\ \(Q_{0}\) & 39600 & \(10^{6}\) \\ \(\eta\) & 0.08 & 0.6 \\ \(T_{\text{sys}}\) [K] & 5 & 0.1 \\ \(B\) [T] & & 14 \\ int. time & & 1 s \\ n cavities & 1 & 3 \\ \hline noise power [W] & \(1.5\cdot 10^{-21}\,W\) & \(6.2\cdot 10^{-23}\,W\) \\ \(h_{0}(P_{\text{sig}}=P_{\text{noise}})\) & \(7.1\cdot 10^{-21}\) & \(5.2\cdot 10^{-23}\) \\ \end{tabular} \end{table} Table 1: Parameters of the experimental setup defining the signal and noise power. The measured values were obtained using the Supax Cu cavity in LHe. The expected values assume a superconducting, spherical cavity with 4 cm radius. Figure 4: Left: Equivalent circuit for one mode of a single-port cavity, Middle: Schematic of an N-Port Power Combiner / Divider, Right: Equivalent circuit for one mode of a two-port cavity Figure 5: Shown is the sensitivity on the GW strain \(h_{0}\) in dependence on the integration time for the resonant cavity setup with parameters assumed as shown in Table 1. would certainly introduce a certain model dependence, in particular when searching for transient signals, its final efficiency still has to be studied in detail.
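The scaling behind the numbers quoted in this section can be reproduced with a short script. Note that the \(t^{-1/4}\) dependence on the integration time is our reading of the quoted values (a radiometer-like scaling) and is stated here as an assumption, while the \(N^{-1/2}\) scaling with the number of setups is taken directly from the text above.
```python
# Sketch of the sensitivity scaling discussed above, starting from the single-setup,
# 1-second baseline h0 ~ 5.2e-23 of Table 1. Assumptions (ours): h0 scales as
# N^(-1/2) with the number of setups and as t^(-1/4) with the integration time.
def h0_sensitivity(h0_baseline=5.2e-23, n_setups=1, t_int_s=1.0):
    return h0_baseline * n_setups**-0.5 * t_int_s**-0.25

print(h0_sensitivity(n_setups=10))                    # ~1.6e-23 (quoted above: 1.7e-23)
print(h0_sensitivity(t_int_s=2 * 3600))               # ~5.6e-24 for 2 h, single setup
print(h0_sensitivity(n_setups=10, t_int_s=2 * 3600))  # ~1.8e-24 (quoted above: ~1.7e-24)
```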
## 5 GravNet as Photon Counting Experiment (GravNet-2) ### Setup As discussed in Section 3.3, the shape of the cavity does not increase the likelihood of a conversion; only the active volume of the cavity within the magnetic field is relevant. Given that the cost-driving factor is always the magnet system, and not the design of the cavities, we assume the same magnet setup as in GravNet-1, but with two independent cylindrical cavities with dimensions of \(r=4\,cm\) and \(h=12\,cm\) instead of three spherical cavities. While the volume increases the sensitivity with \(V^{5/3}\), one gains significantly more due to the binomial probabilities discussed in Section 3.3. ### Sensitivities Similar to GravNet-1, we assume again N=10 different experimental setups, with a total of \(N=20\) operational and independent cavities, as depicted in Figure 7. The cavities operate at a resonance frequency around 5 GHz and exhibit a volume of \(0.6\,l\) each. The single RF photon detection efficiency is taken to be 50%, and a dark count rate of \(10\,Hz\) and a time resolution of 0.2 ms are assumed, as discussed in Section 3. With these parameters, the following sensitivities are expected. A naive sensitivity estimate yields, assuming a coincidence time window of 0.2 ms and each setup consisting of 2 independent cavities, a coincidence dark count rate of 1.2 counts per minute. Requiring a coincidence of 5 cavities in total, a dark count rate of 1 in 190 years is expected. The efficiency to detect the coincident production of RF photons in at least 5 out of 20 cavities is calculated using the binomial distribution with \(n=20\) and \(p=0.5\), which gives \(P(x\geq 5)=99.4\%\). The question is, however, how this translates into a sensitivity on the GW strain \(h_{0}\). The photon flux from thermal noise at 0.1 K and a sensitive bandwidth of 1 kHz is about 10 photons per second at a photon energy of 5 GHz. A sensitive bandwidth of 1 MHz would increase the photon flux to 400 Hz. Decreasing the temperature to 0.01 K would reduce the thermal photon flux by one order of magnitude. Hence, we assume for the following calculation a photon flux of 10 Hz from thermal radiation and a negligible contribution to the dark count rate from the detector itself. Clearly, to be able to discriminate the thermal noise photons from the signal of a PBH merger event, a coincidence measurement is needed, as indicated above. The photon flux \(\Phi\) generated by a GW can be estimated by dividing the signal power by the photon energy, \(\Phi=P_{sig}/h\nu\). Using eq. 1 and assuming \(Q_{0}=10^{6}\) and \(\eta=0.1\), the photon flux generated in one cavity in dependence on the GW strain is shown in Fig. 8. Two cavity dimensions are shown: GravNet-a and GravNet-b, whose parameters are summarized in Table 2. The smaller cavity (GravNet-a) shows a signal photon flux comparable to the thermal noise of \(10\,Hz\) at \(h_{0}=1.7\times 10^{-21}\), while the larger cavity (GravNet-b) reaches that flux at \(h_{0}=1.6\times 10^{-24}\). The target rate of accidental coincidences from the thermal noise is set to one per year. This defines the length of the allowed coincidence window \(\Delta t\) in dependence on the number of required coincidences \(k\) and the background rate \(\mathrm{bkg}\): \[1/\Delta t=(\mathrm{bkg\cdot secPerYear})^{1/(k-1)}\cdot\mathrm{bkg}\] Figure 6: Schematic drawing of one experimental GravNet-1 setup with resonant cavities. Figure 7: Schematic drawing of one experimental GravNet-2 setup as a single photon experiment. This dependence is shown in Figure 9.
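Evaluating this expression is straightforward; the short sketch below (our own illustration) computes the allowed coincidence window for the per-cavity thermal background rate of 10 Hz assumed above and a few values of \(k\):
```python
# Allowed coincidence window dt from 1/dt = (bkg*secPerYear)^(1/(k-1)) * bkg,
# i.e. the window that keeps the accidental k-fold coincidence rate at one per year.
SEC_PER_YEAR = 3.15e7

def coincidence_window_s(k, bkg_hz=10.0):
    return 1.0 / ((bkg_hz * SEC_PER_YEAR) ** (1.0 / (k - 1)) * bkg_hz)

for k in (5, 10, 18):
    print(k, coincidence_window_s(k))  # roughly 0.8 ms, 11 ms and 32 ms, respectively
```
The window grows with the required multiplicity \(k\), which is what makes high-multiplicity coincidences efficient despite the 50% single-photon detection efficiency, as quantified next.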
Knowing the required coincidence interval, we can calculate the efficiency to detect one photon from a GW in \(k\) detectors within the coincidence window. The result is shown for various assumptions on the signal photon flux in Figure 10, assuming 20 independent detectors in total. A photon flux of \(\Phi=30\,Hz\) is no longer reliably detected, while for a photon flux of \(\Phi=40\,Hz\) a detection efficiency of 1 is still reached using a coincidence of 18 out of 20 cavities with a coincidence window of \(31\,ms\). This photon flux corresponds to a GW strain of \(h_{0}=3\times 10^{-22}\) (\(h_{0}=3\times 10^{-24}\)) for the GravNet-a (GravNet-b) setups, respectively, as is shown in Figure 8. It is worth stressing the fundamental difference between a counting experiment and the resonance-based experiment. In the resonant cavity excitation experiment one relies on the signal being monochromatic w.r.t. the cavity bandwidth, at least long enough to ring up the cavity so that the maximal RF energy can be read out, which is still below a ms for very high-Q cavities. Note that typical integration times for such setups are O(100) seconds, and even in the sensitivity estimate in Section 4 an integration time of 1 s is assumed. Furthermore, a phase-aligned combination of the signals from multiple cavities is required, which may be challenging. In contrast, in the counting experiment we rely on simultaneous measurements of single photons within a time interval of a few ms, making the experiment sensitive to very short, transient signals. The cavity signals do not need to be phase aligned, and even measurements from setups with variations in frequencies can easily be combined. Using of the order of 20 individual detectors, the experiment becomes essentially background-free given the parameters of the detectors as stated above, and can be sensitive to strains down to \(10^{-22}\) to \(10^{-24}\), depending on the size of the conversion volume. Those results are summarized in Figure 11 in comparison to other existing experiments. ## 6 Conclusion In this paper we propose to set up a world-wide network of cavity-based experiments to search for high frequency gravitational waves (GravNet) with frequencies in the 5 GHz regime. Assuming ten participating laboratories, sensitivities on the gravitational wave strain of \(h_{0}<10^{-24}\) for monochromatic sources and long integration times and \(h_{0}<10^{-22}\) for \(\sim 10\,ms\) transient signals can be reached, depending on the chosen experimental setup. Using larger magnet setups with lower field would even extend the sensitivity to transient signals to \(h_{0}<10^{-24}\). This would allow for a first search for several exotic sources of high frequency gravitational waves as well as for primordial black hole models with masses larger than \(3\times 10^{-7}\,M_{\odot}\). In particular, it should be noted that the experimental concept is based on commercial magnet systems, which are rather cost-efficient compared to specialized dedicated high field magnets. Moreover, a worldwide collaboration such as GravNet would share costs automatically with local lab-based experiments and boost the R&D effort within many groups, since new developments at one site can be easily integrated in all other experimental locations. This paper should be seen as a starting point for discussions towards a collaborative world-wide effort in the context of gravitational wave physics beyond the regime of classic astrophysical objects.
## Acknowledgement We thank Pedro Schwaller, Sebastian Schenk and Tim Schneemann for the helpful discussions and comments during the preparation of this paper. This work would have not been possible without the continuous support from the PRISMA+ Cluster of Excellence at the University of Mainz. \begin{table} \begin{tabular}{c|c|c} Setup & GravNet-a & GravNet-b \\ \hline radius & 40 mm & 40 cm \\ length & 12 cm & 50 cm \\ Volume [\(m^{3}\)] & \(6\times 10^{-4}\) & 0.25 \\ \(Q_{0}\) & \(10^{6}\) & \(10^{5}\) \\ \(T_{\text{sys}}\) [K] & 0.1 & 0.1 \\ \(B\) [T] & 14 & 9 \\ \hline noise power [W] & \(4.4\cdot 10^{-23}\,W\) & \(4.4\cdot 10^{-23}\,W\) \\ \(h_{0}(P_{\text{sig}}=P_{\text{noise}})\) & \(1.6\cdot 10^{-22}\) & \(3.4\cdot 10^{-24}\) \\ \(\gamma\)-flux [1/s] & 10 & 10 \\ \end{tabular} \end{table} Table 2: Parameters of the two cylindrical cavity configurations (GravNet-a and GravNet-b) assumed for the photon counting setup, defining the signal and noise power. Figure 8: Photon flux in dependence of the GW strain \(h_{0}\) at 5 GHz for the GravNet-a and GravNet-b setups detailed in Table 2.